This chapter describes Oracle GoldenGate parameters for configuring, running, and managing Oracle GoldenGate processes.
These parameters are grouped according to the processing to which they apply: logging changes, extracting data, replicating data, managing processes, or applying globally across all processes.
This section summarizes Oracle GoldenGate parameters for HP NonStop, based on their functionality. This is followed by an alphabetized reference of the parameters.
GLOBALS parameters set global Oracle GoldenGate values to help standardize configuration and to perform other miscellaneous tasks. Normally, you set global parameters when you install Oracle GoldenGate. Once set, they rarely need to be changed; however, some of the parameters set in GLOBALS can be overridden by other Oracle GoldenGate programs.
The GLOBALS parameter file is stored in the subvolume where Oracle GoldenGate is installed. Use the editor to enter or change parameters in the GLOBALS file. You cannot change this file's name.
Oracle GoldenGate recommends setting global parameters before starting your first process. If you update the GLOBALS parameter file after starting a process, you must stop and then restart the process for the changes to take effect.
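For illustration, a minimal GLOBALS file might contain entries such as the following. This is a sketch only: the volume name $DATA1 and the values shown are hypothetical, and each parameter's exact syntax and default are given in its reference entry.

```
MAXWILDCARDENTRIES 1000
ERREPLYTIMEOUT 30
SWAPVOL $DATA1
```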
Table 2-1 GLOBALS Parameters: All
| Parameter | Description |
|---|---|
| ADD DEFINE | Defines non-default locations for certain Oracle GoldenGate components. |
| AUDITING | Tells various programs in the system whether the installation is TMF, non-TMF, or both. |
| COMMENT | Indicates comments. |
| EMSLOG | Establishes a global EMS Collector name. |
| ERREPLYTIMEOUT | Sets the timeout value for GGSCI communication with Oracle GoldenGate components. |
| HEARTBEAT | Ensures that Extract and Logger can send data to Replicat. |
| HOMETERM | Sets up a global home terminal for Oracle GoldenGate processes. |
| HOMETERMMESSAGES | Suppresses or displays messages to the home terminal. |
| HOST | Identifies various NonStop nodes in the network. |
| IGNOREPARAMERROR | Prevents programs from terminating abnormally (abending) when an unrecognized parameter is encountered. |
| IGNORETMFDUMPS | Prevents Extract, Manager, and GGSCI from capturing information about TMF disk and tape dumps. |
| LOGGERFLUSHRECS | Determines how many records Logger buffers before flushing to the log trail. |
| LOGFILEOPENS | Determines how many opens Logger keeps on the log files. |
| LOGGERFLUSHSECS \| LOGGERFLUSHCSECS | Determines how many seconds Logger buffers records before flushing to the log trail. |
| LOGGGSCICOMMANDS | Determines whether user commands are written to the |
| MANAGERREQUIRED | Determines whether Extract and Replicat can run without a Manager process. |
| MAXWILDCARDENTRIES | Sets the initial allocation for wildcard entries. |
| OLDGROUPNAMING | Determines whether new Extract and Replicat group naming is enforced. |
| PARAMCHECK | Specifies whether Extract and Replicat check to ensure that the correct parameter file and process name have been specified at startup. |
| REPORTFILEEXTENTS | Allows the user to change the default report file extent sizes. |
| SUPPRESSMARKERMESSAGES | Suppresses messages generated when markers are processed by Extract, Replicat, or GGSCI. |
| SWAPVOL | Designates a swap volume for Oracle GoldenGate processes. |
| TMFDUMPAGE | Limits information that is returned when accessing the TMF catalog to dumps created during the specified number of days. |
| TMFDUMPTABLEENTRIES | Limits information that is returned when accessing the TMF catalog to the number of dumps specified. |
| TMFREFRESHINTERVAL | Sets the refresh interval. |
Manager parameters control how Manager interacts with different Oracle GoldenGate components such as Extract, Collector, and Replicat. GGSCI expects a parameter file named MGRPARM in the GGSPARM (or prefixSPARM) subvolume on the Oracle GoldenGate install volume. This name cannot be changed.
Table 2-2 Manager Parameters: General
| Parameter | Description |
|---|---|
| BACKUPCPU | Sets the backup CPU for the Manager process. |
| CLEANUPSAVECOUNT | Changes the number of Extract and Replicat processing history records that are returned by the GGSCI |
| COMMENT | Starts a comment line in the parameter file. |
| TMFDUMPAGE | Limits the amount of dump information returned from the TMF dump process. |
| TMFDUMPTABLEENTRIES | Lets you store up to 6,000 TMF dump information entries. |
Table 2-3 Manager Parameters: Process Management
| Parameter | Description |
|---|---|
| AUTORESTART | Specifies processes to be restarted by Manager after a failure. |
| AUTOSTART | Specifies processes that are to be started automatically when Manager starts. |
| EXCLUDESUFFIXCHAR | Specifies characters that are not to be used as the first character of the suffix when generating process names. |
| MAXABENDRESTARTS | Determines how many times Manager tries to restart an abended process within the interval set by RESTARTINTERVAL. |
| RESTARTINTERVAL | Determines the period in which restart retries are counted before being reset. |
Table 2-4 Manager Parameters: Maintenance
| Parameter | Description |
|---|---|
| CHECKMINUTES | Determines how often Manager cycles through its maintenance activities. |
| PURGEOLDEXTRACTS for Manager | Purges Oracle GoldenGate trails that are no longer needed, according to specified criteria. |
| PURGEOLDTASKS | Purges Extract and Replicat tasks after a specified period of time. |
Table 2-5 Manager Parameters: Port Management
| Parameter | Description |
|---|---|
| DYNAMICPORTLIST | Specifies a range of ports that Manager can dynamically allocate. |
| IPINTERFACE | Restricts the Manager process to the interface specified by an IP address or DNS name. |
| PORT | Establishes the TCP/IP port number on which Manager listens for requests. |
| TCPIPPROCESSNAME | Specifies a TCP/IP process over which Manager listens for remote requests. |
Table 2-6 Manager Parameters: Error and Event Management
| Parameter | Description |
|---|---|
| DISKTHRESHOLD \| NODISKTHRESHOLD | Generates an event message when an audit trail file that needs to be processed is in danger of being recycled. |
| DOWNCRITICAL | Includes a process that has terminated normally in the report generated by DOWNREPORT. |
| DOWNREPORT | Specifies how often down processes are reported to EMS. |
| LAGCRITICAL | Specifies a time lag for Extract and Replicat that is reported to EMS as a critical message. |
| LAGINFO | Specifies a time lag for Extract and Replicat that is reported to EMS as informational. |
| LAGREPORT | Sets an interval for reporting Extract and Replicat lags to EMS. |
| LOGFILESBEHIND \| LOGFILESBEHINDINFO | Reports a critical or informational message when Extract or Replicat falls a certain number of files behind the current log trail file. |
| THRESHOLD \| NOTHRESHOLD | Generates an event message when an audit trail file that needs to be processed is in danger of being recycled. |
| UPREPORT | Determines how often process "I'm alive" messages are reported. |
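Putting some of these together, a sketch of an MGRPARM file follows. The port number, wildcard, and trail name ($DATA1.GGSDAT) are hypothetical, and the exact option syntax for each parameter is given in its reference entry.

```
PORT 7809
CHECKMINUTES 10
AUTOSTART EXTRACT *
PURGEOLDEXTRACTS $DATA1.GGSDAT.*, USECHECKPOINTS
```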
Logger performs data extracts when a NonStop source is non-TMF. By default, GGSCI expects the Logger parameter file to be named LOGPARM and located in install_volume.GGSPARM.LOGPARM. Enter parameters in the default file unless you have strong reasons to specify a different file name.
Table 2-7 Logger Parameters: General
| Parameter | Description |
|---|---|
| COMMENT | Starts a comment line in the parameter file. |
| CPU | Specifies the primary and backup CPUs for the current logging process. |
| FORCESTOPDELAY | Causes the logging process to delay for a period of time before it is stopped by an operating system STOP command. |
| HEARTBEAT | Causes Logger to issue heartbeat records every 60 seconds. |
| NOTSTOPPABLE | Enables Logger to be stopped only with a GGSCI STOP command. |
| PRIORITY | Specifies the priority at which to run the current log process. |
Table 2-8 Logger Parameters: Trail and File Management
| Parameter | Description |
|---|---|
| EXCLUDEFILE | Excludes a file or file set from extraction. Overrides all FILE specifications. |
| FILE | Establishes a file or file set to be logged. |
| INCLUDESOURCEAPPINFO \| EXCLUDESOURCEAPPINFO | Includes the source application program name and process ID as tokens in the trail. |
| LOGGERFLUSHRECS | Determines how many records Logger buffers before flushing to the log trail. |
| LOGGERFLUSHSECS \| LOGGERFLUSHCSECS | Determines how many seconds Logger buffers records before flushing to the log trail. |
| LOG | Establishes a log process and the dimensions of the log trails for that process. |
| LOGFILEOPENS | Determines how many opens Logger keeps on the log files. |
| LOGGERFILENUM | Specifies the file number that |
| LOGGERTIMEOUTSECS | Controls how long |
Table 2-9 Logger Parameters: Error and Event Management
| Parameter | Description |
|---|---|
| DEBUGONSTACKCHECK | Instructs |
| RECEIVEQWARN | Issues an EMS warning if it receives a trail that exceeds the specified threshold. |
| TRACEALLOPENS | Instructs Logger to trace all file opens. |
| TRACECLOSES | Instructs Logger to trace file closes. |
| TRACEOPENS | Instructs Logger to trace file opens. |
| TRACEPROCESSIOS | Instructs Logger to precede each record logged with information regarding the process that created the record. |
| TRACESTATS | Instructs Logger to keep statistics for each process that sends it database operations. |
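As a sketch, a LOGPARM file combining parameters from the tables above might look like this. The file-set names are hypothetical, and the LOG parameter normally also takes trail-dimension options omitted here; see each parameter's reference entry for the full syntax.

```
LOG $DATA1.GGSLOG.AA
CPU 0, 1
FILE $DATA2.ACCTS.*
EXCLUDEFILE $DATA2.ACCTS.TMP*
LOGGERFLUSHSECS 2
```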
The CHGNOTE process is started whenever Logger is started, whenever a change is made to the Logger configuration, or whenever a FILE RENAME operation occurs. A separate CHGNOTE process is started in each CPU. Each process updates the last modified timestamp for each $SYSTEM.GGS.GGSCPUnn file, which notifies any user application that is bound to the Oracle GoldenGate Intercept library to reevaluate the Logger audit configuration segment.
By default, GGSCI expects the CHGNOTE parameter file to be named CHGPARM and located in the Oracle GoldenGate installation subvolume (not the GGSPARM subvolume). The CHGNOTE parameter file is optional.
| Parameter | Description |
|---|---|
| RENAMEBUMPDELAY | Determines the number of seconds to bump the last modified timestamp on the $SYSTEM.GGS.GGSCPUnn files. |
Extract extracts source data from the TMF audit trail and writes it to one or more files, called Oracle GoldenGate trails. Extract parameter files are typically stored in the GGSPARM subvolume with a file name that is the same as the group name. When using Extract checkpoints, the name of the Extract parameter file must match the entry designated in GGSCI for the Extract group.
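For orientation before the tables that follow, a minimal online Extract parameter file might look like this. The group, trail, and file names are hypothetical, and each parameter's exact syntax is documented in its reference entry.

```
EXTRACT EXTORD
BEGIN 2024-01-01 00:00
EXTTRAIL $DATA1.GGSDAT.AA
FILE $DATA2.ACCTS.*
```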
Table 2-10 Extract Parameters: General
| Parameter | Description |
|---|---|
| CHECKPARAMS | Verifies parameter file contents. |
| COMMENT | Indicates comments. |
| GETENV | Retrieves an environment variable set with the SETENV parameter. |
| INCLUDE | Identifies a macro library to be included. |
| OBEY | Accepts parameters from a different parameter file. |
| SETENV | Sets a NonStop environment variable. |
Table 2-11 Extract Parameters: Process Management
| Parameter | Description |
|---|---|
| ALLOCFILES | Controls the amount of incremental memory that is allocated when the amount of memory specified with NUMFILES is exhausted. |
| ALTFILERESOLVE \| NOALTFILERESOLVE | Resolves wildcards and identifies audited alternate key files on startup. |
| ALTINPUT | Distributes files across multiple Extract processes so that Extract never processes two files in sequence. |
| AUDSERVCACHEBLOCKS | Determines how much caching of SQL table definitions occurs when reading update records. |
| AUDSERVCPU | Begins reading the audit trail in a CPU different from the one in which Extract is running. |
| AUDSERVPARAM | Passes parameters that are specific to the Audserv process. |
| AUDSERVPREFIX | Permits Extract to assign custom prefixes to Audserv processes. |
| AUDSERVPROCESS | Permits Extract to assign an Audserv process a 5-character name. |
| AUDSERVPROGRAM | Allows the customer to run both native-mode and TNS-mode Audserv programs. |
| BEGIN | Establishes the time for Extract to start processing TMF audit trails. Required when SPECIALRUN is specified. |
| CHECKPOINTSECS | Specifies the maximum amount of time that Extract waits before writing a checkpoint. |
| DYNAMICPARTITIONS | Retrieves data from dynamically split partitions without restarting Extract. |
| END | Establishes the time for Extract to stop processing. |
| EXTRACT | Links this run to a particular Extract group. Required unless SPECIALRUN is specified. |
| EXCLUDEGGSTRANSRECS \| INCLUDEGGSTRANSRECS | Suppresses the creation of trail records that track distributed network transactions. |
| FILEAGEDAYS | Specifies the number of days a file can be inactive before it is aged off Extract's file list. |
| FILEEXCLUDE \| TABLEEXCLUDE | Excludes one or more files or tables from a file list. |
| FILERESOLVE | Controls whether wildcard lists are resolved during startup or dynamically. |
| FILTERVIEW \| NOFILTERVIEW | Outputs a SQL view update only when a column has changed. |
| GETALTKEYS \| IGNOREALTKEYS | Instructs Extract to produce (or ignore) file create records for alternate key files after it outputs the file create record for the primary file. |
| GETDEFAULTS | Lets you reset all Extract parameters to their default settings. |
| HEARTBEAT | Informs receiving processes (such as Replicat) that the source system is processing data. |
| LIMITRECS | Limits the number of records when capturing directly from a source table. |
| MAXWILDCARDENTRIES | Sets the initial allocation of wildcard entries. |
| NUMFILES | Controls the initial allocation of memory dedicated to storing information about files to be processed by Oracle GoldenGate. |
| OPENTIMEOUT \| OPENTIMEOUTMINUTES | Controls how long inactive files opened for |
| PASSTHRU \| NOPASSTHRU | Allows Extract to distribute data from log trails when the source file has been renamed or purged. |
| SOURCEISFILE \| SOURCEISTABLE | Reads all records from the source file rather than reading changes from the TMF audit trails. Useful for initial synchronization of source and target. |
| SPECIALRUN | Processes from user-specified BEGIN and END points. |
| SYSKEYCONVERT | For direct file extraction only. Specifies the format of the syskey in the output file for entry-sequenced and ACI files. |
| TMFREFRESHINTERVAL | Sets the refresh interval in seconds. |
| VERBOSE | Allows Extract to allocate memory in large blocks. |
Table 2-12 Extract Parameters: Security
| Parameter | Description |
|---|---|
| DECRYPTTRAIL | Performs decryption when reading encrypted files in the Oracle GoldenGate trail. |
| ENCRYPTTRAIL \| NOENCRYPTTRAIL | Encrypts data records in subsequent trail files until a NOENCRYPTTRAIL parameter is encountered. |
| LOGON | Specifies an alternate user ID under which Extract should run. |
Table 2-13 Extract Parameters: File and Trail Management
| Parameter | Description |
|---|---|
EOFDELAY | EOFDELAYCSECS |
Determines how many seconds Extract delays before looking for more data to extract when the source is an Oracle GoldenGate trail. |
AUDITRETRYDELAY |
Controls the frequency with which TMF audit trails are checked for new data. |
DISPLAYTRAILSWITCH | NODISPLAYTRAILSWITCH |
Causes Extract to write messages to the report file when it switches trails. |
ETNEWFORMAT | ETOLDFORMAT |
Causes Extract to generate trails in a format that is compatible with Replicat version 6.0 and above or with earlier versions. |
EXTFILE |
Identifies the local Oracle GoldenGate trail as a flat file to which extracted data is output. |
EXTTRAIL |
Identifies the local Oracle GoldenGate trail to which extracted data is output. |
FLUSHSECS | FLUSHCSECS |
Determines the maximum time an extracted record remains buffered before being written to the Oracle GoldenGate trail. |
NUMEXTRACTS |
Allows Extract to exceed the default number of files in an Oracle GoldenGate trail. |
OMITAUDITGAPCHECK |
Enables Extract to continue processing even when a gap is detected between the oldest required audit trail and the current trail. |
PURGEOLDEXTRACTS for Extract and Replicat |
Purges Oracle GoldenGate trail files when they are no longer required. |
PURGERESTORE | NOPURGERESTORE |
Purges audit files restored from tape immediately after use by Extract. |
RESTORE | NORESTORE |
Controls whether audit dumps on tape are reloaded when the dumps are not available on disk. |
ROLLOVER |
Specifies conditions under which local and remote Oracle GoldenGate trails start a new file in a sequence. |
TMFTRAILTRACE |
Instructs Extract to write messages to the report file when it switches to the next TMF audit trail. |
Table 2-14 Extract Parameters: Remote Processing
| Parameter | Description |
|---|---|
| IPINTERFACE | Restricts the Extract process to the interface specified by an IP address or DNS name. |
| POSITIONFIRSTRECORD | Positions Extract at the beginning of the input file to reprocess all records. For direct file extraction with |
| RMTBATCH | Outputs data to a batch file on the target system. Enables activities such as trickle batch transfer. |
| RMTFILE | Identifies the remote Oracle GoldenGate trail as a single file. |
| RMTHOST | Establishes the remote TCP/IP host and port number to which output will be directed for subsequent RMTFILE or RMTTRAIL specifications. |
| RMTHOSTALT | Specifies an alternative IP address in the event that the RMTHOST connection fails. |
| RMTTASK | Creates a task on the target system. |
| RMTTRAIL | Identifies the remote Oracle GoldenGate trail. |
| TCPBUFSIZE | Sets the maximum size for the message buffer sent to remote systems. |
| TCPFLUSHBYTES | Controls the TCP/IP flush size for better communications performance. |
| TCPIPPROCESSNAME | Changes the name of the TCP/IP process. Specifies additional processes as backups. |
| TCPIPSWITCHERRS | Determines how many retries on a TCP/IP connection are attempted before trying alternate addresses and controllers. |
| TCPSOURCETIMER \| NOTCPSOURCETIMER | Adjusts timestamps of source records to synchronize with the timestamp of the target system. |
Table 2-15 Extract Parameters: Performance Management
| Parameter | Description |
|---|---|
| FASTPOSITION \| NOFASTPOSITION | Instructs the |
| FASTIO | Writes output using large I/O transfers to increase performance. |
| FASTREADS \| NOFASTREADS | Enables larger reads (up to 28K bytes) of trails. |
| FUNCTIONSTACKSIZE | Controls the size of the memory stack that is used for processing Oracle GoldenGate functions. |
| MAXTRANSMEM | Controls the maximum amount of memory allocated for a transaction. |
| SHORTREADDELAY | Optimizes system resource usage when Extract runs online by instructing Extract to delay a specified number of seconds whenever a block shorter than the optimal block size is read from a trail. |
Table 2-16 Extract Parameters: Data Selection
| Parameter | Description |
|---|---|
| COMPRESSDELETES \| NOCOMPRESSDELETES | Extracts only the primary key fields or columns for delete operations, which reduces network bandwidth and trail storage requirements. |
| FETCHCOMPS \| FETCHLASTIMAGE | Controls how data from missing fields in a compressed update is supplied. |
| GETCOMPS \| IGNORECOMPS | Includes or excludes compressed records in the extracted data. |
| GETCREATES \| IGNORECREATES | Includes or excludes create records in the extracted data. |
| GETDELETES \| IGNOREDELETES | Includes or excludes delete records in the extracted data. |
| GETFILEOPS \| IGNOREFILEOPS | Includes or excludes file alter, create, purge, purgedata, rename, and setmode records in the extracted data. |
| GETINSERTS \| IGNOREINSERTS | Includes or excludes insert records in the extracted data. |
| GETMARKERS \| IGNOREMARKERS | Includes or excludes marker records in the extracted data. |
| GETNETCHANGES \| IGNORENETCHANGES | Includes or excludes net update records in the extracted data. |
| GETNEWCOLUMNS \| IGNORENEWCOLUMNS | Includes or excludes SQL column change records in the extracted data. |
| GETPARTONLYPURGEDATAS \| IGNOREPARTONLYPURGEDATAS | Includes or excludes partition-only PURGEDATA records in the extracted data. |
| GETPURGES \| IGNOREPURGES | Includes or excludes file purge operations in the extracted data. |
| GETPURGEDATAS \| IGNOREPURGEDATAS | Includes or excludes operations to purge file data in the extracted data. |
| GETRENAMES \| IGNORERENAMES | Includes or excludes file rename records in the extracted data. |
| GETREPLICATES \| IGNOREREPLICATES | Includes or excludes records produced by Replicat. |
| GETROLLBACKS \| IGNOREROLLBACKS | Includes or excludes rollback records from being extracted. |
| GETUPDATEAFTERS \| IGNOREUPDATEAFTERS | Includes or excludes update after-image records in the extracted data. |
| GETUPDATEBEFORES \| IGNOREUPDATEBEFORES | Includes or excludes update before-image records in the extracted data. |
| GETUPDATES \| IGNOREUPDATES | Includes or excludes update records in the extracted data. |
| GETAUXTRAILS \| IGNOREAUXTRAILS | Includes or excludes database changes stored in auxiliary TMF audit trails. |
| READTHRULOCKS \| NOREADTHRULOCKS | Ignores record locks when performing |
Table 2-17 Extract Parameters: Data Mapping
| Parameter | Description |
|---|---|
| COLMATCH | Sets global column mapping rules. |
| FILE \| TABLE | Outputs record changes from the named file into the current trail, subject to conditions and mapping specified as part of this parameter. At least one FILE or TABLE statement is required. |
Table 2-18 Extract Parameters: Data Formatting and Conversion
| Parameter | Description |
|---|---|
| CONVERTALLFLOATSTOIEEE \| NOCONVERTALLFLOATSTOIEEE | Converts all float data type numbers in a table from Oracle GoldenGate HP NonStop Tandem (TDM) format to the Institute of Electrical and Electronics Engineers (IEEE) format. |
| CONVERTALLFLOATSTOTDM \| NOCONVERTALLFLOATSTOTDM | Converts all float data type numbers in a table from IEEE format to TDM format. |
| FILLSHORTRECS \| NOFILLSHORTRECS | Fills out records that are shorter than maximum length. |
| FORMATASCII \| NOFORMATASCII | Formats output in external ASCII format. |
| FORMATSQL \| NOFORMATSQL | Formats extracted records into equivalent SQL statements. |
| FORMATXML | Formats extracted data into equivalent XML syntax. |
| FORMATLOCAL | Controls whether ASCII formatting occurs on NonStop or the target system. |
| REPLACEBADCHAR | Replaces invalid character values with a specified entry. |
| REPLACEBADNUM | Replaces invalid numeric values with a specified entry. |
| Y2KCENTURYADJUSTMENT \| NOY2KCENTURYADJUSTMENT | Controls whether the Y2K window changes the century. |
Table 2-19 Extract Parameters: Data Customization
| Parameter | Description |
|---|---|
| COBOLUSEREXIT | Invokes COBOL85 user exit routines at different points during processing. |
| CUSEREXIT | Invokes C user exit routines at different points during processing. |
| MACRO | Creates an Oracle GoldenGate macro. |
| MACROCHAR | Changes the macro character to something other than #. |
| EMBEDDEDMACROS \| NOEMBEDDEDMACROS | Processes macros embedded within a quoted string. |
| TALUSEREXIT | Invokes TAL user exit routines at different points during processing. |
Table 2-20 Extract Parameters: Reporting and Error Handling
| Parameter | Description |
|---|---|
| DISCARDFILE | Specifies a file name to contain records that could not be processed. |
| DISCARDROLLOVER | Specifies times at which to create new discard files. |
| DISPLAYFILEREFRESHES | Displays a message when file attributes are refreshed. |
| ERROR59ROLLOVER | Enables reading to skip ahead to the next file in a sequence upon encountering a damaged block (Guardian Error 59). |
| HOMETERMMESSAGES | Suppresses message display to the home terminal. |
| LAGSTATS | Specifies parameters for capturing lag and related statistics. |
| LIST \| NOLIST | Controls whether the macros of a macro library are listed in the report file. |
| REPORT | Schedules a statistical report at a specified date or time. |
| REPORTCOUNT | Reports records processed at defined intervals. |
| REPORTROLLOVER | Specifies times at which to create new report files. |
| RETRYERR | Sets up error retry options (for example, to retry network errors automatically). |
| STATOPTIONS | Optionally reports zero counts for insert, update, and delete operations. |
| SUPPRESSALLALTERMESSAGES \| NOSUPPRESSALLALTERMESSAGES | Suppresses all messages produced when Extract finds a |
| SUPPRESSALTERMESSAGES \| NOSUPPRESSALTERMESSAGES | Suppresses all except the first message produced when Extract finds a |
| SUPPRESSFETCHCOMPRESSEDDISCARDS \| NOSUPPRESSFETCHCOMPRESSEDDISCARDS | Suppresses the display of messages when a |
| SUPPRESSFILEOPMESSAGES \| NOSUPPRESSFILEOPMESSAGES | Suppresses the output of messages generated after |
| SUPPRESSMARKERMESSAGES | Suppresses messages generated when markers are processed. |
| VERSIONERR | Specifies error handling when database definitions are out of synchronization with audit trail records. |
Table 2-21 Extract Parameters: Application-Specific
| Parameter | Description |
|---|---|
DICTIONARY |
Opens a DDL dictionary. Used to associate record layouts with Enscribe files. |
EXPANDDDL |
Enables custom specification of Enscribe array items and formatting of field names. |
Replicat reads data from Oracle GoldenGate trails that were created by Extract or Logger. Replicat parameter files are typically stored in the GGSPARM subvolume with a file name that is the same as the group name. When using Replicat checkpoints, the name of the Replicat parameter file must match the entry designated in GGSCI for the Replicat group.
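As a sketch, a minimal online Replicat parameter file using parameters from the tables that follow might look like this. The group, discard file, and file-set names are hypothetical; each parameter's exact syntax is documented in its reference entry.

```
REPLICAT REPORD
ASSUMETARGETDEFS
DISCARDFILE $DATA1.GGSRPT.REPDISC, PURGE
MAP $DATA2.ACCTS.*, TARGET $DATA3.ACCTS.*
```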
Table 2-22 Replicat Parameters: General
| Parameter | Description |
|---|---|
| CHECKPARAMS | Verifies parameter file contents. |
| COMMENT | Indicates comments. |
| GETENV | Retrieves an environment variable set with the SETENV parameter. |
| INCLUDE | Identifies a macro library to be included. |
| OBEY | Accepts parameters from a different parameter file. |
| SETENV | Sets a NonStop environment variable. |
Table 2-23 Replicat Parameters: Process Management
| Parameter | Description |
|---|---|
| ALLOCFILES | Controls the amount of incremental memory that is allocated when the amount of memory specified with NUMFILES is exhausted. |
| ASSUMETARGETDEFS | Assumes that the source files or tables are structured like the target, bypassing the definition retrieval process from the source system. |
| AUDITREPS \| NOAUDITREPS | Turns auditing on or off for Replicat operations. |
| BEGIN | Establishes the time for Replicat to begin processing the Oracle GoldenGate trails. Required when SPECIALRUN is specified. |
| CHECKPOINTSECS | Controls how often Replicat writes a checkpoint when checkpoints are not being generated as a result of transaction commits. |
| CHECKUNIQUEKEY \| NOCHECKUNIQUEKEY | Forces Replicat to check for unique key constraint violations on entry-sequenced files before inserting data. |
| COORDINATOR | Identifies the process that will coordinate transactions that are distributed across multiple nodes. |
| DICTIONARY | Opens a DDL dictionary. Used to associate definitions with Enscribe files for column mapping and selectivity condition evaluation. |
| END | Establishes the time for Replicat to stop processing. |
| ENTRYSEQUPDATES | Applies changes to entry-sequenced files exactly as they were on the source system (non-audited files only). |
| EOFDELAY \| EOFDELAYCSECS | Determines how many seconds Replicat delays before looking for more data to replicate. |
| EXPANDDDL | Enables custom specification of Enscribe array items and formatting of field names. |
| EXTFILE | Provides the source of records to be replicated. Either EXTFILE or EXTTRAIL is required. |
| EXTTRAIL | See EXTFILE. |
| FASTREADS \| NOFASTREADS | Enables larger reads (up to 28K bytes) of Oracle GoldenGate trails. |
| FILEAGEDAYS | Specifies the number of days a file can be inactive before it is aged off Replicat's file list. |
| FILEOPWARNING | Controls Replicat's behavior when it attempts to purge non-existent files. |
| FLUSHCHECKPOINT \| NOFLUSHCHECKPOINT | Controls whether files are flushed from an Oracle GoldenGate trail when Replicat records a checkpoint. |
| FORCEUSESYSKEY \| NOFORCEUSESYSKEY | Forces Replicat to specify the SYSKEY. |
| GETDEFAULTS | Lets you reset all Replicat parameters to their default settings. |
| GETNETWORKALTFILENAMES \| IGNORENETWORKALTFILENAMES | Controls whether the file system qualifies the alternate key file names with the local node name. |
| GROUPTRANSOPS | Groups multiple transactions within larger transactions, which can greatly enhance performance. |
| MAXDISCARDRECS | Limits the number of discarded records reported to the discard file. |
| MAXETCHECKPOINTSECS | Specifies the maximum amount of time that Replicat waits before writing a checkpoint. |
| MAXWILDCARDENTRIES | Sets the initial allocation of wildcard entries. |
| NETWORKCHECKPOINTS | When Replicat encounters partitions residing on a remote node, triggers an update of the local checkpoint file to allow identification of replicated data. |
| NUMFILES | Controls the initial allocation of memory dedicated to storing information about files to be processed by Oracle GoldenGate. |
| OPENTIMEOUT \| OPENTIMEOUTMINUTES | Controls how long inactive files opened for |
| PARTMAP | Specifies alternate partitioning schemes during file creation on the backup system. |
| POSITIONFIRSTRECORD | Positions Replicat at the beginning of the input file to reprocess all records. For direct file extraction with |
| PURGEDATAALTFILES \| NOPURGEDATAALTFILES | Purges data on the alternate key files when an Enscribe PURGEDATA operation is performed. |
| PURGEOLDEXTRACTS for Extract and Replicat | Purges Oracle GoldenGate trails when they are no longer required. |
| REPLICAT | Links this run to a particular Replicat group (set up with GGSCI) to facilitate continuous processing through the Oracle GoldenGate trails. Required unless SPECIALRUN is specified. |
| REPNEWCOLUMNS \| NOREPNEWCOLUMNS | Replicates SQL column change records. |
| REVERSEWINDOWSECS \| REVERSEWINDOWCSECS | Holds transactions for a specified number of seconds or centiseconds to wait for an out-of-order transaction to be processed. Valid for non-audited target files. |
| SOURCEDEFS | Specifies a text file that contains Oracle GoldenGate source file and table data definitions (created by the DEFGEN utility). |
| SPECIALRUN | Processes from user-specified BEGIN and END points. |
| WAITFILEEVENT | Waits for a file-related event to occur before proceeding. |
Table 2-24 Replicat Parameters: Security
| Parameter | Description |
|---|---|
| DECRYPTTRAIL | Performs decryption when reading encrypted files in the Oracle GoldenGate trail. |
| LOGON | Specifies an alternate user ID under which to run Replicat. |
Table 2-25 Replicat Parameters: Data Selection
| Parameter | Description |
|---|---|
| GETCREATES \| IGNORECREATES | Includes or excludes file create records from a particular trail. |
| GETDELETES \| IGNOREDELETES | Includes or excludes delete records from being replicated. |
| GETFILEOPS \| IGNOREFILEOPS | Includes or excludes file alter, create, purge, purgedata, rename, and setmode records from being replicated. |
| GETINSERTS \| IGNOREINSERTS | Includes or excludes insert records from being replicated. |
| GETNEWCOLUMNS \| IGNORENEWCOLUMNS | Includes or excludes detection of new SQL columns. |
| GETPARTONLYPURGEDATAS \| IGNOREPARTONLYPURGEDATAS | Includes or excludes partition-only PURGEDATA records. |
| GETPURGES \| IGNOREPURGES | Instructs Replicat to include or ignore PURGE file operations. |
| GETPURGEDATAS \| IGNOREPURGEDATAS | Instructs Replicat to include or ignore PURGEDATA operations. |
| GETRENAMES \| IGNORERENAMES | Includes or excludes file rename records in the current Oracle GoldenGate trail. |
| GETUPDATEAFTERS \| IGNOREUPDATEAFTERS | Includes or excludes update after-images from being replicated. |
| GETUPDATEBEFORES \| IGNOREUPDATEBEFORES | Includes or excludes update before-images from being replicated. |
| GETUPDATES \| IGNOREUPDATES | Includes or excludes update records from being replicated. |
| INSERTALLRECORDS \| NOINSERTALLRECORDS | Causes Replicat to insert every change operation made to a record as a new record in the database. |
| INSERTMISSINGUPDATES \| NOINSERTMISSINGUPDATES | Inserts update operations that have no corresponding records already inserted into the target file. |
| OVERRIDEDUPS \| NOOVERRIDEDUPS | Overlays the current insert record onto an existing record whenever a duplicate record error occurs. Otherwise, an error results. |
| UPDATEDELETES \| NOUPDATEDELETES | Changes deletes to update operations. Useful for archiving and reporting. |
Table 2-26 Replicat Parameters: Data Mapping
| Parameter | Description |
|---|---|
| COLMATCH | Sets global column mapping rules. |
| MAP | Specifies a source to target file pair. At least one MAP statement is required. |
| MAPEXCLUDE | Excludes one or more files from a wildcard map list. |
Table 2-27 Replicat Parameters: Data Formatting and Conversion
| Parameter | Description |
|---|---|
| CONVERTALLFLOATSTOIEEE \| NOCONVERTALLFLOATSTOIEEE | Converts all float data type numbers in a table from Oracle GoldenGate HP NonStop Tandem (TDM) format to the Institute of Electrical and Electronics Engineers (IEEE) format. |
| CONVERTALLFLOATSTOTDM \| NOCONVERTALLFLOATSTOTDM | Converts all float data type numbers in a table from IEEE format to TDM format. |
| FILLSHORTRECS \| NOFILLSHORTRECS | Fills out records that are shorter than maximum length. |
| INSERTDELETES \| NOINSERTDELETES | Changes deletes to insert operations. Useful for archiving and reporting. |
| INSERTUPDATES \| NOINSERTUPDATES | Changes updates to insert operations. |
| REPLACEBADCHAR | Replaces invalid character values with a specified entry. |
| REPLACEBADNUM | Replaces invalid numeric values with a specified entry. |
| UPDATEDELETES \| NOUPDATEDELETES | Changes deletes to update operations. Useful for archiving and reporting. |
| UPDATEINSERTS \| NOUPDATEINSERTS | Changes insert operations to update operations. Useful for archiving and reporting. |
| Y2KCENTURYADJUSTMENT \| NOY2KCENTURYADJUSTMENT | Controls whether the Y2K window changes the century. |
Table 2-28 Replicat Parameters: Data Customization
| Parameter | Description |
|---|---|
COBOLUSEREXIT |
Instructs Replicat to invoke COBOL85 user exit routines at different points during processing. |
CUSEREXIT |
Instructs Replicat to invoke C user exit routines at different points during processing. |
MACRO |
Creates an Oracle GoldenGate macro. |
MACROCHAR |
Changes the macro character to something other than #. |
EMBEDDEDMACROS | NOEMBEDDEDMACROS |
Processes macros embedded in a quoted string. |
TALUSEREXIT |
Invokes TAL user exit routines at different points during processing. |
Table 2-29 Replicat Parameters: Reporting and Error Handling
| Parameter | Description |
|---|---|
DISCARDFILE |
Contains records that could not be processed. |
DISCARDROLLOVER |
Specifies times at which to create new discard files. |
DISPLAYFILEREFRESHES |
Displays a message when file attributes are refreshed. |
HANDLECOLLISIONS | NOHANDLECOLLISIONS |
Ignores duplicate record and missing record errors. Useful for overlaying database changes that occur during initial loading and other purposes. |
LAGSTATS |
Specifies parameters for capturing lag and related statistics. |
LIST | NOLIST |
Controls whether the macros of a macro library are listed in the report file. |
OPENWARNINGS |
Determines how Replicat processes Enscribe file open warnings. |
REPERROR |
Determines how Replicat responds to certain Guardian and SQL errors, including how transactions are backed out. |
REPORT |
Schedules a statistical report at a specified date or time. |
REPORTCOUNT |
Reports records processed at defined intervals. |
REPORTROLLOVER |
Specifies times at which to create new report files. |
REPORTTMFEXCEPTIONS |
Reports when Replicat detects a TMF exception. |
REPSQLLOG |
Redirects the log to a file other than the Replicat report file. |
RESTARTCOLLISIONS | NORESTARTCOLLISIONS |
Enables HANDLECOLLISIONS until the first checkpoint is complete. |
RETRYDELAY |
Specifies the delay between attempts at retrying a failed insert, update, or delete operation. |
SQLFORMATDISCARDFILE |
Contains SQLCI input for discarded SQL/MP operations. |
SQLFORMATDISCARDROLLOVER |
Specifies when to create new SQL formatted discard files. |
STATOPTIONS |
Optionally reports on time spent creating files, purging files, and ending transactions. Optionally reports zero counts for insert, update and delete operations. |
SUPPRESSFILEOPMESSAGES | NOSUPPRESSFILEOPMESSAGES |
Suppresses the output of messages generated after file operations. |
SUPPRESSMARKERMESSAGES |
Suppresses messages generated when markers are processed. |
TMFEXCEPTIONS | NOTMFEXCEPTIONS |
Resolves certain out-of-order records in the TMF audit trail within a given transaction. |
WARNRATE |
Determines how often database errors are reported. Useful when many errors are anticipated; controls the size of the log. |
Coordinator tracks distributed network transactions to coordinate updates on multiple nodes.
Table 2-30 Coordinator Parameters: All
| Parameter | Description |
|---|---|
COORDINATOR |
Links this run to a particular Coordinator group. |
FASTREADS | NOFASTREADS |
Enables larger reads (up to 28K bytes) of trails. |
READER |
Triggers the creation of a Reader process to access a local trail and communicate with the system Coordinator. |
Syncfile lets you schedule and manage file duplication when you wish to copy files in their entirety.
Table 2-31 Syncfile Parameters: All
| Parameter | Description |
|---|---|
ABORTDUPERRWINDOW |
Specifies a time period for Syncfile to restart aborted processes before ending abnormally. |
CHECKINTERVAL |
Lets you change the time interval for evaluating scheduled events. |
COMMENT |
Starts a comment line in the parameter file. |
DUP |
Specifies options for duplication, including the file set to duplicate, the target file set, the duplication method and schedule. At least one DUP statement is required. |
DUPONLYAFTEREVENT |
Specifies a time for events to execute. |
DUPPROCESS |
Lets you specify the process name used for the TACL or FUP process used to duplicate files. This parameter is useful for debugging. |
EVENT |
Identifies an event, by name, and supplies its schedule. At least one EVENT statement is required. |
HOMETERMMESSAGES |
Suppresses message display to the home terminal. |
REPORTROLLOVER |
Specifies when to age the current report file and create a new one. |
SYNCFILE |
For use when the parameter file has a different name than the group name. |
TRACE |
Logs Syncfile processing messages to a report file. |
VERBOSE |
Turns on display of FUP and TACL messages. |
Syncfile
Use ABORTDUPERRWINDOW to specify an alternative time interval for Syncfile to restart processes that abend.
To duplicate files, Syncfile starts either a FUP or TACL process to perform the duplication. If one of these processes abends, Syncfile automatically restarts it. However, if the process abends more than three times within one hour, the Syncfile process abends.
Default: 1 HOUR
ABORTDUPERRWINDOW num_units unit;
num_units: The number of units of time, as in 1 or 10.
unit: The time unit. Specify one of: SECONDS, MINUTES, or HOURS.
ABORTDUPERRWINDOW 4 HOURS;
GLOBALS
Use ADD DEFINE to specify valid DEFINE statements that help resolve default locations, file names, or anything else that can be set using a standard DEFINE.
Oracle GoldenGate recommends that you use the defaults; however, if your installation requires other definitions, you can define them in the GLOBALS parameter file using ADD DEFINE. Specifying defines in the GLOBALS parameter file establishes them for all the affected components you install, so you do not have to specify them in other parameter files.
ADD DEFINE changes the following defaults:
The default for the Oracle GoldenGate environment prefix, GGS.
The default volume and subvolume for AUDCFG, $SYSTEM.GGS.AUDCFG.
The default number of seconds the Logger process delays before closing the old file when rolling to a new file.
The default volume and subvolume for parameter files, $VOL.GGSPARM.
The default volume and subvolume for report files, $VOL.GGSRPT.
Note:
Defaults set by ADD DEFINE for =GGS_PARAMS or =GGS_REPORT can be overridden by the appropriate GGSCI ADD or ALTER command.
The default TCPIP process name, $ZTC0
Note:
Defaults set by the ADD DEFINE for =TCPIP^PROCESS^NAME can be overridden by using the TCPIPPROCESSNAME parameter for Manager or Extract or using the RMTHOST TCPIPPROCESSNAME option.
When you create a new volume, subvolume, or file name, you need to specify the new location and file name in the appropriate places, such as when you execute a GGSCI ADD or ALTER command, or in any parameter file where you specified a location or file name.
ADD DEFINE {define}, [class_node_vol_spec]
define: The define must be one of the following:
=GGS_AUDCFG: Changes the default file name of the AUDCFG file.
=GGS_DB_SUBVOL: Defines the subvolume location of the Oracle GoldenGate database.
=GGS_LOGFILE_CLOSE_DELAY: Changes the delay before Logger closes the old file when rolling to a new one. The default is 120 seconds.
=GGS_PARAMS: Changes the default subvolume used to store parameter files.
=GGS_PREFIX: Lets you define a different Oracle GoldenGate prefix for process names and subvolumes.
=GGS_REPORT: Changes the default subvolume used to store the report file.
=TCPIP^PROCESS^NAME: Changes the default TCPIP process name.
class_node_vol_spec: If you are defining a new volume and subvolume or a file name, enter class_node_vol_spec as follows:
To specify a file name, enter:
ADD DEFINE =GGS_type, CLASS MAP, FILE \NODE.$volume.subvolume.file_name
To specify a new default volume and subvolume, enter:
ADD DEFINE =GGS_type, CLASS DEFAULTS, VOLUME \NODE.$volume.subvolume
To specify a new prefix, enter:
ADD DEFINE =GGS_PREFIX, CLASS MAP, FILE $prefix
To specify a delay time for closing the log file, enter:
ADD DEFINE =GGS_LOGFILE_CLOSE_DELAY, CLASS MAP, FILE $xseconds
Where FILE is up to five digits for seconds and x can be any character a to z.
This example changes the default subvolume used to store parameter files to $GGSPROD.
ADD DEFINE =GGS_PARAMS, CLASS DEFAULTS, VOLUME \PROD.$DATA3.GGSPROD
This example changes the default subvolume used to store the report files.
ADD DEFINE =GGS_REPORT, CLASS DEFAULTS, VOLUME \PROD.$DATA3.NEWRPT
This example changes the location of the audit configuration file.
ADD DEFINE =GGS_AUDCFG, CLASS MAP, FILE \NODE.$DATA1.GGS.AUDCFG
This example identifies the subvolume where the default database is installed. This is not recommended because moving the database can cause problems when migrating to a new release of Oracle GoldenGate. If Extract, Replicat, or Syncfile is moved to a different subvolume, it may be necessary to use this define so those processes can find their checkpoint files.
ADD DEFINE =GGS_DB_SUBVOL, CLASS DEFAULTS, VOLUME \PROD.$DATA6.GGS
This example sets a delay of 240 seconds for Logger to wait before closing the old file when rolling to a new file.
ADD DEFINE =GGS_LOGFILE_CLOSE_DELAY, CLASS MAP, FILE $a240
This example changes the default $ZTC0 TCPIP process name to $ZTC3.
ADD DEFINE =TCPIP^PROCESS^NAME, FILE $ZTC3
Note:
For more information on working with defines, see the Administering Oracle GoldenGate for HP NonStop (Guardian).
Extract, Replicat
Use ALLOCFILES to control the incremental number of memory structures allocated once the initial memory allocation specified by the NUMFILES parameter is reached (see "NUMFILES"). Together, these parameters control how process memory is allocated for storing information about the source and target tables being processed.
The default values should be sufficient for both NUMFILES and ALLOCFILES, because memory is allocated by the process as needed, system resources permitting.
ALLOCFILES must occur before any TABLE or MAP entries to have any effect.
Default: 500
ALLOCFILES num_structures
num_structuresThe additional number of memory structures to be allocated. Do not set ALLOCFILES to an arbitrarily high number, or memory will be consumed unnecessarily. The memory structures of Oracle GoldenGate support up to two million tables.
ALLOCFILES 1000
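Because ALLOCFILES (and NUMFILES) must precede any TABLE or MAP entries, a typical placement in an Extract parameter file looks like this sketch (the group, trail, and file names are hypothetical):

```
EXTRACT ORDEXT
NUMFILES 1000
ALLOCFILES 200
EXTTRAIL $DATA1.GGSDAT.AA
FILE $DATA2.ORDERS.*;
```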
Extract
Use ALTFILERESOLVE to resolve wildcards during the startup of Extract. As the wildcards are resolved, audited alternate key files are placed on the "exclude list" so no data will be sent for them. This avoids duplicate key errors that can otherwise occur when alternate key file I/O is encountered before primary file I/O. A message is produced for each wildcard entry as it is processed.
Unstructured files, SQL/MP tables, SQL/MX tables, and NSK standard subvolumes are ignored during the scan.
After startup, any file create or alter operation triggers Extract to evaluate the file for audited alternate key files to add to the exclude list.
To avoid the initial scan and its startup overhead, set NOALTFILERESOLVE. When this is set, alternate key files will be identified when the first primary file I/O is encountered.
Default: ALTFILERESOLVE
ALTFILERESOLVE | NOALTFILERESOLVE
The following is an example of the message produced during a wildcard scan at startup.
2010-02-16 08:10:58.161307 Scanning $*.A12GEN* for Alternate Key Files
2010-02-16 08:10:58.230067 Files scanned 16
2010-02-16 08:10:58.230788 Scanning $*.A12SQL* for Alternate Key Files
2010-02-16 08:10:58.289293 Files scanned 72
Extract
Use ALTINPUT for direct file extraction. With ACI files, multiple files can be in use at one time. For example, processing can continue on Monday's file after midnight, while Tuesday's file is opened for new data. To handle a multiple file situation, run more than one Extract process for the file sequence. Use the ALTINPUT RANGE option to distribute the files across the processes so that Extract never processes two files in sequence.
You can also use ALTINPUT to specify the access mode of the file open, and to move Extract to the next sequential file if an application has a file open that it is not updating.
To set up Extract for direct file extraction, specify the FILETYPE option in the GGSCI commands ADD or ALTER EXTRACT. When FILETYPE is specified, Extract uses a series of application files as the source of database changes rather than TMF audit trails or log trails. ALTINPUT sets up processing rules for the source files.
ALTINPUT [RANGE (x OF y)] [, SHARED | EXCLUSIVE | PROTECTED] [, OPENTIMEOUT minutes] [, TEMPLATE template] [, USENEXTMODIFIED] [, NOWAITNEXTMODIFIED] [, FASTREADS] [, WAITNEXTRBA num_bytes] [, JTSOFFSET byte_offset] [, TANDEMTSOFFSET byte_offset] [, ONEFILE]
RANGE (x OF y): Use RANGE to process multiple files within a sequence simultaneously. For information on specifying the RANGE option, see "Specifying the ALTINPUT RANGE Option".
SHARED | EXCLUSIVE | PROTECTED: Specifies the access mode of the OPEN for the input file. Defaults to SHARED, in read-only mode.
OPENTIMEOUT minutes: Determines when it is safe to assume that there are no more updates to the input file.
If OPENTIMEOUT is not specified, Extract assumes there are no more updates five seconds after the last one is received.
OPENTIMEOUT minutes instructs Extract to wait for the specified number of minutes before assuming there are no more updates.
An application can have an open file that is not being updated, such as when a report program has opened a file in other than read-only mode.
TEMPLATE template: Specifies a file name template, which can be a wildcard, used when FILETYPE is ENTRY.
The following example specifies files on any volume starting with $DATA, on the MYDAT subvolume, beginning with FL and ending in exactly three more characters.
TEMPLATE $DATA*.MYDAT.FL???
In GGSCI, the ADD EXTRACT or ADD REPLICAT command could specify that the first file to process would be $DATA5.MYDAT.FLABC.
NOWAITNEXTMODIFIED: By default, Extract waits until the next file in the sequence is modified before processing it. Use NOWAITNEXTMODIFIED to move to the next file regardless of when it was modified.
FASTREADS: Causes Extract to perform bulk reads of the source file set, boosting read performance.
WAITNEXTRBA num_bytes: Causes Extract to wait until the next file in the sequence has accumulated a specified number of bytes.
JTSOFFSET byte_offset: Specifies the byte offset of a 64-bit Julian timestamp field within each record when specifying FILETYPE ENTRY. Various Oracle GoldenGate processes use the Julian timestamp to help determine replication lag. Use a timestamp field that reflects when the record was inserted into the database.
TANDEMTSOFFSET byte_offset: Specifies the byte offset of a 48-bit timestamp field within each record when specifying FILETYPE ENTRY. Various Oracle GoldenGate processes use the timestamp to help determine replication lag. Use a timestamp field that reflects when the record was inserted into the database.
ONEFILE: Specify ONEFILE if FILETYPE ENTRY is used and the file set to process consists of a single file.
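The TEMPLATE option above uses * and ? wildcards, which behave like shell-style globbing (* matches any run of characters, ? exactly one). As a rough, unofficial way to see which file names a template would admit, Python's fnmatch mimics the matching (all file names below are hypothetical):

```python
from fnmatch import fnmatch

# Template from the example above: any volume starting with $DATA,
# subvolume MYDAT, file names beginning FL plus exactly three characters.
TEMPLATE = "$DATA*.MYDAT.FL???"

candidates = [
    "$DATA5.MYDAT.FLABC",   # matches
    "$DATA12.MYDAT.FL001",  # matches: * spans "12", ??? spans "001"
    "$DATA5.MYDAT.FLAB",    # too short: ??? requires exactly three characters
    "$OTHER.MYDAT.FLABC",   # wrong volume prefix
]

matches = [name for name in candidates if fnmatch(name, TEMPLATE)]
```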
Specifying the ALTINPUT RANGE Option
ALTINPUT RANGE allows processing of multiple ACI BASE24 files within a sequence simultaneously. Since two TLF or PTLF files can be active at the same time, using ALTINPUT RANGE enables one Extract to process even Julian date files and the other Extract to process odd Julian date files. Even and odd Julian dates can be determined by computing a Julian date from the last six digits of each file name.
The "even" Extract process retrieves data from files where the remainder of the Julian date divided by 2 is zero (range 1 of 2). The "odd" Extract retrieves data from files where the remainder of the Julian date divided by 2 is 1 (range 2 of 2).
For example, an "odd" Extract instance processes files named PO990101, PO990103, PO990105 (Julian dates 2451181, 2451183, and 2451185, respectively). An "even" Extract instance processes files PO990102, PO990104, PO990106 (Julian dates 2451182, 2451184, and 2451186, respectively). This enables extraction for files PO990101 and PO990102 at the same time.
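To make the even/odd split concrete, here is a small Python sketch that derives a Julian day number from the yymmdd suffix of a file name and assigns it to a range. The two-digit-year window and the exact Julian day convention on NonStop are assumptions here; any fixed offset preserves the property that matters, namely that consecutive dates land in alternating ranges.

```python
from datetime import date

def julian_day(filename):
    """Compute a Julian day number from the last six digits (yymmdd) of a file name."""
    yymmdd = filename[-6:]
    yy, mm, dd = int(yymmdd[:2]), int(yymmdd[2:4]), int(yymmdd[4:6])
    year = 1900 + yy if yy >= 50 else 2000 + yy  # assumed two-digit-year window
    # date.toordinal() counts days from 0001-01-01; adding 1721425 shifts
    # the count to proleptic-Gregorian Julian day numbers.
    return date(year, mm, dd).toordinal() + 1721425

def extract_range(filename):
    """Return 1 for the 'even' Extract (RANGE 1 OF 2), 2 for the 'odd' one."""
    return 1 if julian_day(filename) % 2 == 0 else 2
```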
Note:
The RANGE option of ALTINPUT should not be confused with the RANGE argument of FILE and MAP or the function @RANGE that can be used within a FILTER. The application of each one is different.
Replicat
Use ASSUMETARGETDEFS when the source and target files have the same record or column structure (for example, in hot site replication). This parameter is useful when source and target data definitions match and a definitions file is not available for the source database.
Use SOURCEDEFS to use the source data definitions.
ASSUMETARGETDEFS
GLOBALS
Use AUDITING to automatically set other system parameters to TMF, non-TMF, or both. For example, setting AUDITING to NONTMF automatically sets the Replicat NOAUDITREPS parameter.
Default: ALL
AUDITING {TMF | NONTMF | ALL}
Replicat
Use AUDITREPS to determine if Replicat transactions are framed within TMF transactions. It is highly recommended that you use AUDITREPS to ensure the integrity of the target database.
Use NOAUDITREPS when the target files are not protected by TMF. If any target tables or files are audited, AUDITREPS is required.
Default: AUDITREPS
AUDITREPS | NOAUDITREPS
Extract
Use AUDITRETRYDELAY to control the amount of time Extract waits at the end of TMF audit trails before attempting to read more data. Setting AUDITRETRYDELAY to a higher value can save system resources, but can also result in longer lag time when replicating data.
Default: AUDITRETRYDELAY 1
AUDITRETRYDELAY seconds | AUDITRETRYDELAYCSECS centiseconds
seconds: The number of seconds to wait before reading more data.
centiseconds: The number of centiseconds to wait before reading more data.
Extract
Use AUDSERVCACHEBLOCKS to determine the amount of cache space reserved for SQL table definitions. This parameter affects the amount of memory reserved by Audserv for SQL table definitions. Cache blocks are useful for processing SQL update statements quickly. The default is sufficient for most installations.
Default: 300
AUDSERVCACHEBLOCKS cache
cache: The amount of cache space. The maximum recommended value is 1000. At a minimum, allocate one cache block per frequently accessed table partition.
Extract
Use AUDSERVCPU to start the audit reading process on a different CPU from Extract. For example, AUDSERVCPU 1 starts Audserv for the master audit trail and any auxiliary trails in CPU 1. AUDSERVCPUS 3, 5, 6 starts Audserv for the master trail in CPU 3, for AUX01 in CPU 5, and for AUX02 in CPU 6.
Using AUDSERVCPU results in the constant gathering of audit while Extract performs its own processing. This technique can reduce batch run times by up to 20%.
When TMF uses auxiliary TMF audit trails, AUDSERVCPU can specify different processors for each Audserv process. If you specify fewer CPUs than the number of master and auxiliary TMF audit trails, the last CPU specified is used as the default.
When you specify IGNOREAUXTRAILS, you should still specify AUDSERVCPU with the CPU for the ignored auxiliary trail as a placeholder for any subsequently included auxiliary trails.
Default: The last CPU specified
AUDSERVCPU[S] cpu_num [, cpu_num...]
cpu_num: The CPU identifier.
Extract
Use AUDSERVPARAM to pass parameters that are specific to the Audserv process.
Guardian must be version D46 or G06 and later for AUDSERVPARAM.
Note:
AUDSERVPARAM GETPURGEDATAS and IGNOREPURGEDATAS options are deprecated. Extract will instruct Audserv to return purge data operations if any file or wildcard in its parameter file has GETFILEOPS or GETPURGEDATAS.
Default: ABENDONSECURITYCHECK, IGNORENONDATACHANGES, IGNOREALTFILES, SQLCATCLOSEDELAY 60
AUDSERVPARAM [ABENDONSECURITYCHECK | NOABENDONSECURITYCHECK] [ARLIBERROR error_number, response] [ARERRORREPORTINTERVAL seconds] [GETNONDATACHANGES | IGNORENONDATACHANGES] [GETALTFILES | IGNOREALTFILES] [EXCLUDEFILECODES (file_code [, ...])] [SQLCATCLOSEDELAY seconds]
ABENDONSECURITYCHECK | NOABENDONSECURITYCHECK: With ABENDONSECURITYCHECK, Audserv logs a message and then abends when the security check omits a file. NOABENDONSECURITYCHECK triggers Audserv to log a message, but not abend. The default is ABENDONSECURITYCHECK.
ARLIBERROR error_number, response: Specifies the action to take when the error number is triggered. Valid responses are:
IGNORE: Continue processing and do not issue a message.
WARN: Issue a warning message and continue processing.
ABEND: Issue an error message and end processing.
ARERRORREPORTINTERVAL seconds: Specifies the number of seconds to wait before the reissue of a warning message.
GETNONDATACHANGES | IGNORENONDATACHANGES: Filters records for SQL partition moves and splits. The default is IGNORENONDATACHANGES. NOGETNONDATACHANGES is a synonym for IGNORENONDATACHANGES.
When using NOGETNONDATACHANGES, you cannot extract any data changes produced by the RDF subsystem.
GETALTFILES | IGNOREALTFILES: By default, Audserv excludes Enscribe alternate key files. This means that when a file is added to Audserv's "include" list, Audserv searches for any alternate key files and automatically excludes them.
If you are using wildcards in your files to denote alternate keys, you may wish to override this feature. There are two ways to override the default programming. If you wish to include alternate keys for specific files, you may specify the following syntax in your parameter file:
FILE $VOL.SUBVOL.PRIMARY;
FILE $VOL.SUBVOL.ALTFILE0;
FILE $VOL.SUBVOL.ALTFILE1;
where ALTFILE0 and ALTFILE1 represent alternate keys in the PRIMARY file.
If you wish to replicate all alternate keys, you can use the GETALTFILES option in your Extract parameter file as follows:
AUDSERVPARAM GETALTFILES
The default setting for this option is IGNOREALTFILES.
EXCLUDEFILECODES (file_code): The numeric file code of a type of file whose audit is to be excluded.
SQL/MX and SQL/MP catalog files (file codes 563, 564, 565, 572, and 585) are automatically excluded and do not need to be listed.
SQLCATCLOSEDELAY seconds: Sets the time delay after which SQL/MP catalog tables that have not been accessed are closed. The value must be between 10 and 3600 seconds.
The default is to close the catalog tables when they have not been accessed for 60 seconds.
Extract
Use AUDSERVPREFIX to tell Extract to assign its Audserv process a sequence of names with the same prefix. The names are processed in the same order as MAT, AUX01, AUX02, and so on.
AUDSERVPREFIX prefix
prefix: A 3-character prefix, as in $GAX.
Specifying:
AUDSERVPREFIX $GAX
would assign:
MAT = $GAX00
AUX01 = $GAX01
AUX02 = $GAX02
If you use AUDSERVPROCESS, you cannot use AUDSERVPREFIX, and vice versa.
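The naming scheme can be sketched as follows; this is a hypothetical helper for illustration, not part of Oracle GoldenGate:

```python
def audserv_names(prefix, num_aux_trails):
    """Assign Audserv process names: the MAT gets suffix 00, AUX01..AUXnn get 01..nn."""
    trails = ["MAT"] + [f"AUX{n:02d}" for n in range(1, num_aux_trails + 1)]
    return {trail: f"{prefix}{n:02d}" for n, trail in enumerate(trails)}
```

With AUDSERVPREFIX $GAX and two auxiliary trails, this yields $GAX00, $GAX01, and $GAX02, matching the assignments above.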
Extract
Use AUDSERVPROCESS to control the names assigned to Audserv processes. The names are processed in the same order as MAT, AUX01, AUX02, and so on.
AUDSERVPROCESS name
name: A process name consisting of $ followed by up to five characters, as in $GGMAT.
Specifying:
AUDSERVPROCESS $GGMAT, $GGX01, $GGX02
results in the following assignments:
MAT - $GGMAT
AUX01 - $GGX01
AUX02 - $GGX02
If you use AUDSERVPROCESS, you cannot use AUDSERVPREFIX, and vice versa.
Extract
Use AUDSERVPROGRAM to run both Native mode and TNS mode Audserv programs. AUDSERVPROGRAM overrides the default program file name Audserv with the program name of your choice.
AUDSERVPROGRAM program_name
The following example runs the native-mode Extract program EXTRACTN and starts Audserv using the program AUDSERVN.
GGSCI> ADD EXT FINANCE, BEGIN NOW, PROCESS, PROGRAM EXTRACTN, DESCRIPTION "NATIVE TMF EXTRACT"
GGSCI> ADD EXTTRAIL $DATA.GGSDAT.ET, EXTRACT FINANCE
EXTRACT FINANCE
AUDSERVPROGRAM $DATA.GGS.AUDSERVN
EXTTRAIL $DATA.GGSDAT.ET
FILE $PROD.ACCOUNT.*;
Manager
Use the AUTORESTART parameter to start one or more Extract and Replicat processes automatically after they fail. AUTORESTART provides fault tolerance when something temporary interferes with a process, such as intermittent network outages or programs that interrupt access to transaction logs.
You can use multiple AUTORESTART statements in the same parameter file.
Default: Do not auto-restart
AUTORESTART process_type group_name [, RETRIES max_retries] [, WAITMINUTES wait_minutes] [, RESETMINUTES reset_minutes]
process_type: Specify one of the following:
EXTRACT or EXT
REPLICAT or REP
ER (Extract and Replicat)
LOGGER
SYNCFILE or SYNC
COORD (Coordinator)
group_name: A group name or wildcard specification for multiple groups. When wildcards are used, Oracle GoldenGate starts all groups of the specified process_type on the local system that satisfy the wildcard.
RETRIES max_retries: The maximum number of times that Manager should try to restart a process before aborting retry efforts.
If RETRIES is not set, MAXABENDRESTARTS is used. If neither is set, the default number of tries is 2.
WAITMINUTES wait_minutes: The amount of time to pause between discovering that a process has terminated abnormally and restarting the process. Use this option to delay restarting until a necessary resource becomes available or some other event occurs. The default delay is 1 minute.
RESETMINUTES reset_minutes: The window of time a restarted process must run without abending for the retries count to be reset to the maximum. If the process abends within this time, the maximum retries value is decremented. When it reaches zero, no more restarts are attempted.
For example, RETRIES is set to 2 and RESETMINUTES is 15. If process A is restarted and runs without abending for 15 minutes RETRIES will be reset to 2. If instead, process A abends in less than 15 minutes, RETRIES becomes 1. If it is restarted and abends again within 15 minutes, no more retries will be attempted.
If RESETMINUTES is not set for AUTORESTART, RESTARTINTERVAL is used. If neither option is set, the default is 20 minutes.
In the following example, Manager tries to start all Extract processes three times after failure within a one hour time period, and waits five minutes before each attempt.
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
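The interplay of RETRIES and RESETMINUTES described above can be modeled as a small state machine. This is a sketch of the documented behavior for illustration, not Oracle GoldenGate code:

```python
class RestartPolicy:
    """Model Manager's AUTORESTART retry budget."""

    def __init__(self, retries=2, reset_minutes=20):
        self.max_retries = retries
        self.reset_minutes = reset_minutes
        self.remaining = retries

    def on_abend(self, uptime_minutes):
        """Called when a process abends; returns True if Manager restarts it."""
        # A run at least RESETMINUTES long restores the full retry budget.
        if uptime_minutes >= self.reset_minutes:
            self.remaining = self.max_retries
        if self.remaining == 0:
            return False  # budget exhausted: no more restarts
        self.remaining -= 1
        return True
```

With RETRIES 2 and RESETMINUTES 15, two quick abends are both restarted, a third is not; a run longer than 15 minutes restores the full budget before its own failure is counted.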
Manager
The AUTOSTART parameter specifies the processes to be automatically started by Manager. When Manager starts up it scans the list and attempts to start any listed process that is not already running. To activate changes to the list, stop and restart the Manager process.
AUTOSTART [group_type] {group_name | process_name} [, ALLPROCESSES]
group_type: The type of group to be started. This is an optional entry, but if used it must be one of the following:
EXTRACT or EXT
REPLICAT or REP
ER (Extract and Replicat)
LOGGER
SYNCFILE or SYNC
COORD (Coordinator)
group_name: The group name. Required entry for group types other than Logger. Wildcards can be used for part or all of the name.
Note: TASK groups that match the wildcard will not be started. To start a TASK group either specify a group without a wildcard or use the ALLPROCESSES option.
process_name: The Logger process name in the format $xxnnn. Required entry for the LOGGER group_type. Wildcards can be used for all or part of the name.
ALLPROCESSES: Specifies that TASK groups should be included in wildcarded groups to be started.
The following will start all Replicat processes that begin with R20.
AUTOSTART REPLICAT R20*
The following will start all Extracts.
AUTOSTART EXT *
The following will start the logger named $ABC01.
AUTOSTART LOGGER $ABC01
The following will start all groups that begin with R20 except TASK groups.
AUTOSTART R20*
The following will start all groups, including TASK groups, that begin with R20.
AUTOSTART R20*, ALLPROCESSES
Manager
Use BACKUPCPU to specify a CPU that is running the backup Manager process.
BACKUPCPU cpu_number
cpu_number: The identifier for the CPU that is running the backup Manager process.
Extract, Replicat
Use BEGIN to specify a date and time at which to start capturing data. Any record with a timestamp greater than or equal to the time specified by BEGIN that satisfies other criteria is extracted.
SPECIALRUN uses BEGIN to determine its start date and time.
BEGIN date time[:seconds][.centiseconds]
date: The date in yyyy-mm-dd format.
time: The time in hh:mm format.
seconds: You can optionally specify seconds with the time option, as in hh:mm:ss.
centiseconds: You can optionally specify centiseconds with the time and seconds options, as in hh:mm:ss.cccccc.
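Combining the options, a fully qualified starting point (with hypothetical date and time values) looks like:

```
BEGIN 2011-01-07 14:30:05.123456
```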
Replicat
Use BULKIOLOAD to enable bulk I/O, in 28K blocks, whenever writing unstructured data to structured or unstructured files. Typically, this occurs when replicating FUP LOAD or FUP DUP operations, and allows Replicat to process those types of operations many times faster than with conventional I/O.
BULKIOLOAD applies to all subsequent MAP entries. Use NOBULKIOLOAD to turn off BULKIOLOAD for subsequent MAP statements.
BULKIOLOAD | NOBULKIOLOAD
Syncfile
Use CHECKINTERVAL to change the interval between the time a scheduled event occurs and the time that Syncfile duplicates an associated file. Once every minute, by default, Syncfile determines which of the scheduled events have occurred. When the event takes place, Syncfile duplicates the associated files.
Default: 1 minute
CHECKINTERVAL num_units unit;
num_units: The number of units of time.
unit: The time unit type. Specify one of: SECONDS, MINUTES, or HOURS.
CHECKINTERVAL 10 SECONDS;
Manager
Use CHECKMINUTES to determine how often Manager performs maintenance activities. If audit trails roll over frequently and the trails are actively managed, decreasing the frequency of maintenance activities can significantly affect performance.
Default: 10
CHECKMINUTES minutes
minutes: The frequency, in minutes, for performing maintenance.
The following example specifies maintenance activities are performed every 20 minutes.
CHECKMINUTES 20
Extract, Replicat
Use CHECKPARAMS to verify parameter file contents before processing data. The program performs parameter checking, then quits before processing data.
Default: No parameter check
CHECKPARAMS
Extract and Replicat
Use the CHECKPOINTSECS parameter to control how often Extract and Replicat make their routine checkpoints.
Decreasing the value causes more frequent checkpoints. This reduces the amount of data that must be reprocessed if the process fails, but it could cause performance degradation because data is written to disk more frequently.
Increasing the value causes less frequent checkpoints. This might improve performance, but it increases the amount of data that must be reprocessed if the process fails. When using less frequent Extract checkpoints, make certain that the transaction logs remain available in case the data has to be reprocessed.
Note:
In addition to its routine checkpoints, Replicat also makes a checkpoint when it commits a transaction.
This parameter is only valid when using log-based extraction. Avoid changing CHECKPOINTSECS unless directed to do so by Oracle GoldenGate Technical Support.
Default: 10
CHECKPOINTSECS seconds
seconds: The number of seconds to wait before issuing a checkpoint.
CHECKPOINTSECS 20
Replicat
Use CHECKUNIQUEKEY to specify that the target file should be checked to ensure that the key does not already exist before a record is inserted.
If a unique alternate key condition is violated while attempting to insert a record to an entry-sequenced Enscribe file, an "empty record" results in the target file.
Use NOCHECKUNIQUEKEY to reset CHECKUNIQUEKEY behavior for subsequent entries. The CHECKUNIQUEKEY parameter affects only files with unique alternate keys, and applies only to MAP statements that follow it in the parameter file.
Default: NOCHECKUNIQUEKEY
CHECKUNIQUEKEY | NOCHECKUNIQUEKEY
Manager
Use CLEANUPSAVECOUNT to tell Manager how many old run history records to save for Extract and Replicat groups. Every evening after midnight, Manager cleans up old history records.
Default: 10
CLEANUPSAVECOUNT max_history_count
max_history_count: The number of history records to save. Valid values are between 5 and 50.
Extract, Replicat
Use COBOLUSEREXIT to call a custom COBOL routine at different points during processing.
If COBOLUSEREXIT is specified in the parameter file, but a user exit is not bound to the Extract or Replicat object, the process will abend.
COBOLUSEREXIT
Extract, Replicat
Use COLMATCH to map columns when source and target tables are different. The COLMATCH parameter enables mapping between databases with similarly structured tables but different names. COLMATCH specifies rules for default column mapping that apply to all columns that match the specified name.
COLMATCH is required when the source and target columns are different. You can also use the COLMAP option of the Replicat MAP parameter.
COLMATCH
{NAMES target_column = source_column |
PREFIX prefix | SUFFIX suffix | RESET}
NAMES target_column = source_column
Matches a target column to a source column.
target_column is the name of the target column.
= is the assignment operator.
source_column is the name of the source column.
PREFIX prefix
Specifies a prefix to ignore.
SUFFIX suffix
Specifies a suffix to ignore.
RESET
Turns off any COLMATCH rules previously specified.
It may be that a source and target database are identical except for slightly different column names, as shown in the following table.
| Source Database | Target Database |
|---|---|
|
CUST_CODE CUST_NAME CUST_ADDR PHONE |
CUSTOMER_CODE CUSTOMER_NAME CUSTOMER_ADDRESS PHONE |
|
CUST_CODE CUST_NAME ORDER_ID ORDER_AMT |
CUSTOMER_CODE CUSTOMER_NAME ORDER_ID ORDER_AMT |
To map the source database columns to the target, you could specify each individual column mapping, but an easier method is to specify global rules using COLMATCH as follows:
COLMATCH NAMES CUSTOMER_CODE = CUST_CODE
COLMATCH NAMES CUSTOMER_NAME = CUST_NAME
COLMATCH NAMES CUSTOMER_ADDRESS = CUST_ADDR;
Specifying matches this way enables the differently named customer columns to be mapped without listing every column individually. When performing default mapping, Extract checks for any matches according to global rules (this enables mapping of CUST_CODE, CUST_NAME and CUST_ADDR). In addition, exact column name matches are checked as always (enabling mapping of ORDER_ID, ORDER_AMT and PHONE).
Another frequently encountered situation is:
| Source table | Target table |
|---|---|
CUST_CODE CUST_NAME ORDER_ID ORDER_AMT |
CUST_CODE_K CUST_NAME_K ORDER_ID ORDER_AMT |
In this case, a global rule can specify that the _K suffix appended to columns in the target table be ignored, as in: COLMATCH SUFFIX _K.
This also resolves the opposite situation:
| Source table | Target table |
|---|---|
CUST_CODE_K CUST_NAME_K ORDER_ID ORDER_AMT |
CUST_CODE CUST_NAME ORDER_ID ORDER_AMT |
The same principle can be applied to column prefixes: COLMATCH PREFIX P_
| Source table | Target table |
|---|---|
P_CUST_CODE P_CUST_NAME ORDER_ID ORDER_AMT |
CUST_CODE CUST_NAME ORDER_ID ORDER_AMT |
Global rules can be turned off for subsequent map entries with COLMATCH RESET.
All
Use COMMENT to insert comments within a parameter file. Anything on the same line after COMMENT is ignored during processing. Two hyphens (--) also denote a comment.
A comment can be entered anywhere within the parameter file. Comments continuing to the next line must be preceded by another double hyphen or COMMENT keyword.
If any columns in the tables being synchronized contain the word "comment," there may be conflicts with the COMMENT parameter, so double hyphens are the recommended option.
{COMMENT comment_text} | {-- comment_text}
Both of the following are valid comments. The second uses the recommended syntax.
COMMENT Oracle GoldenGate NSK SQL Extract parameter file
-- Oracle GoldenGate NSK SQL Extract parameter file
Extract
Use COMPRESSDELETES and NOCOMPRESSDELETES to control the way columns are written to the trail record for delete operations.
COMPRESSDELETES extracts only the primary key fields or columns for delete operations. NOCOMPRESSDELETES, the default, sends all columns to the trail. By sending only the primary key, Oracle GoldenGate still has all of the data required to delete the target record, while restricting the amount of data that must be processed. This creates a net performance gain.
COMPRESSDELETES and NOCOMPRESSDELETES can be used globally for all TABLE statements in the parameter file, or they can be used as on-off switches for individual TABLE statements.
NOCOMPRESSDELETES
COMPRESSDELETES
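A sketch of toggling the parameter around individual TABLE statements in an Extract parameter file (the table names here are hypothetical):

```
COMPRESSDELETES
TABLE $DATA01.SALES.ORDERS;
NOCOMPRESSDELETES
TABLE $DATA01.SALES.CUSTOMER;
```

Deletes captured from ORDERS carry only the primary key columns; deletes captured from CUSTOMER carry all columns.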
Extract, Replicat
Use CONVERTALLFLOATSTOIEEE to convert the Tandem (TDM) float data type numbers used by Oracle GoldenGate for HP NonStop to the Institute of Electrical and Electronics Engineers (IEEE) format used by Oracle GoldenGate for Windows and UNIX. CONVERTALLFLOATSTOIEEE converts all the float data type numbers in a file or table if the following conditions are met:
There is no column mapping for the table or file
Any SQL/MP table is not a view
Definitions are provided for the Enscribe files
CONVERTALLFLOATSTOIEEE converts 64-bit float data types (SQL/MP type float or Enscribe type double). It does not convert 32-bit float data types (SQL/MP Real or Enscribe type float) because these are converted by Oracle GoldenGate for Windows and UNIX.
Use CONVERTALLFLOATSTOIEEE and NOCONVERTALLFLOATSTOIEEE as toggles in the parameter file to turn the conversion on for some of the files or tables and off for others. A CONVERTALLFLOATSTOIEEE parameter remains in effect until a NOCONVERTALLFLOATSTOIEEE parameter is encountered, and vice versa.
Note:
To use CONVERTALLFLOATSTOIEEE, the operating system must be G06.06 or later, and the Extract and Replicat must be native objects; not TNS.
NOCONVERTALLFLOATSTOIEEE
CONVERTALLFLOATSTOIEEE
EXTRACT EXTORD
RMTHOST host01, MGRPORT 12345
CONVERTALLFLOATSTOIEEE
TABLE $DATA01.SALES.ORDERS;
NOCONVERTALLFLOATSTOIEEE
TABLE $DATA03.SALES.CUSTOMER;
Extract, Replicat
Use CONVERTALLFLOATSTOTDM to convert the Institute of Electrical and Electronics Engineers (IEEE) format used for float data type numbers from Oracle GoldenGate Windows and UNIX to the Tandem (TDM) format used by Oracle GoldenGate for HP NonStop. This will convert all the float data type numbers in a file or table if the following conditions are met:
There is no column mapping for the file or table
Any SQL/MP table is not a view
Definitions are provided for the Enscribe files
CONVERTALLFLOATSTOTDM converts both 32 and 64 bit float data types because these are not converted by Oracle GoldenGate for Windows and UNIX.
Use CONVERTALLFLOATSTOTDM and NOCONVERTALLFLOATSTOTDM as toggles around the tables in the parameter file to turn the float data type conversion on for some of the tables and off for others. The NOCONVERTALLFLOATSTOTDM parameter remains in effect until the CONVERTALLFLOATSTOTDM parameter is encountered, and vice versa.
Note:
To use CONVERTALLFLOATSTOTDM, the operating system must be G06.06 or later, and the Extract and Replicat must be native objects; not TNS.
NOCONVERTALLFLOATSTOTDM
CONVERTALLFLOATSTOTDM
REPLICAT REPORD
CONVERTALLFLOATSTOTDM
TABLE $DATA03.SALES.ORDERS, TARGET $DATA03.SALES.ORDERS;
NOCONVERTALLFLOATSTOTDM
TABLE $DATA03.SALES.ORDERS, TARGET $DATA03.SALES.CUSTOMER;
Coordinator, Replicat
Use COORDINATOR in the Coordinator parameter file to identify the name of the group.
COORDINATOR group_name
group_name
The name of the group.
COORDINATOR ACCTCO
Use COORDINATOR in the Replicat parameter file to identify the name of the process that coordinates transactions distributed across multiple nodes in the network.
COORDINATOR process_name [MAXRETRIES number] [DELAYSECS seconds | DELAYCSECS centiseconds]
process_name
The process name of the Coordinator group with which the Replicat will communicate to track distributed network transactions. The format for the default name is \node.$GGCnn, with the node designator required.
MAXRETRIES number
The number of times Replicat will try to start the process before allowing the process to end abnormally. The default is 5.
DELAYSECS seconds | DELAYCSECS centiseconds
Sets the number of seconds or centiseconds that Replicat waits between tries. The default is 20 seconds.
The following Replicat parameter specifies Coordinator group $GGC00 on the NY system.
REPLICAT REPNET COORDINATOR \NY.$GGC00
Logger
Use CPU to specify primary and backup CPUs for the current Logger process. For example: CPU 9, 3 directs Logger to switch from the primary (CPU 9) to the backup (CPU 3) in the event of system problems.
The process on the primary CPU pings the process on the backup CPU every minute. If the backup CPU becomes unavailable, the primary process detects it and recreates it.
Both primary and backup are required
CPU primary, backup
primary
The primary CPU identifier.
backup
The backup CPU identifier.
Extract, Replicat
Use CUSEREXIT to call custom C routines at different points during processing. If your user exit is written in COBOL, see "COBOLUSEREXIT" for information on the COBOLUSEREXIT parameter.
If CUSEREXIT is specified in the parameter file, but a user exit is not bound to the Extract or Replicat object, the process will abend.
CUSEREXIT
Logger
Use DEBUGONSTACKCHECK to call DEBUG whenever an application's process stack is close to overflowing. Oracle GoldenGate recommends using this parameter only when instructed to do so by Oracle GoldenGate support.
Omit DEBUGONSTACKCHECK
DEBUGONSTACKCHECK
Extract, Replicat
Use DECRYPTTRAIL to decrypt Oracle GoldenGate trails that were encrypted by an upstream Extract process (for information on ENCRYPTTRAIL see "ENCRYPTTRAIL | NOENCRYPTTRAIL"). Specify DECRYPTTRAIL only when the ENCRYPTTRAIL parameter is specified in the upstream Extract process.
DECRYPTTRAIL
Extract, Replicat
Use DICTIONARY to establish an Enscribe DDL dictionary to use for evaluating WHERE, FILTER, and COLMAP clauses for FILE entries. For example, DICTIONARY $DATA5.PRODDICT specifies a physical subvolume.
Each DICTIONARY entry closes any previously open dictionary. This means only one is active at any given time while processing startup parameters. For this reason, you must specify your DICTIONARY parameter before the FILE/MAP entry that uses it.
DICTIONARY dictionary_subvol
dictionary_subvol
A physical subvolume or an existing define name of class catalog.
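Because each DICTIONARY entry closes the previous one, the parameter must appear ahead of the FILE or MAP entry that depends on it. A minimal sketch, using the subvolume named above and a hypothetical file name:

```
DICTIONARY $DATA5.PRODDICT
FILE $DATA1.PROD.ACCOUNTS;
```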
Extract, Replicat
Use DISCARDFILE to create a file containing records discarded by the process. Records can be discarded for a number of reasons, such as attempting to update a missing record. Each entry in the discard file provides the field names, field values and operation attempted, along with associated transaction information.
If DISCARDFILE is not present in the Replicat parameter file, Replicat will create a default discard file. See Administering Oracle GoldenGate for HP NonStop (Guardian) for details on the naming and characteristics of the default discard file.
DISCARDFILE file_name
[, APPEND | PURGE | ROLLOVER]
[, EXTENTS (primary, secondary, maximum) | MEGABYTES megabytes]
[, OWNER (group_number, user_number)]
[, SECURE "rwep"]
file_name
A physical file name or an existing define name of class map.
APPEND | PURGE | ROLLOVER
APPEND - Appends records to an existing file. This is the default if no value is entered.
PURGE - Purges an existing file and creates a new one.
ROLLOVER - Renames the discard files by appending a sequence number to the file name, according to these rules:
If the file name is 7 characters or less, 1 digit is appended.
If the file name is 8 characters, the file will not rollover and a warning is given that the name is too long. This is true when the rollover is requested from GGSCI as well as from the DISCARDFILE parameter.
EXTENTS (primary, secondary, maximum)
Sets the extent sizes and maximum extents for the file. The default setting is (4, 4, 100).
MEGABYTES megabytes
Sets the maximum size of the file in megabytes. The maximum size is 2 gigabytes.
OWNER (group_number, user_number)
Defines ownership of the discard file.
SECURE "rwep"
Secures the file using standard Guardian security for read, write, execute and purge operations.
DISCARDFILE =DISCARD_FILE, OWNER 100,1, SECURE "NNNN", PURGE
Extract, Replicat
Use DISCARDROLLOVER to specify when a discard file is aged and a new one is created. Old files are renamed in the format of group_name(n), where group_name is the name of the Extract or Replicat group and (n) is a number that gets incremented by one each time a new file is created, for example: $DATA.GGSDISC.DISCARD0, $DATA.GGSDISC.DISCARD1, $DATA.GGSDISC.DISCARD2, and so forth.
Either the AT or ON option can be entered, or both options can be used together. The following rules apply to the possible variations in entries:
Entering AT with a time but without a day creates a new discard file at the specified time every day.
Entering AT without a time generates an error.
Entering ON with a day but without a time creates a new discard file at midnight on the specified day.
Entering ON without a day will generate an error.
If no option is entered, the discard file will roll over at midnight every night.
To have more than one rollover, enter multiple AT, ON, or AT ON options. Up to 30 will be used.
A discard file will not roll over if:
The discard file is empty
The discard file was created by default rather than declared using the DISCARDFILE parameter
Disabled. No rules specified.
DISCARDROLLOVER {
AT hh:mm |
ON day_of_week |
AT hh:mm ON day_of_week
}
AT hh:mm
The time of day to age the file, based on a 24-hour clock.
Valid values:
hh is an hour of the day from 1 through 23.
mm is minutes from 00 through 59.
ON day_of_week
The day of the week to age the file.
Valid values are SUNDAY through SATURDAY. They are not case-sensitive.
The following are examples of the use of DISCARDROLLOVER.
This example closes the existing discard file and creates a new one at 5:30 a.m. each day.
DISCARDROLLOVER AT 05:30
This example rolls the discard file over every Friday at midnight.
DISCARDROLLOVER ON friday
This example rolls the discard file over at 5:30 a.m. every Friday.
DISCARDROLLOVER AT 05:30 ON FRIDAY
This example rolls the discard file over at 5:30 a.m. every Wednesday and every Friday.
DISCARDROLLOVER AT 05:30 ON WEDNESDAY, AT 05:30 ON FRIDAY
Manager
Use DISKTHRESHOLD to generate an event message when the percentage of all audit left on disk falls below the specified percentage. Use NODISKTHRESHOLD to turn off reporting of disk-related thresholds.
Because audit is always being created, old audit eventually has to be recycled. If an Extract process needs audit that is about to be recycled, processing integrity is in danger.
To report audit thresholds, specify a percentage level for DISKTHRESHOLD in the Manager parameter file. When Extract program processing falls to the specified percentage, Manager sends a threshold message to EMS or to another specified location at regular intervals.
For example: you have ten audit trail files numbered 10-19, and you specify DISKTHRESHOLD 25 to generate an event message when the audit remaining to process falls to 25% of the entire audit trail. When Extract's checkpoint is positioned in audit file 10 or 11, Manager generates a threshold message, because less than 25% of the audit trail remains available for writing new data before the current position is recycled.
Alternatively, you can use the THRESHOLD or NOTHRESHOLD parameters to report on disk or tape audit thresholds. For information on these parameters see "THRESHOLD | NOTHRESHOLD".
DISKTHRESHOLD 20%
DISKTHRESHOLD percent_left | NODISKTHRESHOLD
percent_left
The percentage level at which to generate an event message.
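For example, to be warned earlier, while 40% of the audit trail still remains (an illustrative threshold):

```
DISKTHRESHOLD 40
```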
Extract, Replicat
Use DISPLAYFILEREFRESHES to display a message when attributes are refreshed for a file. DISPLAYFILEREFRESHES is a global parameter that is set once for the parameter file. It triggers generation of a refresh message when file attributes for each file included in the processing are refreshed.
File refresh messages are suppressed.
DISPLAYFILEREFRESHES
When DISPLAYFILEREFRESHES is used, each file refresh displays a message similar to:
File Attributes Refreshed for \NODE1.$VOL1.SUBVOL.FILE1 at 2013-05-13 09:13:36
Extract
Use DISPLAYTRAILSWITCH or NODISPLAYTRAILSWITCH to print or suppress printing of messages to the report file when Extract switches trails.
DISPLAYTRAILSWITCH
DISPLAYTRAILSWITCH | NODISPLAYTRAILSWITCH
Manager
Use DOWNCRITICAL to include a process that has terminated normally in the report generated by DOWNREPORT. When events are sent to the event log, they are reported as critical if either a process terminates abnormally or the DOWNCRITICAL parameter is specified. Processes that terminate abnormally are automatically reported by the DOWNREPORT parameter.
See "DOWNREPORT" for information on setting the frequency for sending information to the event log.
Manager does not report a normally terminated process.
DOWNCRITICAL
Manager
Use DOWNREPORTMINUTES or DOWNREPORTHOURS to set how often, in minutes or hours, Manager reports Extract and Replicat groups that are not running.
Events are generated any time Extract and Replicat processes are started, stopped, or interrupted. The Manager reports when a process is terminated.
One hour
DOWNREPORTMINUTES minutes | DOWNREPORTHOURS hours
minutes
The reporting interval, in minutes.
hours
The reporting interval, in hours.
Syncfile
Use DUP to specify the source and target file sets to duplicate and the name of the event to execute. Optionally, you can exclude individual files and/or types of files from duplication, duplicate based on whether records have changed, and execute FUP or TACL commands.
Follows the EVENT parameter. At least one DUP entry is required.
DUP source_file_set, TARGET target_file_set
{, EVENT event_name}
[, NAME identifier]
[, EXCLUDE exclude_file]
[, INCLUDEFILECODE (include_codes)]
[, EXCLUDEFILECODE (exclude_codes)]
[, CHANGED | ALWAYS]
[, FUPOPTIONS "fup_options"]
[, GETAUDITED | IGNOREAUDITED]
[, TACLCMD tacl_command];
source_file_set
Required. Identifies the source file set. You can use standard wildcards.
TARGET target_file_set
Required. Identifies the target file set. You can use standard wildcards.
EVENT event_name
Required. Specifies an event that has been defined by the EVENT parameter. You can specify multiple events as:
EVENT event_name, EVENT event_name,...
NAME identifier
Optional logical name to be assigned to the DUP. Enclose in quotes if the name contains spaces. If NAME is not specified, the logical name will default to ID_num, where num is the ordinal number of the DUP item within the parameter file. The logical name is displayed when Syncfile starts processing a DUP.
EXCLUDE exclude_file
Specify EXCLUDE when you want to exclude certain files in the file set from duplication.
INCLUDEFILECODE (include_codes)
A comma-delimited list specifying the file codes to be included in the duplication. Wildcards are not accepted. The list must be enclosed in parentheses, for example:
INCLUDEFILECODE (32767, 65000)
If an INCLUDEFILECODE list is specified, any file code that is not in the list will be excluded.
EXCLUDEFILECODE (exclude_codes)
A comma-delimited list specifying the file codes to be excluded from the duplication. Wildcards are not accepted. The list must be enclosed in parentheses, for example:
EXCLUDEFILECODE (0, 100, 700, 800)
If a file code is on the EXCLUDEFILECODE list it will be excluded, even if it is on the INCLUDEFILECODE list.
CHANGED | ALWAYS
CHANGED is the default. Duplicates files only if the source file has changed more recently than the target file.
ALWAYS duplicates source files whether or not source records have been modified since the corresponding target file records.
FUPOPTIONS "fup_options"
Enables options to be appended to the FUP DUP command. By default, Syncfile executes:
FUP DUP source, target, PURGE, SAVEALL
You can replace the PURGE and SAVEALL options with others. For example, specifying FUPOPTIONS "SOURCEDATE, NEW"; results in the command:
FUP DUP source, target, SOURCEDATE, NEW
GETAUDITED | IGNOREAUDITED
GETAUDITED duplicates files that have the audit flag set to on.
IGNOREAUDITED duplicates files that have the audit flag set to off. IGNOREAUDITED is the default.
TACLCMD tacl_command
Executes a user-supplied TACL macro to perform the duplication. The macro can execute virtually any set of Guardian functions. For example, you can write a TACL macro to make a copy of the file to duplicate to a temporary location; edit the temporary file and change occurrences of the string "$DATA3" to "$D16"; age the previous file on the secondary system to a backup location; and FTP the temporary file to the backup.
When specifying TACLCMD, enclose the entire command to execute in quotes as shown below:
TACLCMD "RUN $DATA1.TACLMAC.DUPMAC source target"
As part of this command, you can specify the source and target file names as the source and target arguments. For example, the following command causes Syncfile to invoke TACL with $D16.TEST.CFG1 as the source and \BKUP.$DATA6.TEST.CFG1 as the target:
RUN $DATA1.TACLMAC.DUPMAC $D16.TEST.CFG1 \BKUP.$DATA6.TEST.CFG1
Enter <source> and <target> without substituting any file names to trigger Syncfile to take the source and target arguments from the DUP statement. For example, the following command will duplicate $DATA1.GGSPARM.* to $DATA5.GGSPARM.*.
DUP $DATA1.GGSPARM.*, TARGET $DATA5.*.*, TACLCMD "RUN $DATA1.GGSPARM.TACL5 <SOURCE> <TARGET>", ALWAYS, EVENT DAILY 1330;
For more information on this see the Administering Oracle GoldenGate for HP NonStop (Guardian).
Syncfile
Use DUPONLYAFTEREVENT to change the time events execute.
As part of the EVENT definition, you can specify that events should execute at a certain time each day (for example, 01:00). By default, Syncfile executes an event if the current time is greater than time specified, and the event has not yet been performed that day. Therefore, if Syncfile is started at 11:00, an event specified at 01:00 executes when Syncfile starts and then not again until 01:00 the following day.
In some cases you may want the event to execute later than the intended time. Specifying DUPONLYAFTEREVENT executes the event after the specified time is passed, ensuring that DUPs happen close to the intended time.
DUPONLYAFTEREVENT follows the EVENT parameter.
DUPONLYAFTEREVENT;
Syncfile
Use DUPPROCESS to specify the process name for the TACL or FUP process that duplicates files. This is generally for debugging purposes.
You must specify either the FUPOPTIONS or TACLCMD options for the DUP parameter.
DUPPROCESS process_name;
process_name
The process name used to duplicate files.
Extract
Use DYNAMICPARTITIONS if your environment dynamically partitions tables; this ensures all data is captured.
By default, Extract scans tables on startup to determine the base table, primary partition, and any secondary partitions containing data, so it can begin its captures. However, if your system dynamically creates another partition, Extract does not know it exists unless the process stops, starts, and conducts another scan.
Using the DYNAMICPARTITIONS parameter allows Extract to recognize additional secondary partitions without having to stop and restart. This means that any data stored in the newly-created partition will be captured as part of Extract's regular processing.
For example, Extract is set up to look at $DATA2.SQL.ACCOUNT, and includes DYNAMICPARTITIONS in its parameter file. $DATA2.SQL.ACCOUNT is a table with a single secondary partition, $DATA4.SQL.ACCOUNT. When Extract starts, it sees both $DATA2.SQL.ACCOUNT and $DATA4.SQL.ACCOUNT, and records are captured from both. As processing continues, a new partition called $DATA6.SQL.ACCOUNT is created. Because the DYNAMICPARTITIONS parameter is specified, data stored in $DATA6.SQL.ACCOUNT is captured as well.
Note:
Do not use DYNAMICPARTITIONS if the same subvolume and file combination exists more than once in your system as primary partitions.
Not activated
DYNAMICPARTITIONS
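A sketch of an Extract parameter file using the $DATA2.SQL.ACCOUNT table from the example above (the group name EXTACCT is hypothetical):

```
EXTRACT EXTACCT
DYNAMICPARTITIONS
TABLE $DATA2.SQL.ACCOUNT;
```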
Manager
Use DYNAMICPORTLIST to specify the ports that Manager can dynamically allocate to Collector and Replicat processes and to GGSCI sessions. You can specify ports individually or a range of ports.
Note:
The DYNAMICPORTLIST is used by the Manager only for processes that are started dynamically, such as a Collector. Processes that are started by the user, such as a typical Extract, do not draw from this list.
When specifying individual ports, delimit each port with a comma. To specify a range of ports, use a dash (-) to separate the first and last port in the range. You can combine a range of ports and individual ports in the same statement.
DYNAMICPORTLIST {port | port-port} [, ...]
port
A port (or ports) that can be dynamically allocated. Port entries are limited to 256.
DYNAMICPORTLIST 7820 - 7830, 7833
Manager
Use DYNAMICPORTREASSIGNDELAY to specify the time to wait before a port can be reused.
3
DYNAMICPORTREASSIGNDELAY time
time
The number of seconds to delay before reusing a port.
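For example, to hold released ports out of reuse for 10 seconds (an illustrative value):

```
DYNAMICPORTREASSIGNDELAY 10
```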
Extract, Replicat
Use EMBEDDEDMACROS to control whether a macro can be expanded in a quoted string. Use NOEMBEDDEDMACROS to make text inside a quoted string invisible.
NOEMBEDDEDMACROS
EMBEDDEDMACROS | NOEMBEDDEDMACROS
GLOBALS
Use EMSLOG to direct EMS messages to a Collector other than the default ($0).
The Extract, Replicat, and Manager programs check for the NonStop define name =EMS_COLLECTOR. As part of process initialization these programs take the value set for EMSLOG and use it to override any pre-existing value that was set for =EMS_COLLECTOR.
When Manager creates a process, it sets the NonStop define values for =EMSCOLLECTOR and =EMS_COLLECTOR to the value specified for EMSLOG (or the default of $0).
$0
EMSLOG {collector | NONE}
collector
Specify either the Collector name, or NONE when there is no Collector.
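For example, the following GLOBALS entry directs EMS messages to an alternate Collector (the name $EMS1 is hypothetical):

```
EMSLOG $EMS1
```

Specifying EMSLOG NONE would instead suppress routing to a Collector.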
Extract
Use ENCRYPTTRAIL to encrypt data records in subsequent Oracle GoldenGate trails until a NOENCRYPTTRAIL is encountered. All records going into an Oracle GoldenGate trail are encrypted both across any data links and within the trail. This applies to EXTFILE, EXTTRAIL, RMTFILE and RMTTRAIL entries. ENCRYPTTRAIL is not recommended for RMTBATCH.
Parameter files for downstream Extract or Replicat processes must specify a DECRYPTTRAIL to read the files.
NOENCRYPTTRAIL
ENCRYPTTRAIL | NOENCRYPTTRAIL
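A sketch of toggling trail encryption in an Extract parameter file (the trail and table names are hypothetical); a downstream process reading the first trail would need DECRYPTTRAIL in its own parameter file:

```
ENCRYPTTRAIL
EXTTRAIL $DATA1.GGSDAT.E1
TABLE $DATA1.SALES.ORDERS;
NOENCRYPTTRAIL
EXTTRAIL $DATA1.GGSDAT.E2
TABLE $DATA1.SALES.CUSTOMER;
```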
Extract, Replicat
Use END to specify the point at which the process stops processing in the TMF audit or Oracle GoldenGate trails. If END is omitted, processing continues until you stop it manually with GGSCI.
With END, processing terminates when an audit record is encountered with a timestamp equal to or greater than the time specified. You can specify the date and time of day, including seconds and centiseconds, as in: END 2010-08-12 17:00:00.
END is used to determine when SPECIALRUN processing will terminate.
END {date [time] | RUNTIME}
date time
Causes Extract or Replicat to terminate when it encounters a record with a timestamp equal to, or greater than, the time specified.
Valid values:
date is a date in the format of yyyy-mm-dd.
time is the time in the format of hh:mi[:ss[.cccccc]] based on a 24-hour clock.
RUNTIME
Causes Extract or Replicat to terminate when it reaches process startup time. One advantage of using RUNTIME is that you do not have to alter the parameter file to change dates and times from run to run.
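The two forms might appear in a parameter file as follows:

```
-- Stop at a fixed point in time
END 2010-08-12 17:00:00
-- Or stop when processing reaches the process startup time:
-- END RUNTIME
```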
Replicat
Use ENTRYSEQUPDATES to enable entry-sequenced records to be replicated with exactly the same key as the record on the source system. Because standard Guardian functions do not permit control of entry-sequenced record keys, omitting ENTRYSEQUPDATES results in the following limitations:
During an initial load (or similar situation), duplicate record conditions may not be detected; therefore, multiple instances of the same record can occur in the target file.
When updates occur on the source database, there is no guarantee that the corresponding key on the target database is the same. Therefore, to guarantee updates are applied properly, a unique alternate key must be specified.
When ENTRYSEQUPDATES is specified, Replicat manipulates file contents directly through unstructured file access. This type of access imposes the following restrictions on the target file:
The file is open for PROTECTED access (no other processes, including other Replicat processes, can update the file).
The target file cannot be TMF audited.
No more than one source file can be associated with the target file (to guarantee that the keys are unique).
A trade-off exists between flushing file contents at any given time and performance.
ENTRYSEQUPDATES causes Replicat to manipulate file blocks directly. This ensures that each record is inserted into the target file in exactly the same place as in the source file (for other file types, this occurs automatically). To maximize performance, Replicat attempts to build these blocks in memory, keeping a private cache, and assumes subsequent records will be inserted in close proximity. This maximizes performance by minimizing the amount of messages with the disk process.
You must specify ENTRYSEQUPDATES in the parameter file before any MAP statements for entry-sequenced files.
ENTRYSEQUPDATES {EXACTKEY | NOEXACTKEY}
[, FLUSHALWAYS | NOFLUSHALWAYS]
[, HIDEGAPS | NOHIDEGAPS]
[, EXCLUSIVEOPEN | PROTECTEDOPEN]
EXACTKEY | NOEXACTKEY
Subsequent maps should always have exact key replication specified. Specify NOEXACTKEY to cancel exact key replication for subsequent entries.
FLUSHALWAYS | NOFLUSHALWAYS
FLUSHALWAYS flushes each block to disk as soon as it is updated. This makes the record immediately visible to the application. The default is NOFLUSHALWAYS.
HIDEGAPS | NOHIDEGAPS
HIDEGAPS - The default. If records for a particular block arrive out of order, hide records in the block from view until all prior records in the block arrive. This guarantees that records with a length of zero are never returned. Use HIDEGAPS when replicating non-TMF entry-sequenced files.
NOHIDEGAPS - Make visible all records in a block, regardless of whether prior records have arrived. The trade-off is that missing records will appear to have a length of zero when reading the file.
EXCLUSIVEOPEN
Opens the target file exclusively. Oracle GoldenGate recommends this when opening structured files for unstructured write access, as is done when ENTRYSEQUPDATES is specified.
PROTECTEDOPEN
Allows users read-only access to the file while Oracle GoldenGate is writing to it.
The following example causes exact replication of the key, optimized for the highest throughput.
ENTRYSEQUPDATES, EXACTKEY, NOFLUSHALWAYS
Extract, Replicat
Use EOFDELAY or EOFDELAYCSECS to specify the number of seconds or centiseconds to delay before looking for more data. Increase the time interval to increase the lag time between updates on the source and target systems, especially when the source system is experiencing a small amount of activity.
This parameter only applies when Extract or Replicat is reading an Oracle GoldenGate trail.
1 second
EOFDELAY seconds | EOFDELAYCSECS centiseconds
seconds
The number of seconds to delay.
centiseconds
The number of centiseconds to delay.
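For example, to wait five seconds at the end of the trail before checking for more data (an illustrative value):

```
EOFDELAY 5
```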
GLOBALS
Use ERREPLYTIMEOUT to set the timeout, in seconds, when GGSCI communicates with Oracle GoldenGate components.
30
ERREPLYTIMEOUT seconds
seconds
Specify a timeout value in seconds.
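For example, to extend the GGSCI reply timeout to 60 seconds in the GLOBALS file (an illustrative value):

```
ERREPLYTIMEOUT 60
```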
Extract
Use ERROR59ROLLOVER during direct file extraction to tell Extract to skip ahead to the next file in a sequence upon encountering a damaged block (Guardian Error 59).
Certain applications leave file blocks in a damaged state to signal that the end of data within the file has been reached. All data prior to the first damaged block is considered valid. Extract, when processing a sequence of files directly, will correctly process all data up until the damaged block. By default, if Extract subsequently reads a damaged block, it will abend.
ERROR59ROLLOVER
Extract
Use ETNEWFORMAT to cause Extract to generate trails in formats that are compatible with Replicat version 7.0 or later.
Use ETOLDFORMAT to cause Extract to generate trails in formats that are compatible with versions of Replicat prior to 7.0. You must use this parameter before any other parameter when running pre-7.0 versions of Oracle GoldenGate for NonStop, and pre-7.2 versions of Oracle GoldenGate for Windows and UNIX.
ETNEWFORMAT
ETNEWFORMAT | ETOLDFORMAT
Syncfile
Use EVENT to define an event and its schedule. Associate the event with a file set via the EVENT keyword in the DUP parameter. For example, you might define an EVENT named DAILY that executes a duplication at 03:00 every day:
EVENT DAILY, EVERY DAY AT 03:00;
. . .
DUP $DATA1.GGSCFG.Z*, TARGET \BKUP.$DAT2.*.*, ALWAYS EVENT DAILY;
In this example, the DUP arguments include EVENT DAILY. At 03:00 every day, file set $DATA1.GGSCFG.Z* is duplicated to \BKUP.$DAT2.*.*.
You can include multiple EVENT parameters. You can schedule for time of day, day of week, or date. You can also exclude specific dates or days of the week.
EVENT precedes all other parameters. At least one EVENT entry is required.
EVENT event_name, every_options [, exclude_options];
event_name — Any name to identify the EVENT. The name is used by subsequent DUP entries to link file set duplication to an appropriate schedule.
every_options — Can be one of the following:
EVERY DAY AT time — Enter the time as 0100, 0200...1400, 1500, etc.
EVERY date AT time — Specify the month and day of the month, as in: August 3. The month is spelled out. Enter the time as 01:00, 02:00...14:00, 15:00, etc.
EVERY time_interval — Enter an interval, such as 2 HOURS or 10 MINUTES.
exclude_options — Can be one or both of the following:
EXCLUDE day_of_week — Spelled out, such as SUNDAY or SATURDAY.
EXCLUDE date — Specify the month and day of the month, as in: August 3. The month is spelled out.
In the following example, two EVENTs are defined, then called by subsequent DUP parameters.
EVENT DAILY, EVERY DAY AT 1:00;
EVENT FREQUENT, EVERY 2 HOURS;
DUP $DATA1.GGSCFG.Z*, TARGET \BKUP.$DATA2.*.*, ALWAYS, EVENT DAILY;
DUP $DATA1.GGSPARM.*, TARGET \BKUP.$DATA2.*.*, EVENT FREQUENT;
Logger
Use EXCLUDEFILE to exclude specific files from extraction to the current log. You can exclude files that have been implicitly or explicitly included by the FILE parameter. For example, suppose the Logger parameter file includes FILE and EXCLUDEFILE entries similar to:
FILE $DATA4.*.*
EXCLUDEFILE $DATA4.DAT.TRANSFL
The EXCLUDEFILE parameter excludes the specified file set, even though the preceding FILE statement implicitly included it with a wildcard argument.
If used, EXCLUDEFILE must follow the FILE parameter. If a file is included by the FILE parameter without filters, then excluded by EXCLUDEFILE according to one or more filters, the file is excluded when the filter criteria are met.
You can exclude files by:
Specifying file sets to exclude.
Filtering file sets that are opened by a specified process or program.
Filtering file sets that are associated with an opening program's user ID.
EXCLUDEFILE file_set [, PROCESS process_set] [, PROGRAM program_set] [, USER user_set]
file_set — The name of the file set to exclude.
PROCESS process_set — Excludes data when the opener is the process or set of processes specified (process_set can be a single process or a wildcard, for example $APP*).
PROGRAM program_set — Excludes data when the opener is the program or set of programs specified (program_set can be a single program or a wildcard: $DATA1.PROGS.*).
USER user_set — Excludes data when the creator access ID of the opener is the user specified (user_set can be a single user or a wildcard: FINANCE.JOE or SUPER.*).
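For example, the following entries (with illustrative file and process names) exclude a set of work files only when they are opened by application processes matching $APP*:
FILE $DATA4.*.*
EXCLUDEFILE $DATA4.DAT.WORK*, PROCESS $APP*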
Extract
Use INCLUDEGGSTRANSRECS to create network transaction tracking records. You need these records if you replicate distributed network transactions using Readers and a Coordinator. These processes depend on tracking records in the trail.
The default EXCLUDEGGSTRANSRECS suppresses the creation of network transaction tracking records.
EXCLUDEGGSTRANSRECS
INCLUDEGGSTRANSRECS
Manager
Use EXCLUDESUFFIXCHAR to specify characters that are not to be used as the first character of the suffix when generating process names. This can be useful in avoiding conflicts with process naming conventions already in use.
The process name is made up of a two-character prefix, which can be set using ADD DEFINE for GGS_PREFIX (see "ADD DEFINE"), and a three-character suffix. The suffix can have one or two alphanumeric characters and ends with a sequential number. The process name $GGL00, for example, uses the prefix GG. The first character of the suffix is L, indicating a Logger process, and the zeroes indicate it is the first process generated for $GGL.
Characters that are used to name Oracle GoldenGate processes are excluded by default. These include C (Coordinator), E (Extract), L (Logger), R (Replicat), and S (Server/Collector). The characters, MG, used in naming Manager processes, are also excluded by default.
C, E, L, R, S and MG
EXCLUDESUFFIXCHAR characters
characters — The characters to be excluded. These should be entered as a string of characters without commas. The characters can optionally be enclosed in single or double quotation marks (e.g. "WTZ" or 'BP').
The following examples all exclude the characters T and X from the first position of the suffix for generated process names.
EXCLUDESUFFIXCHAR TX
EXCLUDESUFFIXCHAR 'TX'
EXCLUDESUFFIXCHAR "TX"
To reset to the defaults, enter an empty set of characters as shown below.
EXCLUDESUFFIXCHAR ""
Extract, Replicat
Use EXPANDDDL to format Enscribe data. Enscribe DDL definitions frequently contain occurs, or array items. For example, a definition might contain:
05 GROUP1 OCCURS 3 TIMES.
   10 FIELD1 PIC X(5) OCCURS 20 TIMES.
To reference items within arrayed structures in a WHERE, FILTER, or COLMAP clause, you must identify the occurrence. The default syntax for doing so is: field_name-occurrence. For example, to retrieve the second occurrence of FIELD1 in the third group, the syntax would be GROUP1-3.FIELD1-2.
The EXPANDDDL parameter changes this array notation. For example:
EXPANDDDL USEBRACKETS specifies field as GROUP1[3].FIELD1[2]
EXPANDDDL USETWOUNDERSCORES, ZEROFILL ARRAYWIDTH specifies field as GROUP1__3.FIELD1__02, when the maximum array width in GROUP1 is less than 10 (requiring at most one digit) and the maximum array width of FIELD1 is 20 (requiring two digits).
EXPANDDDL also determines how field occurrences are output when FORMATASCII or FORMATSQL are specified.
EXPANDDDL format [, ZEROFILL width | ARRAYWIDTH] [, INCLUDEREDEFS | OMITREDEFS]
format — Can be one of the following:
USEDASH — Reference array items by –n, where n is the occurrence number.
USEBRACKETS — Reference array items by [n], where n is the occurrence number.
USEUNDERSCORE — Reference array items by _n, where n is the occurrence number.
USETWOUNDERSCORES — Reference array items by __n, where n is the occurrence number.
ZEROFILL width | ARRAYWIDTH — Directs Extract to reference occurrences of each field adjusting for a maximum width.
INCLUDEREDEFS | OMITREDEFS — INCLUDEREDEFS includes redefined fields.
OMITREDEFS is the default. It excludes redefined fields, which has the following consequences:
Data is only output to columns that do not redefine another field.
When Extract specifies FORMATASCII or FORMATSQL, Extract does not output redefined fields.
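For example, the following entry (an illustrative combination of the documented options) uses bracket notation for array items and omits redefined fields:
EXPANDDDL USEBRACKETS, OMITREDEFS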
Extract, Replicat
Use EXTFILE when Extract writes to an Oracle GoldenGate trail that contains a single file. EXTFILE defines the name, dimensions, and security of a file in the Oracle GoldenGate trail. The parameter file must include at least one EXTFILE or EXTTRAIL entry. EXTFILE must precede the names of the files and tables containing the data you want extracted into the file.
All FILE and TABLE entries after the current entry but before the next EXTFILE, EXTTRAIL, RMTFILE, or RMTTRAIL parameter result in output to the current trail.
The trail must contain record headers or an error is returned at run-time.
EXTFILE file_name [, APPEND | PURGE] [, EXTENTS (primary, secondary, maximum)] [, MAXFILES num_files] [, MEGABYTES number] [, OWNER (group_number, user_number)] [, SECURE "rwep"]
file_name — A physical file name or an existing define name of class map.
APPEND | PURGE — Specify APPEND to append to the file. Specify PURGE to purge the file.
EXTENTS (primary, secondary, maximum) — Sets up the extent sizes and maximum extents for the file.
MAXFILES num_files — Valid for Extract. Forces a sequence of files to be created, rather than a single file. MAXFILES permits up to num_files to be created as needed. Aged files are appended with a six-digit sequence number. When using MAXFILES, also specify MEGABYTES to explicitly set the maximum size of each file in the sequence.
MEGABYTES number — Sets the maximum size of the file, or of each file in the sequence if you specified MAXFILES. The maximum size of each file is 2 gigabytes.
OWNER (group_number, user_number) — Gives ownership of the file to a different owner. Securing a file is useful when you extract logical sets of records to different files (for example, accounting data to one and sales data to another), and only want the sales group to have access to sales information.
SECURE "rwep" — Secures the file using standard Guardian security.
The following example illustrates how EXTFILE works in conjunction with the FILE and TABLE parameters. This example extracts data owned by the EAST region into one file and data owned by WEST into another.
EXTFILE $DATA1.EXTRACT.EAST, OWNER (100,255),
SECURE "GGGG", PURGE
TABLE $DATA5.SALES.ORDERS WHERE (REGION = "EAST");
TABLE $DATA5.ACCTING.RECEIPTS WHERE (REG = "E");
EXTFILE $DATA1.EXTRACT.WEST, OWNER (200,255),
SECURE "GGGG", PURGE
TABLE $DATA5.SALES.ORDERS WHERE (REGION = "WEST");
TABLE $DATA5.ACCTING.RECEIPTS WHERE (REG = "W");
Extract
Use EXTRACT to associate the process with the Extract group defined by the GGSCI ADD EXTRACT command. The association with the group ensures that each extracted record is processed exactly once.
One EXTRACT, SPECIALRUN, or SOURCEISFILE entry is required.
EXTRACT group_name
group_name — The name of the group, which can contain no more than 7 characters. EXTRACT must be the first parameter in the Extract parameter file.
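For example, the first line of the parameter file for an Extract group named FINANCE (the group name is illustrative) would be:
EXTRACT FINANCE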
Extract, Replicat
Use EXTTRAIL to establish the current Oracle GoldenGate trail to which data will be output. All FILE and TABLE entries after the current entry but before the next parameter (EXTFILE, EXTTRAIL, RMTBATCH, RMTFILE, RMTTRAIL) result in output to the current trail. Unlike EXTFILE, EXTTRAIL parameters are set up externally to the parameter file using GGSCI.
The trail entered must correspond with an Oracle GoldenGate trail created with GGSCI. In addition, the Extract group specified in the parameter file must match the entry linked with EXTTRAIL. For more information about adding Oracle GoldenGate trails, refer to "ADD EXTTRAIL".
EXTTRAIL file_prefix
file_prefix — A file name with two characters in the file portion. file_prefix can also be a define with the same characteristics.
An Extract group, FINANCE, has two associated trails. This configuration sends ACCOUNTS records to the XX trail and ORDERS to the YY trail. The parameter file includes the following commands:
EXTRACT FINANCE
...
EXTTRAIL $DATA1.EXTDAT.XX
FILE $DATA2.FINANCE.ACCOUNTS;
EXTTRAIL $DATA2.EXTDAT.YY
FILE $DATA3.FINANCE.ORDERS;
Extract
Use FASTIO to output records in large blocks of up to 28K bytes, which can yield significant performance gains. Use FASTIO in high-volume scenarios; it is less important when data extraction rates fall below several hundred megabytes per hour.
Note:
FASTIO only applies to EXTTRAIL, not to RMTTRAIL.
FASTIO
Extract
Use FASTPOSITION to instruct the Audserv program to perform a binary search of the TMF audit trail at startup, before the initial checkpoint is established. This significantly reduces the startup time and CPU overhead associated with starting the process for the first time. This parameter is particularly useful for systems that have implemented auxiliary TMF audit trails.
FASTPOSITION does not apply to Oracle GoldenGate trails, SOURCEISFILE, or direct file reads.
FASTPOSITION
FASTPOSITION
Coordinator, Extract, Replicat
Use FASTREADS to change the number of bytes that Extract reads when processing Oracle GoldenGate trails. Extract reads up to 4096 bytes at a time by default. FASTREADS enables larger reads of up to 28K bytes. When data volumes are significant, FASTREADS can result in greater throughput and lower overhead.
Note:
FASTREADS only applies when Extract is reading an Oracle GoldenGate trail and does not apply when reading TMF audit trails, since Oracle GoldenGate always reads audit trails with a large block read.
NOFASTREADS
FASTREADS
Extract
Use FETCHCOMPS or FETCHLASTIMAGE to extract full update images. When audit compression is used for a SQL table or Enscribe file, update operations to the table or file are recorded in a compressed format. Therefore, only part of the image is available for updates. (Full delete and insert images are always available). For some applications, this is acceptable. For example, when delivering the operation to another file, usually you need only the portion of the record that changed.
When either FETCHCOMPS or FETCHLASTIMAGE is specified, Extract attempts to retrieve the full record images of any compressed records from the original database using a SQL SELECT or Guardian READ statement.
You can fetch compressed images for selected files or tables. FETCHCOMPS and FETCHLASTIMAGE apply only to FILE or TABLE entries listed below the FETCHCOMPS or FETCHLASTIMAGE specification. Specify NOFETCHCOMPS or NOFETCHLASTIMAGE to turn off fetching for subsequent entries.
FETCHCOMPS is more useful for audit applications, and FETCHLASTIMAGE is more appropriate for hot site applications. Note the following when using these parameters:
FETCHLASTIMAGE outputs the latest image of the record found on disk. Changes that occurred during the current transaction are ignored.
FETCHCOMPS uses data from the latest image to fill in missing values in the compressed record. This approximates but is not always exactly the same as the actual transaction values (if subsequent updates occurred).
If the original record has been deleted from the database, Extract discards the compressed record and issues a warning message.
The record retrieved may be a newer version than the record processed from the audit trail, so intermediate update information will be lost.
Fetching each update from the database can result in significant performance penalties because each record must be fetched with random access into the database (Extract processes other updates sequentially).
FETCHCOMPS | FETCHLASTIMAGE | NOFETCHCOMPS | NOFETCHLASTIMAGE
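Because these parameters apply only to the FILE or TABLE entries that follow them, you can enable fetching selectively. For example, the following (with illustrative table names) fetches full compressed images only for the ORDERS table:
FETCHCOMPS
TABLE $DATA5.SALES.ORDERS;
NOFETCHCOMPS
TABLE $DATA5.SALES.HISTORY;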
Logger
Use FILE to specify file sets and record types to the current log. You can also use the EXCLUDEFILE parameter to subsequently exclude files that have been included by FILE.
You can include files in the current log by:
Specifying file sets
Specifying filtering values
Optionally, you can specify a variety of attributes to enhance file processing.
FILE file_set [, attribute...][, filter];
file_set — The name of the file, or file set, to be processed.
attribute — For details about the attributes you can specify as FILE options, see "Using parameters as FILE options".
filter — Can be one or more of the following:
PROCESS process_file_set
PROGRAM program_file_set
USER user_id
If more than one filter is specified, all of the specified criteria must be met. See "Including with filters" for details about using filters.
FILE can explicitly name a file set, as in $DATA3.DAT.TRANS, or specify wildcard arguments as in $DATA4.*.*.
When multiple FILE statements are included in a parameter file, only the first match is logged. The following is an example:
FILE $DATA4.GGSDAT.FIN*
FILE $DATA4.GGSDAT.*
In this case, the file set $DATA4.GGSDAT.FIN* is logged only once, to the corresponding log, even if a subsequent FILE (such as $DATA4.GGSDAT.*) implicitly includes it in a wildcard argument.
The primary partition of the file must be satisfied by a FILE entry for extraction to take place. Also, excluding secondary partitions has no effect.
If you wish to retrieve a file set from a remote node, Logger must be configured and running locally on that node.
Using parameters as FILE options
You can use other parameters as options with FILE; this lets you apply a parameter to a single file or file set, rather than all the files in your Logger group. Parameters that can be set as an option in FILE include:
COMPRESSUPDATES | NOCOMPRESSUPDATES (default)
GETAUDITED | IGNOREAUDITED (default)
GETBEFOREUPDATES | IGNOREBEFOREUPDATES (default)
GETBULKIO | IGNOREBULKIO (default)
GETUNSTRUCTURED | IGNOREUNSTRUCTURED (default)
RENAMEDELAY delay_seconds
SUSPENDED | ACTIVE (default)
A file can also be included based on filters. If you specify more than one filter, all of the filter criteria must be met to include the file for logging. For example, the following only includes the specified file set when both the process and user ID filter criteria are met.
FILE $DATA3.APPL.TL*, PROCESS $APP*, USER SUPER.*
If a file is included by FILE without filters, then excluded according to one or more filters, the file is excluded when the filter criteria are met.
FILE file_set [, PROCESS process_file_set] [, PROGRAM program_file_set] [, USER user_id] ;
PROCESS process_file_set — Includes file sets that are opened by the specified process.
PROGRAM program_file_set — Includes file sets that are opened by the specified program.
USER user_id — Includes file sets when the creator ID of the opening process or program is associated with the specified user ID.
You can optionally specify attributes for:
Compressing updates
Capturing or omitting record types
Supporting file renaming
Suspending and resuming logging
To specify multiple attributes, include each attribute in a separate FILE statement, similar to:
COMPRESSUPDATES
FILE $DATA1.DAT.PAYMENT, NOCOMPRESSUPDATES
FILE $DATA1.DAT.ACCOUNT, GETBULKIO
FILE $DATA1.DAT.TRANS, RENAMEDELAY 10
This example does the following:
Compresses updates for subsequent FILE statements.
Turns compression off for the PAYMENT file.
Logs bulk I/O for the ACCOUNT file.
Supports file renaming by delaying the actual rename of the TRANS file until the name is changed in the log.
See the following subsections for details about using the FILE parameter attributes.
You can control compression of update records using the COMPRESSUPDATES and NOCOMPRESSUPDATES options.
By default, update records are not compressed. The default can result in lower throughput, especially across Wide Area Networks, due to the additional traffic load. For example, consider an application that updates a 1000-byte customer record with a key 20 bytes long. If typically only the balance field within the customer record is changed, and that field is 10 bytes long, only 38 bytes rather than 1000 need to be transmitted across the network to execute a replicated update (20 bytes for the key, 10 bytes for the balance field, and 8 bytes to indicate the position within the record of the changed bytes and the key).
If you wish to use a field in KEYCOLS which is not part of the source file's primary key, you cannot use COMPRESSUPDATES.
FILE file_set, {COMPRESSUPDATES | NOCOMPRESSUPDATES}
NOCOMPRESSUPDATES — No compression (the default).
COMPRESSUPDATES — Directs Logger to compress update record images by comparing before- and after-images. Compressed images include the key of the changed record and only the changed bytes within the record. Note that if you wish to use a field in KEYCOLS which is not part of the source file's primary key, you cannot use COMPRESSUPDATES.
Capturing or omitting record types
You can specify that Logger include or omit the following record types:
TMF audited records
Before updates
Bulk I/O updates
Unstructured file changes
Omit inserts, updates, or deletes.
FILE file_set
[, GETAUDITED | IGNOREAUDITED]
[, GETBEFOREUPDATES | IGNOREBEFOREUPDATES]
[, GETBULKIO | IGNOREBULKIO]
[, GETUNSTRUCTURED | IGNOREUNSTRUCTURED]
[, OMITINSERTS | OMITUPDATES | OMITDELETES] ;
GETAUDITED | IGNOREAUDITED — Retrieves or omits data from files that are TMF audited.
Note:
Carefully consider possible outcomes before using GETAUDITED. Logger cannot capture TMF abort operations. When GETAUDITED is used, Logger will capture operations that it will not be able to back out if TMF aborts them.
GETBEFOREUPDATES | IGNOREBEFOREUPDATES — Retrieves or omits the before-images of records.
GETBEFOREUPDATES retrieves images of records before they are changed in addition to capturing the after record image. For example, if an account balance was $100 before a transaction, and after it was $1000, both records will be written to the log trail. This information can be useful for data warehousing, archival, and other applications. When records are deleted, before-images are always logged. The default is to not extract before-images.
GETBULKIO | IGNOREBULKIO — Retrieves or omits bulk I/O updates. IGNOREBULKIO (omit bulk I/O updates) is the default. Bulk I/O occurs when Guardian SETMODE 141 is invoked by a program, such as during FUP LOAD or FUP DUP.
GETUNSTRUCTURED | IGNOREUNSTRUCTURED — Retrieves or omits unstructured file changes. IGNOREUNSTRUCTURED (omit unstructured file changes) is the default.
OMITINSERTS | OMITUPDATES | OMITDELETES — Excludes the specified record operation from being captured. You can combine these options.
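For example, the following entry (the file name is illustrative) captures before-images for a file while omitting its delete operations:
FILE $DATA1.DAT.ACCOUNT, GETBEFOREUPDATES, OMITDELETES;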
If your application renames database files while they are still open and being updated, the RENAMEDELAY option ensures that the new file name changes in the log trail. For RENAMEDELAY, specify a delay interval to give the system time to detect and record the new name.
FILE file_set, RENAMEDELAY delay_seconds
delay_seconds — The delay interval, as in RENAMEDELAY 10. The interval should range from 5 to 15 seconds, depending on overall system resource usage and hardware capacity. In general, more powerful systems require less delay. Use this feature with caution, since the process invoking the rename is delayed before being allowed to continue.
Suspending and resuming logging
You can temporarily suspend, then resume logging for a specified file set by using the SUSPENDED and ACTIVE options.
FILE file_set, {SUSPENDED | ACTIVE} ;
SUSPENDED — Temporarily suspends logging for the particular file set.
ACTIVE — Resumes logging for the file set.
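For example, to pause logging for a file set (the file name is illustrative), change its FILE entry to:
FILE $DATA1.DAT.TRANS, SUSPENDED;
Changing SUSPENDED back to ACTIVE in the entry resumes logging for the file set.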
Extract
Use FILE or TABLE to specify the files or tables for which to capture data. You can specify a file name or a wildcard argument such as $DATA3.*.*. If you are retrieving records from remote locations, you must fully qualify the file name with its node as well as the volume, subvolume, and file. (For simplicity, references to FILE in this section also refer to TABLE unless explicitly stated otherwise.)
For Enscribe, unless you specify otherwise, records from every partition of the specified file are retrieved.
You can invoke FILE or TABLE more than once in a parameter file, and you can invoke the same FILE or TABLE argument more than once. This is useful, for example, to split records into different trails according to column values, to put inserts, updates and deletes into separate files, and to segment data for other reasons.
Note:
At least one FILE or TABLE statement per parameter file is required.
FILE file_name
[, ALTNAME alternate_file_name]
[, AUTOTRUNCATE]
[, COLMAP (column_map_specification) | NOCOLMAP]
[, COMPRESSDELETES]
[, DEF source_ddl_definition]
[, EXITPARAM "exitparam_string"]
[, FILTER (expression)]
[, KEYCOLS (key_column_specification)]
[, PARTITIONS partition_specification]
[, RANGE (range_specification)]
[, SQLEXEC (sqlexec_clause)]
[, SQLNAME table_alias]
[, STARTKEY key_specification, ENDKEY key_specification]
[, TARGET target_file_name]
[, TARGETDEF target_ddl_definition]
[, TARGETNAME target_file_name]
[, USETARGETDEFLENGTH]
[, TOKENS (token_specification)]
[, WHERE (where_condition)] ;
file_name — A physical file name, an existing define name of CLASS MAP, or a wildcard file name. The file can be a SQL table, SQL view, or Enscribe file.
ALTNAME, AUTOTRUNCATE, COMPRESSDELETES, COLMAP, SQLNAME, TARGETDEF, TARGETNAME, USETARGETDEFLENGTH — See "Mapping Data".
DEF, FILTER, PARTITIONS, STARTKEY, ENDKEY, RANGE, WHERE — See "Selecting Records".
EXITPARAM, SQLNAME, SQLEXEC — See "Performing a query".
TOKENS — See "Using tokens".
Use COMPRESSDELETES to replicate only the primary keys for deleted records. Without this parameter, all columns are replicated. By sending only the primary key, Oracle GoldenGate has all of the data required to delete the target record, while restricting the amount of data that must be processed.
FILE file_name, COMPRESSDELETES
You can select records by:
Selecting or excluding records using FILTER.
Selecting based on a conditional statement using WHERE.
Selecting a subset of records using RANGE.
Selecting a specific data partition using PARTITIONS.
Selecting Enscribe records based on a STARTKEY and ENDKEY.
Note:
Using the RANGE option of FILE or MAP provides different capabilities than using the @RANGE function within a FILTER. And both of these are different than the RANGE option of ALTINPUT.
Selecting or Excluding Records Using FILTER
In the FILTER expression, records are selected according to a filter clause. Options specify the record types to include or omit when applying the filter. You can combine the filter clause with one or more options, but the filter_clause must always be included.
If you are selecting from an Enscribe file using FILTER, you must also specify the DEF option.
FILE file_name, FILTER (filter_clause [, ON INSERT | ON UPDATE| ON DELETE] [, IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE]) [, DEF source_ddl_definition] ;
ON INSERT | ON UPDATE | ON DELETE — Include in the filter expression to limit the filter clause to execute on an insert, update, or delete. You can specify more than one option. For example, ON UPDATE, ON DELETE executes on updates and deletes, but not inserts.
IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE — Ignores the specified operation. You can specify more than one IGNORE option.
DEF source_ddl_definition — Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the TMF audit trails. You cannot specify more than one definition for any FILE statement.
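For example, the following entry (the file, field, and definition names are illustrative) applies a filter only to delete operations on an Enscribe file:
FILE $DATA5.SALES.ORDERS, FILTER (ORD-AMT > 10000, ON DELETE), DEF ORDERS-REC;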
Selecting Based on a Conditional Statement
With the WHERE option, you can select information based on a conditional statement. If you are selecting from an Enscribe file using WHERE, you must also specify the DEF option.
FILE file_name, WHERE (where_condition) [, DEF source_ddl_definition];
where_condition — Selects a subset of records from a source file or table, based on a condition, such as WHERE (branch = "NY"). For a list of valid operators, see Table 2-32.
DEF source_ddl_definition — Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the TMF audit trails. You cannot specify more than one definition for any FILE statement.
Table 2-32 Permissible WHERE operators
| Operator | Example |
|---|---|
| Column names | PRODUCT_AMT |
| Numeric values | -123, 5500.123 |
| Literal strings enclosed in quotes | "AUTO", "Ca" |
| Column tests | |
| Comparison operators | =, <>, >, <, >=, <= |
| Conjunctive operators | AND, OR |
| Grouping parentheses | Use open and close parentheses for logical grouping of multiple elements. |
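For example, the following entry (the file, column, and definition names are illustrative) combines comparison and conjunctive operators to select a subset of Enscribe records:
FILE $DATA5.ACCT.MASTER, WHERE (REGION = "WEST" AND PRODUCT_AMT >= 100), DEF ACCT-REC;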
Selecting a Subset of Records Using RANGE
Use the RANGE clause to select a subset of the records in the source file or table. Unlike WHERE, RANGE does not require knowledge of the table or file structure.
FILE file_name, RANGE (x [, x, ...] OF y);
(x [, x,...] OF y) — Selects a subset of records from a source file or table, based on a range, such as RANGE (3 of 5) or RANGE (1, 3 of 5).
Duplicate unique index errors are possible when using the RANGE option for a file with a unique alternate key. For example, suppose you delete a record with primary key A and alternate key B, then insert a record with a different primary key but the same alternate key (for example, primary key C, alternate key B). RANGE distributes records based on the primary key, so the operations for A and C can be sent to different Replicats, and the insert can encounter a duplicate unique index error on B if it is picked up first.
To avoid this, use the FILTER (@RANGE) function explained in "RANGE" and supply the columns that make up the unique alternate key, as illustrated in the following example.
In this example, table TAB1 has three columns. The first two, COL1 and COL2, form the primary key, and the third is the unique alternate index named U_INDX_COL. The range for the trail files is set up to override the primary key with the unique index column, as shown below.
EXTTRAIL AA
TABLE TAB1, FILTER (@RANGE (1, 2, U_INDX_COL));
EXTTRAIL BB
TABLE TAB1, FILTER (@RANGE (2, 2, U_INDX_COL));
Selecting a Specific Data Partition
Use the PARTITIONS option to specify which partitions of the file or table to write to the current Oracle GoldenGate trail. Particular partitions can be output to specific files in the trail, or skipped altogether.
Use PARTITIONS only when you have specified SOURCEISFILE and either FASTUNLOAD or FASTUNLOADSHARED. Otherwise, PARTITIONS has no effect.
FILE file_name, PARTITIONS (volume_specification);
volume_specification — A volume name, a volume number, or a range of volume numbers. Volume numbers begin with zero, and the last volume number can be specified as L. You can specify multiple volumes, delimited by commas, as in the following examples:
TABLE XYZ, PARTITIONS (\LA.$DATA1, $DATA3, \XYZ.$SYSTEM);
TABLE ABC, PARTITIONS (0, 2 - 5, 10 - L);
Selecting Enscribe Records Based on Key
STARTKEY and ENDKEY can be used to limit the range of the keys that will be selected if your parameter settings meet the following requirements:
The FILE statement must specify an Enscribe file as the source.
SOURCEISFILE must apply.
Either SOURCEDEFS or both DICTIONARY and DEF must be present.
If SOURCEDEFS is used, the access cannot be by ALTKEY.
The PARTITIONS option cannot be used.
The FASTUNLOAD option cannot be used.
The columns used in the key specification must make up the high order portion of a system key, primary key, or alternate key defined for the Enscribe file. A SOURCEDEFS or DICTIONARY must be present to define the column name and value.
Both STARTKEY and ENDKEY are required and both are evaluated when deciding whether to select the record. Any existing FILTER or WHERE clauses are processed for records selected based on key range.
STARTKEY and ENDKEY can be defined for different FILE statements in the same parameter file.
FILE file_name [, STARTKEY key_specification, ENDKEY key_specification]
FILE $NY.ACCT.MASTER, STARTKEY (DIV="A1", ACCTNO=00000),
ENDKEY (DIV="Z9", ACCTNO=49999),
DEF $NY.DDLDEF.ACTDEF;
Oracle GoldenGate has the following data mapping capability for the FILE or TABLE parameters:
Mapping columns.
Retrieving data layout during Replicat processing.
Invoking a user exit.
Using a COLMAP clause, you can extract fields or columns from one record and map them to a differently structured record. This is useful, for example, when delivering data from an Enscribe file to an SQL table with similar, but not identical, fields. COLMAP selects, translates, and moves the fields you want into the new structure. When associated records are output, they are identified by the target file rather than the source to reflect the new structure of the record.
When mapping from an Enscribe file, include either the DEF or the TARGETDEF option. Use DEF when capturing and mapping from the trail. If the target is an Enscribe file, you must associate a DDL definition with the target so that mapping instructions can be interpreted with TARGETDEF.
When SQL tables are identified as the target, the layout of the target record after mapping is known since the SQL table structure is retrieved from the SQL catalog.
Additionally, you can match the source record length to the target record length. See "Matching Source and Target Record Lengths".
COLMAP requires the TARGET option. A DDL definition for an Enscribe target is required only once in the parameter file, and only when using a COLMAP clause.
FILE file_name, COLMAP (column_map_specification) | NOCOLMAP TARGET target_file_name [, DEF source_ddl_definition | TARGETDEF target_ddl_definition] ;
column_map_specification
The column mapping expression, as in:
(target_column = source_expression)
Explicitly defines a source-target column map where target_column is the name of the target column and source_expression can be any of the following:
Numeric constant, such as 123
String constant enclosed within quotes, such as "ABCD"
The name of a source column, such as ORD_DATE
An expression using an Oracle GoldenGate column-conversion function, such as @STREXT (COL1, 1, 3)
NOCOLMAP
Allows the user to specify a DEF for filtering purposes, but prevents the column-mapping command from completing. Example:
MAP \PROD.$DATA06.CER1ATLF.TL*, TARGET \GGS2.$DATA10.GGSLOGS.*, DEF TLF, NOCOLMAP, WHERE (TLF.HEAD.REC-TYP <> "00");
TARGET target_file_name
Names the target file. Must be an existing Enscribe file or SQL table, and can be an active define name.
DEF source_ddl_definition
Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the TMF audit trails. You cannot specify more than one definition for any FILE statement.
TARGETDEF target_ddl_definition
Use TARGETDEF when invoking column mapping to an Enscribe file structure and the Enscribe file has not yet been specified in the parameter file, or did not have a definition associated with it.
If you assigned a definition to the target file earlier in the parameter file, you can omit TARGETDEF.
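For example, the following sketch (the file, definition, and column names are illustrative only) maps two Enscribe fields to differently named columns in a target table:

FILE $DATA1.DAT.ACCTFL,
TARGET $DATA2.DAT.ACCTTAB,
DEF ACCT-REC,
COLMAP (
ACCT_NO = ACCT-NUM,
BALANCE = CUR-BAL);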
Matching Source and Target Record Lengths
Use the USETARGETDEFLENGTH option to adjust the source record length to the length of the target record. Precede the COLMAP statement with the USETARGETDEFLENGTH option.
USETARGETDEFLENGTH
USETARGETDEFLENGTH COLMAP (USEDEFAULTS, CRD-TYP = @STRSUB (CRD-TYP, "VD", "PV"), FIID = @STRSUB (FIID, "WA", "SFNB"), FIID = @STRSUB (FIID, "ID", "BAID"));
Retrieving Data Layout during Replicat Processing
The table name stored in the remote trail can be different for each record. Using the TARGETNAME option, you can specify a different file name in the header for each record retrieved from the file. Replicat uses the new file name to resolve the file or table layout.
If the specified file exists at the target node, Replicat can retrieve the layout from a local catalog or dictionary, which can save a significant amount of time.
FILE file_name, TARGETNAME target_file_name;
target_file_name
The name of the target file for which Replicat retrieves the layout.
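For example, the following entry (file names are illustrative only) stores the name of a template file in the header of each record, which Replicat then uses to resolve the record layout:

FILE $DATA1.DAT.TLHIST, TARGETNAME $DATA1.DAT.TLTMPL;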
Purging Records for Initial Load
When extracting data for an initial load, you can use the AUTOTRUNCATE option to send a PURGEDATA record to the trail as the first record. The PURGEDATA purges the target file before Replicat applies any data, so that the target file is loaded from a clean state.
Use AUTOTRUNCATE with extreme caution, since it causes all existing data to be purged from the target file.
Do not use AUTOTRUNCATE if you are performing multiple direct loads to the same target file, such as when using a range function to distribute the processing load.
AUTOTRUNCATE is not suited to a bi-directional configuration. PURGEDATA is a DDL statement that is automatically committed and not linked to any transaction, making loop detection difficult. Without effective loop detection, AUTOTRUNCATE could cause not only target, but also original source files, to be purged of data. If you use AUTOTRUNCATE in a bi-directional configuration, you should use IGNOREPURGEDATAS in your online Extract groups.
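As a sketch of initial-load usage (the file name is illustrative only), AUTOTRUNCATE is specified as an option of the FILE statement in the initial-load Extract:

FILE $DATA1.DAT.ACCOUNT, AUTOTRUNCATE;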
Use the ALTNAME option to specify an alternate name for a file that may no longer exist. Many applications use daily transaction files, purging the previous file and creating a new transaction file each day. A problem can occur when Extract expects to process data from a purged or renamed file.
Using ALTNAME, you can specify a generic definition for these types of files. This requires that a generic file exist that accurately describes the structure of each of the files in a group. The generic file does not require any data.
This option applies only when the source is an Oracle GoldenGate or Logger trail.
FILE file_name, ALTNAME alternate_file_name [, DEF source_ddl_definition];
alternate_file_name
The alternate file name.
DEF source_ddl_definition
Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the TMF audit trails. You cannot specify more than one definition for any FILE statement.
FILE $DATA1.DAT.TR*, ALTNAME $DATA1.TEMPLATE.TRANS, DEF TRANS-DEF;
Passing Literal Strings to User Exits
Use EXITPARAM "exitparam_string" to pass a literal string to user exit routines whenever a record from FILE is encountered.
The string must be enclosed in double quotes, and an ampersand (&) must be used if it continues to additional lines. The string is unlimited in size, but you must use the GET_EXIT_PARAM_VALUE function to access values longer than the default of 256 bytes.
FILE file_name, EXITPARAM "exitparam_string";
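For example, the following entry (the file name and string are illustrative only) passes a literal string to the user exit each time a record from the specified file is encountered:

FILE $DATA1.DAT.ORDERS, EXITPARAM "REGION=EAST";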
When you specify the FORMATASCII, FORMATSQL, or FORMATXML parameters, you can use SQLNAME to substitute a string for the table name in the output. To preserve lowercase attributes of the string, enclose the string in quotes.
FILE file_name, SQLNAME table_alias;
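For example, the following sketch (names are illustrative only) substitutes the lowercase name orders for the table name in FORMATASCII output:

FORMATASCII
FILE $DATA1.SQLDAT.ORDERS, SQLNAME "orders";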
Use SQLEXEC to perform a SQL query when processing a record for a SQL/MP table. SQLEXEC enables Oracle GoldenGate to communicate directly with the database to perform any query that SQL supports. The database function can be part of the synchronization process, such as retrieving values for column conversion, or it can be independent of extracting or replicating data.
Note:
This feature is not available for Enscribe files. SQLEXEC queries should not be used to change a value in the primary key column. The primary key value is passed from Extract to Replicat, so Replicat can perform further update/delete operations. If Replicat does not know the primary key value, these operations cannot be completed.
By using SQLEXEC within multiple FILE or MAP statements, you can create different rules for different tables; these rules can be as simple or as complex as needed. A query that is executed can accept input parameters from source or target rows and pass output parameters.
In the following example, SQLEXEC runs a select statement, extracting the timestamp from the target table, then filters out records as needed.
FILE $DATA1.SQLDAT.ORDERS,
SQLEXEC (ID check,
QUERY " SELECT TIMESTAMP FROM $DATA1.SQLDAT.ORDERS "
" WHERE PKCOL = ?P1 ",
PARAMS (P1 = PKCOL), ERROR REPORT);
A SQLEXEC statement expects legal SQL syntax for the database being affected. Refer to the SQL for NonStop reference guide for permissible SQL syntax.
Important:
If a SQLEXEC query fails, the Extract or Replicat process will exit. As such, you must structure your query correctly.
FILE file_name, SQLEXEC ( ID logical_name, QUERY "sql_query", {PARAMS param_spec | NOPARAMS} [, AFTERFILTER | BEFOREFILTER] [, DBOP] [, EXEC frequency] [, MAXVARCHARLEN bytes] [, PARAMBUFSIZE num_bytes] [, TRACE option] [, ALLPARAMS option] [, ERROR action] [, ...] )
ID logical_name
Defines a logical name for the query. A logical name is required in order to extract values from the query results. ID logical_name references the column values returned by the query.
QUERY "sql_query"
Specifies the SQL query syntax to execute against the database. The query must be a valid, standard query statement for the database against which it is being executed. It can either return results with a SELECT statement or update the database with an INSERT, UPDATE, or DELETE statement.
For any query that produces output with a SELECT statement, only the first row returned by the SELECT is processed. Do not specify an "INTO..." clause for any SELECT statements.
Enclose the query within quotes. For a multi-line query, use quotes on each line. To ensure success, place a space after each begin quote and each end quote, or at least before the end quote.
For example, in the following, there are spaces before the words select and where, and after the words NOTIFY and ?p1.
SQLEXEC (ID ggs, ON UPDATES, ON INSERTS,
QUERY " select notified from $DATA1.SQLDAT.NOTIFY "
" where account_no = ?p1 ",
PARAMS (p1 = account_no))
PARAMS param_spec | NOPARAMS
Defines whether the query accepts parameters. One of these options must be used.
AFTERFILTER | BEFOREFILTER
Specifies when to execute the query in relation to a FILTER clause. AFTERFILTER executes after the filter and enables you to skip the overhead of executing the SQL unless the filter is successful. This is the default. BEFOREFILTER executes before the filter and enables you to use the results of the procedure or query in the filter.
DBOP
Commits INSERT, UPDATE, DELETE, and SELECT statements executed within the query. Otherwise, they could potentially be rolled back. Oracle GoldenGate issues the commit within the same transaction boundaries as the source transaction. Use caution: any changes that are committed by the procedure can result in overwriting existing data.
EXEC frequency
Controls the frequency with which a query executes and how long the results are considered valid, if extracting output parameters. Takes one of the following arguments:
MAP
Executes the query once for each source-target table map for which it is specified. MAP renders the results invalid for any subsequent maps that have the same source table. For example, if a source table is being synchronized with more than one target table, the results would only be valid for the first source-target map. MAP is the default.
ONCE
Executes the query once during the course of an Oracle GoldenGate run, upon the first invocation of the associated FILE or MAP statement. The results remain valid for as long as the process remains running.
TRANSACTION
Executes the query once per source transaction. The results remain valid for all operations of the transaction.
SOURCEROW
Executes the query once per source row operation. Use this option when you are synchronizing a source table with more than one target table, so that the results of the procedure or query are invoked for each source-target mapping.
MAXVARCHARLEN bytes
Specifies the maximum length allocated for any output parameter in a query. Beyond this maximum, output values are truncated. The default is 255 bytes without an explicit MAXVARCHARLEN clause.
PARAMBUFSIZE num_bytes
Specifies the maximum size of the memory buffer that stores parameter information, including both input and output parameters. Oracle GoldenGate issues a warning whenever the memory allocated for parameters is within 500 bytes of the maximum. The default is 10,000 bytes without an explicit PARAMBUFSIZE clause.
TRACE option
Takes one of the following arguments:
TRACE ALL
Writes the input and output parameters of each invocation of a query to the report file.
TRACE ERROR
Writes parameters to the report file only if an error occurs.
ALLPARAMS option
Takes one of the following arguments:
ALLPARAMS REQUIRED
Indicates that all parameters must be present for the queries to execute.
ALLPARAMS OPTIONAL
Allows the query to execute without all parameters being present.
ERROR action
Requires one of the following arguments:
ERROR IGNORE
Database error is ignored and processing continues.
ERROR REPORT
Database error is written to the report file.
ERROR RAISE
Database error is handled just as a table replication error.
ERROR FINAL
Database error is handled as a table replication error, but does not process any additional queries.
ERROR FATAL
Database processing abends.
Use TOKENS to define a user token and associate it with data. Tokens enable you to extract and store data within the user token area of a trail record header. Token data can be retrieved and used in many ways to customize the delivery of Oracle GoldenGate data. For example, you can use token data in column maps or macros.
To use the defined token data in target tables, use the @TOKEN column-conversion function in the COLMAP clause of a Replicat MAP statement. The @TOKEN function maps the name of a token to a target column.
FILE file_name, TOKENS (token_name = token_data [, ...]) ;
token_name
A name of your choice for the token. It can be any number of alphanumeric characters and is not case-sensitive.
token_data
A character string of up to 2000 bytes. The data can be either a constant that is enclosed within double quotes or the result of an Oracle GoldenGate column-conversion function.
The following creates tokens named TK-OSUSER, TK-GROUP, and TK-HOST and maps them to token data obtained with the @GETENV function.
TABLE $DATA.MASTER.ACCOUNT, TOKENS (
TK-OSUSER = @GETENV ("GGENVIRONMENT", "OSUSERNAME"),
TK-GROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
TK-HOST = @GETENV ("GGENVIRONMENT", "HOSTNAME"));
Extract, Replicat
Use FILEAGEDAYS to specify the number of days a file can be inactive before the Extract or Replicat process ages it off of its file list.
Only files added with a wildcard match are eligible to be removed. Files cannot be aged off if they are designated in FILE and MAP statements that have filters, SQLEXEC statements, or column mappings that use column functions.
Audserv will inherit this parameter from the Extract.
Default: Files are not aged off the list.
FILEAGEDAYS days
days
The number of days a file can be inactive before the process will remove it from the file list. Entries can range from 1 to 365 days.
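For example, the following entry removes wildcard-matched files from the file list after 30 days of inactivity:

FILEAGEDAYS 30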
The following MAP example includes the @DATE function in the column mapping so the TL* files are not eligible for aging off the file list.
MAP $DATA.SOURCE.TL*, TARGET $DATA4.TARGET.*
COLMAP (USEDEFAULTS,
DATETIMECOL = @DATE("YYYY-MM-DD:HH:MI:SS", "YYYY-MM-DD:HH:MI:SS",
@DATENOW())
);
The following MAP example does not include a function, so the TL* files are eligible to be aged off the file list.
MAP $DATA.SOURCE.TL*, TARGET $DATA4.TARGET.*
COLMAP (USEDEFAULTS,
DATETIMECOL = "2010-02-14:01:02:03"
);
Extract
Use FILEEXCLUDE to exclude files or file sets from a wildcard list. Specify the FILEEXCLUDE parameter after the EXTFILE or EXTTRAIL entry to which the rule applies. This parameter is an alias for TABLEEXCLUDE.
FILEEXCLUDE file_identifier
file_identifier
The file set name or a wildcard argument.
The following example includes file names that begin with A, except for files in subvolume BADSUB, files that begin with AB, and the file $DATA1.BAD.A123. Data is output into EXTFILE $DATA1.EXTDAT.EXTFILE1. Data from any file beginning with A is included in EXTFILE $DATA1.EXTDAT.EXTFILE2.
EXTFILE $DATA1.EXTDAT.EXTFILE1
FILE $*.*.A;
FILEEXCLUDE $*.BADSUB.*
FILEEXCLUDE $*.*.AB*
FILEEXCLUDE $DATA1.BAD.A123
EXTFILE $DATA1.EXTDAT.EXTFILE2
FILE $*.*.A;
Replicat
Use FILEOPWARNING to control Replicat's behavior when it attempts to purge non-existent files. FILEOPWARNING instructs Replicat to abend, ignore the attempted purge, or issue a warning.
Default: ABEND
FILEOPWARNING {ABEND | IGNORE | WARN}
ABEND
Stops processing.
IGNORE
Ignores the purge attempt and continues processing.
WARN
Issues a warning to the report file.
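For example, the following entry causes Replicat to continue processing and note each attempted purge of a non-existent file in the report file:

FILEOPWARNING WARN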
Extract
Use FILERESOLVE to alter the rules for processing wildcard file and table entries. By default, wildcard FILE entries are processed each time the wildcard rule is satisfied. If incorrect syntax is entered in the FILE parameter, this can lead to ABEND conditions after startup parameters have been processed.
Replicat also processes file lists with wildcards, but it does so dynamically and the default value of FILERESOLVE DYNAMIC can only be altered for Extract.
Default: DYNAMIC
FILERESOLVE {DYNAMIC | IMMEDIATE | BOTH}
DYNAMIC
Wildcard FILE entries are processed each time the wildcard rule is satisfied.
IMMEDIATE
Existing files or tables that satisfy the wildcard definition are processed at startup.
BOTH
Files existing at startup are processed at that time, and files created after startup are processed when encountered.
Consider the following scenario. $DATA1.DAT.FILE1 exists when Extract starts, and $DATA1.DAT.FILE2 is created after startup. The following example results only in extraction from FILE1. In addition, the list of files to which the wildcard applies is immediately available in the report.
FILERESOLVE IMMEDIATE
FILE $DATA1.DAT.*
In this case, the file list for $DATA1.DAT.* is resolved immediately, and future additions to this list are ignored.
The following example results in extraction from both TABLE1 and TABLE2. However, the FILE parameter is not resolved until a record for TABLE1 or TABLE2 is encountered. Therefore, the MAP statement is not checked for validity until that point.
FILERESOLVE DYNAMIC
TABLE $DATA1.DAT.*, TARGET $DATA2.DAT.TEMPLATE, COLMAP (COL1 = COL2);
In the following example, if TABLE1 exists at runtime, but TABLE2 does not, the MAP for TABLE1 is checked for syntax immediately, and TABLE2 will still be picked up.
FILERESOLVE BOTH
TABLE $DATA1.DAT.*, TARGET $DATA2.DAT.TEMPLATE, COLMAP (COL1 = COL2);
Extract, Replicat
Use FILLSHORTRECS to tell Extract to pad data records to their maximum configured length with spaces, zeros, or SQL default column values. Use this to generate records in the trails that are a constant length or contain the same number of columns.
FILLSHORTRECS can also be used for Replicat MAP statements that have a COLMAP statement to trigger mapping.
This functionality can be toggled on and off around FILE or MAP statements by using NOFILLSHORTRECS.
Default: NOFILLSHORTRECS
FILLSHORTRECS | NOFILLSHORTRECS
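The toggling described above can be sketched as follows (file names are illustrative only); padding applies to the first FILE statement but not the second:

FILLSHORTRECS
FILE $DATA1.DAT.ACCOUNT;
NOFILLSHORTRECS
FILE $DATA1.DAT.HISTORY;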
Extract
Use FILTERVIEW to process SQL views by entering a view as the object in a FILE or TABLE entry. When processing views, Extract is actually processing the underlying table in the TMF audit trail. Extract internally maps the table to the view structure.
By default, when processing updates for views, Extract outputs a view update only if one of the underlying columns in the view changes. As a result, updates to the underlying table are not always output. Use NOFILTERVIEW to ensure that a record is output whenever the underlying table is updated regardless of whether or not any of the view's columns changed. FILTERVIEW ensures that records are output only when one of the columns changed.
Default: FILTERVIEW
FILTERVIEW | NOFILTERVIEW
Replicat
Use FLUSHCHECKPOINT to control whether Enscribe files are flushed when Replicat records a checkpoint. Specify FLUSHCHECKPOINT to guarantee that when a checkpoint is recorded all data replicated before the checkpoint is stored on disk. If FLUSHCHECKPOINT is off, data may remain in file buffers.
Although FLUSHCHECKPOINT is safer, it can result in significant performance impact to Replicat processes.
Default: NOFLUSHCHECKPOINT
FLUSHCHECKPOINT | NOFLUSHCHECKPOINT
Extract
Use these parameters to specify the number of seconds (FLUSHSECS) or centiseconds (FLUSHCSECS) for Extract to buffer records before flushing to the current log trail.
Default: 2 seconds
FLUSHSECS seconds | FLUSHCSECS centiseconds
seconds
The maximum number of seconds to buffer records before flushing.
centiseconds
The maximum number of centiseconds to buffer records before flushing.
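For example, the following entry causes Extract to flush buffered records to the log trail at least every half second:

FLUSHCSECS 50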
Logger
Use FORCESTOPDELAY with the STOPDELAYCSECS keyword to instruct BASELIB to delay stopping a process for a specified time, giving Logger time to read messages in its $RECEIVE queue. The delay interval is specified with the STOPDELAYCSECS parameter.
If the process being stopped has messages queued on Logger's $RECEIVE queue and that process is stopped before Logger reads them, the message system will remove those messages from the queue and discard them. If this happens, changes that were done to the database will never get logged to the log trail.
FORCESTOPDELAY applies to a STOP, ABEND, or other operating system instruction that stops a process. It also applies to stops issued from an application, an operator issuing a stop command, or to a process issuing a STOP or ABEND.
If the process that is stopping another process is not bound with BASELIB then this parameter cannot delay the stop.
Default: No delay
FORCESTOPDELAY STOPDELAYCSECS centiseconds
centiseconds
The delay interval in centiseconds.
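For example, the following entry delays a stop by one second (100 centiseconds) to let Logger drain its $RECEIVE queue:

FORCESTOPDELAY STOPDELAYCSECS 100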
Replicat
Use FORCEUSESYSKEY to force Replicat to use the mapped SYSKEY when executing update and delete operations on entry-sequenced SQL tables. For relative SQL tables, the default is to include the syskey unless other KEYCOLS have been specified. When using FORCEUSESYSKEY, include SYSKEY in a COLMAP statement to ensure that the proper row is updated or deleted in the target table. NOFORCEUSESYSKEY results in SYSKEY being omitted from the WHERE clause for entry-sequenced SQL updates and deletes (an alternative path is assumed).
For inserts on relative Enscribe files, FORCEUSESYSKEY forces the SYSKEY to be -1 (insert at end of file). If FORCEUSESYSKEY is used on the target file, the source file SYSKEY cannot be used for update and delete operations since there is no assurance the two keys will match. In this case a unique alternate key or index should be used for updates and deletes.
This parameter has no effect on entry-sequenced or key-sequenced Enscribe files, or on standard key-sequenced or cluster key SQL tables.
Do not use when you are using the MAP KEYCOLS option.
Default: NOFORCEUSESYSKEY
FORCEUSESYSKEY | NOFORCEUSESYSKEY
In this example the SYSKEY from the source table is stored in the SAVE_SYSKEY column when the record is an insert.
GETINSERTS
IGNOREUPDATES
IGNOREDELETES
MAP $DATA1.DAT.SOURCE, TARGET $DATA2.DAT.TARGET,
COLMAP (AMOUNT = AMT, SAVE_SYSKEY = SYSKEY);
Then the saved SYSKEY from the source identifies the target row when the record is an update or delete.
FORCEUSESYSKEY
IGNOREINSERTS
GETUPDATES
GETDELETES
MAP $DATA1.DAT.SOURCE, TARGET $DATA2.DAT.TARGET,
COLMAP (SYSKEY = SAVE_SYSKEY, AMOUNT = AMT);
NOFORCEUSESYSKEY
Extract
Use FORMATASCII to format subsequent output in external ASCII format. FORMATASCII applies to all FILE or TABLE specifications that follow the FORMATASCII entry, and can be turned off with the NOFORMATASCII parameter.
Using FORMATASCII, you can format output compatible with the majority of popular database load utilities and other tools.
You can specify the parameter with or without options. If you don't include options, records are output as follows:
An operation type character, I, D, U, V (insert, delete, update, compressed update).
A before or after-image indicator, B or A.
The file or table name.
If the NAMES option (the default) is selected: field name, field value, field name, field value...
If the NONAMES option is selected: field value, field value... (NAMES format is always used for compressed records).
Between each of the above items, a field delimiter (which defaults to tab).
A new line character (line feed).
Before the beginning of the first record output for each transaction, a begin transaction record will appear. This record includes the following information:
The begin transaction indicator, B.
The timestamp at which the record committed.
The sequence number of the audit trail in which the commit was found.
The relative byte address (RBA) of the commit record within the audit trail.
Delimiters following each of the above fields, followed by a new line character.
After the last record in each transaction is output, a commit record will appear. This record contains C, the delimiter and a new line character.
Every record in a transaction, and no other records, are contained between the begin and commit indicators. Each combination of commit timestamp and RBA is unique and increases as records are output.
You can customize the output format with options described in the syntax.
Do not use this format if the data is to be processed by Replicat. Replicat expects the default format.
NOFORMATASCII | FORMATASCII [, option, ...]
option is one of the following.
BCP
Formats the output for compatibility with SQL Server's BCP (Bulk Copy Program) high-speed load utility.
Options that can be used with BCP are:
BCPRECORDLIMITER
The option changes the line delimiter from only line feed ('\n') to line feed and carriage return ('\r\n'). Use when sending data to Windows or UNIX SQL in bulk copy format. NonStop will not receive data in this format.
TIMESTAMP {0 | 3 | 6}
Valid when sending data to Windows or UNIX SQL in bulk copy format. The numeric options set the fractional part of the datetime data type:
0 — truncate the fractional portion of the timestamp
3 — include three digits in the fractional portion
6 — include six digits in the fractional portion
For example, the datetime 2013-12-11 18:26:06:35.123456 would be output as 2013-12-11 18:26:06:35 with the 0 option, 2013-12-11 18:26:06:35.123 with the 3 option, and 2013-12-11 18:26:06:35.123456 with 6.
The default format for the datetime data type is 0, truncate the fractional portion.
Note:
FORMATLOCAL must be used with datetime fractional options 3 or 6 for proper processing by the Collector on a Windows or Linux/UNIX system.
COLHDRS
Outputs the table's column names before the data. COLHDRS takes effect only when extracting directly from the table (rather than the TMF audit trails).
{DATE | TIME | TS}
Specifies one of the following:
DATE — date (year to day),
TIME — time (year to second) before record data,
TS — transaction timestamp (year to fraction).
DELIMITER delimiter
An alternative field delimiter (the default is tab). Use the word TAB to delimit with tabs, otherwise use a single character enclosed in single quotes (for example, '/').
EXTRACOLS number_of_columns
Includes placeholders for additional columns at the end of each record. Use this when a target table has more columns than the source.
FILE
Includes just the file name portion of the file or table (default is the fully qualified file name).
NAMES | NONAMES
Includes or excludes column names from the output. For compressed records, column names are included unless you also specify PLACEHOLDERS.
NOHDRFIELDS header_option
Suppresses output of transaction information, the operation type character, the before or after-image indicator, and the file or table name.
You can customize header information by including a header_option as follows:
IND includes the before or after indicator
OP includes the operation type character
WHOLEFILE includes the fully qualified file name
FILE includes the file name portion of the file or table (as specified above)
NOQUOTE
Excludes quotation marks from character-type data. The default is to use quotation marks.
NOSYSKEY
Omits the record SYSKEY (relative or entry key) from the output, if one exists.
NOTRANSTMTS
Excludes transaction information.
NULLISSPACE
Outputs NULL fields as empty fields. The default is to output null fields as the word NULL.
PLACEHOLDERS
Outputs a placeholder for missing fields or columns. For example, if the second and fourth columns are missing in a four column table, the data would appear similar to:
'ABC',,123,,
SQLLOADER
Generates a file compatible with the Oracle SQL*Loader high-speed data load utility. SQLLOADER produces a fixed-length, ASCII-formatted file. Use this option only when one table's data is written to the Oracle GoldenGate trail (usually in the initial-load, SOURCEISFILE case).
The following is a sample table and description. CUSTNAME is the primary key.
$DATA1.TEST.CUSTOMER:
CUSTNAME CHAR(10)
LOCATION CHAR(10)
BALANCE INTEGER
The transaction on this table is:
BEGIN WORK;
INSERT INTO CUSTOMER VALUES ("Eric", "San Fran", 550);
UPDATE CUSTOMER SET BALANCE = 100 WHERE CUSTNAME = "Eric";
COMMIT WORK;
Entering FORMATASCII produces:
B,2010-02-17:14:09:46.421335,8,1873474,
I,A,\GGS.$DATA1.TEST.CUSTOMER,CUSTNAME,'Eric',LOCATION,'San Fran',BALANCE,550,
V,A,\GGS.$DATA1.TEST.CUSTOMER,CUSTNAME,'Eric',BALANCE,100,
C,
Entering FORMATASCII, NONAMES, DELIMITER '|', FILE produces:
B|2010-02-17:14:09:46.421335|8|1873474|
I|A|CUSTOMER|'Eric'|'San Fran'|550|
V|A|CUSTOMER|CUSTNAME|'Eric'|BALANCE|100|
C|
The last record includes column names because the record is a compressed update.
Entering FORMATASCII, NOHDRFIELDS, FILE, OP, TS, NONAMES, NOQUOTE produces:
I,CUSTOMER,2010-02-17:14:09:46.421335,Eric,San Fran,550,
V,CUSTOMER,2010-02-17:14:09:46.421335,Eric,,100,
The absence of the second field in the update record is indicated by two consecutive commas. Ordering of header fields is predetermined and is not affected by the ordering of options.
The ampersand (&) in the parameter entry continues the parameter to the next line in the parameter file.
Extract
Use FORMATLOCAL to determine whether formatting of extracted data occurs on the source or the target system. When data is output on NonStop, formatting is always local.
When formatting remotely, database definitions need to be exported from NonStop to the remote system. Retrieving this information is automatic when formatting is local, eliminating the requirement to keep remote definitions synchronized with the most current definitions.
Default: Format data remotely (offers far better performance)
FORMATLOCAL
Extract
Use FORMATSQL to format subsequent output records in external SQL DML format. This is the SQL needed to re-create the operation, that is, an INSERT, UPDATE, or DELETE statement. You can apply this format to both SQL and Enscribe records.
FORMATSQL applies to all Oracle GoldenGate files specified below the FORMATSQL entry. Turn off FORMATSQL with the NOFORMATSQL parameter.
The output contains the following transaction-related information before the first record output for each transaction:
the begin transaction indicator, B
the timestamp at which the record committed
the sequence number of the audit trail in which the commit was found
the relative byte address (RBA) of the commit record within the audit trail
commas following each of the above fields
a new line character
After the last record in each transaction is output, a commit record will appear. This record contains C, the delimiter and a new line character.
Every record in a transaction, and no other records, are contained between the begin and commit indicators. Each combination of commit timestamp and RBA is unique and increases as records are output.
Do not use this format if the data will be processed by Replicat. Replicat expects the default format.
NOFORMATSQL | FORMATSQL {ORACLE | NONAMES | FILE}
ORACLE
Records are formatted for compatibility with Oracle databases. Primarily this means that date and time fields are converted to a format suitable to SQL*Plus (for example, TO_DATE('2010-05-01','YYYY-MM-DD')).
NONAMES
Omits column names when all columns are present in insert records to conserve space.
FILE
Outputs only the file portion of the NonStop file name in the SQL statement.
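For example, the following entry formats subsequent output as SQL statements compatible with Oracle databases:

FORMATSQL ORACLE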
Extract
Use the FORMATXML parameter to output data in XML format, instead of the default Oracle GoldenGate canonical format. A FORMATXML statement affects all extract files or trails that are defined after it.
Note:
Do not use FORMATXML if the data will be processed by the Replicat process. Replicat expects the default canonical format. Do not use FORMATXML if FORMATASCII or FORMATSQL is being used.
Default: None
FORMATXML [option] [, ...]
option is one of the following.
INLINEPROPERTIES | NOINLINEPROPERTIES
Controls whether properties are included within the XML tag or written separately. INLINEPROPERTIES is the default.
ONERECPERTRANS
Causes Extract to create one FORMATXML transaction per record if the SOURCEISFILE parameter is present. The default is to create one FORMATXML transaction per file.
TRANS | NOTRANS
Controls whether transaction boundaries and commit timestamps should be included in the XML output. TRANS is the default.
FORMATXML NOINLINEPROPERTIES, NOTRANS
Extract, Replicat
Use FUNCTIONSTACKSIZE to control the size of the memory stack that is used for processing Oracle GoldenGate functions. You should not need to use this parameter unless Oracle GoldenGate returns a message indicating that the size of the stack should be increased.
The memory stack holds arguments supplied to and from an Oracle GoldenGate function. When a very large number of functions or arguments are used, the size of the stack may need to be increased.
FUNCTIONSTACKSIZE is a global parameter. It affects all clauses in a parameter file.
The default without FUNCTIONSTACKSIZE is 200 arguments, which optimizes the performance of Oracle GoldenGate and its usage of system memory. Increasing this parameter can adversely affect the performance and use of system memory.
FUNCTIONSTACKSIZE must appear in the parameter file before any parameter clauses are listed.
200
FUNCTIONSTACKSIZE stack_size
stack_size: A value between 50 and 5000 that denotes the number of arguments to allow in a parameter clause.
FUNCTIONSTACKSIZE 300
Extract
Use GETALTKEYS or IGNOREALTKEYS to tell Extract to produce or not produce file create records for alternate key files after it outputs the file create record for the primary file.
GETALTKEYS causes Extract to create primary and alternate key files.
IGNOREALTKEYS causes Extract to create primary key files.
GETALTKEYS
GETALTKEYS | IGNOREALTKEYS
Extract
Use GETAPPLOPS or IGNOREAPPLOPS to include or exclude records produced by any program except Replicat in a particular Extract file. GETAPPLOPS remains in effect until IGNOREAPPLOPS is entered. Once entered, IGNOREAPPLOPS is in effect until GETAPPLOPS is entered.
IGNOREAPPLOPS provides a method for isolating database operations performed by Replicat against audited files from other work.
See "GETREPLICATES | IGNOREREPLICATES" for information on how to use GETREPLICATES and IGNOREREPLICATES to include or exclude records created by Replicat.
GETAPPLOPS
GETAPPLOPS | IGNOREAPPLOPS
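As a sketch of the isolation described above (file names are illustrative), Replicat-generated operations and application-generated operations can be directed to separate Extract files:

IGNOREAPPLOPS
GETREPLICATES
EXTFILE $DATA1.EXTDAT.REPOPS
TABLE $DATA2.FINANCE.ACCOUNTS;
GETAPPLOPS
IGNOREREPLICATES
EXTFILE $DATA1.EXTDAT.APPOPS
TABLE $DATA2.FINANCE.ACCOUNTS;

The first EXTFILE receives only operations applied by Replicat; the second receives only application-generated operations.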
Extract
Use IGNOREAUXTRAILS to tell Extract to bypass auxiliary TMF audit trails when capturing database changes. If the relevant database changes are recorded in the master audit trail, this can have significant performance advantages.
IGNOREAUXTRAILS can specify particular trails to ignore, rather than all the trails.
GETAUXTRAILS
IGNOREAUXTRAILS aux_trail_num
aux_trail_num: A value between 0 and 15.
The first example ignores all auxiliary TMF audit trails, while the second example ignores auxiliary trails AUX02 and AUX03.
IGNOREAUXTRAILS
IGNOREAUXTRAILS 2, 3
Extract
Use GETCOMPS or IGNORECOMPS to include or exclude compressed update records for the specified EXTFILE. GETCOMPS remains in effect until IGNORECOMPS is entered. Once entered, IGNORECOMPS remains in effect until GETCOMPS is entered.
GETCOMPS
GETCOMPS | IGNORECOMPS
This example includes compressed update records in EXTRACT1 and EXTRACT3, but excludes compressed update records in EXTRACT2.
GETCOMPS
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNORECOMPS
EXTFILE $DATA1.EXTDAT.EXTRACT2
TABLE $DATA2.FINANCE.ACCOUNTS;
GETCOMPS
EXTFILE $DATA1.EXTDAT.EXTRACT3
TABLE $DATA2.FINANCE.ACCOUNTS;
Extract, Replicat
Use GETCREATES or IGNORECREATES to include or exclude file create records from the specified EXTFILE. GETCREATES remains in effect until IGNORECREATES is entered. Once entered, IGNORECREATES remains in effect until GETCREATES is entered.
IGNORECREATES
GETCREATES | IGNORECREATES
This example separates inserts, updates and creates into three files, EXTRACT1, EXTRACT2, and EXTRACT3.
IGNOREUPDATES
IGNORECREATES
GETINSERTS
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNOREINSERTS
GETUPDATES
IGNORECREATES
EXTFILE $DATA1.EXTDAT.EXTRACT2
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNOREINSERTS
IGNOREUPDATES
GETCREATES
EXTFILE $DATA1.EXTDAT.EXTRACT3
TABLE $DATA2.FINANCE.ACCOUNTS;
Extract, Replicat
Use GETDEFAULTS to reset Extract parameters to their original default settings. This is useful if you have changed multiple Extract parameters, and now wish to reverse those changes.
Using GETDEFAULTS sets the following parameters to ON:
GETINSERTS
GETUPDATES
GETDELETES
GETUPDATEAFTERS
GETCOMPS
GETPURGEDATAS
GETALTKEYS
Using GETDEFAULTS sets the following parameters to OFF:
GETUPDATEBEFORES
GETNETCHANGES
GETPURGES
GETCREATES
GETALTERS
GETRENAMES
GETDEFAULTS
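For example (file names are illustrative), GETDEFAULTS can restore the default settings for a later Extract file after several toggles were changed for an earlier one:

IGNOREINSERTS
GETUPDATEBEFORES
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;
GETDEFAULTS
EXTFILE $DATA1.EXTDAT.EXTRACT2
TABLE $DATA2.FINANCE.ACCOUNTS;

EXTRACT2 is written with the original defaults, without listing each parameter individually.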
Extract, Replicat
Use GETDELETES or IGNOREDELETES to include or exclude delete records from the specified data source. GETDELETES is in effect until IGNOREDELETES is entered. Once entered, IGNOREDELETES is in effect until GETDELETES is entered.
GETDELETES
GETDELETES | IGNOREDELETES
This example separates inserts, updates and deletes into three files.
IGNOREUPDATES
IGNOREDELETES
GETINSERTS
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNOREINSERTS
GETUPDATES
IGNOREDELETES
EXTFILE $DATA1.EXTDAT.EXTRACT2
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNOREINSERTS
IGNOREUPDATES
GETDELETES
EXTFILE $DATA1.EXTDAT.EXTRACT3
TABLE $DATA2.FINANCE.ACCOUNTS;
Extract, Replicat
Use GETENV to retrieve environment variables that were set with the SETENV parameter. The retrieved results can be used as input parameters for stored procedures or macros. The results are printed to the screen and to the Extract report file. Use one GETENV statement per variable to be retrieved. See "GETENV" for more information on the GETENV variables.
None
GETENV (environment_variable)
environment_variable: The name of the environment variable.
GETENV TRAIL1
Extract, Replicat
Use GETFILEOPS or IGNOREFILEOPS to include or exclude file-level operations in the current EXTFILE. IGNOREFILEOPS remains in effect until GETFILEOPS is entered. Once entered, GETFILEOPS remains in effect until IGNOREFILEOPS is entered.
File-level operations that are included or excluded are CREATE, ALTER, PURGE, PURGEDATA, RENAME and some SETMODE, CONTROL and file label change operations. You must specify GETFILEOPS to extract bulk I/O records. When extracting from the TMF audit trail, only CREATE and PURGE records are available. For CREATE operations on audited files, Extract needs the file on disk to extract the record.
To implement GETFILEOPS or IGNOREFILEOPS, specify the desired parameter on one line, followed by the operation to include or exclude as shown in the syntax.
Using GETFILEOPS turns on the following values:
GETPURGES
GETPURGEDATAS
GETRENAMES
GETALTERS
GETCREATES
GETCHANGELABELS
GETSETMODES
GETUNSTRUCTOPS
GETCLOSES
IGNOREFILEOPS
GETFILEOPS operation | IGNOREFILEOPS operation
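As a sketch (file names are illustrative), file-level operations can be captured for one Extract file and excluded from another:

GETFILEOPS
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;
IGNOREFILEOPS
EXTFILE $DATA1.EXTDAT.EXTRACT2
TABLE $DATA2.FINANCE.ACCOUNTS;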
Extract, Replicat
Use GETINSERTS or IGNOREINSERTS to include or exclude insert records in the current data source. GETINSERTS remains in effect until IGNOREINSERTS is entered. Once entered, IGNOREINSERTS remains in effect until GETINSERTS is entered.
GETINSERTS
GETINSERTS | IGNOREINSERTS
Extract
Use GETMARKERS or IGNOREMARKERS to include or exclude marker records from the specified EXTFILE.
Marker records are special audit records created by users with the GGSCI command ADD MARKER. You can use marker records to identify application-specific critical points in Extract and Replicat processing.
The most common use of a marker is to identify a point at which all data has been replicated from a source to a target database. To do this, an operator brings down application activity against the source database, then adds a marker. After seeing a message that the marker was processed by Replicat, you can safely assume all records from the source database have been processed. At this point, for example, a hot site switch to the backup database could safely occur.
GETMARKERS
GETMARKERS | IGNOREMARKERS
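The hot-site switch procedure described above can be sketched as a GGSCI session (the marker text is hypothetical; see the ADD MARKER command for exact syntax). After quiescing application activity against the source database, enter:

GGSCI> ADD MARKER SWITCH_TO_BACKUP

After Replicat reports that the marker was processed, all preceding source records have been applied, and the switch to the backup database can proceed.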
Extract
Use GETNETCHANGES to retrieve fields that were changed in a particular update record, plus the primary key. Use IGNORENETCHANGES to output all fields, changed or not.
GETNETCHANGES remains in effect until IGNORENETCHANGES is entered. Once entered, IGNORENETCHANGES remains in effect until GETNETCHANGES is entered.
GETNETCHANGES guarantees that only records with actual changes are retrieved. Compressed update records do not guarantee this, because some update records do not actually change any field values (for example, UPDATE TABX SET PRICE = 10 WHERE PRICE = 10); with GETNETCHANGES, such records are not returned. No-op updates like this are common in applications that use a generic update of each column to accomplish all update tasks, regardless of what changed.
GETNETCHANGES retrieves and compares both before and after-images. This can result in a system resource usage increase.
IGNORENETCHANGES
GETNETCHANGES | IGNORENETCHANGES
Replicat
GETNETWORKALTFILENAMES lets the file system qualify the alternate key file names with the local node name. Use IGNORENETWORKALTFILENAMES to tell the file system not to qualify the alternate key file names with the local node name. You can add this parameter as part of a MAP statement or toggle it around MAP statements.
GETNETWORKALTFILENAMES
GETNETWORKALTFILENAMES | IGNORENETWORKALTFILENAMES
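Because the parameter can be toggled around MAP statements, a hypothetical Replicat parameter file might qualify alternate key file names with the local node name for only the second map:

IGNORENETWORKALTFILENAMES
MAP $DATA1.DAT.SOURCE1, TARGET $DATA2.DAT.TARGET1;
GETNETWORKALTFILENAMES
MAP $DATA1.DAT.SOURCE2, TARGET $DATA2.DAT.TARGET2;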
Extract, Replicat
Use GETNEWCOLUMNS to retrieve records from the TMF audit trails that describe new SQL column changes. GETNEWCOLUMNS enables two different activities:
Updates table definitions in Extract and downstream in Replicat, without bringing either process down first (NonStop replication only). This lets you replicate columns added to a table after Oracle GoldenGate startup.
Downstream in Replicat (NonStop only), replicates the statement that created the column.
Apply GETNEWCOLUMNS to files or Oracle GoldenGate trails individually. This means that you can withhold new column records from specific Extract files. To ensure that the column exists in the target database before replicating into it, it is recommended that GETNEWCOLUMNS apply to all files in the Oracle GoldenGate trail.
GETNEWCOLUMNS
GETNEWCOLUMNS | IGNORENEWCOLUMNS
Extract, Replicat
Use GETPARTONLYPURGEDATAS or IGNOREPARTONLYPURGEDATAS to include or exclude PARTONLY PURGEDATA operations. From the point it is entered in the parameter file, the parameter remains in effect until its opposite is encountered. If GETPARTONLYPURGEDATAS is entered, for example, it remains in effect until an IGNOREPARTONLYPURGEDATAS is encountered.
GETPARTONLYPURGEDATAS is required to capture PURGEDATA operations on Enhanced Key-Sequenced Enscribe files (files that have more than 16 partitions).
Note:
If a file has more than 16 partitions, PURGEDATA operations can only be captured on a per-partition basis because the primary partition will always be empty.
Before using GETPARTONLYPURGEDATAS carefully consider the following restrictions.
GETPARTONLYPURGEDATAS applies only to like-to-like replication and cannot be used in heterogeneous configurations.
GETPARTONLYPURGEDATAS is not supported for bi-directional configurations.
When you use GETPARTONLYPURGEDATAS, the key range of the partitions on the target must match the key range of those on the source. The PARTONLY PURGEDATA operation will be rejected if the source partition file is out of range for the target file of the same number.
GETPARTONLYPURGEDATAS is not supported for non-audited files.
GETPARTONLYPURGEDATAS is not supported for SQL tables.
If you want an Extract pump to pass on PARTONLY PURGEDATA operations, you must explicitly specify GETPARTONLYPURGEDATAS.
GETPARTONLYPURGEDATAS has no effect on GETPURGEDATAS, and neither GETDEFAULTS nor GETFILEOPS have an effect on GETPARTONLYPURGEDATAS.
The following parameters turn off (disable) GETPARTONLYPURGEDATAS:
IGNOREFILEOPS
IGNOREPURGEDATAS
Note:
If PARTONLY PURGEDATA operations are received by Replicats older than version 9.5 or by open systems Replicats, no attempt is made to process the PARTONLY PURGEDATA operations.
IGNOREPARTONLYPURGEDATAS
GETPARTONLYPURGEDATAS | IGNOREPARTONLYPURGEDATAS
Extract, Replicat
Includes or excludes file purge operations.
IGNOREPURGES
GETPURGES | IGNOREPURGES
Extract, Replicat
Use GETPURGEDATAS or IGNOREPURGEDATAS to control whether Extract writes purge data operations to a trail.
Oracle GoldenGate supports PURGEDATA for Enscribe files and complete-table PURGEDATA for SQL/MP tables.
GETPURGEDATAS is not supported for audited files in a bidirectional configuration. In this case IGNOREPURGEDATAS must be added to the Extract parameters.
Note:
GETPURGEDATAS captures full file PURGEDATA operations. If the primary partition is empty, however, GETPURGEDATAS will not capture PURGEDATA operations for the secondary partitions. In this case, the PURGEDATA for the secondary partitions will be picked up only by GETPARTONLYPURGEDATAS.
GETPURGEDATAS
GETPURGEDATAS | IGNOREPURGEDATAS
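For example, per the bidirectional restriction above, an Extract for audited files in a bidirectional configuration would exclude PURGEDATA capture (file names are illustrative):

IGNOREPURGEDATAS
EXTFILE $DATA1.EXTDAT.EXTRACT1
TABLE $DATA2.FINANCE.ACCOUNTS;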
Extract, Replicat
Includes or excludes file rename records.
IGNORERENAMES is the default unless GETFILEOPS is used.
GETRENAMES | IGNORERENAMES
Extract
Use GETREPLICATES or IGNOREREPLICATES to include or exclude records created by Replicat. GETREPLICATES remains in effect until IGNOREREPLICATES is entered. Once entered, IGNOREREPLICATES remains in effect until GETREPLICATES is entered.
Use IGNOREREPLICATES when two systems deliver different data to each other. For example, assume A replicates to B and B replicates to A. When A sends an insert record to B, Extract detects the replicated record and extracts it from B. In turn, Replicat replicates it back to A, which produces a duplicate-record error. Specify IGNOREREPLICATES to discard the replicated record.
GETREPLICATES can cascade replication through a chain of files. For example, if A replicates to B, and B replicates to C, IGNOREREPLICATES would prevent data from A being replicated at C. GETREPLICATES enables this to happen.
Even with GETREPLICATES set, you can avoid delivering back to the source by using WHERE or FILTER clauses in Extract or Replicat.
See "GETAPPLOPS | IGNOREAPPLOPS" for information on using GETAPPLOPS and IGNOREAPPLOPS to include or exclude records not created by Replicat.
IGNOREREPLICATES
GETREPLICATES | IGNOREREPLICATES
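A minimal sketch of the cascade case above, for the Extract running on system B (trail, file, and table names are illustrative):

GETREPLICATES
GETAPPLOPS
EXTFILE $DATA1.EXTDAT.TOSYSC
TABLE $DATA2.FINANCE.ACCOUNTS;

With GETREPLICATES set, operations that Replicat applied on B on behalf of A are captured and passed on to C along with B's own application activity.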
Extract
Use GETROLLBACKS or IGNOREROLLBACKS to include or exclude records created by TMF rollback processes.
Use GETROLLBACKS only when performing change synchronization while extracting initial-load data. If you perform initial load while the source database remains up, GETROLLBACKS guarantees that aborted transactions occurring during the initial extraction phase are rolled back on the target database. This is because initial extraction uses browse access against the source table, which can result in retrieval of records that are later rolled back.
After initial load is complete, issue the SEND EXTRACT group_name IGNOREROLLBACKS command from GGSCI.
GETROLLBACKS is only required when the initial-load method uses a bulk method, such as BACKUP, to extract the data or when using Oracle GoldenGate with the FASTUNLOADSHARED parameter.
IGNOREROLLBACKS
GETROLLBACKS | IGNOREROLLBACKS
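The sequence described above might look like the following, assuming a hypothetical Extract group named EXTORD. GETROLLBACKS is placed in the Extract parameter file for the initial-load phase:

GETROLLBACKS

After the initial load completes, the toggle is reversed from GGSCI:

GGSCI> SEND EXTRACT EXTORD IGNOREROLLBACKS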
Extract, Replicat
Use GETUPDATEAFTERS or IGNOREUPDATEAFTERS to include or exclude after-images of update records from a specified file in an Oracle GoldenGate trail. After-images contain record details after an update (as opposed to before-images). GETUPDATEAFTERS remains in effect until IGNOREUPDATEAFTERS is entered. Once entered, IGNOREUPDATEAFTERS remains in effect until GETUPDATEAFTERS is entered.
GETUPDATEAFTERS
GETUPDATEAFTERS | IGNOREUPDATEAFTERS
Extract, Replicat
Use GETUPDATEBEFORES or IGNOREUPDATEBEFORES to include or exclude before-images of update records from the specified EXTFILE. Before-images contain record details before an update (as opposed to after-images). IGNOREUPDATEBEFORES remains in effect until GETUPDATEBEFORES is entered. Once entered, GETUPDATEBEFORES remains in effect until IGNOREUPDATEBEFORES is entered.
Within your user exit code, you can compare before-images with after-images to identify the net results of a transaction, rather than the final state of a record. For example, if a BALANCE field is $100 before a transaction and $120 after, comparison would show the difference of $20 occurring in that transaction.
IGNOREUPDATEBEFORES
GETUPDATEBEFORES | IGNOREUPDATEBEFORES
Extract, Replicat
Use GETUPDATES or IGNOREUPDATES to include or exclude update records from the specified data source. GETUPDATES remains in effect until IGNOREUPDATES is entered. Once entered, IGNOREUPDATES remains in effect until GETUPDATES is entered. If you specify IGNOREUPDATES, compressed updates are ignored as well.
GETUPDATES
GETUPDATES | IGNOREUPDATES
Replicat
Use GROUPTRANSOPS to group transactions when applying changes to the target database. Small transactions from the source database are grouped into a single large transaction on the target side, which can result in significant performance benefits. In general, GROUPTRANSOPS speeds up processing for smaller transaction sizes.
With GROUPTRANSOPS, the integrity of the original transactions is preserved at the target. GROUPTRANSOPS also reduces checkpoint I/O: a checkpoint is written and flushed to disk after each transaction completes, so grouping many small transactions into one larger transaction lowers the frequency of checkpoints.
1000
GROUPTRANSOPS min_op_count
min_op_count: The minimum number of operations to be applied in a transaction. Avoid setting min_op_count to an arbitrarily high number, as tests show that the benefits diminish after about 25-100 operations. Transactions commit before min_op_count when appropriate, such as when end-of-file is reached on the trail file.
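For example, a setting within the range that testing shows to be beneficial (the value is illustrative):

GROUPTRANSOPS 50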
Replicat
Use HANDLECOLLISIONS to overlay duplicate records into the target database and ignore missing record errors.
If your installation collects data from several sources to update the same record on the same target, data can be lost. Ideally, applications should ensure that each source file manages a specific range of keys. If this is not the case, there can be conflicts on the target. For example, if two source tables receive an insert with the same key, both operations cannot be applied at the target because a duplicate error results.
Replicat supplies the HANDLECOLLISIONS parameter to overlay duplicate records with the latest version of the record, even if the key already exists. HANDLECOLLISIONS ignores missing update and delete conditions.
HANDLECOLLISIONS is especially useful during the initial load of a target table while the source table remains online. In this phase, the following steps occur:
Processing initial data to the target. When the source database remains online, this step will not read everything. Step 1 begins at time 0.
Extracting changes that occurred since the beginning of Step 1. Step 2 begins at time 0.
After Step 1 completes (time 1), processing of changes extracted in Step 2 (finishes at time 2).
In Step 3, duplicate errors are possible because both Step 1 and Step 2 may extract the same insert records (which occurred since time 0). In such cases, you can use the change record. While you could ignore the duplicate, overlaying the change is safer from an operational standpoint.
There may be instances of missing records when an update or delete operation is attempted on the target table. This is considered normal, since the following chain of events may have occurred:
Update of record A in source table.
Delete of record A in source table.
Extracting update A by Extract process (Step 2).
Initial-load extraction (Step 1) sees no trace of A; therefore A is never inserted into the target by initial-load processing.
Replication of update A is attempted in Step 3, but the record is missing. The delete of A results in a missing record error as well.
When all changes have been extracted and applied (time 2), HANDLECOLLISIONS is no longer required. Turn off HANDLECOLLISIONS while Replicat remains online with the GGSCI SEND REPLICAT NOHANDLECOLLISIONS command.
HANDLECOLLISIONS can also be turned off by including NOHANDLECOLLISIONS in the parameter file.
NOHANDLECOLLISIONS
HANDLECOLLISIONS | NOHANDLECOLLISIONS
The following turns HANDLECOLLISIONS on for TARGET1 and off for TARGET2.
HANDLECOLLISIONS
MAP $DATA1.DAT.SOURCE1, TARGET $DATA2.DAT.TARGET1;
NOHANDLECOLLISIONS
MAP $DATA1.DAT.SOURCE2, TARGET $DATA2.DAT.TARGET2;
GLOBALS, Logger, Extract
Use HEARTBEAT to write a small record to the trail to make sure that Extract or Logger can send data to a remote system. For TMF audited processing, HEARTBEAT can be set in the GLOBALS or the Extract parameter file. For non-audited processing it can be set in the Logger parameter file.
In the Extract or GLOBALS parameter file, HEARTBEAT can be configured to send this small record at the interval you choose, up to 120 seconds. In the Logger file, the record is sent every 60 seconds. The Logger parameter does not allow you to configure HEARTBEAT at a frequency you choose.
The default is to not issue HEARTBEAT records.
For Logger:
HEARTBEAT
For GLOBALS and Extract:
HEARTBEAT seconds
seconds: The time interval, in seconds, at which to write HEARTBEAT records. The value set in the Extract parameter file overrides a setting in GLOBALS.
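For example (the interval is illustrative), in a GLOBALS or Extract parameter file:

HEARTBEAT 60

In the Logger parameter file the parameter takes no interval; the record is always sent every 60 seconds:

HEARTBEAT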
GLOBALS
Use HOMETERM to change the default Oracle GoldenGate system home terminal to another terminal.
The terminal residing on the system on which Oracle GoldenGate is installed.
HOMETERM home_terminal_name
home_terminal_name: Specify a terminal, such as $VHS.
GLOBALS
Use HOMETERMMESSAGES to suppress or direct Oracle GoldenGate messages to the home terminal. By default, Oracle GoldenGate processes write messages to EMS, but not to the home terminal.
This parameter can be overridden in other parameter files by specifying a value for HOMETERMMESSAGES that is different from the GLOBALS value.
NO
HOMETERMMESSAGES {YES | NO}
YES | NO: Specify YES to write to the home terminal and EMS. The default is NO: suppress writing to the home terminal and write only to EMS.
GLOBALS
The HOST parameter does the following:
Identifies the remote NonStop system to communicate with GGSCI.
Helps Oracle GoldenGate resolve file names when delivering data over TCP/IP. For installations without Expand connections, the HOST parameter is required for mapping node numbers to node names. If an Expand link exists between systems, use the NODENUM parameter instead of HOST.
Identifies the NonStop volume and subvolume where the Oracle GoldenGate environment can be found for remote nodes. When the NETWORKCHECKPOINTS parameter is set, Replicat uses this information to identify the local checkpoint file to update for replication to a file partition. This then allows Replicat to recognize replicated data in a bi-directional system.
HOST system_name [, GGSSUBVOL subvolume] [, NODENUM node_number]
system_name: Identifies the host system name.
GGSSUBVOL subvolume: Identifies the subvolume. Required with NODENUM.
NODENUM node_number: Identifies the NonStop node.
The first example below identifies an Oracle GoldenGate installation on remote node \SF as subvolume $DATA3.GGS. The second identifies node number 109 as NonStop system \NY.
HOST \SF, GGSSUBVOL $DATA3.GGS
HOST \NY, NODENUM 109
This example identifies multiple nodes that contain data and an Oracle GoldenGate installation.
HOST \SF, NODENUM 112, GGSSUBVOL $DATA3.GGS
HOST \BACK, NODENUM 113, GGSSUBVOL $VOL03.GGS
HOST \LA, NODENUM 114, GGSSUBVOL $DATA6.GGS
HOST \NY, NODENUM 115, GGSSUBVOL $PROD1.GGS
GLOBALS
Use IGNOREPARAMERROR to prevent programs from terminating abnormally (abending) when an unrecognized or bad GLOBALS entry is encountered.
This can be useful when different versions of Oracle GoldenGate modules are used in the same installation, and a new GLOBALS parameter has been added since the older module was created.
NO
IGNOREPARAMERROR {YES | NO}
GLOBALS
Use IGNORETMFDUMPS to prevent GGSCI and the Extract and Replicat processes from capturing information about TMF disk and tape dumps. Specifying IGNORETMFDUMPS YES avoids the time-consuming process of updating internal tables containing TMF catalog information. Updating catalog information can also cause SPI error 291 on systems that are processing near their limit.
NO
IGNORETMFDUMPS {YES | NO}
Extract, Replicat
Use INCLUDE to include a macro library in a parameter file.
None (no file included)
INCLUDE file_name
file_nameThe full path to the library file.
The following example includes macro library DATELIB.
INCLUDE $DATA4.GGSMACR.DATELIB
Logger
Use INCLUDESOURCEAPPINFO to capture the name and process ID (PID) of the source application program that alters the files being logged for non-audited Enscribe. The program name and process name are added to the trail as token elements PROGRAMNAME and PROCESSNAME.
The information can be retrieved by using @GETENV. For example @GETENV ("RECORD", "PROCESSNAME") retrieves the name of the process.
EXCLUDESOURCEAPPINFO
INCLUDESOURCEAPPINFO | EXCLUDESOURCEAPPINFO
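A sketch of how the captured tokens might be used downstream (the target column names are hypothetical). In the Logger parameter file:

INCLUDESOURCEAPPINFO

Then, in a Replicat MAP statement:

MAP $DATA1.DAT.SOURCE1, TARGET $DATA2.DAT.TARGET1,
COLMAP (USEDEFAULTS,
SRC_PROGRAM = @GETENV ("RECORD", "PROGRAMNAME"),
SRC_PROCESS = @GETENV ("RECORD", "PROCESSNAME"));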
Replicat
Use INSERTALLRECORDS to apply all record types as inserts in the target. Normally, Replicat applies inserts, updates and deletes to the target database as they occur on the original database. Consider the following sequence of transactions:
| Sequence | Operation | Table | ID | BALANCE |
|---|---|---|---|---|
| 1 | INSERT | CUSTOMER | DAVE | 1000 |
| 2 | UPDATE | CUSTOMER | DAVE | 900 |
| 3 | UPDATE | CUSTOMER | DAVE | 1250 |
| 4 | DELETE | CUSTOMER | DAVE | 1250 |
These operations, after replication, would leave no trace of the ID DAVE. No transaction information would be kept, only the ending balance. Therefore, we would have no knowledge that the first update reduced BALANCE by 100, or that the second update increased BALANCE by 350. Finally, we would have no idea that DAVE was ever deleted from the database, or what his ending BALANCE was.
INSERTALLRECORDS allows this information to be recorded. Instead of applying updates and deletes as they originally occurred, INSERTALLRECORDS forces Replicat to insert the information as a new record into the target table. INSERTALLRECORDS results in the storage of all images—before and after—into the target database.
Combining this information with special transaction information provides a way to create a database that contains more useful information. You can add special column values related to each transaction to the target data to make better reporting possible.
Using INSERTALLRECORDS increases the size of your target tables, so you should only enable it where complete records are required. INSERTALLRECORDS applies to all tables listed below it in the parameter file, until you turn it off again by specifying NOINSERTALLRECORDS. INSERTALLRECORDS can also be limited to a specific file or table by using it as an option under the MAP parameter.
NOINSERTALLRECORDS
INSERTALLRECORDS | NOINSERTALLRECORDS
To build a more transaction-oriented view of customers, rather than the latest state of the database, enter the following into the parameter file:
INSERTALLRECORDS
MAP =CUSTOMER, TARGET =CUSTHIST,
COLMAP (USEDEFAULTS,
TS = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
BEF_AFT = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"),
OP_TYPE = @GETENV ("GGHEADER", "OPTYPE"),
ID = ID,
BALANCE = BALANCE);
This generates the net effect of each transaction, as in the following SQL query that returns the net sum of each transaction along with the time of the transaction and the customer ID.
SELECT A.ID, A.TS, A.BALANCE - B.BALANCE
FROM CUSTHIST A, CUSTHIST B
WHERE A.ID = B.ID
AND A.TS = B.TS
AND A.OP_TYPE = 'A'
AND B.OP_TYPE = 'B';
The following example applies all record types as inserts only for the $DATA3.TARGET.HISTORD order history file.
MAP $DATA.SOURCE.CUSTORD TARGET $DATA3.TARGET.HISTORD
INSERTALLRECORDS
COLMAP (USEDEFAULTS,
TRAN_TIME = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
BEF_AFT = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"),
OP_TYPE = @GETENV ("GGHEADER", "OPTYPE"));
Replicat
Use INSERTDELETES to convert all delete records to insert operations. INSERTDELETES applies to all maps specified below it in the parameter file until NOINSERTDELETES is specified.
NOINSERTDELETES
INSERTDELETES | NOINSERTDELETES
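For example (file names are illustrative), deletes can be converted to inserts only for a history target:

INSERTDELETES
MAP $DATA1.DAT.SOURCE1, TARGET $DATA2.DAT.HIST1;
NOINSERTDELETES
MAP $DATA1.DAT.SOURCE2, TARGET $DATA2.DAT.TARGET2;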
Replicat
Use INSERTMISSINGUPDATES to insert a record when Replicat attempts to apply an update to a record that is missing from the target. This applies only to uncompressed updates, including full images fetched by Extract using FETCHCOMPS.
When NOINSERTMISSINGUPDATES is in effect, this situation causes a missing record error and the transaction may abort depending on REPERROR settings. INSERTMISSINGUPDATES applies to all maps specified below it in the parameter file until NOINSERTMISSINGUPDATES is specified.
NOINSERTMISSINGUPDATES
INSERTMISSINGUPDATES | NOINSERTMISSINGUPDATES
Replicat
Use INSERTUPDATES to convert uncompressed update records to insert operations. INSERTUPDATES applies to all maps specified below it in the parameter file until NOINSERTUPDATES is specified.
NOINSERTUPDATES
INSERTUPDATES | NOINSERTUPDATES
Manager, Extract
Under NonStop parallel TCP/IP architecture, a process can have multiple interfaces available. Use IPINTERFACE to restrict the Manager or Extract process to the interface specified by an input IP address or DNS name.
If the specified address is not associated with the TCP/IP process, Extract will ABEND with an error and STATUS MANAGER will return an error that varies depending on the address protocol that is used and installed (IPv4 or IPv6). The errors returned by STATUS MANAGER will be something like:
Manager process $EXMGR is running (IP socket not open error 4115 (Can't assign requested address, completing bind)).
Manager process $EXMGR is running (Process $ZTC0, IP (null) port 12345).
Manager process $EXMGR is running (IP socket not open Invalid function argument).
Handles the requesting IP address.
IPINTERFACE {ip_address | dns_name}
ip_address: The IP address to which Manager or Extract is to be restricted.
dns_name: The domain name to which Manager or Extract is to be restricted.
The IP address can be restricted either by using IPINTERFACE or by using the optional @ip_address with TCPIPPROCESSNAME as shown in the examples below.
The IP address can be specified by using the IPINTERFACE parameter as shown in the example below.
TCPIPPROCESSNAME $ZTC1
IPINTERFACE 192.0.2.2
Or the IP address can be specified by attaching the IP address to the TCPIPPROCESSNAME parameter as shown in the example below.
TCPIPPROCESSNAME $ZTC1@192.0.2.2
Manager
Use LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS to specify the time interval at which Extract or Replicat processing lag is reported to the event log as a critical message. For example, LAGCRITICALMINUTES 5 specifies a reporting interval of five minutes. Likewise, LAGCRITICALSECONDS 20 and LAGCRITICALHOURS 2 specify reporting intervals of 20 seconds and two hours.
When using a LAGCRITICAL parameter, specify LAGREPORT to determine how often the lag times are evaluated and reported.
LAGCRITICALSECONDS seconds | LAGCRITICALMINUTES minutes | LAGCRITICALHOURS hours
seconds: The reporting interval, in seconds, as in LAGCRITICALSECONDS 10.
minutes: The reporting interval, in minutes, as in LAGCRITICALMINUTES 5.
hours: The reporting interval, in hours, as in LAGCRITICALHOURS 1.
Manager
Use LAGINFOSECONDS, LAGINFOMINUTES, or LAGINFOHOURS to determine the point at which lag should be reported to the event log as an informational message. For example, LAGINFOSECONDS 5 specifies a reporting interval of five seconds. Likewise, LAGINFOMINUTES 2 and LAGINFOHOURS 2 specify reporting intervals of two minutes and two hours.
When using a LAGINFO parameter, specify LAGREPORT to determine how often the lag times are evaluated and reported.
LAGINFOSECONDS seconds | LAGINFOMINUTES minutes | LAGINFOHOURS hours
seconds: The reporting interval, in seconds.
minutes: The reporting interval, in minutes.
hours: The reporting interval, in hours.
Manager
LAGREPORTMINUTES and LAGREPORTHOURS determine the interval at which Manager checks lag for Extract and Replicat processing.
For example, LAGREPORTHOURS 1 reports lag every hour. If the lag is greater than the time specified by LAGCRITICAL, Manager reports the lag as critical, otherwise, it reports the lag as an informational message. Lag is not reported if the values are below the threshold set with the LAGINFO parameter.
LAGREPORTMINUTES minutes | LAGREPORTHOURS hours
minutes: The reporting interval, in minutes, as in LAGREPORTMINUTES 5.
hours: The reporting interval, in hours, as in LAGREPORTHOURS 1.
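A hypothetical Manager parameter file combining the lag parameters: lag is evaluated every 30 minutes, reported as informational above 30 seconds, and reported as critical above 5 minutes (values are illustrative):

LAGREPORTMINUTES 30
LAGINFOSECONDS 30
LAGCRITICALMINUTES 5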
Extract, Replicat
Use LAGSTATS to periodically collect, and optionally report, lag and other performance-related statistics to the report file.
LAGSTATS [, INTERVALunit num_units] [, THRESHOLDunit num_units] [, REPORT]
INTERVALunit num_units: The interval for which data is collected and optionally reported. For example, INTERVALSECONDS 30 collects statistics for the last thirty seconds; statistics related to peaks and averages are reset every thirty seconds.
unit can be one of:
MSECS (milliseconds)
CSECS (centiseconds)
SECONDS
MINUTES
HOURS
DAYS
THRESHOLDunit num_units: Reports percentages of records processed above or below a time lag threshold. For example, THRESHOLDMSECS 500 reports the percentage of records replicated on the target database within 500 milliseconds of entry into the source database. You can specify up to four thresholds.
REPORT: Statistics are output to the report file whenever a reporting interval is passed. If REPORT is not specified, you can retrieve statistics with the GGSCI SEND command, using the LAGSNAPSHOT option.
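For example, the following entry (interval and threshold values are illustrative) collects statistics every five minutes, tracks two lag thresholds, and writes the results to the report file:

LAGSTATS INTERVALMINUTES 5, THRESHOLDMSECS 500, THRESHOLDSECONDS 5, REPORT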
Valid for: Extract
Use LIMITRECS to set a maximum number of records to be extracted from a source table. This parameter has no effect unless you specify SOURCEISFILE. Use LIMITRECS to generate a sample of test data.
LIMITRECS num_recs
num_recs: The number of records to extract.
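For example, the following entries (the record count is illustrative) extract a 1000-record sample when running against a source file:

SOURCEISFILE
LIMITRECS 1000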
Valid for: Extract, Replicat
Use LIST and NOLIST to control whether the macros of a macro library are listed in the report file. Listing can be turned on and off by placing the LIST or NOLIST parameters within the parameter file or within the macro library file. Using NOLIST reduces the size of the report file.
Default: LIST
LIST | NOLIST
In the following example, NOLIST excludes the macros in the macro library from being listed in the report. Using LIST after the INCLUDE statement restores normal listing for subsequent macros.
NOLIST
INCLUDE macro_library
LIST
INCLUDE macro_library
Valid for: Logger
Use LOG to identify both a Logger process and the volume to which the Logger process writes data (a log trail). The trails hold data for Replicat and are maintained by Oracle GoldenGate management processes.
The number of LOG entries in the Logger parameter file indicates the number of Logger processes in the system and the number of log trails. Each Logger process writes data to exactly one log trail. By default, LOG process names have the format $GGLnn. Therefore, if two LOG entries are made in the parameter file, processes $GGL00 and $GGL01 perform logging for the system.
The first LOG entry must precede all other parameter entries. LOG space is allocated after the configuration is processed.
LOG logtrail_name
[, EXTENTS (primary, secondary, maximum) | MEGABYTES megabytes]
[, NUMFILES number]
[, NEWFORMAT | OLDFORMAT]
[, OWNER group_number, user_number]
[, PROCESS process_name]
[, RECSIZE bytes | BLOCKSIZE bytes]
[, SECURE "rwep"]
logtrail_name: The location of the log trail files for the current Logger process and the two-character process prefix. You can use the Oracle GoldenGate defaults, or specify a custom subvolume or process prefix.
To specify a custom subvolume or a custom process prefix, enter $vol.subvol.xx, where $vol. is the volume where Oracle GoldenGate is installed, subvol is the custom name of your subvolume, and xx is the two-character process prefix.
See "Specifying a custom subvolume, process prefix, or process name" for more information.
EXTENTS (primary, secondary, maximum): The storage dimensions of each log trail file, where primary, secondary, maximum represents the extents. You must include EXTENTS if you omit MEGABYTES.
MEGABYTES megabytes: The storage in megabytes allocated per log trail file, where megabytes represents the number of megabytes. You must include MEGABYTES if you omit EXTENTS.
NUMFILES number: The number of files to include in the log trail.
NEWFORMAT | OLDFORMAT: NEWFORMAT produces Oracle GoldenGate version 7 format trails. OLDFORMAT produces trails in the format used prior to version 7. NEWFORMAT is the default.
OWNER group_number, user_number: The Guardian group and user that owns the log trail files. The default is the group user running GGSCI.
PROCESS process_name: Lets you override the default process name, for example $GGL01. This can be useful when you add or alter an Extract or Replicat group.
For more information, see "Specifying a custom subvolume, process prefix, or process name".
RECSIZE bytes | BLOCKSIZE bytes: The record or block size in 4096-byte increments up to 56K (57344). If the entered value is not a multiple of 4096, it is rounded up to the next higher multiple.
SECURE "rwep": The Guardian security applied to each of the files. Defaults to the default security of the owner.
The following example creates and pre-allocates 10 files sized at 50 megabytes each in $DATA.GGSLOG, owned by SUPER.SUPER with security NUUU, with a prefix of LT.
LOG $DATA.GGSLOG.LT, MEGABYTES 50, NUMFILES 10, OWNER 255,255, SECURE "NUUU"
The following changes the default process name.
LOG $DATA.GGSLOG.LT MEGABYTES 500, PROCESS $GGL01, NUMFILES 10, SECURE "NNNN"
Specifying a custom subvolume, process prefix, or process name
Oracle GoldenGate defaults to the subvolume GLOGGGL, and assigns a default process prefix and name. For example, files are created in volume.GLOGGGL, and the default processes would be $GGL00 writing to log trail files AA000000, AA000001, and so on, and $GGL01 writing to BB000000, BB000001, and so on. To use the Oracle GoldenGate default, enter $vol.GLOGGGL, where $vol is the volume where Oracle GoldenGate is installed.
However, should you change or add a Logger, Extract or Replicat component, you may need to change the subvolume, process prefix, or process name to ensure the correct trails are read.
To change the subvolume or process prefix, see the logtrail_name argument.
To change the process name, see the PROCESS argument.
Valid for: GLOBALS, Manager
Use LOGFILESBEHIND or LOGFILESBEHINDINFO to report Extract and Replicat processing lags.
Manager can monitor log trails to help prevent premature recycling. Premature recycling can happen when an Extract or Replicat process is down for an extended period and unable to process data written by Logger. Whenever Logger moves to a new file in the log trail sequence, the oldest log file is recycled and the data is lost.
LOGFILESBEHIND sends a critical message when Extract or Replicat processing lags a specified number of files behind the current log trail file.
LOGFILESBEHINDINFO sends an informational message when Extract or Replicat falls the specified number of files behind the current log trail file. You can specify both LOGFILESBEHIND and LOGFILESBEHINDINFO.
The messages are generated on the host system of the process.
Default: No report when falling behind
LOGFILESBEHIND num_files | LOGFILESBEHINDINFO num_files
num_files: The number of files falling behind.
For these examples, assume that these parameters are executed on \LA, where several Replicat programs deliver data logged to \NY.$DATA1.GLOGGGL.AA.
The log process on \NY is currently logging to file AA000100. If any Replicat at \LA is processing log trail AA000097 or earlier, Manager generates an informational message at \LA. If any Replicat is processing log trail AA000092 or earlier, Manager sends a critical message to the \LA EMS console.
LOGFILESBEHINDINFO 3
LOGFILESBEHIND 8
Valid for: Logger, GLOBALS
Use LOGFILEOPENS to determine how many opens Logger keeps on the current log files.
Default: 8
Note:
The default setting is recommended unless otherwise specified by Oracle GoldenGate Support.
LOGFILEOPENS num_opens
num_opens: The number of opens. The maximum is 16.
Valid for: Logger
Use LOGGERFILENUM when you are running applications with preset file numbers.
As BASELIB opens Logger, Logger file numbers can collide with the preset file numbers the application expects to use. BASELIB deals with this by opening Logger multiple times until it gets a file number greater than those used. Then BASELIB closes the temporary opens containing file numbers that could not be used.
LOGGERFILENUM instructs BASELIB to force its open on Logger to be a file number greater than the value set for LOGGERFILENUM. For example, if the application uses preset file numbers 1 through 10 you would set LOGGERFILENUM to a number greater than 10.
This parameter is unnecessary for most installations. It is only required if the user application opens files with a specified file number.
LOGGERFILENUM file_number
file_number: The file number on which BASELIB bases its Logger file numbering sequence.
LOGGERFILENUM 30
Valid for: Logger, GLOBALS
Use LOGGERFLUSHRECS to specify the number of records for Logger to buffer before flushing to the current log trail. Logger flushes data when the LOGGERFLUSHRECS threshold is met.
Default: 8
LOGGERFLUSHRECS num_recs
num_recs: The maximum number of records to buffer before a flush occurs.
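For example, to buffer up to 100 records (an illustrative value) before each flush:

LOGGERFLUSHRECS 100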
Valid for: Logger, GLOBALS
Use LOGGERFLUSHSECS or LOGGERFLUSHCSECS to specify the number of seconds or centiseconds for Logger to buffer records before flushing to the current log trail.
Default: 0.01 second (1 centisecond)
LOGGERFLUSHSECS seconds | LOGGERFLUSHCSECS centiseconds
seconds: The maximum number of seconds to buffer records before flushing.
centiseconds: The maximum number of centiseconds to buffer records before flushing.
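For example, to flush buffered records at least once every half second (an illustrative value):

LOGGERFLUSHCSECS 50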
Valid for: Logger
Use LOGGERTIMEOUTSECS to specify how long GGSLIB waits for a response from Logger before allowing the application to resume normal operations. For example, LOGGERTIMEOUTSECS 50 specifies that GGSLIB waits up to 50 seconds.
After a timeout, GGSLIB stops logging data until either an ALTER LOGGER or START LOGGER is issued from GGSCI. The LOGGERTIMEOUTSECS parameter is global and applies to all Logger processes.
Default: 60
LOGGERTIMEOUTSECS seconds
seconds: The number of seconds for GGSLIB to wait for a response from Logger.
Valid for: GLOBALS
Use LOGGGSCICOMMANDS to control whether GGSCI user commands are logged to the LOGGGS file. You can view this file with the GGSCI command VIEW GGSEVT.
Default: YES
LOGGGSCICOMMANDS {YES | NO}
Valid for: Extract, Replicat
Use LOGON to run Extract or Replicat under an ID different from the process that starts it (normally Manager). Extract and Replicat normally inherit the user ID of the Manager process. This parameter provides password encryption options.
You must place the LOGON parameter near the top of your parameter file so that all other files are created with the correct ownership. For example, a DISCARDFILE may otherwise be created under the incorrect user ID, causing problems when Extract or Replicat tries to start up.
Either place the entire LOGON parameter statement on one line, or use an ampersand (&) continuation to split the statement into two or more lines.
If Manager is running under SUPER.SUPER authority, only the user_id portion of this parameter is necessary (the comma is still required). If password is specified, it is not echoed in the report file.
LOGON user_id, PASSWORD password [ENCRYPTKEY DEFAULT | ENCRYPTKEY keyname]
user_id: The user ID for running Extract or Replicat.
password: The password for running Extract or Replicat.
ENCRYPTKEY DEFAULT: Required if the password was encrypted with a default Oracle GoldenGate key by means of the ENCRYPT PASSWORD ENCRYPTKEY command without arguments.
ENCRYPTKEY keyname: Required if the password was encrypted with a user-defined key by means of the ENCRYPT PASSWORD ENCRYPTKEY keyname command. For keyname, use the logical name as shown in the ENCKEYS file.
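A statement of this form might look like the following, where the user ID is illustrative and encrypted_password stands for the output of the ENCRYPT PASSWORD command:

LOGON GGSUSR.ADMIN, PASSWORD encrypted_password, ENCRYPTKEY DEFAULT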
Valid for: Extract, Replicat
Use MACRO to create an Oracle GoldenGate macro. For instructions on creating and using Oracle GoldenGate macros, see Administering Oracle GoldenGate for HP NonStop (Guardian).
The following syntax must be in the order shown and terminated with a semicolon (;).
MACRO #macro_name PARAMS (#param1, #param2, ...) BEGIN macro_body END;
macro_name: The name for the macro. Macro and parameter names are not case-sensitive.
Macro and parameter names must begin with a macro character. The default is the pound (#) character, as in #macro1 and #param1. Anything in the parameter file that begins with the macro character is assumed to be either a macro or a macro parameter.
You can change the macro character with the MACROCHAR parameter. Valid macro and parameter characters are alphanumeric and can include the underscore character (_). Parameter or macro names within quotation marks are treated as text and ignored.
PARAMS (param1, param2, ...): Optional. Describes parameters to the macro. Parameter names are not case-sensitive.
Every parameter used in a macro must be declared in the PARAMS statement, and when the macro is invoked, the invocation must include a value for each parameter. By default, Oracle GoldenGate permits up to 30 parameters in a PARAMS clause, but you can change the default with the FUNCTIONSTACKSIZE parameter.
BEGIN: Begins the macro body. Must be specified before the macro body.
macro_body: Represents the macro body, which consists of one or more statements to be used as parameter file input. The macro body can include simple parameter statements such as the first example below, and it can include complex statements like the second example. It can also include invocations of other macros, as in the third example.
Example 1: COL1 = COL2
Example 2: COL1 = #val2
Example 3: #colmap(COL1, #sourcecol)
END: Ends the macro definition.
; (semicolon): Marks the end of the MACRO parameter statement.
The following example defines a macro that takes parameters.
MACRO #make_date
PARAMS (#year, #month, #day)
BEGIN
@DATE("YYYY-MM-DD", "CC", @IF(#year < 50, 20, 19),
"YY", #year, "MM", #month, "DD", #day)
END;
The following example defines a macro that does not require parameters.
MACRO #option_defaults
BEGIN
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
END;
The following example defines a macro that calls other macros.
MACRO #assign_date
PARAMS (#target_col, #year, #month, #day)
BEGIN
#target_col = #make_date (#year, #month, #day)
END;
Valid for: Extract, Replicat
Use MACROCHAR to change the macro character to something other than #. For example, you might want to change the character when table names include the # character. Anything in the parameter file that begins with the specified macro character is assumed to be either a macro or a macro parameter. The MACROCHAR can only be specified once.
To define a different macro character, precede the first MACRO statement with the MACROCHAR parameter in the parameter file.
Default: # (pound symbol)
MACROCHAR character
character: The character to be used as the macro character. Must precede the first macro statement.
In the following example, $ is defined as the macro character.
MACROCHAR $
MACRO $mymac
PARAMS ($p1)
BEGIN
col = $p1
END;
Valid for: GLOBALS
Use MANAGERREQUIRED to specify whether Extract or Replicat can run without a Manager process.
Default: NO
MANAGERREQUIRED {YES | NO}
Valid for: Replicat
Use MAP to deliver records from a source to the target. Normally, the source is an Oracle GoldenGate trail containing records that were processed by Extract. The MAP parameter is similar to Extract's FILE and TABLE parameters in its mapping capabilities and functionality for executing user exits and stored procedures.
At least one MAP statement is required.
You can invoke MAP more than once in a parameter file, and you can invoke the same MAP argument more than once. This is useful, for example, to split records into different trails to be replicated to different targets.
MAP source_file_name, TARGET target_file_name
[, EXCEPTIONSONLY]
[, DEF source_ddl_definition]
[, DICTIONARY source_ddl_dictionary]
[, EXITPARAM "exitparam_string"]
[, WHERE (where_condition)]
[, FILTER (expression)]
[, RANGE (range_specification)]
[, COLMAP ([USEDEFAULTS], column_map_specification)]
[, COMPENSCRIBEMAPS | NOCOMPENSCRIBEMAPS]
[, INSERTALLRECORDS]
[, MAPID]
[, PARTIALCOLSOK | NOPARTIALCOLSOK]
[, SQLNAME]
[, USESOURCERECLENGTH]
[, TARGETDEF target_ddl_definition]
[, TARGETDICT target_ddl_dictionary]
[, KEYCOLS (column_list)]
[, USEALTKEY (key_specifier)]
[, UNMAPPEDALTFILECREATES {ALTFILEVOL | PRIMARYVOL}]
[, CREATETEMPLATE file_name]
[, ALTFILECHAR num_chars]
[, GETNETWORKALTFILENAMES | IGNORENETWORKALTFILENAMES]
[, HANDLECOLLISIONS | NOHANDLECOLLISIONS]
[, DETECTLOCKS]
[, REPERROR (error_number, response)]
[, PARTMAP (source_partition_spec, target_partition_spec)]
[, MAPEXCEPTION (TARGET exception_name, mapping_arguments)]
[, SHOWSYNTAX]
[, SQLEXEC (sqlexec_clause)]
;
source_file_name: The origin of the record to deliver. Most often, this is the name of the file, table or SQL view from which the record was originally extracted. If Extract performed column mapping on the associated records, however, a different file identifier is attached to reflect the new column structure.
source_file_name can also be an existing define name of CLASS MAP, or a wildcard specification.
TARGET target_file_name: The name of the target file or table.
DEF source_ddl_definition, DICTIONARY source_ddl_dictionary, FILTER (expression), RANGE (range_specification), WHERE (where_condition): See "Selecting Records".
COLMAP ([USEDEFAULTS], column_map_specification), COMPENSCRIBEMAPS | NOCOMPENSCRIBEMAPS, INSERTALLRECORDS, MAPID, PARTIALCOLSOK | NOPARTIALCOLSOK, SQLNAME, TARGETDEF target_ddl_definition, TARGETDICT target_ddl_dictionary, USESOURCERECLENGTH: See "Mapping Data".
KEYCOLS (column_list), EXITPARAM "exitparam_string", CREATETEMPLATE file_name, ALTFILECHAR num_chars, USEALTKEY (key_specifier), UNMAPPEDALTFILECREATES {ALTFILEVOL | PRIMARYVOL}: See "Replicating File Create Operations for Alternate Key Files".
HANDLECOLLISIONS | NOHANDLECOLLISIONS, DETECTLOCKS: See "Locking Records".
REPERROR (error_number, response): See "Using REPERROR".
EXCEPTIONSONLY, MAPEXCEPTION (TARGET exception_name, mapping_arguments), GETNETWORKALTFILENAMES | IGNORENETWORKALTFILENAMES, PARTMAP (source_partition_spec, target_partition_spec): Use PARTMAP to specify alternative mapping of partitions during file creation operations. For details, see "PARTMAP".
SHOWSYNTAX, SQLEXEC (sqlexec_clause): See "Performing a Query".
You can select records by:
Selecting or excluding records using FILTER.
Selecting based on a conditional statement using WHERE.
Selecting a subset of records using RANGE.
Note:
Using the RANGE option of FILE or MAP provides different capabilities than using the @RANGE function within a FILTER. And both of these are different than the RANGE option of ALTINPUT.
Selecting or Excluding Records Using FILTER
Use FILTER expressions to select or exclude data based on a numeric value. You can use a filter expression with conditional operators (such as @IF), column-conversion functions, or both. When using a FILTER expression, you can apply the filter clause to only certain record types, or specify one or more record types to omit.
If you are selecting from an Enscribe file using FILTER, you must also specify the DEF and DICTIONARY keywords.
MAP source_file_name, TARGET target_file_name, FILTER (filter_clause [, ON INSERT | ON UPDATE | ON DELETE] [, IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE] [, RAISEERROR error_number]) [, DEF source_ddl_definition] [, DICTIONARY source_ddl_dictionary];
filter_clause: Selects records from a source MAP based on an expression.
ON INSERT | ON UPDATE | ON DELETE: Specifically limits the filter to be executed on an insert, update, or delete. You can specify more than one ON option. For example, ON UPDATE, ON DELETE executes on updates and deletes, but not inserts.
IGNORE INSERT | IGNORE UPDATE | IGNORE DELETE: Ignores the specified operation. You can specify more than one IGNORE option.
RAISEERROR error_number: Raises a user-defined error number if the filter fails. Can be used as input to the REPERROR parameter to invoke error handling. Make certain that the value for error_number is outside the range of error numbers that is used by the database or Oracle GoldenGate. For example: RAISEERROR 21000
DEF source_ddl_definition: Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the audit trails. You cannot specify more than one definition for any FILE statement.
DICTIONARY source_ddl_dictionary: Points to the location of the source DDL dictionary. DICTIONARY establishes an Enscribe DDL dictionary to use for evaluating WHERE, FILTER, and COLMAP clauses. For example, DICTIONARY $DATA5.PRODDICT specifies a physical subvolume.
Each DICTIONARY entry closes any previously open dictionary, so only one dictionary is active at any given time while processing the startup parameters.
The following example filters inserts, passing only records whose STATE column matches "CA".
MAP $PROD1.CUST.BRANCH, TARGET $DATA01.C9701.BRANCH, DEF BRANCH-REC, FILTER (@IF(@STREQ(STATE, "CA"), 1, 0), ON INSERT);
Selecting Based on a Conditional Statement
With the WHERE option, you can select information based on a conditional statement. If you are selecting from an Enscribe file using WHERE, you must also specify the DEF and DICTIONARY keywords.
MAP source_file_name, TARGET target_file_name, WHERE (where_condition) [, DEF source_ddl_definition] [, DICTIONARY source_ddl_dictionary] ;
where_condition: Selects a subset of records from a source MAP, based on a condition, such as WHERE (BRANCH = "NY"). For a list of valid operators, see Table 2-33.
DEF source_ddl_definition: Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the audit trails. You cannot specify more than one definition for any FILE statement.
DICTIONARY source_ddl_dictionary: Points to the location of the source DDL dictionary. DICTIONARY establishes an Enscribe DDL dictionary to use for evaluating WHERE, FILTER, and COLMAP clauses. For example, DICTIONARY $DATA5.PRODDICT specifies a physical subvolume.
Each DICTIONARY entry closes any previously open dictionary, so only one dictionary is active at any given time while processing the startup parameters.
Table 2-33 Permissible WHERE Operators
| Operator | Example |
|---|---|
| Column names | PRODUCT_AMT |
| Numeric values | -123, 5500.123 |
| Literal strings enclosed in quotes | "AUTO", "Ca" |
| Column tests | |
| Comparison operators | =, <>, >, <, >=, <= |
| Conjunctive operators | AND, OR |
| Grouping parentheses | Use open and close parentheses for logical grouping of multiple elements. |
Selecting a Subset of Records Using RANGE
Use the RANGE clause to select a subset of the records from Replicat's source. Unlike WHERE, RANGE does not require knowledge of the source file's structure.
MAP source_file_name, TARGET target_file_name, RANGE (x [, x, ...] of y);
(x [, x, ...] of y): Selects a subset of records from Replicat's source, based on a condition, such as RANGE (3 of 5) or RANGE (1, 3 of 5).
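As an illustration (file names assumed for the example), two Replicat processes could split the workload for one source by giving each process one half of the range:

MAP $DATA1.DAT.TCUSTMER, TARGET $DATA2.DAT.TCUSTMER, RANGE (1 of 2);

The second Replicat would use the identical MAP statement with RANGE (2 of 2), so that together the two processes cover every record exactly once.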
Oracle GoldenGate has the following data mapping capability for the MAP parameters:
Mapping columns.
Matching source and target record lengths.
If no explicit column mapping is specified with COLMAP, Replicat determines the column map using the following rules:
Columns with the same name are mapped to each other if the data types of the source and target are compatible.
If the target definition has column names corresponding to special transaction values, those values are mapped to the target columns.
Column names are changed to uppercase for name comparison.
Global rules set up with COLMATCH parameters enable different column names to be mapped by default.
Target columns that do not correspond to any source column take default values determined by the database.
The default mapping is displayed in the report file just after the corresponding MAP entry.
Using a COLMAP clause, you can retrieve fields or columns from one record and map them to a differently structured record. This is useful, for example, when replicating data from an Enscribe file to an SQL table with similar, but not identical, fields. COLMAP selects, translates, and moves the fields you want into the new structure. When associated records are output, they are identified by the target file rather than the source to reflect the new structure of the record.
When mapping from an Enscribe file, include the DEF and DICTIONARY keywords; otherwise, the source and target structures are assumed to be identical. DEF and DICTIONARY are required only once in the parameter file, and only when using a COLMAP clause.
When Enscribe files without corresponding DEF parameters are encountered, source and target structures are assumed to be identical.
When SQL tables are identified as the target, the layout of the target record after mapping is known since the SQL table structure is retrieved from the SQL catalog.
Additionally, you can match the source record length to the target record length. See "Matching Source and Target Record Lengths".
MAP source_file_name, TARGET target_file_name, COLMAP ([USEDEFAULTS], column_map_specification) [, option];
column_map_specification: The column mapping expression, as in:
(target_column = source_expression)
Explicitly defines a source-target column map.
target_column: The name of the target column.
source_expression: Any of the following:
Numeric constant, such as 123
String constant enclosed within quotes, such as "ABCD"
The name of a source column, such as ORD_DATE
An expression using an Oracle GoldenGate column-conversion function, such as @STREXT (COL1, 1, 3)
Example:
COLMAP (USEDEFAULTS, targcol1 = srccol1, targcol2 = srccol2, targcol3 = srccol3)
TARGET target_file_name: Names the target file. Must be an existing Enscribe file or SQL table, and can be an active define name.
option: A DDL definition obtained by the following:
DEF source_ddl_definition: Has meaning only for Enscribe files. Use a DDL definition or record within the open dictionary. This definition describes the record that is extracted from the audit trails. You cannot specify more than one definition for any MAP statement.
NOCOLMAP: Allows the user to specify a DEF for filtering purposes, but prevents the column mapping from completing. Example:
MAP \PROD.$DATA06.CER1ATLF.TL*, TARGET \GGS2.$DATA10.GGSLOGS.*, DEF TLF, NOCOLMAP, WHERE (TLF.HEAD.REC-TYP <> "00");
DICTIONARY source_ddl_dictionary: Points to the location of the source DDL dictionary. DICTIONARY establishes an Enscribe DDL dictionary to use for evaluating WHERE, FILTER, and COLMAP clauses. For example, DICTIONARY $DATA5.PRODDICT specifies a physical subvolume.
Each DICTIONARY entry closes any previously open dictionary, so only one dictionary is active at any given time while processing the startup parameters.
TARGETDEF target_ddl_definition: Use TARGETDEF when invoking column mapping to an Enscribe file structure and the Enscribe file has not yet been specified in the parameter file, or did not have a definition associated with it.
If you assigned a definition to the target file earlier in the parameter file, you can omit TARGETDEF.
TARGETDICT target_ddl_dictionary: Points to the target DDL dictionary. Use in conjunction with TARGETDEF when the target definitions are in a DDL dictionary different from DEF. Follows the TARGETDEF targetdef entry, similar to:
MAP $vol.subvol.source, DEF sourcedef, TARGET $vol.subvol.target, TARGETDEF targetdef, TARGETDICT $vol.subvol, COLMAP (USEDEFAULTS, targcol1 = srccol1, targcol2 = srccol2, targcol3 = srccol3);
USEDEFAULTS: Causes Oracle GoldenGate to automatically map source and target columns that have the same name. USEDEFAULTS eliminates the need to explicitly map every source column unless the target column has a different name. This is the default unless an explicit column mapping is used.
Matching Source and Target Record Lengths
Use the USESOURCERECLENGTH option to adjust the target record length to the length of the source record. Precede the COLMAP statement with the USESOURCERECLENGTH option.
USESOURCERECLENGTH
USESOURCERECLENGTH COLMAP (USEDEFAULTS, CRD-TYP = @STRSUB (CRD-TYP, "VD", "PV"), FIID = @STRSUB (FIID, "WA", "SFNB"), FIID = @STRSUB (FIID, "ID", "BAID"));
Use COMPENSCRIBEMAPS to compress an update record after column mapping on an Enscribe target. Compressing ensures that updates to an Enscribe file after column mapping can only update the fields that were changed.
Default: COMPENSCRIBEMAPS
MAP source_file_name, TARGET target_file_name {, COMPENSCRIBEMAPS | NOCOMPENSCRIBEMAPS};
When Replicat is compressing an update record and the size of the column exceeds the amount of available data, a compressed fragment is produced for the available partial data. Use NOPARTIALCOLSOK to prevent compressing a partial update fragment.
Default: PARTIALCOLSOK (compress partial column values)
MAP source_file_name, TARGET target_file_name, DEF sourcedef, TARGETDEF targetdef, NOPARTIALCOLSOK;
Use the INSERTALLRECORDS option to apply all record types as inserts for the current map. This will create a record of all operations made to the target record instead of maintaining only the current version. This can be useful when complete transaction history is needed. See "INSERTALLRECORDS | NOINSERTALLRECORDS" for details on INSERTALLRECORDS.
MAP source_file_name, TARGET target_file_name, INSERTALLRECORDS;
Use the KEYCOLS option to define one or more columns of the table as a primary key for use by Oracle GoldenGate. Oracle GoldenGate uses the key to locate rows for updates and deletes and to prevent duplicate inserts.
KEYCOLS also applies to view definitions when a view is used as the file parameter. KEYCOLS has the following dependencies:
You must use key columns when the primary key cannot be determined, or when a SYSKEY does not exist and either FORMATSQL or FORMATASCII is specified in the parameter file.
If the same file name is specified in multiple MAP entries, only the first KEYCOLS entry takes effect.
MAP source_file_name, TARGET target_file_name, KEYCOLS (column [, column] [, ...]);
column: Defines a column to be used as a substitute primary key. To specify multiple columns, create a comma-delimited list, as in:
KEYCOLS (COL1, COL2);
Consider a relative SQL table R1 with the columns SYSKEY, CUSTOMER, BALANCE and a unique index on the CUSTOMER column. For the following transaction, you will get different results depending on whether you have specified KEYCOLS or FORMATSQL.
UPDATE R1 SET BALANCE = 20 WHERE CUSTOMER = "SMITH";
If your FILE parameter statement is simply: FILE R1; and FORMATSQL is included in the parameter file, but no KEYCOLS are specified, the output is similar to:
UPDATE R1 SET BALANCE = 20 WHERE SYSKEY = 1334519;
If instead the FILE entry is: FILE R1, KEYCOLS (CUSTOMER); the output is:
UPDATE R1 SET BALANCE = 20, SYSKEY = 1334519 WHERE CUSTOMER = "SMITH";
Passing Literal Strings to User Exits
Use EXITPARAM to pass a literal string to user exit routines whenever a record from MAP is encountered.
The string must be enclosed in double quotes, with an ampersand (&) continuation character if it continues to additional lines. It is unlimited in size, but you must use the GET_EXIT_PARAM_VALUE function to access values over the default of 256 bytes.
MAP source_file_name, TARGET target_file_name, EXITPARAM "exitparam_string"
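A statement of this form might look like the following, where the file names and the parameter string are illustrative:

MAP $DATA1.DAT.SOURCE, TARGET $DATA2.DAT.TARGET, EXITPARAM "region=WEST";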
Creating a Target Enscribe File
To deliver to an Enscribe file that does not yet exist, a file is created according to rules in a file template. Using the template file name, a file is created with the same structure, alternate keys and partitions.
The new file substitutes the target file name in the map for the template name (file name only, not subvolume), and uses the partition name, alternate key volumes and subvolumes of the template. The altkeys file names are derived from the new file name: by default, by appending 0, 1, 2 to the file name. If ALTFILECHAR is specified, a 0, 1, 2 is inserted at the indicated character.
CREATETEMPLATE is invoked upon an insert if the file does not exist. It is not invoked on file create operations, so when CREATETEMPLATE is used, file creates should not be captured. The CREATETEMPLATE option is valid for MAP statements that use wildcards or fully qualified file names. It applies only to Enscribe files.
If a file is renamed or removed after OPENTIMEOUT, the next operation on the missing file will trigger creation of a new file if the CREATETEMPLATE parameter is specified. Replicat will recognize that the create time of the target file changed and refresh the file attributes.
MAP source_file_name, TARGET target_file_name, CREATETEMPLATE file_name, ALTFILECHAR num_chars
MAP $DATA2.DAT.TL*, TARGET $DATA5.DAT.*, CREATETEMPLATE $DATA3.GGSMASK.TLYYMMDD, ALTFILECHAR 2;
Use USEALTKEY when replicating updates to Enscribe entry-sequenced or relative files, since Replicat cannot guarantee that the keys in the source and target will match. If target records can be uniquely identified using an alternate key, USEALTKEY enables Replicat to update the correct record.
Do not use this option when the unique alternate key is updated.
To ensure USEALTKEY works correctly when replicating data, turn off update compression for any entry-sequenced files that depend on this method. Turn off compression using FUP for TMF-audited files and using the Logger configuration for non-TMF files.
MAP source_file_name, TARGET target_file_name, USEALTKEY "key_specifier"
"key_specifier"The unique key identifier.
USEALTKEY "TS"
The following parameter file example uses an alternate key for updates to an Enscribe relative file. The first set of statements insert the record ignoring the SYSKEY. The second set uses the alternate key "TM" to locate records for updates and deletes.
IGNOREUPDATES
IGNOREDELETES
GETINSERTS
MAP \NY.$DATA1.PRDDAT.FILEA, TARGET \LA.$DATA3.BKDAT.FILEA,
DEF FILEA-REC, TARGETDEF FILEA-REC,
COLMAP (USEDEFAULTS, SYSKEY = -2);
GETUPDATES
GETDELETES
IGNOREINSERTS
MAP \NY.$DATA1.PRDDAT.FILEA, TARGET \LA.$DATA3.BKDAT.FILEA, USEALTKEY "TM";
Replicating File Create Operations for Alternate Key Files
Use UNMAPPEDALTFILECREATES when you want to replicate file create operations for alternate key files, but have not included a MAP statement for the alternate key file. You can create your alternate key file one of two ways: by creating it from the alternate key file on the source system, or by creating it from the location of the primary file's volume.
Note:
If you have provided a MAP for the alternate keys, you will not require this option.
MAP source_file_name, TARGET target_file_name, UNMAPPEDALTFILECREATES {ALTFILEVOL | PRIMARYVOL}
ALTFILEVOL: Creates the alternate key file based on the location of the source system's alternate key file. If this is not possible, the process abends with the -2 mapping error.
PRIMARYVOL: Attempts to create the alternate key file based on the location of the source system's alternate key file. If this is not possible, it creates the alternate key file based on the location of the primary file's volume.
The following example will create an alternate key file based on the location of the file's primary volume if the source system alternate key file cannot be located.
MAP $DATA02.TSSOUT.ALTXX*, TARGET $DATA4.TSSIN.*, UNMAPPEDALTFILECREATES PRIMARYVOL;
Turning Error Handling On and Off
HANDLECOLLISIONS can be set to ignore duplicate and missing record errors for the MAP statement. NOHANDLECOLLISIONS will turn this off.
MAP source_file_name, TARGET target_file_name, HANDLECOLLISIONS;
The following example will turn HANDLECOLLISIONS on for all of the target files included in the ORD* wildcard and off for CUSTOMERS.
MAP $DATA2.GGSSOU.ORD*, TARGET $DATA4.GGSTAR.*, HANDLECOLLISIONS;
MAP $DATA2.GGSOUT.CUSTOMERS, TARGET $DATA4.GGSIN.*, NOHANDLECOLLISIONS;
For Enscribe files, DETECTLOCKS causes Replicat to issue a SETMODE 4 to reject a lock request with an error 73. For SQL tables, it causes Replicat to issue the "CONTROL TABLE RETURN IF LOCKED" statement.
Setting a REPERROR clause allows Replicat to retry a locked record a specified number of times.
Note:
Use of DETECTLOCKS could cause an increase in the number of error 73s reported. There can be a lag between when TM/MP tells the application that the transaction has been committed and when DP2 actually releases the locks. Without the DETECTLOCKS parameter set, Oracle GoldenGate waits for DP2 to release its locks before continuing. With the DETECTLOCKS parameter active, Oracle GoldenGate returns an error 73 when it is first encountered. You can use REPERROR 73 to address these errors when they occur. Before implementing DETECTLOCKS review your processing needs to determine the best solution.
MAP source_file_name, TARGET target_file_name, DETECTLOCKS
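As a sketch only (the file names and retry count are hypothetical, not taken from this guide), DETECTLOCKS is typically paired with a REPERROR rule for error 73 so that locked records are retried rather than causing an abend:

```
REPERROR (73, RETRYOP MAXRETRIES 5)
MAP $DATA1.SQLDAT.ACCOUNT, TARGET $DATA2.SQLDAT.ACCOUNT, DETECTLOCKS;
```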
Use REPERROR to specify an error and a response that together control how Replicat responds to the error when executing the MAP statement. You can use REPERROR at the MAP level to override and supplement global error handling rules set with the REPERROR parameter (see "REPERROR"). Multiple REPERROR statements can be applied to the same MAP statement to enable automatic, comprehensive management of errors and interruption-free replication processing.
MAP source_file_name, TARGET target_file_name, REPERROR (error_number, response) [, REPERROR (error_number, response)] [, ...];
Refer to the REPERROR parameter on "REPERROR" for details on the error_number and response specifications. Note that RESET is not a valid option for REPERROR used under MAP.
The following examples show different ways that REPERROR can be used in a MAP statement in conjunction with a global REPERROR statement.
In the following example, when error_1 occurs for the first MAP statement, the action is response_2, not response_1, because an override was specified. However, if an error_1 occurs for the second MAP statement, the response is response_1, the global response. The response for error_2 is response_3, which is MAP-specific.
REPLICAT group_name
REPERROR (error_1, response_1)
MAP source_1, TARGET target_1, REPERROR (error_1, response_2);
MAP source_2, TARGET target_2, REPERROR (error_2, response_3);
In the following example, when replicating from src1 to src2, all errors and actions (1-3) apply, because all REPERROR statements address different errors (there are no MAP-specific overrides).
REPLICAT group_name
REPERROR (error_1, response_1)
MAP source_1, TARGET target_1,
REPERROR (error_2, response_2),
REPERROR (error_3, response_3);
In the following example, if error_1 occurs for the first MAP statement, the action is response_2. For the second one it is response_1 (the global response), and for the third one it is response_4 (because of the second REPERROR statement). A global REPERROR statement applies to all MAP statements that follow it in the parameter file until another REPERROR statement starts new rules.
REPLICAT group_name
REPERROR (error_1, response_1)
MAP source_1, TARGET target_1, REPERROR (error_1, response_2);
MAP source_2, TARGET target_2, REPERROR (error_2, response_3);
REPERROR (error_1, response_4)
MAP source_2, TARGET target_2, REPERROR (error_3, response_3);
Creating an Exceptions Statement
Errors that have REPERROR set to EXCEPTION can be captured to an exception file using either EXCEPTIONSONLY or MAPEXCEPTION.
EXCEPTIONSONLY specifies that the current map is executed only when an error occurred for the last record processed in the preceding map. You must set REPERROR to EXCEPTION for the error that occurred, and the exception map must specify the same source table as the map in error. The exception map must follow the map in error in the parameter file.
MAP source_file_name, TARGET target_file_name, EXCEPTIONSONLY;
Note:
The source and target file names in the preceding MAP statement (the one the EXCEPTIONSONLY applies to) cannot include wildcards.
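A minimal sketch of the required ordering, using hypothetical file names: REPERROR marks the error as EXCEPTION, the regular map comes first, and the exceptions map names the same source and immediately follows it. The ERRNUM target column and its mapping are illustrative only.

```
REPERROR (-8227, EXCEPTION)
MAP $DATA1.SQLDAT.ORDERS, TARGET $DATA2.SQLDAT.ORDERS;
MAP $DATA1.SQLDAT.ORDERS, TARGET $DATA2.SQLDAT.ORDEXC, EXCEPTIONSONLY,
COLMAP (USEDEFAULTS,
ERRNUM = @GETENV ("LASTERR", "DBERRNUM"));
```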
MAPEXCEPTION specifies a target file for exceptions in processing the source and target files of the MAP. The source and target file names can include wildcards and all exceptions that apply to the file set are written to the exception_file_name. Any valid mapping arguments, such as COLMAP or KEYCOL, can be included.
MAP source_file_name, TARGET target_file_name, MAPEXCEPTION (TARGET exception_file_name, mapping_arguments);
exception_file_name: Identifies the target file for errors that have been identified as EXCEPTION with REPERROR.
mapping_arguments: Any valid mapping argument that can be used with the MAP statement.
This example uses wildcarded source and target file names. Exceptions that occur when processing any file on $DATA04.ACCT with a name beginning with TRX will be captured to $DATA08.ACCT.OLDTRX using the designated mapping.
MAP $DATA04.ACCT.TRX*, TARGET $DATA05.ACCT.*,
MAPEXCEPTION (TARGET $DATA08.ACCT.OLDTRX,
COLMAP (USEDEFAULTS,
ACCT-NO = ACCT-NO,
OPTYPE = @GETENV ("LASTERR", "OPTYPE"),
DBERR = @GETENV ("LASTERR", "DBERRNUM"),
DBERRMSG = @GETENV ("LASTERR", "DBERRMSG")
)
);
Qualifying Alternate Key File Names
GETNETWORKALTFILENAMES lets the file system qualify the alternate key file names with the local node name. Use IGNORENETWORKALTFILENAMES to tell the file system not to qualify the alternate key file names with the local node name. This parameter can also be toggled around MAP statements.
GETNETWORKALTFILENAMES
MAP source_file_name, TARGET target_file_name, [GETNETWORKALTFILENAMES | IGNORENETWORKALTFILENAMES];
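Because the parameter can be toggled around MAP statements, a sketch with hypothetical file names might look like:

```
IGNORENETWORKALTFILENAMES
MAP $DATA1.DAT.LOCAL*, TARGET $DATA3.DAT.*;
GETNETWORKALTFILENAMES
MAP $DATA1.DAT.NET*, TARGET $DATA3.DAT.*;
```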
Use SHOWSYNTAX to display a SQL statement before it is executed. The default is to display SQL statement text in the report file.
MAP source_file_name, TARGET target_file_name, SHOWSYNTAX;
REPSQLLOG $DATA.SQLLOG.JSR01
MAP $DATA.SOURCE.ACCOUNT, TARGET $DATA.TARGET.ACCOUNT,
SHOWSYNTAX, MAPID "My Mapid", COLMAP (USEDEFAULTS);
The MAPID is displayed along with the SQL statement. Use the MAPID option to help in complicated mapping situations that require conditional mapping using a WHERE clause. If MAPID is not specified, a default ID of the form "MAP" is built with a sequential number indicating the COLMAP for the file. The MAPID text can be up to 32 bytes long. If it contains spaces, it must be enclosed in either single or double quotes.
An optional Replicat parameter REPSQLLOG redirects the output to a separate log file and keeps it out of the Replicat report file. See "REPSQLLOG".
Use SQLEXEC to perform a SQL Query when processing a record for a SQL/MP table. SQLEXEC enables Oracle GoldenGate to communicate directly with the database to perform any query that SQL supports. The database function can be part of the synchronization process, such as retrieving values for column conversion, or it can be independent of extracting or replicating data.
Note:
This feature is not available for Enscribe files. SQLEXEC queries should not be used to change a value in the primary key column. The primary key value is passed from Extract to Replicat so Replicat can perform further update/delete operations. If Replicat does not know the primary key value, these operations cannot be completed.
By using SQLEXEC within multiple FILE or MAP statements, you can create different rules for different tables; these rules can be as simple or as complex as needed. A query that is executed can accept input parameters from source or target rows and pass output parameters.
In the following example, SQLEXEC runs a select statement, extracting the timestamp from the target table, then filters out records as needed.
MAP $DATA1.SQLDAT.ORDERS, TARGET $DATA1.MASTER.ORDERS,
SQLEXEC (ID check,
QUERY " SELECT TIMESTAMP FROM $DATA1.SQLDAT.ORDERS "
" WHERE PKCOL = ?P1 ",
PARAMS (P1 = PKCOL), ERROR REPORT);
A SQLEXEC statement expects legal SQL syntax for the database being affected. Refer to the SQL for NonStop reference guide for permissible SQL syntax.
Important:
If a SQLEXEC query fails, the Extract or Replicat process will exit. As such, you must structure your query correctly.
MAP source_file_name, TARGET target_file_name,
SQLEXEC (
ID logical_name,
QUERY "sql_query",
{PARAMS param_spec | NOPARAMS}
[, AFTERFILTER | BEFOREFILTER]
[, DBOP]
[, EXEC frequency]
[, MAXVARCHARLEN bytes]
[, PARAMBUFSIZE num_bytes]
[, TRACE option]
[, ALLPARAMS option]
[, ERROR action]
[, option] [, ...]
)
ID logical_name: Defines a logical name for the query. A logical name is required in order to extract values from the query results. ID logical_name references the column values returned by the query.
QUERY "sql_query": Specifies the SQL query syntax to execute against the database. The query must be a valid, standard query statement for the database against which it is being executed. It can either return results with a SELECT statement or update the database with an INSERT, UPDATE, or DELETE statement.
For any query that produces output with a SELECT statement, only the first row returned by the SELECT is processed. Do not specify an "INTO..." clause for any SELECT statements.
Enclose the query within quotes. For a multi-line query, use quotes on each line. To ensure success, place a space after each begin quote and each end quote, or at least before the end quote. For example, in the following query there are spaces before the words select and where, and after the words NOTIFY and ?p1.
SQLEXEC (ID ggs, ON UPDATES, ON INSERTS,
QUERY " select notified from $DATA1.SQLDAT.NOTIFY "
" where account_no = ?p1 ",
PARAMS (p1 = account_no))
PARAMS param_spec | NOPARAMS: Defines whether the query accepts parameters. One of these options must be used.
AFTERFILTER | BEFOREFILTER: Specifies when to execute the query in relation to a FILTER clause. AFTERFILTER executes after the filter and enables you to skip the overhead of executing the SQL unless the filter is successful. This is the default. BEFOREFILTER executes before the filter and enables you to use the results of the procedure or query in the filter.
DBOP: Commits INSERT, UPDATE, DELETE, and SELECT statements executed within the query. Otherwise, they could potentially be rolled back. Oracle GoldenGate issues the commit within the same transaction boundaries as the source transaction. Use caution: any changes that are committed by the procedure can result in overwriting existing data.
EXEC frequency: Controls the frequency with which a query executes and how long the results are considered valid, if extracting output parameters. Takes one of the following arguments:
MAP: Executes the query once for each source-target table map for which it is specified. MAP renders the results invalid for any subsequent maps that have the same source table. For example, if a source table is being synchronized with more than one target table, the results would only be valid for the first source-target map. MAP is the default.
ONCE: Executes the query once during the course of an Oracle GoldenGate run, upon the first invocation of the associated FILE or MAP statement. The results remain valid for as long as the process remains running.
TRANSACTION: Executes the query once per source transaction. The results remain valid for all operations of the transaction.
SOURCEROW: Executes the query once per source row operation. Use this option when you are synchronizing a source table with more than one target table, so that the results of the procedure or query are invoked for each source-target mapping.
MAXVARCHARLEN bytes: Specifies the maximum length allocated for any output parameter in a query. Beyond this maximum, output values are truncated. The default is 255 bytes without an explicit MAXVARCHARLEN clause.
PARAMBUFSIZE num_bytes: Specifies the maximum size of the memory buffer that stores parameter information, including both input and output parameters. Oracle GoldenGate issues a warning whenever the memory allocated for parameters is within 500 bytes of the maximum. The default is 10,000 bytes without an explicit PARAMBUFSIZE clause.
TRACE option: Takes one of the following arguments:
TRACE ALL: Writes the input and output parameters of each invocation of a query to the report file.
TRACE ERROR: Writes parameters to the report file only if an error occurs.
ALLPARAMS option: Takes one of the following arguments:
ALLPARAMS REQUIRED: Indicates that all parameters must be present for the queries to execute.
ALLPARAMS OPTIONAL: Allows the query to execute without all parameters being present.
ERROR action: Requires one of the following arguments:
ERROR IGNORE: Database error is ignored and processing continues.
ERROR REPORT: Database error is written to a report.
ERROR RAISE: Database error is handled just as a table replication error.
ERROR FINAL: Database error is handled as a table replication error, but does not process any additional queries.
ERROR FATAL: Database processing abends.
Replicat
Use MAPEXCLUDE to exclude files or file sets from a wildcard map list. MAPEXCLUDE operates globally within the Replicat parameter file, therefore a file is excluded from mapping if it is specified in any MAPEXCLUDE entry.
MAPEXCLUDE file_name
file_name: The file to exclude from mapping. You can use wildcards.
The following example maps file names that begin with A, except for files in subvolume BADSUB, files that begin with AB and the file $D1.BAD.A123.
MAP $D1.*.A*, TARGET $D2.*.*;
MAPEXCLUDE $D1.BADSUB.*
MAPEXCLUDE $D1.*.AB*
MAPEXCLUDE $D1.BAD.A123
GLOBALS, Manager
Use MAXABENDRESTARTS to specify the number of times Manager tries to restart a process that stopped due to either of the following conditions:
The process ended abnormally.
The process fails because it cannot write to disk.
Restart counts are initialized every RESTARTINTERVAL minutes.
By default, restarts are attempted each minute. If they need to be attempted at a different interval, use AUTORESTART with the WAITMINUTES option instead.
If AUTORESTART with RETRIES is used, MAXABENDRESTARTS is ignored.
If neither MAXABENDRESTARTS nor AUTORESTART RETRIES are specified and the process has been restarted and failed two times, Manager will stop trying to restart the process.
2
MAXABENDRESTARTS count
count: The number of times Manager tries to restart a process. If you do not want Manager to restart an operation, set count to zero.
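For example, the following entry (an illustrative value, not a recommendation) directs Manager to attempt up to five restarts within each RESTARTINTERVAL window:

```
MAXABENDRESTARTS 5
```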
Replicat
Use MAXDISCARDRECS to limit the number of errors reported to the discard file per map. Limiting the number of errors reported can reveal the types of errors being experienced without causing Replicat to terminate due to full discard files.
MAXDISCARDRECS num_recs
num_recs: The number of records to discard.
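An illustrative Replicat entry (the discard file name is hypothetical, and the DISCARDFILE line is shown only for context) that caps discard output at 500 records per map:

```
DISCARDFILE $DATA1.GGSRPT.ORDDISC, PURGE
MAXDISCARDRECS 500
```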
Replicat
Use MAXETCHECKPOINTSECS to specify the maximum amount of time that Replicat waits before writing a checkpoint. A checkpoint saves the state of processing and is used for recovery in the event of an application failure. The length of time between checkpoints defines the amount of data that may need to be reprocessed from the data source in the event of an application failure.
Reducing MAXETCHECKPOINTSECS prevents Replicat from having to read too far back into the data source if a failure occurs. However, since checkpoints commit data to disk, it could cause performance degradation because data is written to disk more frequently.
Increasing MAXETCHECKPOINTSECS reduces the number of checkpoints, which might improve performance, but it requires Replicat to read further back into the data source in the event of a failure. To make that data available, prevent the logs containing it from being removed or overwritten.
Avoid using MAXETCHECKPOINTSECS unless directed to do so by Oracle GoldenGate Technical Support.
10
MAXETCHECKPOINTSECS seconds
seconds: The number of seconds.
Extract
Use MAXTRANSMEM to control the maximum amount of memory allocated for a transaction. If a transaction exceeds the maximum memory value, Extract only tracks the checkpoint of the begin transaction until the commit transaction is found. Once a commit is found, Extract returns and extracts data starting from the begin checkpoint. Do not set MAXTRANSMEM to an arbitrarily high number, since doing so leads to excessive memory consumption.
5000000
MAXTRANSMEM bytes
bytes: The number of bytes. The minimum value is 2000000; the maximum value is 100000000.
MAXTRANSMEM 5000000
Replicat
Use MAXTRANSOPS to limit the number of I/Os in a transaction. Use this parameter to reduce the number of record locks required on the target database for large transactions. MAXTRANSOPS splits large transactions into smaller portions and can reduce transaction log requirements. Use MAXTRANSOPS to alleviate Guardian error 35 caused by exceeding the lock limit when performing Enscribe replication.
MAXTRANSOPS op_count
op_count: The maximum number of operations for a single transaction.
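For example, to split any source transaction larger than 1000 operations into smaller target transactions (the value is illustrative):

```
MAXTRANSOPS 1000
```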
Extract, Replicat
Use MAXWILDCARDENTRIES to set the initial allocation for the number of wildcard entries. The default is 100 if nothing is set. Once this initial MAXWILDCARDENTRIES allocation is exhausted, the program will allocate an additional 100 entries each time it needs more.
You can override the default with the MAXWILDCARDENTRIES parameter in the Extract or Replicat parameter files.
100
MAXWILDCARDENTRIES number
number: The number of entries to be allocated initially for wildcards. It can be any number between 100 and 32,780.
Replicat
NETWORKCHECKPOINTS specifies local checkpoint updates for file partitions residing on remote nodes. These updates allow the identification of replicated data in a bi-directional environment with data partitioned across nodes. The following processing occurs:
As Replicat resolves MAP entries, it checks the partition list for partitions residing on remote nodes.
When one is found, Replicat checks the GLOBALS parameter file for the location of the Oracle GoldenGate environment for that node.
Replicat then uses this node, volume and subvolume to identify the REPCTXT to add to a list of checkpoint files that will be updated.
By default Replicat will set NETWORKCHECKPOINTS whenever it finds that files or partitions are on remote nodes.
Checkpoint files that are found on the partition's remote node are updated.
NETWORKCHECKPOINTS [, MAXRETRIES number] [, DELAYSECS number | DELAYCSECS number]
MAXRETRIES number: Sets the maximum number of times Replicat will retry EXPAND related errors 60, 248, 249, and 250. The default is 3 and the maximum is 100 retries.
DELAYSECS number: Sets the number of seconds that Replicat will wait between tries. The default is 5 seconds and the maximum is 60 seconds.
DELAYCSECS number: Sets the number of centiseconds that Replicat will wait between tries.
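A sketch combining the options (the values are illustrative): retry EXPAND-related errors up to 5 times, waiting 10 seconds between tries:

```
NETWORKCHECKPOINTS, MAXRETRIES 5, DELAYSECS 10
```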
Logger
NOTSTOPPABLE enables Logger to be stopped only with the GGSCI STOP LOGGER command. To use this parameter, Logger must be run under SUPER group authority. This parameter applies only to the current log.
NOTSTOPPABLE
Extract
Use NUMEXTRACTS to override the default maximum number of Oracle GoldenGate trails, or files in a trail, that can be specified in the parameter file. When the default is exceeded, Extract consumes a small amount of additional memory.
50
NUMEXTRACTS number
number: The number of Oracle GoldenGate trails.
Extract, Replicat
Use NUMFILES to control the initial number of memory structures allocated to contain information about source and target tables specified in TABLE or MAP statements. NUMFILES must occur before any TABLE or MAP entries to have any effect.
To control the number of additional memory structures that are allocated dynamically once the NUMFILES value is reached, use the ALLOCFILES parameter (see "ALLOCFILES").
The default values should be sufficient for both NUMFILES and ALLOCFILES, because memory is allocated by the process as needed, system resources permitting.
1000
NUMFILES num_structures
num_structures: The number of memory structures to be allocated. Do not set NUMFILES to an arbitrarily high number, or memory will be consumed unnecessarily. Oracle GoldenGate supports up to two million tables.
NUMFILES 4000
Extract, Replicat
Use OBEY to retrieve parameters from a file other than the current parameter file, then return processing to current parameter file.
OBEY file_name
file_name: The name of the parameter file to obey.
Note:
The file to be obeyed (file_name) cannot contain any OBEY parameters. A file that is used as an obey file cannot call another obey file.
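A sketch with hypothetical names: shared MAP statements are kept in one file and pulled into a Replicat parameter file, after which processing returns to the current file:

```
REPLICAT acctrep
OBEY $DATA1.GGSPARM.STDMAPS
```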
GLOBALS
Use OLDGROUPNAMING to specify whether new group naming rules are enforced. New group naming limits Extract and Replicat group names to seven characters. Old group naming allows group names of up to ten characters.
NO
OLDGROUPNAMING {YES | NO}
Extract
Use OMITAUDITGAPCHECK to enable Extract to continue processing even when a gap is detected between the oldest required audit trail and the current trail. By default, Extract checks for the presence of required TMF audit trails at the beginning of processing. When the audit trails are not available at the beginning of processing, but will be available later, OMITAUDITGAPCHECK can be specified to avoid terminating abnormally at startup.
OMITAUDITGAPCHECK
Extract, Replicat
Use OPENTIMEOUT or OPENTIMEOUTMINUTES to control when individual Enscribe files are closed after a period of inactivity.
Replicat opens target Enscribe files when the first replicated database change is received and the files remain open to receive additional records.
Extract opens source Enscribe files for FETCHCOMPS and FETCHLASTIMAGE and the files remain open.
If there is no activity for these opened files, the file closes when the OPENTIMEOUT value is reached. Low values can have a negative effect on performance.
60 minutes
OPENTIMEOUTMINUTES minutes
minutes: The inactivity threshold in minutes.
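For example, to close idle Enscribe files after 30 minutes of inactivity (an illustrative value):

```
OPENTIMEOUTMINUTES 30
```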
Replicat
Use OPENWARNINGS to control how Replicat responds when Guardian issues a warning during a target Enscribe file open. This can happen, for example, when an alternate key file for the target file is unavailable (Guardian error 4) or a secondary partition is unavailable (error 3).
ABEND
OPENWARNINGS {WARN | IGNORE | ABEND}
WARN: Issues a warning but continues.
IGNORE: Continues without warning.
ABEND: Quits processing with an irrecoverable error message.
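For example, to log open warnings but continue processing rather than abend (the default behavior):

```
OPENWARNINGS WARN
```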
Replicat
Use OVERRIDEDUPS to direct Replicat to always apply the record it is currently processing. When NOOVERRIDEDUPS is in effect, a duplicate error is returned and the transaction may abort.
Duplicate errors can be caused when multiple sources insert the same record into a single target. If both of these operations are replicated to the same target, one will fail because each has the same primary key.
NOOVERRIDEDUPS remains in effect until OVERRIDEDUPS is specified. OVERRIDEDUPS remains in effect for all maps specified below it until NOOVERRIDEDUPS is entered.
Records may not be processed in chronological order across Replicat processes. To bypass duplicate records without causing Replicat to terminate abnormally, specify either of the following REPERROR parameters. Even so, when duplicate errors occur, records are output to the discard file.
REPERROR (10, IGNORE)
REPERROR (-8227, IGNORE)
OVERRIDEDUPS is automatically in effect when HANDLECOLLISIONS is specified.
NOOVERRIDEDUPS
OVERRIDEDUPS | NOOVERRIDEDUPS
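Because the parameter toggles for the maps that follow it, a sketch with hypothetical file names might look like:

```
OVERRIDEDUPS
MAP $DATA1.DAT.HOTACCT, TARGET $DATA3.DAT.HOTACCT;
NOOVERRIDEDUPS
MAP $DATA1.DAT.LEDGER, TARGET $DATA3.DAT.LEDGER;
```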
Replicat
Use PARTMAP to specify alternative mapping of partitions during file creation operations. PARTMAP affects only the creation of files having secondary partitions. You can specify wildcards to map sets of disk drives on the source system to similar but not equal sets on the backup system. Specific entries (those without wildcards) take precedence when applicable over wildcard entries.
When PARTMAP is added as an independent parameter it turns on that alternative mapping only for MAP statements that follow it.
PARTMAP can be also be used as an option to a MAP statement. See "MAP".
PARTMAP source_partition_spec, target_partition_spec
source_partition_spec: The source partition specification, as a volume name or a partial volume name followed by an asterisk.
target_partition_spec: The target partition specification, as a volume name or a partial volume name followed by an asterisk.
To map secondary partitions created on $DATA1 to $D15:
PARTMAP $DATA1, $D15
GLOBALS
Use PARAMCHECK to specify whether Extract and Replicat check to ensure that the correct parameter file and process name have been specified at startup.
PARAMCHECK {YES | NO}
Extract
Use PASSTHRU and NOPASSTHRU to control whether a data-pump Extract processes files in pass-through mode or normal mode. In pass-through mode, the Extract process does not look up file and table definitions, either from the database or from a data-definitions file. In normal mode the definitions are used to perform mapping and conversion functions.
Using pass-through mode, you can cascade the captured data to a data pump on an intermediary system that has no database installed on it. Source and target file names and structures must be identical; no filtering, column mapping, SQLEXEC functions, transformation, or other functions requiring data manipulation or translation can be used.
You can use PASSTHRU to allow Extract to move data after a file is purged or renamed. By default, Extract does not output data for purged or renamed source files, because no definition is available for the file. However, there are instances when this is appropriate, as when Extract distributes data created by Logger to other systems or acts as a data pump.
To ensure your trails process correctly, you must set the PASSTHRU parameter before you set your EXTTRAIL or RMTTRAIL parameters. Later in the parameter file, you can apply NOPASSTHRU and then set up WHERE clauses or filters on files that require them. Use PASSTHRU only when Extract is distributing data already in an Oracle GoldenGate trail (LOGTRAILSOURCE/EXTTRAILSOURCE). Do not use PASSTHRU in combination with CUSEREXIT or COBOLUSEREXIT.
Enter PASSTHRU above any FILE entries that may fit this description.
NOPASSTHRU
PASSTHRU | NOPASSTHRU
The following example passes through all data from $DATA.DAT even if the source file no longer exists. Only account codes less than 100 are distributed from the ACCOUNT file.
EXTTRAIL \XYZ.$DATA3.GGSDAT.ET
PASSTHRU
FILE $DATA1.DAT.*;
NOPASSTHRU
FILE $DATA2.DAT2.ACCOUNT, DEF ACCOUNT-REC,
WHERE (ACCOUNT-CODE < 100);
Manager
For remote processes to request dynamic services, such as creating Collector processes, Manager must listen on a designated TCP/IP port. This port is defined in the parameter files of Extract groups. To specify a dynamic port, enter the RMTHOST parameter with its MGRPORT option in the Extract parameter file.
The default TCP/IP process for Manager is $ZTC0. To change the default process, before you start or restart the Manager process:
Set the TCPIPPROCESSNAME parameter in the Manager parameter file. See "TCPIPPROCESSNAME" for more information.
Set the DEFINE =TCPIP^PROCESS^NAME to the process you want. For example:
ADD DEFINE =TCPIP^PROCESS^NAME, FILE $ZTC3
None (Do not listen on a port if none is specified)
PORT port_number
port_number: The port number.
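For example, in the Manager parameter file (7809 is a commonly used Manager port, shown for illustration only):

```
PORT 7809
```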
Extract
Use POSITIONFIRSTRECORD for direct read file extraction only. POSITIONFIRSTRECORD positions Extract at the beginning of the input file to process all records again.
Use this parameter only when you are trickling data throughout the day to the target system with RMTBATCH and the records are variable length (for example, the ACI BASE24 Super files).
If you specify SYSKEYCONVERT, Extract turns on POSITIONFIRSTRECORD.
POSITIONFIRSTRECORD
Logger
PRIORITY indicates the NonStop operating system priority for the current log process. Run Logger at a sufficiently high priority to ensure that it can keep up with the application. For example, PRIORITY 190 sets the priority to 190.
Same priority as the Manager process (default of 190)
PRIORITY priority
priority: The priority level.
Replicat
Use PURGEDATAALTFILES to purge data on the alternate key files when an Enscribe PURGEDATA is received for the primary file. To not purge the data, specify NOPURGEDATAALTFILES.
PURGEDATAALTFILES
PURGEDATAALTFILES | NOPURGEDATAALTFILES
Extract and Replicat
Use PURGEOLDEXTRACTS in an Extract or Replicat parameter file to delete old trail files whenever Oracle GoldenGate starts processing from a new one. Using PURGEOLDEXTRACTS conserves disk space by preventing the accumulation of trail files.
Purging by Extract is appropriate if the process is a data pump. After the data is sent to the target system, the files can be purged. Otherwise, purging would ordinarily be done by Replicat.
Do not use PURGEOLDEXTRACTS in an Extract or Replicat parameter file if more than one Extract or Replicat is reading the same set of trail files. If multiple processes are reading the same files, one process could purge a file before another is finished with it. In this case use PURGEOLDEXTRACTS in the Manager. As a general rule, letting the Manager handle PURGEOLDEXTRACTS is preferred for all Oracle GoldenGate configurations because it provides centralized management of your trail files.
Purge the trail file when moving to the next file in the sequence.
PURGEOLDEXTRACTS
Manager
Use PURGEOLDEXTRACTS in a Manager parameter file to purge trail files when Oracle GoldenGate has finished processing them. By using PURGEOLDEXTRACTS as a Manager parameter, you can use options that are not available with the Extract or Replicat version. Allowing Manager to control the purging helps to ensure that no file is purged until all groups are finished with it.
To purge trail files, you can use the following rules:
Purge if all processes are finished with a file as indicated by checkpoints. Use the USECHECKPOINTS option. (This is the default.)
MINKEEP rules set the time or number of files to keep. Specify only one of the following three parameters.
Keep files or rows for at least a specified number of hours or days. Use the MINKEEPHOURS and MINKEEPDAYS options.
Keep at least n files including the active file. Use the MINKEEPFILES option.
The Manager process determines which files to purge based on Extract and Replicat processes configured on the local system. If at least one process reads a trail file, Manager applies the specified rules; otherwise, the rules do not take effect.
Refer to the information on managing trails in Administering Oracle GoldenGate for HP NonStop (Guardian) for recommendations on using the rules.
None (do not purge)
PURGEOLDEXTRACTS trail_name
[, USECHECKPOINTS | NOUSECHECKPOINTS]
[, min_rule]
trail_name: The trail to purge. Use the fully qualified name.
USECHECKPOINTS: Purges after all Extract or Replicat processes are finished with the file, as indicated by checkpoints.
NOUSECHECKPOINTS: Allows purging without considering checkpoints, based on keeping a minimum of one file (if no MINKEEP rule is used) or the number of files specified with a MINKEEP rule.
min_rule: Use only one of the following to set rules for the minimum amount of time to keep data.
MINKEEPHOURS num: Keeps an unmodified file for at least the specified number of hours.
MINKEEPDAYS num: Keeps an unmodified file for at least the specified number of days.
MINKEEPFILES num: Keeps at least num unmodified files, including the file being considered for purging.
The following example purges all files in the \NY.$DATA3.GGSDAT subvolume after checkpoints indicate that they are no longer needed, but keeps the files for at least 3 days.
PURGEOLDEXTRACTS \NY.$DATA3.GGSDAT.*, USECHECKPOINTS, MINKEEPDAYS 3
Trail files AA000000, AA000001, and AA000002 exist. Replicat has been down for four hours and has not completed processing. The Manager parameters include:
PURGEOLDEXTRACTS /ggs/dirdat/AA*, NOUSECHECKPOINTS, MINKEEPHOURS 2
Result: All trail files will be purged because the minimums have been met.
Manager
Use PURGEOLDTASKS to purge Extract and Replicat tasks after a specific amount of time or when they have stopped gracefully.
You can indicate when to delete a task according to the following rules:
The task was last started a specific number of days or hours ago. If the task never was started, then its creation time is used as the basis for applying the rules.
The task stopped gracefully or never was started. This rule takes precedence over the time the task was last started. Use this rule to prevent abnormally terminated tasks from being purged.
None
PURGEOLDTASKS process_type group_name [, purge_option]
process_type: Valid values:
EXTRACT
REPLICAT
ER (for both processes)
group_name: The group name or a wildcard to specify multiple groups.
purge_option: Purges if the task has not been updated for a specific number of hours or days.
Valid values:
AFTER number DAYS
AFTER number HOURS
USESTOPSTATUS: Purges if the task was stopped gracefully or never was started.
The following example deletes Extract tasks updated at least three days ago, and it deletes the INITREP Replicat task if it stopped gracefully and was updated at least two hours ago.
PURGEOLDTASKS EXTRACT *, AFTER 3 DAYS
PURGEOLDTASKS REP INITREP, AFTER 2 HOURS, USESTOPSTATUS
GLOBALS
Use PURGERESTORE to tell Extract to purge audit trail files that are restored from tape after processing them. NOPURGERESTORE leaves restored audit files on disk.
PURGERESTORE
PURGERESTORE | NOPURGERESTORE
Coordinator
Use READER to trigger the creation of a Reader process to monitor a local trail and communicate with the system Coordinator. The HOST parameter from the GLOBALS file determines the location of the object to be used. Typically this is the Oracle GoldenGate installation location, but if it varies, the PROGRAM option can be used to point to another object.
READER EXTTRAIL trail_name [, PROCESS process_name] [, CPU primary_cpu] [, BACKUPCPU cpu] [, PRI priority] [, PROGRAM program_name]
EXTTRAIL trail_name: The fully qualified identifier of the trail that the Reader will monitor. This is a required entry.
PROCESS process_name: The process name of the associated Reader.
Oracle GoldenGate recommends that you allow the process name to be generated by the system. However, if you must specify an alternative process, PROCESS process_name lets you do so.
CPU primary_cpu: Specifies the primary CPU on which the Reader process runs. The default is the CPU on which Manager runs.
BACKUPCPU cpu: Specifies an alternate CPU on which the process runs if the primary CPU becomes unavailable.
PRI priority: The NonStop priority for the process. This defaults to the priority assigned to the TACL process underlying the ADD.
PROGRAM program_name: Specifies the name of the program that Manager uses when starting the Reader process. Typically this is not entered, and Manager uses the HOST parameter in the GLOBALS file to determine the Oracle GoldenGate installation subvolume, as indicated in the GGSSUBVOL option.
The following example identifies a Reader process to monitor Oracle GoldenGate trail AA on \NY.
READER EXTTRAIL \NY.$DATA5.GGSDAT.AA, PROCESS $GGRD1, CPU 1, PRI 180
Extract
Use READTHRULOCKS to ignore record locks when performing a FETCHCOMPS or NOFETCHCOMPS operation. The default is to wait for locks to clear before reading the record. In some applications, this can help avoid a deadlock.
NOREADTHRULOCKS waits for locks to clear before reading records.
NOREADTHRULOCKS
READTHRULOCKS | NOREADTHRULOCKS
Logger
Use RECEIVEQWARN to specify a maximum number of messages to hold in queue. Oracle GoldenGate issues an EMS warning when the maximum is exceeded.
200
RECEIVEQWARN num_messages
num_messages: The message threshold for the queue.
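As an illustration (the threshold value here is hypothetical), a Logger parameter file entry that raises the warning threshold to 500 queued messages might look like this:

```
RECEIVEQWARN 500
```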
CHGNOTE
Use RENAMEBUMPDELAY to specify the number of seconds to bump the last modified timestamp on the GGSCPUnn files. By default, the last modified timestamp is changed to the current system time plus one second. If a user program that is bound with the Oracle GoldenGate Intercept library notes that GGSCPUnn has a new timestamp, it reevaluates file opens to make sure it has the proper file name. It continues reevaluating file opens until it reaches a time in the future (current time plus the bump specified with RENAMEBUMPDELAY).
You can increase the number of bump seconds into the future to address situations where the user application reevaluates file opens before an actual file rename is completed. Increasing the value will cause the user application to reevaluate file opens for a longer period. The default is the recommended setting.
1
RENAMEBUMPDELAY seconds
seconds: Can be any value from 1 through 15.
RENAMEBUMPDELAY 2
Replicat
Use REPERROR to determine how Replicat responds to record errors that occur during replication. Using REPERROR, you can handle most record errors in a default manner and specific errors differently. For example, you can ignore duplicate record errors but abort processing in all other cases.
The parameter "HANDLECOLLISIONS | NOHANDLECOLLISIONS" describes how to handle missing and duplicate record errors as special cases.
See the Oracle GoldenGate for HP NonStop Administrator Guide for information on error handling for primary key updates, where multi-staged I/O can result in partial replication for non-audited files.
ABEND
REPERROR (error, response) | RESET
The following are possible response values and options:
REPERROR (error, IGNORE | EXCEPTION | DISCARD | ABEND)
REPERROR (error, RETRYOP [, MAXRETRIES number] [, DELAYSECS seconds] [, DELAYCSECS csecs])
REPERROR (error, TRANSABORT [, MAXRETRIES number] [, DELAYSECS seconds] [, DELAYCSECS csecs])
REPERROR (error, FILEOP [, MAXRETRIES number] [, DELAYSECS seconds] [, DELAYCSECS csecs] [, ABEND | WARN | IGNORE] [, CREATE | ALTER | RENAME | PURGE | SETMODE | CONTROL | CHANGELABEL])
REPERROR (error, RETRYOPEN [, MAXRETRIES number] [, DELAYSECS seconds] [, DELAYCSECS csecs])
error: Valid values are as follows.
DEFAULT: Sets a global response to all errors except those for which explicit REPERROR statements are specified.
DEFAULT2: Signals a backup default exception handling action when DEFAULT is set to EXCEPTION. Use DEFAULT2 when an exception map is not specified for a map that experiences an error.
error_number: A Guardian or SQL error number.
response: Can be one of the following:
ABEND: Roll back the transaction and terminate processing abnormally. ABEND is the default.
DISCARD: Log the error in the discard file but continue processing the transaction and subsequent transactions.
EXCEPTION: Handle the error as an exception. In anticipation of possible errors, you can create a separate MAP statement that executes only after an error. Use this MAP statement, for example, to map columns from a failed update statement into a "missing update" table. In the parameter file, specify the map that will handle the exception after the map that failed.
FILEOP: Specifies error handling behavior for each I/O type. The default is to ABEND. You cannot specify multiple actions for a given error and operation type, nor can you specify multiple operation types on a single statement.
IGNORE: Ignore the error.
RETRYOPEN: Retry a file open error. If the error is on file open or during the mapping evaluation, RETRYOPEN retries the operation indefinitely every 60 seconds. For example, if a file does not yet exist on the target platform, an operator could create it after seeing the open error, and processing will continue uninterrupted.
RETRYOP: Retry an insert, update, or delete operation. Use the MAXRETRIES option to limit the number of retries to the value of number. For example, if Error 45 (file is full) indicates the table is out of extents, RETRYOP with MAXRETRIES gives you time to add extents so the transaction does not fail. Replicat abends after the specified number of MAXRETRIES.
To specify the length of time between attempts, set RETRYDELAY as described on "RETRYDELAY".
TRANSABORT: Abort the transaction and reposition to the beginning of the transaction. This continues either until the record(s) are processed successfully or MAXRETRIES expires. If MAXRETRIES is not set, the TRANSABORT action loops continuously. To force an infinite loop of retries, you can set MAXRETRIES to -1. Use the DELAY option to delay the retry.
The default delay between retries is 60 seconds, but this can be changed using the DELAY option.
The TRANSABORT option is useful for handling timeouts and deadlocks on databases that support those conditions.
RESET: Use RESET to remove all of the REPERROR settings made at the root level of the parameter file above the RESET.
Note:
The RESET option clears only the global error settings. It does not apply to and is not valid with REPERROR used within a MAP statement.
This example shows valid actions and operation types for the option FILEOP.
REPERROR (11, FILEOP, RENAME, WARN)
REPERROR (14, FILEOP, RENAME, ABEND)
REPERROR (48, FILEOP, RENAME, ABEND)
REPERROR (10, FILEOP, CREATE, ABEND)
REPERROR (14, FILEOP, CREATE, ABEND)
REPERROR (11, FILEOP, PURGE, ABEND)
REPERROR (14, FILEOP, PURGE, ABEND)
REPERROR (48, FILEOP, PURGE, ABEND)
REPERROR (48, FILEOP, SETMODE, IGNORE)
REPERROR (48, FILEOP, CONTROL, IGNORE)
REPERROR (48, FILEOP, CHANGELABEL, ABEND)
This example aborts processing for most file errors, but ignores duplicate record errors for both SQL tables (error -8227) and Enscribe files (Error 10).
REPERROR (DEFAULT, ABEND)
REPERROR (-8227, IGNORE)
REPERROR (10, IGNORE)
REPERROR (11, RETRYOPEN)
REPERROR (45, RETRYOP, MAXRETRIES 50)
This example invokes an exception map for all errors on the ACCOUNT table. Errors on other tables cause Replicat to abend (due to the absence of an exception map for those tables).
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
-- The following MAP has no exception handler, so errors cause Replicat to
-- abend.
MAP $DATA1.DAT.PRODUCT, TARGET PRODUCT;
-- The following MAP has an exception handler, so errors will
-- cause Replicat to insert problem information into the target table.
MAP $DATA1.DAT.ACCOUNT, TARGET ACCOUNT;
INSERTALLRECORDS
MAP $DATA1.DAT.ACCOUNT, TARGET ACCOUNT_EXCEPTION,
EXCEPTIONSONLY,
COLMAP (ACCOUNT_NO = ACCOUNT_NO,
OPTYPE = @GETENV ("LASTERR", "OPTYPE"),
DBERR = @GETENV ("LASTERR", "DBERRNUM"),
DBERRMSG = @GETENV ("LASTERR", "DBERRMSG"));
Extract, Replicat
Use REPLACEBADCHAR to substitute a specified value whenever an unprintable character is encountered during mapping or ASCII formatting.
REPLACEBADCHAR applies globally. If data is ASCII formatted and transported to a platform other than NonStop, you must specify the -r parameter in the Collector startup command.
No replacement when mapping data; Replace with spaces when formatting ASCII data
REPLACEBADCHAR char
[, SPACE]
[, NULL]
[, NONE]
[, UNPRINTABLE]
char: A character value with which to replace unprintable character bytes.
SPACE: Replaces unprintable character values with spaces.
NULL: Replaces the field with NULL whenever possible; otherwise, replaces the field with spaces.
NONE: Suppresses transformation of double-byte character set values to default characters when the incoming data is converted to ASCII format. This is useful when delivering double-byte character sets.
UNPRINTABLE: When mapping data, rejects character fields or columns with unprintable data.
When formatting data in ASCII or SQL output, outputs the word "UNPRINTABLE" or a question mark for the field or column.
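As a hypothetical illustration (assuming the literal-character form of char), the following entry substitutes a question mark for each unprintable byte encountered during mapping or ASCII formatting:

```
REPLACEBADCHAR ?
```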
Extract, Replicat
Use REPLACEBADNUM to specify a value to substitute whenever a numeric field that contains non-numeric data is encountered during mapping or ASCII formatting. This is most often seen in Enscribe data when, for example, a PIC 9(4) field contains spaces.
REPLACEBADNUM applies globally. If data is ASCII formatted and transported to a platform other than NonStop, you need to specify the -R parameter in the Collector startup command.
Replace invalid numbers with zero when both mapping and formatting data
REPLACEBADNUM number [, NULL] [, UNPRINTABLE]
number: Replaces the invalid field with a specified value.
NULL: Replaces the field with NULL whenever possible; otherwise, replaces the field with zero.
UNPRINTABLE: When mapping data, rejects invalid numeric fields.
When formatting invalid data in ASCII or SQL output, outputs a zero instead.
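For illustration (the substitute value is hypothetical), the following entry replaces any non-numeric data found in a numeric field with zero:

```
REPLACEBADNUM 0
```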
Replicat
Use REPLICAT to link the current run with a Replicat group for continuous processing of trails. The group's checkpoints ensure that each record is processed exactly once.
To use REPLICAT, first establish the Replicat group with the GGSCI ADD REPLICAT command.
REPLICAT group_name
group_name: The Replicat group name.
REPLICAT FINANCE
Replicat
Use REPNEWCOLUMNS to tell Replicat to deliver SQL ALTER TABLE ADD COLUMN statements from the source database to the target database. REPNEWCOLUMNS can be applied on a table-by-table basis.
NOREPNEWCOLUMNS
REPNEWCOLUMNS | NOREPNEWCOLUMNS
The following example delivers new columns to TABLE1, TABLE2, and TABLE4, but not TABLE3.
REPNEWCOLUMNS
MAP \LA.$DATA1.DAT.TABLE1, TARGET \NY.$DATA1.DAT.TABLE1;
MAP \LA.$DATA1.DAT.TABLE2, TARGET \NY.$DATA1.DAT.TABLE2;
NOREPNEWCOLUMNS
MAP \LA.$DATA1.DAT.TABLE3, TARGET \NY.$DATA1.DAT.TABLE3;
REPNEWCOLUMNS
MAP \LA.$DATA1.DAT.TABLE4, TARGET \NY.$DATA1.DAT.TABLE4;
Extract, Replicat
Use REPORT to specify times when a report is written to the report file. The report details the number of records extracted per file and table.
END
REPORT [ AT hh:mm | ON day_of_week | AT hh:mm ON day_of_week | END ]
AT hh:mm: The time of day. Using AT without ON generates a report at the specified time every day.
ON day_of_week: The day of the week. Valid values are SUNDAY through SATURDAY.
END: The end of the run.
The following example generates a report every day at 5:00 pm and every Sunday at 1:00 am.
REPORT AT 17:00
REPORT ON SUNDAY AT 1:00
Extract, Replicat
Use REPORTCOUNT to write a count of records processed since the beginning of processing. Record counts can be output at scheduled intervals or after a specified number of records. Only the last REPORTCOUNT entry in the parameter file is used.
REPORTCOUNT [EVERY] count {RECORDS | SECONDS | MINUTES | HOURS}
[EVERY]: Specifies that a frequency follows.
count: The interval after which to write the count.
RECORDS | SECONDS | MINUTES | HOURS: Bases the interval on either the number of records or a time schedule.
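For example, either of the following hypothetical entries produces periodic record counts, the first after every 100,000 records processed, the second every 10 minutes:

```
REPORTCOUNT EVERY 100000 RECORDS
REPORTCOUNT EVERY 10 MINUTES
```

Remember that only the last REPORTCOUNT entry in the parameter file is used.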
GLOBALS, Extract, Replicat
Use REPORTFILEEXTENTS to set the primary, secondary, and maximum file extents for the report file. This can be set at the global level, or in an Extract or Replicat parameter file. If you do not set it in the GLOBALS parameter file, your settings will not take effect until the second report file opens. The first report will contain default values since the report file is opened before an Extract or Replicat parameter executes.
Primary is 32; Secondary is 32; Maximum is 900
REPORTFILEEXTENTS (primary, secondary, maximum)
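As a sketch (the extent sizes shown are illustrative, not recommendations), a GLOBALS entry that enlarges the report file extents might look like this:

```
REPORTFILEEXTENTS (64, 64, 900)
```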
GLOBALS, Replicat, Syncfile
Use REPORTROLLOVER to specify times at which the current report file is aged and a new one is created. Old report files are renamed group_name0, group_name1, etc.
REPORTROLLOVER {AT time | AT time ON day_of_week}
AT time: The time of day. If AT is specified without ON, a new report file is created at the given time every day.
ON day_of_week: SUNDAY through SATURDAY.
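For illustration (the times are hypothetical), the first of the following entries rolls the report file over every day at 1:00 a.m.; the second rolls it over only on Sundays at that time:

```
REPORTROLLOVER AT 01:00
REPORTROLLOVER AT 01:00 ON SUNDAY
```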
Replicat
Use REPORTTMFEXCEPTIONS for debugging purposes, generally in conjunction with Oracle GoldenGate Technical Support. REPORTTMFEXCEPTIONS reports when Replicat detects a TMFEXCEPTIONS type of condition. Messages are written to the report file in the following format:
location_id FILE target_file_name POS (seqno, rba)
Values for location_id include:
A duplicate record that occurred on insert was placed on the list of records to be tracked.
A TMF backout delete record was matched to a duplicate insert (so no processing is required and the duplicate error is no longer tracked).
A delete record matched to out-of-order duplicate insert
An update record matched to correct a duplicate record
Attempting to process record from missing update list
Attempting to process record from duplicate list
A missing update/delete record was placed on list
An insert record was overlaid with an update/delete from the missing updates list
When applying an update record to a record on the duplicate list, a column in the source update was not present in the record on the duplicate list. This column was added to the target record.
An update record was overlaid with a record on the duplicate list.
A delete record was matched to an out-of-order duplicate insert
The attempt to process the updated duplicate record failed.
The timer expired before the reversed situation was corrected.
OFF
REPORTTMFEXCEPTIONS {OFF | ON}
Replicat
Use REPSQLLOG to redirect the log to a file other than the Replicat report file. The default is to write the SQL log to the report file.
If you specify the SHOWSYNTAX option of the MAP parameter, the text of the SQL statement is displayed before the statement is executed.
You can also redirect the SQL log with the define =GGS_REPSQLLOG.
APPEND
REPSQLLOG file_name [APPEND | PURGE | !]
file_name: The fully qualified path and file name of the alternative file, such as $data1.sqllog.jsr01. The file_name cannot be a structured file or SQL table; it can be a spooler, a process, a terminal, or an EDIT file.
APPEND: The default. Appends current log information to an existing file.
PURGE: Purges any existing file at startup.
REPSQLLOG $data1.sqllog.jsr01
Replicat
Use RESTARTCOLLISIONS to enable HANDLECOLLISIONS until the first checkpoint is finished. After the first checkpoint is finished, RESTARTCOLLISIONS is turned off.
NORESTARTCOLLISIONS
RESTARTCOLLISIONS | NORESTARTCOLLISIONS
Manager
Use RESTARTINTERVAL to reinitialize the restart count specified by the MAXABENDRESTARTS parameter.
When you specify MAXABENDRESTARTS count, the value of count is reset by default every 20 minutes if the process continues to run without abending. Specify RESTARTINTERVAL if you want to reinitialize at a different interval.
If AUTORESTART with RESETMINUTES has been set, RESTARTINTERVAL will be ignored.
For more information on using MAXABENDRESTARTS for restarting processes, see "MAXABENDRESTARTS".
20
RESTARTINTERVAL minutes
minutes: The number of minutes a restarted process must run successfully before the restart count specified by the MAXABENDRESTARTS parameter is reset.
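For example (the values here are hypothetical), the following Manager entries restart an abended process up to five times and reset that count only after the process has run cleanly for 60 minutes:

```
MAXABENDRESTARTS 5
RESTARTINTERVAL 60
```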
Extract
Use RESTORE to reload audit dumps on tape when the dumps are not available on disk. Use NORESTORE to avoid reloading tapes.
RESTORE
RESTORE | NORESTORE
Replicat
Use RETRYDELAY to specify the delay between attempts at retrying a failed insert, update, or delete operation. Set the retry attempts with REPERROR RETRYOP, described under "REPERROR".
RETRYDELAY seconds
seconds: The number of seconds between retry attempts.
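As an illustration (the error number and counts are examples only), the following Replicat entries retry a file-full error up to 50 times, waiting 30 seconds between attempts:

```
REPERROR (45, RETRYOP, MAXRETRIES 50)
RETRYDELAY 30
```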
Extract
Use RETRYERR to retry errors encountered while outputting records. For example:
Extracted data is output to another node, and the network experiences a temporary problem. Rather than ending abnormally, you could specify a periodic retry of specific network-related error numbers (such as 250).
A file in the trail fills up prematurely (error 45). Messages indicating the problem alert an operator to increase MAXEXTENTS on the file, after which processing would resume automatically.
RETRYERR {error_num | DEFAULT | EXPAND}
[, MAXRETRIES retries]
[, DELAY seconds]
error_num: Retries a particular error number.
DEFAULT: Retries Guardian errors not specifically dealt with in other RETRYERR entries.
EXPAND: Retries EXPAND-related errors. This does not cover TCP/IP-related problems.
MAXRETRIES retries: The number of retries before the process ends abnormally (default is 10). Set retries to zero to turn off retries for specific errors when a DEFAULT has been specified.
DELAY seconds: The number of seconds before the next retry (default is 20).
The following example retries Error 45 every 30 seconds a maximum of 10 times, and retries Expand errors every minute for an hour.
RETRYERR 45, DELAY 30, MAXRETRIES 10
RETRYERR EXPAND, DELAY 60, MAXRETRIES 60
Replicat
In both TMF and non-TMF applications, an anomaly can occur in which an insert and subsequent update to the same record appear in reverse order in the log trail or extract trail. When processing such reversed data, Replicat processes the update first and this results in a "record not found" error. TMFEXCEPTIONS can be used to hold the update in anticipation of the insert. With TMF audit trails, Replicat will hold such transactions until the end of the TMF transaction. With non-TMF audit trails, however, there is no TMF transaction, so the number of seconds or centiseconds needs to be set using one of the REVERSEWINDOW parameters.
REVERSEWINDOW enables Replicat to hold the transactions for a specified number of seconds or centiseconds (the reverse window) without attempting to process them. If a subsequent insert is found within the reverse window, the insert is applied to the target file followed by the update and no error is recorded. If the reverse window expires without finding a matching insert, the process abends or discards the transaction depending on the selected option.
REVERSEWINDOWSECS or REVERSEWINDOWCSECS alone will not trigger Replicat to hold exception transactions. For this to work you must also:
Set TMFEXCEPTIONS on for that file.
Use NOAUDITREPS to ensure a non-audited target.
If both REVERSEWINDOW and TMFEXCEPTIONS are specified, but NOAUDITREPS is not, there will be a message that REVERSEWINDOWSECS is ignored. TMFEXCEPTIONS will still be applied.
Note:
REVERSEWINDOWSECS should only be used when the source is a non-TMF audited database. It should not be used when the source is a TMF audit trail.
ABENDEXPIREDWINDOW
{REVERSEWINDOWSECS seconds | REVERSEWINDOWCSECS cseconds}
[ABENDEXPIREDWINDOW | DISCARDEXPIREDWINDOW]
REVERSEWINDOWSECS seconds: Specifies the wait time in seconds. The range is 1 to 30 seconds and the recommended setting is 1.
REVERSEWINDOWCSECS cseconds: Specifies the wait time in centiseconds. The range is 10 to 100 centiseconds and the recommended setting is 10.
ABENDEXPIREDWINDOW: Causes Replicat to abend at the expiration of the wait time.
Note:
Using this option will cause Replicat to checkpoint to the current position in the trail.
DISCARDEXPIREDWINDOW: Causes Replicat to discard the expired record and continue. This may be the preferred option if it is important to minimize Replicat down time and file inconsistencies can be corrected at a later time.
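Pulling the dependencies together, a sketch of a Replicat configuration for a non-TMF source might look like the following. The file names are hypothetical, and the TMFEXCEPTIONS ON form shown here is an assumption; consult the TMFEXCEPTIONS entry for the exact placement required for your files.

```
TMFEXCEPTIONS ON
NOAUDITREPS
REVERSEWINDOWSECS 1, DISCARDEXPIREDWINDOW
MAP $DATA1.DAT.ACCT, TARGET $DATA2.DAT.ACCT;
```

This holds an out-of-order update for up to one second waiting for its insert, and discards it rather than abending if the window expires.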
Extract
Use RMTBATCH to specify the name of a remote target file to which Extract writes extracted records. RMTBATCH is like RMTFILE in that it specifies the name of a remote target file. However, rather than outputting the data in an Oracle GoldenGate proprietary format, Extract outputs the data to a batch file on the target system. The batch file name is determined by a combination of the target file name and the source file name.
Using this feature, you can transfer the batch file contents a little at a time throughout the day ("trickle" transfer), rather than all at once at the end of the day.
Use wildcards to accommodate different file names with a single RMTBATCH parameter entry.
RMTBATCH enables full restart capabilities without loss or repetition of data. This is enabled using Extract checkpoints (GGSCI ADD EXTRACT command).
When using direct read functions to transfer batch files with RMTBATCH, use the following Extract parameters whenever the record length is variable and the source file type is entry-sequenced:
SYSKEYCONVERT USEBYTEPOSITION
POSITIONFIRSTRECORD
With these parameters, Extract can build the target file in all cases, including after a recovery.
If SYSKEYCONVERT is specified, you cannot mix RMTBATCH activity with standard extraction of data for replication.
To create OS/390 QSAM files, you can specify certain file creation parameters in a configuration file on the OS/390 platform or using RMTBATCH.
RMTBATCH has the following dependencies:
You must include at least one EXTFILE, EXTTRAIL, RMTFILE, RMTTRAIL, or RMTBATCH entry in your parameter file.
RMTBATCH must precede the names of files and tables containing data you want to write to the remote file. Precede the RMTBATCH entry with a RMTHOST entry to identify the target computer and TCP/IP port.
RMTBATCH destination_file_spec [, PARAMS param [, param, ...]];
Note:
See also the information on RMTBATCH for QSAM on "RMTBATCH for QSAM".
destination_file_spec: Either a file name, or a file name combined with wildcard characters. The wildcard characters are * and %. The name of the source file is substituted for each occurrence of * in the file specification, and the name of the source subvolume is substituted for each occurrence of %.
PARAMS param: param can be any one of the following:
AUTOTRUNCATE: Executed by Extract or Collector, depending on its placement on the command line.
If AUTOTRUNCATE is specified before the parameters are defined, Extract deletes the contents of the batch file when it positions to the first record. When you periodically move the same file to the target system, use this parameter to delete previous data from the file.
If AUTOTRUNCATE is specified after the parameters are defined, Collector deletes the contents of the batch file when it positions to the first record.
BLKSIZE block_size: Indicates the block size of the source file (default is 4096). This option is required when the source file is entry-sequenced and its block size is less than 4096.
CLOSEWIN minutes: Closes the target file after minutes of inactivity.
CLOSEACTION "system_command": Submits a shell script or system command to the system upon file close, when the close was a result of completed processing of the corresponding source file (indicated by a "file close on source" message). Extract waits for the command to complete. To return control to Extract immediately, specify a nowait type operation in the script or command (as with the & in UNIX).
FTYPE {E | K | R | U}: Identifies the source file type to the Collector (E is entry-sequenced, K is key-sequenced, R is relative, U is unstructured). The default is E, entry-sequenced.
FTYPE is used by the Collector to compute an appropriate record position in the target file (which is always unstructured).
Because keys are identified differently for different file types, this parameter is required if the file type is relative or unstructured.
RECLEN record_length: Identifies the length in bytes of each record in the target file. Use this to pad records that are shorter than record_length to a constant length. Records are padded with spaces.
Note:
All RMTBATCH statements must end with a semi-colon (;).
Assume the following configuration:
RMTHOST ggs2, MGRPORT 7809
RMTBATCH /usr/ggsdat/%_*, PARAMS FTYPE E, BLKSIZE 2048, RECLEN 1916;
FILE $DATA2.GGS.TL*;
Also assume that a new TL file is created every day, with the naming convention of TLyymmdd.
On April 5, 2010, the file $DATA2.GGS.TL100405 is created. On April 6, $DATA2.GGS.TL100406 is created. On the target system, the following files are created, each containing data from the corresponding source file:
/usr/ggsdat/GGS_TL100405
/usr/ggsdat/GGS_TL100406
FTYPE E identifies each source file as entry-sequenced; BLKSIZE specifies the block length of those files as 2048; and RECLEN 1916 outputs all records as 1916 bytes by padding short records with spaces.
The following parameters are for QSAM files only:
TARGETFTYPE {QSAMV | QSAMF}: Mandatory parameter for QSAM files. Specify QSAMV for variable-length MVS QSAM files or QSAMF for fixed-length MVS QSAM files.
TARGETRECLEN record_length: Mandatory parameter for QSAM files. Specify the length in bytes of a fixed-length record or the maximum length in bytes for variable-length records. record_length can be from 1 to 32760.
TARGETBLKSIZE block_length: Parameter for QSAM files. Specify a multiple of the record length for fixed-length records, or the maximum record length +4 for variable-length records. block_length can be from 1 to 32760.
VOLUME serial_number [, serial_number, ...]: Mandatory parameter for QSAM files. Identifies the serial numbers of the volumes on which the target data set will reside. Specify up to 255 volume serial numbers.
SPACE (primary_quantity, secondary_quantity, unit): Mandatory parameter for QSAM files. Requests space for a new data set.
primary_quantity: The number of tracks, cylinders, or blocks to be allocated for a new data set.
secondary_quantity: The number of additional units to allocate if more space is needed.
unit: The units in which space is requested: TRK for tracks, CYL for cylinders, BLK for blocks.
For examples of QSAM parameters, see "QSAM Configuration Parameters".
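Combining the options above, a hypothetical RMTBATCH entry for a fixed-length QSAM target data set might look like the following; the data set name, volume serial, and space values are illustrative only:

```
RMTBATCH GGS.ACCT.DAILY, PARAMS TARGETFTYPE QSAMF, TARGETRECLEN 100, TARGETBLKSIZE 1000, VOLUME VOL001, SPACE (10, 5, TRK);
```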
RMTBATCH and Generation Data Groups
Use RMTBATCH to create and/or access IBM QSAM Generation data sets within Generation Data Groups (GDG). This feature catalogs successive updates, or generations, of data sets within a GDG. It is typically used on BASE24 Extract and Super Extract output files, but can be used on any entry-sequenced file type.
Generation data sets have sequentially ordered absolute and relative names that represent their age. (Smaller absolute names indicate older data.) A relative name is a signed integer indicating the latest generation (0), next to latest generation (-1), etc. The relative number +1 creates a new generation each time it is used.
Use the following parameters with RMTBATCH to create or access your generation data sets.
LIKE: Copies the allocation attributes of a model data set to specify the allocation attributes of a new data set. The model data set must be an existing, cataloged data set.
UNIT: Asks the system to place the data set on a specific device, device type, or group of devices as another data set.
DCBSN (-DCB model data set): Names a cataloged data set. Instructs the system to copy data control block information from the data set's label.
To create a generation data set using RMTBATCH:
1. Create a Generation Data Group index using IDCAMS.
2. Create an Extract group and parameter file.
3. Create a RMTBATCH parameter file as follows:
- Specify your GDG file name, then a relative number for the GDG file (a negative integer, a positive integer, or zero).
- Add the -P parameter.
- Add an Extract as a file type entry.
The following is a sample Extract parameter file containing a RMTBATCH statement.
EXTRACT EXTTLF1
DISCARDFILE $DATA1.DISCARD.EXTFLT1
SYSKEYCONVERT USEBYTEPOSITION
ALTINPUT OPENTIMEOUT 5, TEMPLATE $DATA01.EXTRACT.P*, USENEXTMODIFIED
POSITIONFIRSTRECORD
RMTHOST OS390, MGRPORT 7809, params "-c ON -P qsamvb.prm"
RMTBATCH file.acct(+1), params ftype u, closewin 2;
File $DATA01.EXTRACT.P*;
Extract
Use RMTFILE when the remote Oracle GoldenGate trail is a flat file.
All FILE and TABLE entries after the current entry but before the next trail parameter (EXTFILE, EXTTRAIL, RMTBATCH, RMTFILE, RMTTRAIL) result in output to the current trail.
Remote Oracle GoldenGate trails are used over TCP/IP connections only. Do not specify an Expand node name in the file_name, even if the remote system is also connected with Expand. To specify a trail on a different NonStop node over an Expand connection, use the EXTFILE or EXTTRAIL parameters.
RMTFILE must be preceded by a RMTHOST statement, and it must precede any TABLE, FILE or MAP statements.
You must include at least one EXTFILE, EXTTRAIL, RMTBATCH, RMTFILE or RMTTRAIL entry in your parameter file. It is required for initial loads that use a trail.
RMTFILE file_name [, PURGE | APPEND] [, EXTENTS (primary, secondary, maximum)] [, MAXFILES num_files] [, MEGABYTES megabytes] [, OWNER group_number, user_number] [, SECURE "rwep"]
file_name: The remote file on the remote system.
PURGE | APPEND: Purges file_name before capturing data, or appends the extracted data to file_name.
EXTENTS (primary, secondary, maximum): Defines the primary, secondary, and maximum extents when the target is a NonStop system.
MAXFILES num_files: Enables a sequence of files to be created on the target system, rather than a single file. When MAXFILES is specified, a six-digit sequence number is appended to the end of file_name for each file. MAXFILES does not actually limit the number of files created. When using MAXFILES, also specify MEGABYTES to explicitly set the maximum size of each file in the sequence.
MEGABYTES megabytes: Sets the maximum size of the file, or of each file in the sequence if MAXFILES is specified. The maximum size for each file is 2 gigabytes.
OWNER group_number, user_number: Determines which user group owns the RMTFILE.
SECURE "rwep": Sets the standard Guardian security for read, write, execute, and purge operations on the RMTFILE.
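As a sketch (the file name and sizes are illustrative), the following RMTFILE entry requests a sequence of remote files, each capped at 500 megabytes:

```
RMTFILE /ggs/dirdat/sales, PURGE, MAXFILES 10, MEGABYTES 500
```

With MAXFILES in effect, a six-digit sequence number is appended to each file name in the sequence.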
The following example extracts data from SALES to a UNIX or Windows system, depending on region. Collectors are started dynamically by Manager. The Manager port for NY is the default, port 7809. To resolve any possible conflict, the LA port number is explicitly identified. Note that the TCPIPPROCESSNAME is also specified (the default is $ZTC0).
TCPIPPROCESSNAME $ZTC4
RMTHOST NY, MGRPORT 7809
RMTFILE /usr/dat/sales1
TABLE $DATA5.SALES.ORDERS WHERE (REGION = "EAST");
TABLE $DATA5.ACCTING.RECEIPTS WHERE (REG = "E");
RMTHOST LA, MGRPORT 7888
RMTFILE d:\dat\wstsales
TABLE $DATA5.SALES.ORDERS WHERE (REGION = "WEST");
TABLE $DATA5.ACCTING.RECEIPTS WHERE (REG = "W");
Extract
Use RMTHOST to identify the target system and TCP/IP port number for subsequent RMTBATCH, RMTFILE, or RMTTRAIL entries. You can make one or more RMTHOST entries to identify multiple target nodes. Only one entry is active at any time.
There are multiple ways to specify the TCPIPPROCESSNAME when the NonStop TCP/IP process name is not $ZTC0. When assigning a process name, the system selects the option to use based on this hierarchy:
1. The TCPIPPROCESSNAME option of the RMTHOST parameter in the Extract parameter file.
2. A stand-alone TCPIPPROCESSNAME entry in the Extract parameter file, before the RMTHOST entry.
3. A DEFINE =TCPIP^PROCESS^NAME added before the Extract process was started.
4. The default, $ZTC0.
By default, Extract uses STREAMING and does not wait for a response from the remote host Collector before sending the next message. With STREAMING, Extract waits for a response only when it needs to checkpoint. It can send multiple megabytes of data without waiting, which can improve throughput.
Extract checks the remote Collector when using STREAMING. If the Collector does not support streaming, Extract reverts to waiting for a response before sending the next message. To always wait for a response before sending, specify NOSTREAMING.
Note:
RMTTASK does not allow STREAMING. If the RMTTASK parameter is used with the STREAMING option, STREAMING is disabled.
Identifying Static and Dynamic Collectors
RMTHOST can be used to identify a dynamic Collector that is started by the Manager or a static Collector that is defined and started by the user.
When using the static Collector method, the target computer must be running the Collector program before starting Extract.
When using the dynamic collector method, Extract requests Manager on the remote system to start a Collector process. In this case, Collector terminates itself when the connection is lost or Extract terminates. If network problems occur, Extract and Manager coordinate restart and retry efforts.
RMTHOST {host_name | ip_address}
[, PORT port_number | MGRPORT port_number]
[, IPINTERFACE ip_address]
[, STREAMING | NOSTREAMING]
[, TCPIPPROCESSNAME process_name [@ip_address]]
[, USEIPV4ONLY]
[, dynamic_options]
host_name: The remote host name. If host_name is specified, do not specify ip_address.
ip_address: The IP address of the host. Either version (IPv4 or IPv6) of the internet protocol address is acceptable. If ip_address is specified, do not specify host_name.
PORT port_number: An optional argument specifying the port for a static Collector. The default port is 7819.
MGRPORT port_number: Used when Manager starts a dynamic Collector. Can be DEFAULT or the port number on which Manager is listening. The default port number is 7809. Do not specify a Collector port when Manager dynamically starts the Collector.
IPINTERFACE ip_address: Restricts Extract to the specified IP address.
STREAMING | NOSTREAMING: Specifies whether Extract waits for a response from the Collector before sending the next message.
STREAMING: Extract waits for a response only when it needs to checkpoint; otherwise it does not wait before sending the next message. STREAMING is the default.
NOSTREAMING: Turns off STREAMING, causing Extract to wait for a response before sending each message.
TCPIPPROCESSNAME process_name [@ip_address]: process_name restricts Extract to the specified process name. The optional @ip_address can be used instead of IPINTERFACE to force the process to bind to a specified IP address.
USEIPV4ONLY: Forces the use of version 4 of the internet protocol (IPv4) when both IPv4 and IPv6 protocols are configured and compatible.
dynamic_options: Specifies options for a dynamic Collector. Can be any of the following:
CPU cpu_num: The CPU in which Manager starts the Collector process when NonStop is the target system.
COMPRESS | NOCOMPRESS: Compresses outgoing blocks of Extract records. The destination Collector decompresses the data stream before writing it to the remote file or remote trail. This typically yields compression ratios of at least 4:1, and sometimes much better. However, compression can require significant CPU resources.
COMPRESSTHRESHOLD: Sets the minimum block size for which compression occurs. The default is 1,000 bytes.
ENCRYPT type: Encrypts the data stream sent to the remote host. The destination Collector decrypts the data stream before writing it to the remote file or remote trail. For details on the encryption algorithm, contact Oracle Support.
type is one of NONE, GGS, or BLOWFISH. NONE specifies no encryption. GGS specifies a default key generated by Oracle GoldenGate. BLOWFISH specifies a user-defined key.
The destination Collector must be version 5.40 or later.
KEYNAME keyname: The logical name of an encryption key to look up in the ENCKEYS file. KEYNAME is required when ENCRYPT is set to BLOWFISH. The KEYNAME value is validated at startup of Extract or Collector; if it is invalid, the process abends with an error message.
PARAMS "collector_parameters": See the parameter description in "Collector Parameters". When the target system is NonStop, this parameter is rarely required.
PRI priority: The NonStop system priority at which Manager starts the Collector process when the target is a NonStop system. Consult the NonStop documentation for more information about NonStop priorities.
NAME $process: Requests that Manager start a specific process as part of the RMTHOST startup request.
The following examples use different RMTHOST options to control the IP address.
The following example sets an IP address using the TCPIPPROCESSNAME process_name@ option.
RMTHOST host01, MGRPORT 12345, &
TCPIPPROCESSNAME $ztc1@2001:db8:2010:5040:4fff:ffff:ffff:28
The following example sets an IP address using the IPINTERFACE option.
RMTHOST host01, MGRPORT 12345, &
IPINTERFACE 2001:db8:2010:5040:4fff:ffff:ffff:28
The following example forces use of the IPv4 address when both versions are configured.
RMTHOST host01, MGRPORT 12345, USEIPV4ONLY
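The following hypothetical entry combines dynamic-Collector options, compressing the data stream and encrypting it with a user-defined key (the key name mykey1 is an assumption and must exist in the ENCKEYS lookup file):

```
RMTHOST host01, MGRPORT 7809, COMPRESS, ENCRYPT BLOWFISH, KEYNAME mykey1
```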
Extract
Use RMTHOSTALT to identify an alternative IP address and Collector in case the primary address or Collector cannot be reached. After n retries of any TCP/IP error (the default is five), Extract attempts to connect and send data over the connection specified by RMTHOSTALT.
Specify RMTHOSTALT immediately after the corresponding RMTHOST entry. RMTHOSTALT applies only to the previous RMTHOST. Only one RMTHOSTALT entry is allowed per RMTHOST entry.
RMTHOSTALT requires a separate Collector to be running that will accept data in the event that the RMTHOST becomes unavailable.
You can make one or more RMTHOST entries to identify multiple target nodes. Only one entry is active at any time.
If a switch is made to the alternate address and more errors occur, attempts are made to switch back to the primary path.
RMTHOSTALT does not support dynamically created Collectors. To use RMTHOSTALT, you must specify the RMTHOST PORT number explicitly (the static method).
RMTHOSTALT {host_name | ip_address}, PORT port_number
[, IPINTERFACE ip_address]
[, TCPIPPROCESSNAME process_name [@ip_address]]
[, USEIPV4ONLY]
host_name: The host name. If ip_address is specified, do not specify host_name.
ip_address: The IP address. Either version (IPv4 or IPv6) of the internet protocol address is acceptable. If host_name is specified, do not specify ip_address.
port_number: The port number. There is no default port for RMTHOSTALT. The port entered here is required and must match the port on which the target Collector is listening.
IPINTERFACE ip_address: Restricts Extract to the specified IP address.
TCPIPPROCESSNAME process_name [@ip_address]: Restricts Extract to the specified process name. The optional @ip_address restricts Extract to the specified IP address.
USEIPV4ONLY: Forces the use of version 4 of the internet protocol (IPv4) when both IPv4 and IPv6 protocols are configured and compatible.
The following example causes data to be output over TCP/IP process $ZTC0 to the system designated by 192.0.2.1 on port 7830. If errors are encountered, and three consecutive retries fail, data is output over TCP/IP process $ZTC1 to the system designated by 192.0.2.2 on port 7831.
TCPIPSWITCHERRS 3
RMTHOST 192.0.2.1, PORT 7830, TCPIPPROCESSNAME $ZTC0
RMTHOSTALT 192.0.2.2, PORT 7831, TCPIPPROCESSNAME $ZTC1
RMTTRAIL $DATA5.GGSDAT.RT
Extract
Use RMTTASK to create a one-time processing task on the target system. RMTTASK directs the Manager process on the target system to start a Replicat process automatically to load the data, and then to stop it when the task is finished. This parameter is used for initial data loads.
A RMTHOST statement must precede the RMTTASK statement. RMTTASK is required for batch tasks.
RMTTASK task_name, GROUP group_name [, PARAMS "runtime_params"]
task_name: Enter REPLICAT, currently the only task on the remote system.
GROUP group_name: The name of the initial-load Replicat group on the target system. This option defines the target Replicat group that processes the load data.
PARAMS "runtime_params": Optional. Specifies runtime parameters to pass to the initial-load Replicat process, other than standard parameters (such as the report file, parameter file, and group name), which are supplied automatically.
In the following example, records from the PRODUCT and ACCOUNT tables are sent to initial-load tasks on target system NY.
EXTRACT EXTTASK
RMTHOST NY, MGRPORT 7809
RMTTASK REPLICAT, GROUP tabload
TABLE $DATA.MASTER.PRODUCT;
TABLE $DATA.MASTER.ACCOUNT;
Extract
Use RMTTRAIL to establish the current remote trail on a remote system connected by TCP/IP. All FILE and TABLE entries after the current entry but before the next Extract file parameter (EXTFILE, EXTTRAIL, RMTBATCH, RMTFILE, RMTTRAIL) result in output to the current trail.
The remote trail must correspond with a RMTTRAIL entry created with GGSCI. The Extract group in the parameter file must match the Extract entry linked with the RMTTRAIL. For more information about adding remote Oracle GoldenGate trails, refer to "ADD RMTTRAIL".
RMTTRAIL file_prefix
file_prefix: The path name of the remote file. A six-character sequence indicator is added to each file in the remote trail to make up individual file names.
Remote Oracle GoldenGate trails are used over TCP/IP connections only. Do not specify an Expand node name in the file_prefix, even if the remote system is also connected with Expand. To specify a trail on a different NonStop node over an Expand connection, use the EXTFILE or EXTTRAIL parameters.
The following example assumes that an Extract group FINANCE was established with GGSCI. The example also assumes that two remote Oracle GoldenGate trails were created in GGSCI and associated with EXTRACT FINANCE. This configuration sends ACCOUNTS records to the XX trail, and ORDERS to the YY trail. The parameter file includes the following commands:
EXTRACT FINANCE
RMTHOST 192.0.2.2, MGRPORT 7809
RMTTRAIL /ggs/dirdat/XX
FILE $DATA2.FINANCE.ACCOUNTS;
RMTTRAIL /ggs/dirdat/YY
FILE $DATA3.FINANCE.ORDERS;
Extract
Use ROLLOVER to specify times at which local or remote trails automatically roll over, in sequence. Each ROLLOVER rule applies to all Oracle GoldenGate trails and remote trails.
Use ROLLOVER to extract data continuously but create files representing distinct periods of time (for example, each day). Also, to send extracted data to external systems periodically, use ROLLOVER to cut off one file and start the next.
Files roll over only between the output of distinct transactions, not in the middle of a transaction. Checkpoints are recorded as soon as files roll over to ensure the previous files are no longer required by Extract.
Rollover occurs only if the rollover conditions are met during the run. For example, if ROLLOVER ON TUESDAY is specified and Extract starts on Tuesday, rollover does not occur until the following Tuesday (unless other ROLLOVER rules are specified).
You can specify up to thirty rollover rules.
ROLLOVER [AT hh:mm] [ON day_of_week] [AT hh:mm ON day_of_week] [AT END] [REPORT]
AT hh:mm: The time of day. Using AT without ON causes a rollover at the specified time every day.
ON day_of_week: The day of the week. Valid values are SUNDAY through SATURDAY.
AT END: Rolls the file over at a STOP requested from GGSCI or at the end of a SPECIALRUN.
REPORT: Generates a report of the number of records extracted from each table and file since the last report was generated. The report reflects the number of records output to the corresponding file (unless other reports are generated by means of the REPORT parameter).
The following example rolls over local and remote Oracle GoldenGate trails every day at 3:00 p.m. as well as every Sunday at 8:00 a.m. For instance, if the file $DATA2.EXTDATA.AA000122 is the current file Monday morning, then at 3:00 p.m. on Monday AA000122 will be closed and AA000123 will be created.
ROLLOVER AT 15:00
ROLLOVER AT 08:00 ON SUNDAY
Extract, Replicat
Use SETENV to set a NonStop environment variable. When Extract or Replicat starts, it uses the specified value instead of the one set in the environment.
Use one SETENV statement per variable to be set. Any variables set in the SETENV statement override any existing variables set at the operating-system level.
The results from setting variables with SETENV vary based on the quotation marks used.
No quotation marks
When no quotation marks are used, the result is upshifted; for SETENV (NAME = name1), the result is NAME1.
For example:
SETENV repl = replicat
Set environment variable (REPL=REPLICAT)
SETENV (repa = repall)
Set environment variable (REPA=REPALL)
?repl
?repa
2013-05-06 14:33:48 OGG ?REPL = REPLICAT
?REPA = REPALL
Double quotation marks
When double quotation marks are used, the quotation marks are removed from the result, which is otherwise unchanged. For SETENV NAME = "Name", the result is Name.
For example:
SETENV name = "Name"
Set environment variable (NAME=Name)
MAP $data01.abdat.tcustmer, TARGET $data02.abdat.tcustmer,
COLMAP (USEDEFAULTS, name = ?name);
2013-05-06 14:33:48 OGG ?NAME = Name
Outer single quotation marks
When the value is enclosed in single quotation marks, the result is enclosed in double quotation marks and otherwise unchanged. For SETENV NAME = 'a literal', the result is "a literal".
For example:
SETENV name = 'With Single quotes, we get a literal'
Set environment variable (NAME="With Single quotes, we get a literal")
MAP $data01.abdat.tcustmer, TARGET $data02.abdat.tcustmer,
COLMAP (USEDEFAULTS, name = ?name);
2013-05-06 14:33:48 OGG ?NAME = "With Single quotes, we get a literal"
Outer double quotation marks
When outer double and inner single quotation marks are used, the result is everything within the double quotation marks, unchanged. For SETENV NAME = "'name'", the result is 'name'.
For example:
SETENV WILL = "'WILL'"
Set environment variable (WILL='WILL')
MAP $data01.abdat.tcustmer, target $data02.abdat.tcustmer,
COLMAP (USEDEFAULTS),
SQLEXEC (ID ex1, TRACE ALL, BEFPREFILTER, ERROR RAISE,
QUERY " select * from $data01.tssout.tcustmer "
" WHERE cust_code = ?WILL ",
PARAMS (c1 = cust_code ) );
2013-05-06 14:33:48 OGG ?WILL = 'WILL'
Embedded quotation marks
Use the escape character to retain embedded quotation marks. For SETENV NAME = 'John\'s', the result is "John's".
For example:
SETENV name = 'Use the escape character for the embedded quote in John\'s name'
Set environment variable (NAME="Use the escape character for the embedded quote in John's name")
MAP $data01.abdat.tcustmer, TARGET $data02.abdat.tcustmer,
COLMAP (USEDEFAULTS, name = ?name);
2013-05-06 14:33:48 OGG ?NAME = "Use the escape character for the embedded quote in John's name"
None
SETENV (environment_variable)
environment_variable: The name of the environment variable.
EXTRACT GGS01
SETENV (TRAIL1=$DATA01.LOGGER.A1)
SETENV (TRAIL2=$DATA01.LOGGER.A2)
FILE $*.*.*;
EXTTRAIL ?TRAIL1
FILE $*.*.*;
EXTTRAIL ?TRAIL2
Extract
Use SHORTREADDELAY to optimize system resource usage when Extract runs online. SHORTREADDELAY instructs Extract to delay the specified number of seconds whenever a block shorter than the optimal block size is read from a trail. This enables efficient processing of many records at once; however, a longer delay can mean that data is not replicated as quickly.
SHORTREADDELAY seconds
seconds: The number of seconds to delay.
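For example, the following entry (the delay value is illustrative) causes Extract to pause two seconds whenever a short block is read from the trail:

```
SHORTREADDELAY 2
```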
Extract, Replicat
Use SOURCEDEFS to specify the name of the file on the target system that contains the source table and file definitions. Create the SOURCEDEFS file by running the DEFGEN utility on the source system. Once created, the file must be transferred to the target system before running Replicat.
You can use SOURCEDEFS to facilitate faster loading of definitions at startup rather than accessing remote SQL catalogs or DDL dictionaries for the information.
You must specify SOURCEDEFS before any MAP entries that require related source definitions. Multiple SOURCEDEFS entries can exist within the same parameter file if more than one file holds the necessary definitions (for example, if each SOURCEDEFS file held the definitions for a distinct application).
SOURCEDEFS is required only when the source system is unreachable through Expand.
SOURCEDEFS file_name
file_name: The name of the file containing the data definitions.
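A minimal sketch (all file names are illustrative) showing SOURCEDEFS placed before the MAP entry that requires the source definitions:

```
SOURCEDEFS $DATA1.GGSDEF.SRCDEF
MAP $DATA1.APPDAT.ACCOUNT, TARGET $DATA2.APPDAT.ACCOUNT;
```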
Extract
Use SOURCEISFILE or SOURCEISTABLE to specify whether the source is a file or a table. In many cases, source and target files are synchronized with utilities such as FUP DUP or SQLCI LOAD. These utilities establish a point at which subsequent changes made to the source can be applied accurately at the target. However, if mapping is specified between source and target, or if the target is a non-NonStop database, conversion must take place before load.
SOURCEISFILE reads and processes records from the source file itself rather than the audit trail. If you specified user exits or column maps when Extract processes database changes through the audit trail, also specify SOURCEISFILE so that the same mapping can be applied before a full-scale file load. Therefore, only one set of mapping routines is required for managing both full synchronization and incremental change activities.
SOURCEISFILE | SOURCEISTABLE [, FASTUNLOAD | FASTUNLOADSHARED] [, SQLPREDICATE] [, SELECTVIEW] [, RESTARTCHECKPOINTS]
FASTUNLOAD: Processes the file or table several times faster than the default method. Records are output in random order, rather than primary-key order. FASTUNLOAD has no effect when an SQL view is specified. The FILE parameter option PARTITIONS can restrict the data retrieved to a certain subset of the file or table.
FASTUNLOAD fails if other openers exist for the file or table (the file is opened in protected mode).
FASTUNLOADSHARED: Allows a shared open of the source file or table. Use this only on files that are not receiving updates while data is being extracted.
SQLPREDICATE: Uses any WHERE clause designated as part of a TABLE entry as criteria when selecting data from the source table. SQLPREDICATE can dramatically reduce extraction time when only part of the source table needs to be extracted. This option works only when the source is an SQL table.
You can start separate Extract processes to retrieve data from selected partitions of large tables in parallel. Each process selects a range of data from one or more partitions. When SQLPREDICATE is specified, only the relevant partitions are scanned.
SELECTVIEW: Selects data from the SQL view specified in the FILE parameter. Without SELECTVIEW, Extract selects data from the base table of the view, then maps the base table columns to the view columns (this also occurs when processing audit trails and a view is specified).
SELECTVIEW is required to process multiple-table joins and aggregation views. Use SELECTVIEW to format complex query output for compatibility with other platforms. Rather than using standard utilities that output one record at a time, this method takes advantage of Extract's high-performance buffering and cross-platform features.
RESTARTCHECKPOINTS: Instructs Extract to save checkpoint information, allowing restart at the last record read if the process stops or abends. See "Using RESTARTCHECKPOINTS" for conditions and restrictions.
You can use RESTARTCHECKPOINTS for:
SQL/MP source tables, with or without the SQLPREDICATE option
Enscribe files, whether or not you use the FILE STARTKEY and ENDKEY options
Both SQL/MP and Enscribe, with or without FASTUNLOAD
RESTARTCHECKPOINTS is valid only for Extracts added from GGSCI using SOURCEISFILE or SOURCEISTABLE.
A positioned restart is valid only if RESTARTCHECKPOINTS is in the parameter file when the Extract starts initially. The process will start from the beginning of the file if RESTARTCHECKPOINTS is added or removed for any subsequent restart.
Either SOURCEDEFS, or DICTIONARY and DEF, is required for Enscribe source files.
You must not make any of the following changes to the parameter file when restarting from a position other than the beginning of the file. If you must make these changes, you must delete and re-add the Extract group.
Removing checkpoint information by:
Removing the current Data File Checkpoint file from the parameter file. This file can be identified with the INFO EXTRACT command.
GGSCI> INFO EXTRACT EXTPART
Extract EXTPART Last Started 2010-12-06 12:39 Status RUNNING
Task type SourceIsFile
Process $TSE01
Checkpoint Lag: unknown
Data File Checkpoint \NY.$DATA02.ACDATA.ACPART
. . .
Removing the current output checkpoint. This is also identified in the INFO EXTRACT command.
GGSCI> INFO EXTRACT EXTPART
.
.
.
Record 30977
Target Extract Trails          Rba      Max MB
\NY.$DATA02.ACDATA.TX000001    2168775  3
Adding or removing the FASTUNLOAD option
Adding a new EXTFILE, RMTFILE, or RMTTASK
Changing the source file or table significantly, such as changing the key columns or their length or datatype
Changing a STARTKEY or ENDKEY in a way that alters the file access method.
It is not recommended that you add files or tables to the parameter file after start-up and prior to a restart. They may not be extracted.
Replicat processes require the HANDLECOLLISIONS parameter during a restart of a RMTTASK Extract.
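A sketch of an initial-load Extract parameter file using SOURCEISFILE (the group, host, and file names are illustrative):

```
EXTRACT INITLD
SOURCEISFILE, FASTUNLOAD, SQLPREDICATE
RMTHOST tgthost, MGRPORT 7809
RMTFILE /ggs/dirdat/il, PURGE
TABLE $DATA1.SALES.ORDERS WHERE (REGION = "EAST");
```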
Extract, Replicat
Use SPECIALRUN for one-time batch processing. SPECIALRUN tasks process activity that occurs within a time period, such as the changes logged between 8 a.m. and 3 p.m. on a particular date. Checkpoints are not recorded, so the process cannot be restarted from an intermediate processing point. An Extract SPECIALRUN process can read from Oracle GoldenGate trails or TMF audit trails; a Replicat SPECIALRUN reads from Oracle GoldenGate trails.
You can specify EXTTRAILSOURCE or EXTFILESOURCE for Extract to read data from Oracle GoldenGate trails rather than TMF audit trails.
SPECIALRUN has the following dependencies:
When you specify SPECIALRUN, you must also specify the BEGIN and END parameters.
SPECIALRUN must precede EXTFILE, RMTFILE, EXTTRAIL or RMTTRAIL entries.
SPECIALRUN [, EXTTRAILSOURCE trail_name] [, EXTFILESOURCE trail_file_name]
EXTTRAILSOURCE trail_name: The name of the Oracle GoldenGate trail to use as the data source.
EXTFILESOURCE trail_file_name: The name of the single Oracle GoldenGate trail file to use as the data source.
SPECIALRUN EXTTRAILSOURCE $data1.ggsdat.et
SPECIALRUN EXTFILESOURCE $data3.etdat.datfile1
For Replicat you must specify EXTTRAIL, RMTTRAIL, EXTFILE or RMTFILE with a SPECIALRUN.
Replicat always uses BEGIN and END to determine when to start and stop processing for a SPECIALRUN. If nothing is entered for BEGIN, the process will start with the first record. If nothing is entered for END, the process start time will be used to determine the limit of the records to process.
SPECIALRUN
The following is an example of a Replicat parameter file that would be run from a TACL prompt to load records from the GGSINI.ECUSTMER file on the target system into the GGSTAR.TCUSTMER table. The GGSINI.ECUSTMER file was previously loaded to the target by an Extract one-time run. The data definition for GGSSOU.ECUSTMER, stored in GGSDEF.ECUSTDEF, is used to map the structure of the source records to the target.
SPECIALRUN
END RUNTIME
SOURCEDEFS \B.$DATA3.GGSDEF.ECUSTDEF
EXTFILE \B.$DATA3.GGSINI.ECUSTMER
MAP \A.$DATA1.GGSSOU.ECUSTMER, TARGET \B.$DATA3.GGSTAR.ECUSTMER;
Replicat
Use SQLFORMATDISCARDFILE to create a file of SQLCI-formatted statements for operations that have been discarded because of errors. Once the original problem is corrected, SQL discard files can be used as input to replicate the failed operations.
The SQL formatted records include SET SESSION ABORT ERROR ON to cause the SQL action to abend if there is an error. BEGIN and COMMIT WORK statements are also included.
The requirements for using the SQL discard file are:
The target for the Replicat must be SQL/MP. (The source database may be any that Oracle GoldenGate supports.)
The contents of the SQL formatted discard file can be edited, but must be valid input to SQLCI. Because of the possible size of the file, TEDIT should be used; EDIT can fail on a memory error.
The SQL discard file is an edit type file so changes can be made if necessary. The size of the file is set at extents (70, 70, 128) and this should not be changed.
Note:
The extents should not be changed: making them smaller may result in an error 45, indicating the file is full when more lines could be processed, while making them larger may suppress an error 46, indicating there are more lines than an edit file may contain.
See Administering Oracle GoldenGate for HP NonStop (Guardian) for information on editing the file and the rules that must be followed for valid input to SQLCI.
No SQL discard file created.
SQLFORMATDISCARDFILE file_name [, APPEND | PURGE | ROLLOVER] [, LINECOUNT number] [, OWNER (group_number, user_number)] [, SECURE "rwep"]
file_name: A physical file name or an existing DEFINE name of class MAP. The file_name must include the file location.
APPEND | PURGE | ROLLOVER: Specifies the handling of an existing SQL discard file.
APPEND - Adds records to the end of the existing file.
PURGE - Purges an existing file and creates a new one. This is the default if no option is entered.
ROLLOVER - Renames the existing discard file by appending a sequence number to file_name, according to these rules:
If the file name is 7 characters or less, 1 digit is appended.
If the file name is 8 characters, the file does not roll over, and a warning is given that the name is too long. This is true whether the rollover is requested from GGSCI or through the DISCARDFILE parameter.
LINECOUNT number: Sets the number of lines in the edit file that triggers rollover. number must be at least 5000 and no greater than 98000. The default is 98000 when no value is entered.
OWNER (group_number, user_number): Defines ownership of the discard file.
SECURE "rwep": Secures the file using standard Guardian security for read, write, execute, and purge operations.
SQLFORMATDISCARDFILE $DATA01.G11RPT.SQLDIS, APPEND, LINECOUNT 24000
Replicat
Use SQLFORMATDISCARDROLLOVER to specify when a SQL discard file is aged and a new one is created. Old files are renamed in the format group_name(n), where group_name is the name of the Replicat group and (n) is a number that is incremented by one each time a new file is created, for example: $DATA.ACRPT.SQLDIS0, $DATA.ACRPT.SQLDIS1, $DATA.ACRPT.SQLDIS2, and so forth.
Either the AT or ON option can be entered, or both options can be used together. The following rules apply to the possible variations in entries:
Entering AT with a time but without a day creates a new discard file at the specified time every day.
Entering AT without a time generates an error.
Entering ON with a day but without a time creates a new discard file at midnight on the specified day.
Entering ON without a day will generate an error.
If no option is entered, the discard file will roll over at midnight every night.
To have more than one rollover, enter multiple AT, ON, or AT ON options. Up to 30 will be used.
No rules. File is not rolled over unless it reaches the edit file maximum line count of 98000.
SQLFORMATDISCARDROLLOVER {AT hh:mm | ON day_of_week | AT hh:mm ON day_of_week}
AT hh:mmThe time of day to age the file based on a 24-hour clock.
Valid values:
hh is an hour of the day from 1 through 23.
mm is minutes from 00 through 59.
ON day_of_week
The day of the week to age the file. Valid values are SUNDAY through SATURDAY. They are not case-sensitive.
Below are examples of the use of SQLFORMATDISCARDROLLOVER.
This example closes the existing SQL discard file and creates a new one at 5:30 a.m. each day.
SQLFORMATDISCARDROLLOVER AT 05:30
This example rolls the SQL discard file over every Friday at midnight.
SQLFORMATDISCARDROLLOVER ON friday
This example rolls the SQL discard file over at 5:30 a.m. every Friday.
SQLFORMATDISCARDROLLOVER AT 05:30 ON FRIDAY
This example rolls the SQL discard file over at 5:30 a.m. every Wednesday and every Friday.
SQLFORMATDISCARDROLLOVER AT 05:30 ON WEDNESDAY, AT 05:30 ON FRIDAY
Extract, Replicat
Use STATOPTIONS to report zero counts for insert, update, and delete operations. For Replicat also use STATOPTIONS to report on the time spent creating a file, purging a file, or ending a transaction.
ZEROSUPPRESS, no reporting on Replicat timings
STATOPTIONS {ZEROSUPPRESS | NOZEROSUPPRESS} [, PROGSTATS]
ZEROSUPPRESS | NOZEROSUPPRESS
Use to suppress or display zero insert, update, and delete counts.
PROGSTATS
Valid for Replicat only. Reports on the time spent creating a file, purging a file, or ending a transaction.
Sample results for STATOPTIONS NOZEROSUPPRESS:
Total # records written to RFUNLDS 349
\NY015.$DATA06.QABASE.BRANCH # inserts: 100
# updates: 0
# deletes: 0
\NY015.$DATA07.LQASRC.HISTORY # inserts: 249
# updates: 0
# deletes: 0
Sample results for STATOPTIONS ZEROSUPPRESS:
Total # records written to RFUNLDS 349
\NY015.$DATA06.QABASE.BRANCH # inserts: 100
\NY015.$DATA07.LQASRC.HISTORY # inserts: 249
Sample results for STATOPTIONS PROGSTATS:
EndTransaction 7 elapsed 00:00:00.079211 PerOp 0
FileCreate 1 elapsed 00:00:00.021796 PerOp 0
FilePurge 3 elapsed 00:00:00.011630 PerOp 0
Extract
Use SUPPRESSALLALTERMESSAGES to instruct Extract to suppress all ALTER messages.
NOSUPPRESSALLALTERMESSAGES
SUPPRESSALLALTERMESSAGES | NOSUPPRESSALLALTERMESSAGES
Extract
Use SUPPRESSALTERMESSAGES to instruct Extract to display the first ALTER message it encounters for a table, but suppress that table's subsequent ALTER messages. Secondary partition ALTER messages resulting from the initial ALTER operation will also be suppressed.
When SUPPRESSALTERMESSAGES is set, only one ALTER will be seen in the Extract report and EMS for a base table, even if numerous ALTER operations are performed on that table during the run of the Extract process.
No ALTER messages will be seen for a table that has been excluded from the Extract.
NOSUPPRESSALTERMESSAGES
SUPPRESSALTERMESSAGES | NOSUPPRESSALTERMESSAGES
Extract
Use SUPPRESSFETCHCOMPRESSEDDISCARDS to stop the display of messages generated when a FETCHCOMP fails to find the needed record. For example, a message like the following will be suppressed:
2013-07-16 15:08:58 GGSINFO 213 Compressed record discarded. . .
NOSUPPRESSFETCHCOMPRESSEDDISCARDS
SUPPRESSFETCHCOMPRESSEDDISCARDS
Extract, Replicat
Use SUPPRESSFILEOPMESSAGES to suppress the output of messages generated after FILE RENAME, PURGE, CREATE, ALTER, and SETMODE operations.
NOSUPPRESSFILEOPMESSAGES
GLOBALS, Extract, Replicat
Use SUPPRESSMARKERMESSAGES to suppress the output of messages generated when markers are processed.
As a GLOBALS parameter
Enter in the GLOBALS parameter file to suppress messages when markers are processed by Extract, Replicat, or GGSCI commands.
SUPPRESSMARKERMESSAGES {YES | NO}
As a process-specific parameter
Enter in the Extract or Replicat parameter file to suppress messages for that Extract or Replicat process.
SUPPRESSMARKERMESSAGES
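For example, the following hypothetical GLOBALS entry (shown for illustration; confirm the setting fits your marker-processing needs) suppresses marker messages for all Extract and Replicat processes and for GGSCI commands:
SUPPRESSMARKERMESSAGES YES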
GLOBALS
Use SWAPVOL to designate a swap volume for processes created by Oracle GoldenGate. For SWAPVOL to take effect, you must:
Stop Manager with the GGSCI STOP MANAGER command.
Exit and restart GGSCI after modifying the GLOBALS parameter file.
The volume on which Oracle GoldenGate is installed
SWAPVOL volume
volume
Specifies the swap volume.
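For example, the following hypothetical GLOBALS entry (the volume name $SWAP01 is illustrative) designates $SWAP01 as the swap volume for processes created by Oracle GoldenGate:
SWAPVOL $SWAP01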
Syncfile
Use SYNCFILE when the parameter file has a different name than the group name.
SYNCFILE group_name
group_name
The Syncfile group name.
Extract
Use SYSKEYCONVERT for direct file extraction only. SYSKEYCONVERT specifies the format of the syskey in the Oracle GoldenGate output for entry-sequenced and ACI files. Use this parameter only when you are trickling data throughout the day to the target system with RMTBATCH and the records are variable length (for example, the ACI BASE24 Super files).
Note:
If SYSKEYCONVERT is specified, you cannot mix RMTBATCH activity with standard extraction of data for replication.
The Guardian file position
SYSKEYCONVERT {USEBYTEPOSITION | USERECORDNUMBER}
USEBYTEPOSITION
Extract uses a byte count from the start of the file, as if the input file were an unstructured file. The first byte position is zero. Use this option for the ACI Super files.
USERECORDNUMBER
Extract uses the record number from the start of the file. The first record is zero.
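For example, the following setting, appropriate for the ACI Super files, instructs Extract to format the syskey as a byte count from the start of the file:
SYSKEYCONVERT USEBYTEPOSITION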
Extract
TABLE is a synonym for the FILE parameter. The object of the TABLE parameter can be a SQL table or view or an Enscribe file.
For a description of the options available with TABLE, see "FILE | TABLE".
Extract
Use TABLEEXCLUDE with the TABLE or MAP parameter to explicitly exclude tables from a wildcard specification. TABLEEXCLUDE must follow a TABLE or MAP statement that uses wildcards.
TABLEEXCLUDE exclude_specification
exclude_specification
The name or wildcard specification of the table to exclude.
In the following example, TABLE retrieves all tables from $DATA.MASTER, except those tables containing the letter X, and the table named BIGTAB. Those tables not excluded by the TABLEEXCLUDE parameter are output to the aa trail. All tables containing the letter X are output to bb trail.
RMTTRAIL /ggdat/aa
TABLE $DATA.MASTER.*;
TABLEEXCLUDE $DATA.MASTER.*X*;
TABLEEXCLUDE $DATA.MASTER.BIGTAB;
RMTTRAIL /ggdat/bb
TABLE $DATA.MASTER.*X*;
Extract, Replicat
Use TALUSEREXIT to call custom TAL routines at different points during processing.
If TALUSEREXIT is specified in the parameter file, but a user exit is not bound to the Extract or Replicat object, the process will abend.
TALUSEREXIT
Extract
Use TCPBUFSIZE to set the maximum size of the message buffer sent to the Collector process over TCP/IP. The valid range for TCPBUFSIZE is 0 through 65519.
Note:
Extract will lower the default and maximum to 28000 bytes if the RMTTASK parameter is used or if the Collector is not release 11.1.1 or later.
TCPFLUSHBYTES affects TCPBUFSIZE so review your setting for TCPFLUSHBYTES when you set TCPBUFSIZE. For example, if TCPFLUSHBYTES is set to 8000, even though you set TCPBUFSIZE to 65000, the buffer will be flushed each time it reaches 8000 bytes.
65519
TCPBUFSIZE bytes
bytes
The buffer size in bytes.
Extract
Use TCPFLUSHBYTES to set the size of the record buffer that Extract sends to remote systems. Depending on the remote system, changing this parameter can have a significant impact on performance. Valid values are 0 to 65519.
Note:
Extract will lower the default and maximum to 28000 bytes if the RMTTASK parameter is used or if the Collector is not release 11.1.1 or later.
Review your setting for TCPBUFSIZE when you set TCPFLUSHBYTES. Because TCPBUFSIZE is the maximum buffer size, it limits TCPFLUSHBYTES. For example, if TCPBUFSIZE is set to 8000 bytes, TCPFLUSHBYTES cannot exceed 8000 bytes regardless of its setting.
65519
TCPFLUSHBYTES bytes
bytes
The buffer size in bytes.
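As an illustration of how the two parameters interact, the following hypothetical Extract settings (the values are illustrative, not recommendations) cap the message buffer at 30000 bytes while flushing it each time it reaches 10000 bytes:
TCPBUFSIZE 30000
TCPFLUSHBYTES 10000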
Manager, Extract
Use TCPIPPROCESSNAME to specify a TCP/IP process other than the default. For Manager this is the process it uses to listen for requests coming from other systems. For Extract this specifies the process for data transfer.
Selection Hierarchy for Manager
Manager selects the option to use based on this hierarchy:
Stand-alone parameter TCPIPPROCESSNAME entry in the Manager parameter file
DEFINE =TCPIP^PROCESS^NAME added before the Manager process was started
Default $ZTC0
Selection Hierarchy for Extract
There are other ways to specify the TCPIPPROCESSNAME when the NonStop TCP/IP process name is not $ZTC0. When assigning a process name, Extract selects the option to use based on this hierarchy:
RMTHOST TCPIPPROCESSNAME option in the Extract parameter file
Stand-alone parameter TCPIPPROCESSNAME entry in the Extract parameter file prior to the RMTHOST entry
DEFINE =TCPIP^PROCESS^NAME added before the Extract process was started
Default $ZTC0
You can include additional TCPIPPROCESSNAME entries for Extract. With a RMTHOSTALT address and TCPIPSWITCHERRS, Extract will change both the backup TCP/IP process and the destination IP address after the specified number of retries. Refer to "RMTHOSTALT" for an example.
$ZTC0
TCPIPPROCESSNAME process_name [@ip_address]
process_name
A valid HP NonStop TCP/IP process name.
ip_address
Restricts the process to the specified IP address.
The following Extract parameter setting will cause data to be output over TCP/IP process $ZTC4 instead of the default $ZTC0.
TCPIPPROCESSNAME $ZTC4
The following Extract parameter setting will cause data to be output over TCP/IP process $ZTC4 using the IP address 2001:db8:2010:5040:4fff:ffff:ffff:28.
TCPIPPROCESSNAME $ZTC4@2001:db8:2010:5040:4fff:ffff:ffff:28
Extract
Use TCPIPSWITCHERRS to switch to alternate controllers and destination IP addresses when an error occurs. You can specify the number of retries to attempt before switching. The default is 5.
If you specify an alternate TCPIPPROCESSNAME and RMTHOSTALT, both the process and destination IP address are switched when the retry limit is reached.
5
TCPIPSWITCHERRS number_retries
number_retries
The number of retries.
Refer to "RMTHOSTALT" for an example using TCPIPSWITCHERRS.
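For example, the following hypothetical Extract setting switches to the alternate controller and destination IP address after 10 failed attempts:
TCPIPSWITCHERRS 10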
Extract
Use TCPSOURCETIMER or NOTCPSOURCETIMER to specify whether the Collector adjusts the timestamps of each record from the source system to the timestamps of the target system. When transmitting data to trails or files (RMTFILE or RMTTRAIL) on the target system using the TCP/IP Collector, the default is to adjust the timestamps. Although timestamps are stored in Greenwich Mean Time (GMT), the clocks of the source and target systems are frequently unsynchronized. Adjusting those timestamps allows Manager and GGSCI to calculate the true latency between the source and target systems.
However, when data is moved over TCP/IP between two NonStop systems and the systems are also linked with an Expand network, this adjustment is already made. Under those conditions, specify NOTCPSOURCETIMER to avoid the incorrect "double adjustment." NOTCPSOURCETIMER should be used when a Replicat on the target system uses @GETENV to get the COMMITTIMESTAMP header field for any column mapping.
TCPSOURCETIMER
TCPSOURCETIMER | NOTCPSOURCETIMER
Manager
Use THRESHOLD to generate an event message when the percentage of all audit left on disk or tape falls below the specified percentage. Use NOTHRESHOLD to eliminate all threshold reporting.
See "DISKTHRESHOLD | NODISKTHRESHOLD" for a description of the DISKTHRESHOLD parameter and audit threshold processing.
THRESHOLD 20
THRESHOLD percent_left | NOTHRESHOLD
percent_left
The specified percentage.
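For example, the following hypothetical Manager entry generates an event message when less than 30 percent of audit remains on disk or tape:
THRESHOLD 30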
GLOBALS, Manager
TMFDUMPAGE limits information returned by the TMFDUMPINFO command to dumps that are created during the specified number of days.
30
TMFDUMPAGE number_days
number_days
The number of days for limiting TMF dump information.
GLOBALS, Manager
TMFDUMPTABLEENTRIES limits information returned by TMFDUMPINFO to the number of specified dumps.
1024
TMFDUMPTABLEENTRIES max_dumps
max_dumps
The number of dump entries to display. The maximum allowed is 6000.
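For example, the following hypothetical settings (the values are illustrative) limit TMFDUMPINFO output to at most 2000 dumps created during the last 7 days:
TMFDUMPAGE 7
TMFDUMPTABLEENTRIES 2000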
Replicat
Use TMFEXCEPTIONS when the application performs multiple operations on the same record within the same transaction. Certain application scenarios can cause out-of-order records in the TMF audit trail within a given transaction:
An insert on a record can occur in the audit trail before a delete on the same primary key, even though the application performed the delete first, followed by the insert (resulting in a duplicate record condition when the insert is performed)
An update can occur in the audit trail before the insert on the same primary key (resulting in a missing record error when the update is performed)
TMFEXCEPTIONS resolves these conditions.
In the event of a duplicate insert, TMFEXCEPTIONS saves the duplicated insert until the end of the transaction. If a delete with the same primary key is subsequently encountered, the delete is performed, then the saved insert is performed.
In the event of a missing update record, TMFEXCEPTIONS saves the missing update until the end of the transaction. If an insert with the same primary key is subsequently encountered, the insert is performed, then the saved update is performed.
Use NOTMFEXCEPTIONS to turn off the exception processing.
TMFEXCEPTIONS cannot be used with INSERTMISSINGUPDATES.
NOTMFEXCEPTIONS
TMFEXCEPTIONS | NOTMFEXCEPTIONS
This example turns on TMFEXCEPTIONS for ORDERS but toggles it off for any MAP statements that are defined later.
TMFEXCEPTIONS
MAP $DATA1.SQLDAT.ORDERS, TARGET $DATA1.MASTER.ORDERS;
NOTMFEXCEPTIONS
GLOBALS, Manager, Extract
Use TMFREFRESHINTERVAL to set the refresh interval in minutes.
The refresh interval can be set in either the GLOBALS or the Manager parameter file, but should not be set in both. The GLOBALS or Manager setting can be overridden by:
TMFREFRESHINTERVAL in Extract.
Executing the Oracle GoldenGate TMF commands in GGSCI.
15
TMFREFRESHINTERVAL minutes
minutes
The refresh interval in minutes.
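For example, the following hypothetical entry sets a 30-minute refresh interval:
TMFREFRESHINTERVAL 30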
Extract
Use TMFTRAILTRACE to instruct Extract to write messages to the report file while it is checking for the next trail.
TMFTRAILTRACE
Syncfile
Use TRACE for debugging purposes. TRACE logs messages regarding Syncfile processing to the report file. You can use the report file to help debug Syncfile issues, generally during an Oracle GoldenGate support session.
TRACE
Logger
Use TRACEALLOPENS to record all open and close attempts on any file. This parameter instructs GGSLIB to send all open and close activity to Logger, including activity on files that are not on the extraction list.
Use the Logdump utility to examine these records and correlate them to Logger activity.
Do not trace.
TRACEALLOPENS
Logger
Use TRACECLOSES to send close records to Logger for the purpose of tracing activity. This parameter captures all close attempts on the list of files that GGSLIB has been configured to capture.
Use the Logdump utility to examine these records and correlate them to Logger activity.
Do not trace.
TRACECLOSES
Logger
Use TRACEOPENS to send open records to Logger for tracing system activity. This parameter captures all open attempts on the list of files that GGSLIB has been configured to capture.
Use the Logdump utility to examine these records and correlate them to Logger activity.
Do not trace.
TRACEOPENS
Logger
Use TRACEPROCESSIOS to instruct Logger to precede each logged record with information regarding the process that created the record. Use this information to determine and trace the process, program, and user that handle particular updates to the application database. If TRACEPROCESSIOS is on, a separate trace record is added to the log trail, which creates a small amount of system overhead.
Use the Logdump utility to examine these records and correlate them with database operations. This parameter applies only to the current log. When you specify TRACEPROCESSIOS, TRACESTATS is automatically turned on.
Do not trace.
TRACEPROCESSIOS
Logger
Use TRACESTATS to instruct Logger to record statistics for each process that sends it operations. When the process stops, or upon a SEND LOGGER FLUSHSTATS command, process statistics are output to the log trail.
Statistics include the number of file inserts, updates, deletes, and other information. Use the Logdump utility to examine these statistics. This parameter applies only to the current log. If TRACEPROCESSIOS is specified, TRACESTATS is automatically turned on.
Do not trace.
TRACESTATS
Replicat
Use UPDATEDELETES to convert delete records to update operations. UPDATEDELETES applies to all maps specified below it in the parameter file until NOUPDATEDELETES is specified.
NOUPDATEDELETES
UPDATEDELETES | NOUPDATEDELETES
Replicat
Use UPDATEINSERTS to change insert operations in the target to update operations.
NOUPDATEINSERTS
UPDATEINSERTS | NOUPDATEINSERTS
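The following hypothetical example (the file names are illustrative) turns on UPDATEINSERTS for the ORDERS map but toggles it off for any MAP statements defined later:
UPDATEINSERTS
MAP $DATA1.SQLDAT.ORDERS, TARGET $DATA1.MASTER.ORDERS;
NOUPDATEINSERTS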
Manager
Use UPREPORTMINUTES or UPREPORTHOURS to periodically report Extract and Replicat processes that are running. By default, running processes are not reported, but events are automatically generated when a process is started or stopped.
Do not report running processes
UPREPORTMINUTES minutes | UPREPORTHOURS hours
minutes
The reporting interval in minutes.
hours
The reporting interval in hours.
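For example, the following hypothetical Manager entry reports running Extract and Replicat processes every 30 minutes:
UPREPORTMINUTES 30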
Syncfile
Use VERBOSE to display FUP and TACL messages. By default, Syncfile does not display the output from FUP or TACL during duplication, but produces summary messages describing the number of files duplicated at any given time.
VERBOSE follows the DUP parameter statement that specifies the FUP or TACL options.
Do not display output.
VERBOSE;
Extract
Use VERSIONERR to specify error handling when database definitions are out-of-sync with audit trail records. Extract interprets data from the audit trails according to database definitions current at run-time. When a database definition changes during the course of an Extract run, data can be missed or misinterpreted.
For example, suppose a database column NEWCOL is added to table TAB1 while Extract is running. Any rows added to TAB1 after NEWCOL is added will contain a value for NEWCOL. However, since Extract is working with the old definition (before NEWCOL), NEWCOL values will be missed.
There are three situations to consider with respect to changing database definitions:
The record being processed is an old version of the record. For example, the record is missing one or more columns that have since been added to the table. By default Extract considers this a normal condition and processes the record (VERSIONERR OLD CONTINUE).
The record being processed is a new version of the record. More columns are present in the record than Extract expects. The danger here is that data can be missed. By default, Extract terminates abnormally in this situation (VERSIONERR NEW ABEND) and can be restarted to pick up the new table definition and any new columns as well. However, if the target application has no need to use the new column, this may be undesirable.
The record being processed is simply out of synchronization with the table definition according to Extract. The default response is to terminate abnormally (VERSIONERR OUTOFSYNC ABEND).
VERSIONERR lets you override these defaults by specifying custom responses to old, new and out-of-sync records.
VERSIONERR type, {CONTINUE | WARN | DISCARD | ABEND}
type
The description of the record being processed. Valid values: OLD, NEW, OUTOFSYNC.
CONTINUE
Process the record as normal.
WARN
Process the record, but issue a warning to the error file.
DISCARD
Output the record to a discard file, but continue processing records.
ABEND
Terminate Extract abnormally. This option is appropriate when a new definition should be retrieved.
VERSIONERR NEW, WARN
Extract, Replicat
Use WAITFILEEVENT to wait for a specific event before proceeding. Bi-directional scenarios in particular may need to wait for an external event before proceeding.
For example, a file called SOURCE.FLTODAY is mapped to a file called TARGET.FLTODAY. Every day at 17:00, SOURCE.FLTODAY is renamed to SOURCE.FLyymmdd. To propagate the data from SOURCE.FLTODAY to the appropriate target file, a WAITFILEEVENT command is used.
With WAITFILEEVENT, Replicat delays upon receiving information that SOURCE.FLTODAY has been created, until TARGET.FLTODAY is also created.
WAITFILEEVENT produces event messages (which also appear in the Replicat report file) regarding Replicat status while waiting. To bypass the wait event, issue the GGSCI command:
SEND REPLICAT group_name, BYPASSFILEEVENT
To use WAITFILEEVENT, the source file must be specified in a corresponding MAP.
WAITFILEEVENT source_filespec [, CREATES] [, CREATIONWINDOW unit num_units] [, EXEC "command"] [, PURGES] [, RENAMES] [, WARNAFTER unit warn_time]
source_filespec
The file name or wildcard name to which the wait rules apply.
CREATES
Wait until source_filespec exists. If source_filespec does not exist in the CREATIONWINDOW time period, Replicat keeps checking every 10 seconds until the file has been created.
You must specify at least one of CREATES, PURGES, or RENAMES to indicate the events on which to wait.
CREATIONWINDOW unit num_units
The amount of time Replicat checks for the new file.
unit can be SECONDS, MINUTES, or HOURS; for example, CREATIONWINDOW HOURS.
num_units can be any positive integer.
The default is CREATIONWINDOW HOURS 20.
EXEC "command"
Execute any valid TACL command or program instead of processing the specified CREATES, PURGES, RENAMES, or other event. The success of the command is not evaluated. If error handling is necessary, implement it within the command or program being executed.
When you specify EXEC, CREATIONWINDOW and WARNAFTER have no effect.
PURGES
Wait until source_filespec does not exist.
RENAMES
Wait until source_filespec is renamed. If source_filespec has not been renamed in the CREATIONWINDOW time period, Replicat keeps checking every 10 seconds until the file has been renamed.
WARNAFTER unit warn_time
The amount of time before a critical event is issued to EMS indicating that the event has not been satisfied.
unit can be SECONDS, MINUTES, or HOURS; for example, WARNAFTER HOURS.
warn_time can be any positive integer.
The following example demonstrates the use of CREATIONWINDOW, WARNAFTER, CREATES, and RENAMES.
WAITFILEEVENT \GGS.$DATA1.DAT.ACCOUNT*, &
CREATIONWINDOW HOURS 5, &
WARNAFTER MINUTES 30, &
CREATES, RENAMES
The following example uses the EXEC option to rename a target file after the source has been purged.
WAITFILEEVENT $data1.dat.account, PURGES, &
EXEC "rename $data2.target.account, &
$data2.targold.account"
Replicat
Use WARNRATE to set the rate at which SQL errors encountered are reported. For example, WARNRATE 1000 issues a warning for every 1000 errors for SQL errors encountered for a particular target table. If many SQL errors are expected, WARNRATE helps minimize the size of the report file and error log.
100
WARNRATE num_errors
num_errors
The error report rate.
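For example, the following setting issues one warning for every 1000 SQL errors encountered for a given target table:
WARNRATE 1000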
Extract, Replicat
Use Y2KCENTURYADJUSTMENT to enable the Y2K window to change the century. By default, the column conversion functions that deal with dates put a Y2K window on a two-digit year when the century is zero. If the two-digit year is greater than 49, the century becomes 19. If the year is 49 or less, the century becomes 20. This damages input date-times such as 0001-01-01:00:00:00.000000 by making the century 20, producing the incorrect result 2001-01-01:00:00:00.000000.
Use NOY2KCENTURYADJUSTMENT to prevent the Y2K window from changing the century. Enter before relevant date calculations.
NOY2KCENTURYADJUSTMENT does not have an effect if the century field is not present at all. When the century field is not present, the century will be assigned a value based on the year. When the year is greater than 49, the century is set to 19; otherwise it will default to 20.
Y2KCENTURYADJUSTMENT
Y2KCENTURYADJUSTMENT | NOY2KCENTURYADJUSTMENT
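For example, the following hypothetical entry, placed before the relevant MAP statements and column mappings, prevents the Y2K window from changing a zero century in subsequent date calculations:
NOY2KCENTURYADJUSTMENT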