8 Adding Job Types

By defining new job types, you can extend the utility and flexibility of the Enterprise Manager job system. Adding new job types also enables you to enhance corrective actions. This chapter assumes that you are already familiar with the Enterprise Manager job system.

For information about the Enterprise Manager job system, refer to the Oracle Enterprise Manager Administrator's Guide.

This chapter includes the following topics:

  • Introduction to Adding Job Types

  • About Job Types

  • Introducing New Job Types

  • Specifying a New Job Type in XML

  • Using Commands

  • About Command Error Codes

  • Executing Long-Running Commands at the Oracle Management Service

  • Specifying Parameter Sources

  • Specifying Credential Information

  • Specifying Security Information

  • Specifying Lock Information

  • Suspending a Job or Step

  • Restarting a Job

8.1 Introduction to Adding Job Types

As a plug-in developer, you are responsible for the following steps with regard to adding job types:

  1. Defining Job Types

    You define a job type by using an XML specification that defines the steps in a job, the work (command) that each step performs, and the relationships between the steps.

    For more information, see "About Job Types".

  2. Executing long-running commands

    The job system enables plug-in developers to write commands that perform their work at the Management Service level.

    For more information, see "Executing Long-Running Commands at the Oracle Management Service".

  3. Specifying parameter sources

    By default, the job system expects plug-in developers to provide values for all job parameters, either when the job is submitted or at execution time (by adding or updating parameters dynamically).

    For more information, see "Specifying Parameter Sources".

  4. Specifying credential information

    For more information, see "Specifying Credential Information".

  5. Specifying security information

    For more information, see "Specifying Security Information".

  6. Specifying lock information

    For more information, see"Specifying Lock Information".

  7. Suspending a job or step

    For more information, see "Suspending a Job or Step".

  8. Restarting a job

    For more information, see "Restarting a Job".

8.2 About Job Types

Enterprise Manager enables you to define jobs of different types that can be executed using the Enterprise Manager job system, thereby extending the number and complexity of the tasks you can automate.

By definition, a job type is a specific category of job that carries out a well-defined unit of work. A job type is uniquely identified by a string. For example, OSCommand may be a job type that runs a remote command. You define a job type by using an XML specification that defines the steps in a job, the work (command) that each step performs, and the relationships between the steps.

Table 8-1 shows some of the Enterprise Manager job types and functions.

Table 8-1 Example of Job Types

Job Type             Purpose
Backup               Backs up a database.
Backup Management    Performs management functions such as crosschecks and deletions on selected backup copies, backup sets, or files.
CloneHome            Clones an Oracle home directory.
DBClone              Clones an Oracle Database instance.
DBConfig             Configures monitoring for database releases earlier than release 10g.
Export               Exports database contents or objects within an Enterprise Manager user's schemas and tables.
GatherStats          Generates and modifies optimizer statistics.
OSCommand            Runs an operating system command or script.
HostComparison       Compares the configurations of multiple hosts.
Import               Imports the content of objects and tables.
Load                 Loads data from a non-Oracle database into an Oracle Database.
Move Occupant        Moves occupants of the SYSAUX tablespace to another tablespace.
Patch                Patches an Oracle product.
Recovery             Restores or recovers a database, tablespaces, data files, or archived logs.
RefreshFromMetalink  Allows Enterprise Manager to download patches and critical patch advisory information from My Oracle Support (https://support.oracle.com).
Reorganize           Rebuilds fragmented database indexes or tables, moves objects to a different tablespace, or optimizes the storage attributes of specified objects.
Multi-Task           Runs a composite job consisting of multiple tasks.
SQLScript            Runs a SQL or PL/SQL script using SQL*Plus.


8.3 Introducing New Job Types

An Enterprise Manager job consists of a set of steps, and each step runs a command or script. The job type defines how the steps are assembled: for example, which steps run serially, which run in parallel, the order in which steps run, and the dependencies between them. You can express a job type, its steps, and its commands in XML (for more information, see "Specifying a New Job Type in XML"). The job system then constructs an execution plan from the XML specification that enables it to run the steps in the specified order.

8.4 Specifying a New Job Type in XML

A new job type is specified in XML. The job type specification provides the following information to the job system:

  • Steps that make up the job.

  • Commands or scripts to run in each step.

  • How steps relate to each other. For example, whether steps run in parallel or serially, or whether one step depends on another step.

  • User credentials to authenticate the job (typically, the owner of the job must provide these). The job type author must also declare these credentials in the job type XML.

  • How specific job parameters should be computed (optional).

  • What locks, if any, a running job execution must attempt to acquire and what happens if the locks are unavailable.

  • What privileges users must have to submit a job.

The XML job type specification is then added to a metadata plug-in archive. After the metadata plug-in is added to Enterprise Manager, the job system has enough information to schedule the steps of the job and to determine what to run in each step.
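For illustration, the following minimal sketch shows the overall shape of a job type specification. The job type name, step ID, and parameter names are hypothetical; the elements used here (credentials, paramInfo, stepset, and step) are described in detail in the sections that follow.

<jobType version="1.0" name="MySampleType" singleTarget="true" defaultTargetType="host">
    <credentials>
        <credential usage="hostCreds" authTargetType="host"
                    defaultCredentialSet="HostCredsNormal"/>
    </credentials>
    <paramInfo>
        <!-- The command parameter must be supplied when the job is submitted -->
        <paramSource sourceType="user" paramNames="command" required="true"/>
    </paramInfo>
    <stepset ID="main" type="serial">
        <step ID="S1" command="remoteOp">
            <credList>
                <cred usage="defaultHostCred" reference="hostCreds"/>
            </credList>
            <paramList>
                <param name="targetName">%job_target_names%[1]</param>
                <param name="targetType">%job_target_types%[1]</param>
                <param name="remoteCommand">%command%</param>
            </paramList>
        </step>
    </stepset>
</jobType>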

8.4.1 Understanding Job Type Categories

A job type can have one of the following categories depending on how it performs tasks on the targets to which it is applied:

  • Single-Node

    A single-node job type is a job type that runs the same set of steps in parallel on every target on which the job is run. Typically, the target list for these job types is not fixed. They can take any number of targets. The following are examples of single-node job types:

    • OSCommand

      Runs an OS command or script on all of its targets.

    • SQL

      Runs a specified SQL script on all of its targets.

  • Multi-Node or Combination

    A multi-node job type is a job type that performs different, possibly inter-related tasks on multiple targets. Typically such job types operate on a fixed set of targets. For example, a Clone job that clones an application schema might require two targets, a source database and a target database.

    Note:

    You can use iterative stepsets for multi-node and combination job types to repeat the same activity over multiple targets.

8.4.2 Using Agent-Bound Job Types

An Agent-bound job type is one whose jobs cannot be run unless the Management Agent of one or more targets in the target list is functioning and responding. A job type that fits this category must declare itself to be Agent-bound by setting the agentBound attribute of the jobType XML tag to true.
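For example, a job type might declare itself Agent-bound as follows (the job type name here is hypothetical):

<jobType version="1.0" name="MyAgentBoundType" agentBound="true">
    ...
</jobType>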

If a job type is Agent-bound, then the job system does not schedule any executions if one or more of the Management Agents corresponding to the targets in the target list of the job execution are not responding. The job (and all its scheduled steps) is set to a special state called Suspended/Agent down. The job is kept in this state until the Enterprise Manager repository tier detects that the Management Agent has restarted.

At this point, the job and its steps are set to scheduled status again and the job can execute. By declaring a job type to be Agent-bound, a job type writer can ensure that the job system will not schedule the job when it has detected that the Management Agent is down.

Note:

Single-node job types are Agent-bound by default while multi-node job types are not.

If an Agent-bound job has multiple targets in its target list, then it is marked as Suspended even if only one of the Management Agents goes down.

An example of an Agent-bound job type is the OSCommand job type, which executes an OSCommand using the Management Agent of a specified target. However, not all job types are Agent-bound. For example, a job type that executes SQL in the Management Repository is not Agent-bound.

Enterprise Manager has a heartbeat mechanism that enables the repository tier to quickly determine when a remote Management Agent goes down. After a Management Agent is marked as Down, all Agent-bound job executions that have this Management Agent in their target list are marked Suspended/Agent Down. However, there is still a possibility that the job system might try to dispatch some remote operations between the time the Management Agent goes down and the time the Management Repository detects the fact. In cases where the Management Agent cannot be contacted and the step executes, the step is set back to a SCHEDULED state and is retried by the job system. The series of retries continues until the heartbeat mechanism marks the node as down, at which point the job is suspended.

When a job is marked as Suspended/Agent Down, by default the job system keeps the job in that state until the Management Agent restarts. However, a parameter called the grace period, if defined, can override this behavior. The grace period is the maximum amount of time (in minutes) within which a job execution must start. If the job cannot start within this grace period, the job execution is skipped for that schedule.

The only way that a job execution in a Suspended/Agent Down state can resume is for the Management Agents to come back up. You cannot use the resume_execution() APIs to resume the job.

8.4.3 About Job Steps

The unit of execution in a job is called a step. A step has a command, which determines what work the step will be doing. Each command has a Java class, called a command executor, that implements the command. A command also has a set of parameters, which will be interpreted by the command executor.

The job system offers a fixed set of pre-built commands, such as the remote operation command (remoteOp), the file transfer command (fileTransfer), and the put file (putFile) and get file (getFile) commands; these are described in "Using Commands".

Steps are grouped into sets called stepsets. Stepsets can contain steps or other stepsets and can be categorized into the following types:

  • Serial Stepsets

    Serial stepsets are stepsets where the steps execute serially. Steps in a serial stepset can have dependencies on their execution. For example, a job can specify that step S2 executes only if step S1 completes successfully, or that step S3 executes only if S1 fails.

    Steps in a serial stepset can have dependencies only on other steps or stepsets within the same stepset. By default, a serial stepset is considered to complete successfully if the last step in the stepset completed successfully. It is considered to have failed if the last step in the stepset failed. You can override this behavior by using the stepsetStatus attribute, as long as the step is not a dependent of another step (that is, it has no successOf, failureOf, or abortOf attribute).

  • Parallel Stepsets

    Parallel stepsets are stepsets whose steps execute in parallel (simultaneously). Steps in a parallel stepset cannot have dependencies. A parallel stepset is considered to have succeeded if all of its parallel steps completed successfully. By default, it is considered to have failed if one or more of its constituent steps failed and no steps were aborted. You can override this behavior by using the stepsetStatus attribute.

  • Iterative Stepsets

    Iterative stepsets are special stepsets that iterate over a vector parameter. The target list of a job is available through special, implicit parameters named job_target_names and job_target_types. An iterative stepset iterates over the target list or vector parameter and essentially executes the stepset N times: once for each value of the target list or vector parameter.

    Iterative stepsets can execute in parallel (N stepset instances execute simultaneously) or serially (N stepset instances are scheduled serially, one after another). An iterative stepset is said to have succeeded if all of its N instances succeeded. It is said to have aborted if at least one of the N instances aborted, and to have failed if at least one of the N instances failed and none were aborted. An abort always causes an iterative stepset to stop processing further.

    Steps within each iterative stepset instance execute serially and can have serial dependencies similar to those within serial stepsets. Iterative serial stepsets have an attribute called iterateHaltOnFailure (not applicable for iterativeParallel stepsets). If this is set to true, the stepset halts at the first failed or aborted child iteration. By default, all iterations of an iterative serial stepset execute, even if some of them fail (iterateHaltOnFailure=false).

  • Switch Stepsets

    Switch stepsets are stepsets where only one of the steps in the stepset is executed based on the value of a specified job parameter. A switch stepset includes a switchVarName attribute, which is a job (scalar) parameter with a value that is examined by the job system to determine which of the steps in the stepset must be executed. Each step in a switch stepset has a switchCaseVal attribute, which is one of the possible values of the parameter specified by switchVarName.

    The step in the switch stepset that is executed is the one whose switchCaseVal parameter value matches the value of the switchVarName parameter of the switch stepset. Only the selected step in the switch stepset is executed. Steps in a switch stepset cannot have dependencies with other steps or stepsets within the same stepset or outside.

    By default, a switch stepset is considered to complete successfully if the selected step in the stepset completed successfully. It is considered to have failed if the selected step in the stepset failed. Also, a switch stepset succeeds if no step in the stepset was selected.

    For example, suppose there is a switch stepset with two steps, S1 and S2, and you specify the following:

    • switchVarName is sendEmail

    • switchCaseVal for S1 is true

    • switchCaseVal for S2 is false

    If the job is submitted with the job parameter sendEmail set to true, then S1 will be executed. If the job is submitted with sendEmail set to false, then S2 will be executed. If the value of sendEmail is anything else, the stepset still succeeds, but does nothing. (A sketch of this stepset appears after this list.)

  • Nested Jobs

    One of the steps in a stepset might itself be a reference to another job type. A job type can include other job types within itself. However, a job type cannot reference itself.

    Nested jobs are a convenient way to reuse blocks of functionality. For example, performing a database backup is a job with a complicated sequence of steps. However, other job types (such as patch and clone) might use the backup facility as a nested job. With nested jobs, the job type writer can choose to pass all the targets of the containing job to the nested job, or only a subset of the targets. Also, the job type can specify whether the containing job should pass all its parameters to the nested job or whether the nested job has its own set of parameters (derived from the parent job's parameters).

    The status of the individual steps and stepsets (and possibly other nested jobs) within the nested job determines the status of a nested job.

    Note:

    If a nested job refers to a job type with singleTarget set to true, then you must explicitly specify the target type applicable for the nested job, using the targetType attribute of the nested job. Without this, the nested job picks those targets that correspond to its job type's default target type only.
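The following sketch illustrates the sendEmail switch stepset described in the Switch Stepsets item above. The stepset type value and the step bodies are illustrative assumptions; only the switchVarName and switchCaseVal attributes are taken from the descriptions given earlier.

<stepset ID="emailSwitch" type="switch" switchVarName="sendEmail">
    <!-- Executed only if the job parameter sendEmail is true -->
    <step ID="S1" switchCaseVal="true" command="...">
        ...
    </step>
    <!-- Executed only if the job parameter sendEmail is false -->
    <step ID="S2" switchCaseVal="false" command="...">
        ...
    </step>
</stepset>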

8.4.3.1 Affecting the Status of a Stepset

The default algorithm by which the status of a stepset is computed from the status of its steps can be altered by the job type, using the stepsetStatus attribute of a stepset. By setting stepsetStatus to the name (ID) of a step, stepset, or job contained within it, a stepset can indicate that its status depends on the status of the specific step, stepset, or job named in the stepsetStatus attribute. This feature is useful if the author of a job type wants a stepset to succeed even if certain steps within it fail.

An example is a step that runs as the final step in a stepset and sends e-mail about the status of the job to a list of administrators. The status of the job must reflect the status of the step (or steps) that performed the work, not the status of the step that sent the e-mail. Only steps that are unconditionally executed can be named in the stepsetStatus attribute. A step, stepset, or job that is executed as a successOf or failureOf dependency cannot be named in the stepsetStatus attribute.

8.4.3.2 Passing Job Parameters

To pass the parameters of the job to steps, enclose the parameter name in a placeholder (contained within two % symbols). For example, %patchNo% represents the value of a parameter named patchNo. The job system substitutes the value of this parameter when it is passed to the command executor of a step.

Placeholders can also be defined for vector parameters by using the [] notation. For example, the first value of a vector parameter called patchList is referenced as %patchList%[1], and the second as %patchList%[2].

The job system provides a predefined set of placeholders that can be used. These are always prefixed by job_. The following placeholders are provided:

  • job_iterate_index

    The index of the current value of the parameter in an iterative stepset, when iterating over any vector parameter. The index refers to the closest enclosing stepset only. In case of nested iterative stepsets, the outer iterate index cannot be accessed.

  • job_iterate_param

    The name of the parameter being iterated over, in an iterative stepset.

  • job_target_names[n]

    The job target name at position n. For single-node jobs, the array is always of size 1 and refers only to the current node the job is executing on, even if the job was submitted against multiple nodes.

  • job_target_types[n]

    The type of the job target at position n. For single-node jobs, the array is always of size 1 and refers only to the current node the job is executing on, even if the job was submitted against multiple nodes.

  • job_name

    The name of the job.

  • job_type

    The type of the job.

  • job_owner

    The Enterprise Manager user that submitted the job.

  • job_id

    The job id. This is a string representing a globally unique identifier (GUID).

  • job_execution_id

    The execution id. This is a string representing a GUID.

  • job_step_id

    The step id. This is an integer.

In addition to the above placeholders, the following target-related placeholders are also supported:

  • emd_root: The location of the Management Agent installation

  • perlbin: The location of the (Enterprise Manager) Perl installation

  • scriptsdir: The location of Management Agent-specific scripts

The above placeholders are not interpreted by the job system, but by the Management Agent. For example, when %emd_root% is used in the remoteCommand or args parameters of the remoteOp command, or in any of the file names in the putFile, getFile and transferFile commands, the Management Agent substitutes the actual value of the Management Agent root location for this placeholder.
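For example, a step might combine job parameter placeholders and target-related placeholders as follows (the script name is hypothetical, and the credential binding is omitted for brevity):

<step ID="RunPerlScript" command="remoteOp">
    <paramList>
        <param name="targetName">%job_target_names%[1]</param>
        <param name="targetType">%job_target_types%[1]</param>
        <!-- %perlbin% and %scriptsdir% are substituted by the Management Agent -->
        <param name="remoteCommand">%perlbin%/perl</param>
        <param name="args">%scriptsdir%/myScript.pl</param>
    </paramList>
</step>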

8.4.3.3 About Job Step Output and Errors

A step consists of a status (indicating whether it succeeded, failed, or aborted), some output (the log of the step), and an error message. If a step fails, the command executed by the step can indicate the error in the error message column. By default, the standard output and standard error of an asynchronous remote operation are set as the output of the step that requested the remote operation.

A step can choose to insert error messages by using either:

  • the getErrorWriter() method in CommandManager (synchronous)

  • the insert_step_error_message API in the mgmt_jobs package (typically, this is called by a remotely executing script in a command channel)

8.5 Using Commands

This section describes available commands and associated parameters. Targets of any type can be provided for the target names and target type parameters described in the following sections. The job system automatically identifies and contacts the Management Agent that is monitoring the specified targets.

8.5.1 Using the remoteOp Command

The remote operation command has the identifier remoteOp. The command accepts a credential usage named defaultHostCred, which you must have to perform the operation on the host of the target. The binding can be performed as follows:

<step ID="Step_2" command="remoteOp">
  <credList>
    <cred usage="defaultHostCred" reference="osCreds"/>
  </credList>
  <paramList>
    <param name="targetName">%job_target_names%[1]</param>
    <param name="targetType">%job_target_types%[1]</param>
    <param name="remoteCommand">%remoteCommand%</param>
    <param name="args">%args%</param>
    <param name="executeSynchronous">false</param>
  </paramList>
</step>

defaultHostCred is the credential usage understood by the command (for example, the Java code in the command requests credentials with this string), whereas osCreds is the credential usage declared in the job type at the top level.

The remote operation command takes the following parameters:

  • remoteCommand: The path name to the executable/script (for example, /usr/local/bin/perl).

  • args: A comma-separated list of arguments to the remoteCommand.

  • targetName: The name of the target on which the command is executed. You can use placeholders to represent targets.

  • targetType: The target type of the target on which the command is executed.

  • executeSynchronous: This option defaults to false whereby a remote command always executes asynchronously on the Management Agent and updates the status of the step after the command is executed.

    If this option is set to true, then the command executes synchronously, waiting until the Management Agent completes the process. Typically, this parameter is set to true for quick, short-lived remote operations, such as starting up a listener. For remote operations that take a long time to execute, this parameter must be set to false.

    Note:

    For 12c Release 4 (12.1.0.4), this parameter is set to false and you cannot override the setting.
  • successStatus: A comma-separated list of integer values that determines the success of the step. If the remote command returns any of these numbers as the exit status, then the step is successful. The default is zero. These values are only applicable when executeSynchronous is set to true.

  • failureStatus: A comma-separated list of integer values that determines the failure of the step. If the remote command returns any of these numbers as the exit status, the step has failed. The default is all nonzero values. These values are only applicable when executeSynchronous is set to true.

  • input: If specified, this is passed as standard input to the remote program.

  • outputType: Specifies the type of output the remote command generates. This option can have two values:

    • Normal (default)

      Normal output is output that is stored in the log corresponding to this step and is not interpreted in any way.

    • Command

      Command output is output that can contain one or more command blocks, which are XML sequences that map to preregistered SQL procedure calls. This option enables remote commands to generate command blocks that can be directly loaded into schema in the Management Repository.

The standard output generated by the executed command is stored by the job system as the output corresponding to this step.
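As an illustration of the synchronous options, the following sketch runs a short check synchronously and treats exit codes 0 and 2 as success. The command path is hypothetical, and note the 12c Release 4 restriction on executeSynchronous described above:

<step ID="CheckStatus" command="remoteOp">
    <credList>
        <cred usage="defaultHostCred" reference="osCreds"/>
    </credList>
    <paramList>
        <param name="targetName">%job_target_names%[1]</param>
        <param name="targetType">%job_target_types%[1]</param>
        <param name="remoteCommand">/usr/local/bin/checkStatus.sh</param>
        <param name="executeSynchronous">true</param>
        <!-- Exit codes 0 and 2 mark the step successful -->
        <param name="successStatus">0,2</param>
    </paramList>
</step>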

8.5.2 Using the fileTransfer Command

The fileTransfer command transfers a file from one Management Agent to another. It can also execute a command on the source Management Agent and transfer its standard output as a file to the destination Management Agent, or as standard input to a command on the destination Management Agent. The fileTransfer command is always asynchronous. It uses two credentials: the srcReadCreds credential is used to read the file from the source, and the dstWriteCreds credential is used to write the file to the destination. The binding can be performed as follows:

<step ID="S1" command="fileTransfer">
  <credList>
      <cred usage="srcReadCreds" reference="mySourceReadCreds"/>
      <cred usage="dstWriteCreds" reference="myDestWriteCreds"/>
  </credList>
    <paramList>
      <param name="sourceTargetName">%job_target_names%[1]</param>
      <param name="sourceTargetType">%job_target_types%[1]</param>
      <param name="destTargetName">%job_target_names%[2]</param>
      <param name="destTargetType">%job_target_types%[2]</param>
      <param name="sourceFile">%sourceFile%</param>
      <param name="sourceCommand">%sourceCommand%</param>
      <param name="sourceArgs">%sourceArgs%</param>
      <param name="sourceInput">%sourceInput%</param>
      <param name="destFile">%destFile%</param>
      <param name="destCommand">%destCommand%</param>
      <param name="destArgs">%destArgs%</param>
  </paramList>
</step>

The fileTransfer command takes the following parameters:

  • sourceTargetName: The target name corresponding to the source Management Agent.

  • sourceTargetType: The target type corresponding to the source Management Agent.

  • destTargetName: The target name corresponding to the destination Management Agent.

  • destTargetType: The target type corresponding to the destination Management Agent.

  • sourceFile: The file to be transferred from the source Management Agent.

  • sourceCommand: The command to be executed on the source Management Agent. If this is specified, then the standard output of this command is streamed to the destination Management Agent. You cannot specify both the sourceFile and sourceCommand parameters.

  • sourceArgs: A comma-separated set of command-line parameters for the sourceCommand.

  • destFile: The location or file name of where the file is to be stored on the destination Management Agent.

  • destCommand: The command to be executed on the destination Management Agent. If this is specified, then the stream generated from the source Management Agent (whether from a file or a command) is sent to the standard input of this command. You cannot specify both destFile and destCommand parameters.

  • destArgs: A comma-separated set of command-line parameters for the destCommand.

The fileTransfer command succeeds (and returns a status code of 0) if the file was successfully transferred between the Management Agents. If there was an error, it returns error codes appropriate to the reason for failure.
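Because sourceFile and sourceCommand are mutually exclusive (as are destFile and destCommand), a step that streams the output of a command on the source Management Agent into a file on the destination might look like the following sketch (the command and file names are hypothetical):

<step ID="StreamConfig" command="fileTransfer">
    <credList>
        <cred usage="srcReadCreds" reference="mySourceReadCreds"/>
        <cred usage="dstWriteCreds" reference="myDestWriteCreds"/>
    </credList>
    <paramList>
        <param name="sourceTargetName">%job_target_names%[1]</param>
        <param name="sourceTargetType">%job_target_types%[1]</param>
        <param name="destTargetName">%job_target_names%[2]</param>
        <param name="destTargetType">%job_target_types%[2]</param>
        <!-- Standard output of this command becomes the transferred file -->
        <param name="sourceCommand">/usr/local/bin/dumpConfig.sh</param>
        <param name="destFile">%emd_root%/config.dump</param>
    </paramList>
</step>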

8.5.3 About the putFile Command

The putFile command enables you to transfer large amounts of data from the Management Repository to a file on the Management Agent. The transferred data can come from a Binary Large Object (BLOB) in the Management Repository, from a file on the file system, or can be embedded in the specification (inline).

If a file is being transferred, the location of the file must be accessible from the Management Repository installation. If a BLOB in a Management Repository is being transferred, then it must be in a table in the Management Repository that is accessible to the Management Repository schema user (typically mgmt_rep).

The command accepts a credential usage named defaultHostCred. You must have these credentials to write the file on the host of the target. The binding can be performed as follows:

<step ID="S1" command="putFile">
       <credList>
          <cred usage="defaultHostCred" reference="osCreds"/>
       </credList>
    <paramList>
     <param name="sourceType">file</param>
     <param name="targetName">%job_target_names%[1]</param>
     <param name="targetType">%job_target_types%[1]</param>
     <param name="sourceFile">%oms_root%/myfile</param>
     <param name="destFile">%emd_root%/yourfle</param>
   </paramList>
</step>

The putFile command requires the following parameters:

  • sourceType: The type of the source data. This can be sql, file, or inline.

  • targetName: The name of the target where the file is to be transferred (destination Management Agent).

  • targetType: The type of the destination target.

  • sourceFile: The file to be transferred from the Management Repository (if sourceType is set to file). This must be a file that is accessible to the Management Repository installation.

  • sqlType: The type of SQL data (if the sourceType is set to sql). Valid values are CLOB and BLOB.

  • accessSql: A SQL statement that is used to retrieve the BLOB data (if the sourceType is set to sql). For example, " select output from my_output_table where blob_id=%blobid%".

  • destFile: The location or file name of where the file is to be stored on the destination Management Agent.

  • contents: If the sourceType is set to "inline", this parameter contains the contents of the file. Note that the text can include placeholders for parameters in the form %param%.

The putFile command succeeds if the file was transferred successfully and the status code is set to 0. On failure, the status code is set to an integer indicating the reason for failure.
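For example, an inline transfer that writes a small parameter-substituted file to the Management Agent might look like the following sketch (the destination file name and contents are hypothetical):

<step ID="WriteNote" command="putFile">
    <credList>
        <cred usage="defaultHostCred" reference="osCreds"/>
    </credList>
    <paramList>
        <param name="sourceType">inline</param>
        <param name="targetName">%job_target_names%[1]</param>
        <param name="targetType">%job_target_types%[1]</param>
        <param name="destFile">%emd_root%/patch_note.txt</param>
        <!-- %patchNo% is substituted before the file is written -->
        <param name="contents">Applying patch %patchNo%</param>
    </paramList>
</step>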

8.5.4 Using the getFile Command

The getFile command transfers a file from a Management Agent to the Management Repository. The file is stored as the output of the step that executed this command.

The command accepts a credential usage named defaultHostCred, which you must have to read the file on the host of the target. The binding can be performed as follows:

<step ID="S1" command="getFile">
       <credList>
         <cred usage="defaultHostCred" reference="osCreds"/>
       </credList>
   <paramList>
     <param name="targetName">%job_target_names%[1]</param>
     <param name="targetType">%job_target_types%[1]</param>
     <param name="sourceFile">%sourceFile%</param>
     <param name="destType">%destType%</param>
     <param name="destFile">%destFile%</param>
     <param name="destParam">%destParam%</param>
   </paramList>
</step>

The getFile command has the following parameters:

  • sourceFile: The location of the file on the Management Agent that is to be transferred.

  • targetName: The name of the target where the Management Agent will be contacted to get the file.

  • targetType: The type of the target.

The getFile command succeeds if the file was transferred successfully and the status code is set to 0. On failure, the status code is set to an integer indicating the reason for failure.

8.5.5 Using the execAndSuspend Command

The execAndSuspend command is similar to the remoteOp command, but it is used for executing a host process that restarts the Management Agent. Typically, use this command in scenarios that update Management Agent binaries or configuration and require a restart of the Management Agent. The command "posts" the Agent-based operation to the Management Agent and switches its status to "success" immediately, while the subsequent step moves into a suspended status waiting for the "startup" notification from the Management Agent.

It is important to follow these restrictions and guidelines:

  • The command executed at the Management Agent must not produce any standard output or errors. Such output, if any, must be redirected to a file or to null as part of the submitted operation. Failure to do this could cause the command to fail.

  • The job type must contain a step immediately after a step that runs the execAndSuspend command. This successor step checks the success of the operation that was submitted as part of the execAndSuspend step. Because the Agent-based operation might have failed, the successor step must avoid using remoteOp and rely on direct Agent-based Java calls to check the status of the operation.

Most of the arguments to this command are similar to those of the remoteOp command. This command accepts a credential usage named defaultHostCred, which you must have to perform the operation on the host of the target. The binding can be performed as follows:

<step ID="Ta_S1_suspend" command="execAndSuspend">
      <credList>
         <cred usage="defaultHostCred" reference="osCreds"/>
      </credList>
   <paramList>
     <param name="remoteCommand">%command%</param>
     <param name="args">%args%</param>
     <param name="targetName">%job_target_names%[1]</param>
     <param name="targetType">%job_target_types%[1]</param>
     <param name="suspendTimeout">2</param>
   </paramList>
</step>

The execAndSuspend command has the following parameters:

  • remoteCommand: The path name to the executable or script, such as /usr/local/bin/perl.

  • args: A comma-separated list of arguments to the remoteCommand

  • targetName: The name of the target on which the command is executed. You can use placeholders to represent targets

  • targetType: The target type of the target on which the command is executed.

  • input: If specified, this is passed as standard input to the remote program.

  • suspendTimeout: The duration, in minutes, to wait for the notification of the Management Agent's startup. If the notification is not received within this time, the execution resumes and the successor step is executed. (The successor step is also executed if the Management Agent's startup notification is received, so the successor step must determine whether it timed out or completed successfully).

Here, defaultHostCred is the credential usage understood by the command (for example, the Java code in the command requests credentials with this string), whereas osCreds is the credential usage declared in the job type at the top level.

8.6 About Command Error Codes

The remoteOp, putFile, fileTransfer and getFile commands return the error codes listed in Table 8-2, "Command Error Codes". In the following messages, "command process" refers to a process that the Management Agent executes to run the specified remote command and capture the standard output and standard error of the executed command.

On a UNIX installation, this process is called nmo and is located in $EMD_ROOT/bin. It must be SETUID to root before it can be used successfully. This does not pose a security risk because nmo will not execute any command unless it has a valid username and password.

Table 8-2 Command Error Codes

Error Code  Description
0           No error.
1           Could not initialize core module. Most likely, something is wrong with the installation or environment of the Agent.
2           The Agent ran out of memory.
3           The Agent could not read information from its input stream.
4           The size of the input parameters was too large for the Agent to handle.
5           The command process was not setuid to root. (Every UNIX Agent installation has an executable called nmo, which must be setuid root.)
6           The specified user does not exist on this system.
7           The password was incorrect.
8           Could not run as the specified user.
9           Failed to fork the command process (nmo).
10          Failed to execute the specified process.
11          Could not obtain the exit status of the launched process.
12          The command process was interrupted before exit.
13          Failed to redirect the standard error stream to standard output.


8.7 Executing Long-Running Commands at the Oracle Management Service

The job system enables plug-in developers to write commands that perform their work at the Management Service level: for example, a command that reads two Large Objects (LOBs) from the database, performs various transformations on them, and writes them back. The job system expects such commands to implement an (empty) interface called LongRunningCommand, which is an indication that the command executes synchronously on the middle tier and could potentially execute for a long time. This enables a component of the job system called the dispatcher to schedule the long-running command as efficiently as possible, so as not to degrade the throughput of the system.

8.7.1 Configuring the Job Dispatcher to Handle Long-Running Commands

The dispatcher is a component of the job system that executes the various steps of a job when they are ready to execute. The command class associated with each step is called, and any asynchronous operations requested by it are dispatched, a process referred to as dispatching a step. The dispatcher uses thread pools to execute steps. A thread pool is a collection of a specified number of worker threads, any one of which can dispatch a step.

The job system dispatcher uses two thread-pools:

  • a short-command pool for dispatching asynchronous steps and short synchronous steps

  • a long-command pool for dispatching steps that have long-running commands

Typically, the short-command pool has a larger number of threads (for example, 25) compared to the long-running pool (for example, 10).

Usually, long-running middle-tier steps are few compared to the more numerous short-running commands. However, the sizes of the two pools are fully configurable in the dispatcher to suit the job mix at a particular site. Because multiple dispatchers can run on different nodes, the site administrator can dedicate a dispatcher to dispatching only long-running or only short-running steps.

8.8 Specifying Parameter Sources

By default, the job system expects plug-in developers to provide values for all job parameters, either when the job is submitted or at execution time (by adding or updating parameters dynamically). Typically, an application supplies these parameters in one of the following ways:

  • Asking the user of the application at the time of submitting the job.

  • Fetching parameter values from application-specific data (such as a table) and then inserting them into the job parameter list.

  • Generating new parameters dynamically through the command blocks in the output of a remote command. These could be used by subsequent steps.

The job system offers the concept of parameter sources so that plug-in developers can reduce the amount of application-specific code they have to write to fetch and populate job or step parameters (such as in the second category above). A parameter source is a mechanism that the job system uses to fetch a set of parameters, either when a job is submitted or when it is about to start executing.

The job system supports SQL (a PL/SQL procedure to fetch a set of parameters), credential (retrieval of user name and password information from the Enterprise Manager credentials table), and user sources. Plug-in developers can use these pre-built sources to fetch a wide variety of parameters. When the job system has been configured to fetch one or more parameters using a parameter source, you do not have to specify those parameters in the parameter list of the job when the job is submitted. The job system automatically fetches the parameters and adds them to the parameter list of the job.

A job type can embed information about the parameters that must be fetched by having an optional paramInfo section in the XML specification. The following example provides a snippet of a job type that executes a SQL query on an application-specific table to fetch three parameters, a, b, and c.

<jobType version="1.0" name="OSCommand" >
<paramInfo>
    <!-- Set of scalar params -->
    <paramSource paramNames="a,b,c" sourceType="sql" overrideUser="true">
        select name, value from name_value_pair_table where
            name in ('a', 'b', 'c');
    </paramSource>
</paramInfo>
.... description of job type follows ....
</jobType>

In the previous example, the paramInfo section contains the following elements:

  • paramSource: Each paramSource tag references a parameter source that can be used to fetch one or more parameters.

  • paramNames: The paramNames attribute is a comma-separated set of parameter names that the parameter source is expected to fetch.

  • sourceType: The sourceType attribute indicates the source that will be used to fetch the parameters (one of sql, credential or user)

  • overrideUser: The overrideUser attribute, if set to true, indicates that this parameter-fetching mechanism will always be used to fetch the value of the parameters, even if the parameter was specified by the user (or application) at the time the job was submitted. The default for the overrideUser attribute is false, indicating that the parameter source mechanism will be disabled if the parameter was already specified when the job was submitted.

    You can add additional source-specific properties to a parameter source that describes the fetching mechanism in greater detail. Section 8.8.1, "Understanding SQLParameter Source" provides more information.

  • evaluateOnRetry: The evaluateOnRetry attribute is an optional attribute applicable to all source types. The default setting is false for all source types except credentials (the credentials source ignores the value set and forces true). It indicates whether the parameter source must be run again when a failed execution of this job type is retried.

8.8.1 Understanding SQLParameter Source

The SQL parameter source enables plug-in developers to specify a SQL query or a PL/SQL procedure that fetches a set of parameters.

8.8.1.1 Using a PL/SQL Procedure to Fetch Scalar and Vector Parameters

The job type XML syntax is as follows:

    <paramSource sourceType="sql" paramNames="param1, param2, ...">
      <sourceParam name="procName"   value="MyPackage.MyPLSQLProc"/>
      <sourceParam name="procParams" value="%a%, %b%[1], ..."/>
    </paramSource>

The values specified in paramNames are the names of the parameters that are expected to be returned by the PL/SQL procedure specified in procName. The values in procParams specify the list of values to be passed to the PL/SQL procedure.

PL/SQL Procedure Definition

The definition of the PL/SQL procedure must adhere to the following guidelines:

  • The PL/SQL procedure must be accessible from the SYSMAN schema

  • The PL/SQL procedure must have the following signature:

          PROCEDURE MySQLProc(p_param_names     MGMT_JOB_VECTOR_PARAMS,
                              p_proc_params     MGMT_JOB_VECTOR_PARAMS,
                              p_param_list  OUT MGMT_JOB_PARAM_LIST)
    

    The list of parameters specified in paramNames are passed as p_param_names to the procedure.

    The comma-separated list of values specified in procParams allows you to pass a list of scalar (string/VARCHAR2) values as parameters to the procedure. Any job parameter references in these values are substituted with their values, and the results are bundled into an array (in the order specified in the XML) and passed to the PL/SQL procedure as the second parameter (p_proc_params).

    The third parameter is an OUT parameter that contains the list of parameters fetched by the procedure. The names of the parameters returned by this OUT parameter must match the names specified in p_param_names.

    Note:

    Although this check is not currently enforced, Oracle strongly recommends that you ensure that the names of the parameters returned in p_param_list match, or are a subset of, the list of parameter names passed in p_param_names.

Example

The following SQL parameter source creates a parameter named db_role_suffix based on an existing parameter named db_role. It also preserves the type (scalar or vector) of the original parameter, and therefore looks up the parameter from the internal tables rather than having its value passed (db_role is passed as a literal rather than as a substituted value). The values of job_id and job_execution_id are passed as substituted values.

    <paramSource sourceType="sql" paramNames="db_role_suffix">
      <sourceParam name="procName"   value="MGMT_JOB_FUNCTIONS.get_dbrole_
        prefix"/>
      <sourceParam name="procParams" value="%job_id%, %job_execution_id%, db_
        role"/>
    </paramSource>

Within the PL/SQL procedure MGMT_JOB_FUNCTIONS.get_dbrole_prefix, the p_proc_params list contains the values corresponding to the job_id at index 1 and the execution_id at index 2, while the element at index 3 corresponds to the literal text db_role.

Available SQL Paramsource Procedures

The job system provides the following PL/SQL procedures for use in job types across Enterprise Manager:

  • is_null

    Checks whether the passed job variable is null. A missing variable is also considered null. For each variable passed, the procedure creates a corresponding variable with the scalar value true if the passed variable is non-existent or null. For all other cases, the scalar value false is set. A vector of zero elements is considered non-null.

    Example:

        <paramSource sourceType="sql" paramNames="a_is_null, b_is_null, c_is_null">
          <sourceParam name="procName"   value="MGMT_JOB_FUNCTIONS.is_null"/>
          <sourceParam name="procParams" value="%job_id%, %job_execution_id%, a, b,
           c"/>
        </paramSource>
    

    In this example, the job variables a, b, and c are checked for null values, and the variables a_is_null, b_is_null, and c_is_null are assigned values of true or false accordingly.

  • add_dbrole_prefix

    For every variable passed, the procedure prefixes the value with the string AS if the value is not null and not Normal (case-insensitive); otherwise, it returns null. Therefore, a variable with the value SYSDBA results in a value of AS SYSDBA, but a value of Normal returns null. If the passed variable corresponds to a vector, the same logic is applied to each individual element of the vector. This is useful when using database credentials to connect to a SQL*Plus session.

    Example:

        <paramSource sourceType="sql" paramNames="db_role_suffix1, db_role_
         suffix2">
          <sourceParam name="procName"   value="MGMT_JOB_FUNCTIONS.get_dbrole_
            prefix"/>
          <sourceParam name="procParams" value="%job_id%, %job_execution_id%, db_  
            role1, db_role2"/>
        </paramSource>
    

    Here, the values of the variables db_role1 and db_role2 are prefixed with AS as necessary and saved into variables db_role_suffix1 and db_role_suffix2 respectively.

8.8.2 About the User Parameter Source

The job system also offers a special parameter source called "user", which indicates that a set of parameters must be supplied when a job of that type is submitted. If a parameter is declared to be of source "user" and the "required" attribute is set to "true", then the job system validates that all specified parameters in the source are provided when a job is submitted.

The user source can be evaluated at job submission time or job execution time. When evaluated at submission time, it causes an exception to be thrown if any required parameters are missing. When evaluated at execution time, it causes the execution to fail or stop if there are any missing required parameters.

<paramInfo>
    <!-- Indicate that parameters a, b and c are required params -->
    <paramSource paramNames="a, b, c" required="true" sourceType="user" />
</paramInfo>

The user source can also be used to indicate that pairs of parameters are target parameters. For example:

<paramInfo>
    <!-- Indicate that parameters a, b, c, d, e, f are target params -->
    <paramSource paramNames="a, b, c, d, e, f" sourceType="user" >
        <sourceParam name="targetNameParams" value="a, b, c" />
        <sourceParam name="targetTypeParams" value="d, e, f" />
    </paramSource>
</paramInfo>

This example indicates that parameters (a,d), (b,e), (c,f) are parameters that hold target information. Parameter "a" holds target names and "d" holds the corresponding target types. Similarly with parameters "b" and "e", and "c" and "f". For each parameter that holds target names, there must be a corresponding parameter that holds target types. The parameters can be either scalar or vector.

8.8.3 About the Inline Parameter Source

The inline parameter source allows job types to define parameters in terms of other parameters. It is a convenient mechanism to construct parameters that can be reused in other parts of the job type. The following example creates a parameter called filename based on the job execution id, for use in other parts of the job type.

<jobType>
      <paramInfo>
           <!-- Indicate that value for parameter filename is provided inline -->
           <paramSource paramNames="fileName" sourceType="inline" >
                <sourceParam name="paramValues" value="%job_execution_id%.log" />
           </paramSource>
      </paramInfo>
.....
      <stepset ID="main" type="serial">
        <step command="putFile" ID="S1">
             ...
            <param name="destFile">%fileName%</param>
             ...
        </step>
      </stepset>
</jobType>

The following example sets a vector parameter called vparam to be a vector of the values v1, v2, v3, and v4. Only one vector parameter at a time can be set using the inline source.

<jobType>
      <paramInfo>
          <!-- Indicate that value for parameter vparam is provided inline -->
          <paramSource paramNames="vparam" sourceType="inline" >
              <sourceParam name="paramValues" value="v1,v2,v3,v4" />
              <sourceParam name="vectorParams" value="vparam" />
          </paramSource>
      </paramInfo>
....

8.8.4 Using the checkValue Parameter Source

The checkValue parameter source enables job types to have the job system check that a specified set of parameters has a specified set of values. If a parameter does not have the specified value, then the job system either terminates or suspends the job.

<paramInfo>
      <!-- Check that the parameter halt has the value true. If not, suspend the job -->
      <paramSource paramNames="halt" sourceType="checkValue" >
          <sourceParam name="paramValues" value="true" />
          <sourceParam name="action" value="suspend" />
      </paramSource>
</paramInfo>

The following example checks whether a vector parameter v has the values v1,v2,v3, and v4. Only one vector parameter at a time can be specified in a checkValue parameter source. If the vector parameter does not have those values, in that order, then the job is terminated.

<paramInfo>
    <!-- Check that the vector parameter v has the values v1,v2,v3,v4. If not, abort the job -->
    <paramSource paramNames="v"  sourceType="checkValue" >
        <sourceParam name="paramValues" value="v1,v2,v3,v4" />
        <sourceParam name="action" value="abort" />
        <sourceParam name="vectorParams" value="v" />
    </paramSource>
</paramInfo>

8.8.5 About the properties Parameter Source

The properties parameter source fetches a named set of target properties for each of a specified set of targets and stores each set of property values in a vector parameter.

The following example fetches the properties "OracleHome" and "OracleSID" for the specified set of targets (dlsun966 and ap952sun) into the vector parameters ohomes and osids, respectively. The first vector value in the ohomes parameter will contain the OracleHome property for dlsun966, and the second will contain the OracleHome property for ap952sun. Likewise with the OracleSID property.

<paramInfo>
    <!-- Fetch the OracleHome and OracleSID properties into the vector params ohomes, osids -->
    <paramSource paramNames="ohomes,osids" overrideUser="true" sourceType="properties">
      <sourceParams>
            <sourceParam name="propertyNames" value="OracleHome,OracleSID" />
            <sourceParam name="targetNames" value="dlsun966,ap952sun" />
            <sourceParam name="targetTypes" value="host,host" />
      </sourceParams>
    </paramSource>
</paramInfo>

As with the credentials source, vector parameter names can be provided for the target names and types.

<paramInfo>
    <!-- Fetch the OracleHome and OracleSID properties into the vector params ohomes, osids -->
    <paramSource paramNames="ohomes,osids" overrideUser="true" sourceType="properties">
      <sourceParams>
            <sourceParam name="propertyNames" value="OracleHome,OracleSID" />
            <sourceParam name="targetNamesParam" value="job_target_names" />
            <sourceParam name="targetTypes" value="job_target_types" />
      </sourceParams>
    </paramSource>
</paramInfo>

8.8.6 Understanding Parameter Sources and Parameter Substitution

Parameter sources are applied in the order they are specified. Parameter substitution (of the form %param%) can be used inside sourceParam tags, but the substituted parameter must exist when the parameter source is evaluated. Otherwise, the job system substitutes an empty string in its place.
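For example, because sources are evaluated in order, an inline source can safely reference a parameter produced by an earlier source in the same paramInfo section. In the following sketch (the parameter names are hypothetical), fileName is constructed first and then consumed by the second source:

<paramInfo>
    <paramSource paramNames="fileName" sourceType="inline">
        <sourceParam name="paramValues" value="%job_execution_id%.log" />
    </paramSource>
    <!-- fileName already exists when this source is evaluated -->
    <paramSource paramNames="logPath" sourceType="inline">
        <sourceParam name="paramValues" value="/tmp/%fileName%" />
    </paramSource>
</paramInfo>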

8.8.7 About Parameter Encryption

The job system offers the facility of storing specified parameters in encrypted form. Parameters that contain sensitive information, such as passwords, must be stored in encrypted form. A job type can indicate that parameters fetched through a parameter source be encrypted by setting the encrypted attribute to true in a parameter source.

For example:

<paramInfo>
    <!-- Fetch params from the credentials table into vector parameters; store them encrypted -->
    <paramSource paramNames="vec_usernames,vec_passwords" overrideUser="true" 
                                  sourceType="credentials" encrypted="true">
      <sourceParams>
            <sourceParam name="credentialType" value="patch" />
            <sourceParam name="credentialColumns" value="node_username,node_password" />
            <sourceParam name="targetNames" value="dlsun966,ap952sun" />
            <sourceParam name="targetTypes" value="host,host" />
            <sourceParam name="credentialScope" value="system" />
      </sourceParams>
    </paramSource>
</paramInfo>

A job type can also specify that parameters supplied by the user be stored in encrypted form:

<paramInfo>
    <!-- Indicate that parameters a, b and c are required params -->
    <paramSource paramNames="a, b, c" required="true" sourceType="user" encrypted="true" />
</paramInfo>

8.9 Specifying Credential Information

Until Oracle Enterprise Manager 11g Release 1, credentials were represented as two parameters (user name and password). The job type owner could either use a credential parameter source to extract these parameters or define them as user parameters, and then pass the parameters on to the various steps that require them.

This approach required that knowledge about the credential set, credential types, and their columns, along with knowledge about the various authentication mechanisms, be built into the job type, irrespective of the pool of authentication schemes supported by Enterprise Manager. This restricted the freedom of the job type owner to model just the job type and ignore the authentication required to perform the operations. To overcome these issues and to evolve a unified mechanism for specifying credentials in the job type, Oracle introduced a new concept called credential usage.

8.9.1 About Credential Usage

A credential usage is the point where the credential is required to perform an operation. Credential submissions must be made against these usages only.

8.9.2 Overview of Credential Binding

A credential binding is a reference to a credential by a step. Each step exposes its credential usage, which must be fulfilled in the metadata. Therefore, each credential binding refers to a credential usage that is defined in the credential usage section of the metadata. When the step requests its own credential usage, a binding resolves which credential submission in a particular automation entity (job or deployment procedure instance) must be passed to that step.

In earlier releases, job types had a credential parameter source to extract the user name and password from the credentials (JobCredRecord) passed to the job, and these were then available as parameters to the entire job type. This behavior is deprecated and no longer supported; it is superseded by the new credential usage structure.

The following Job type example shows the use of credentials declaration in the job type:

<jobType version="1.0" name="OSCommandNG" 
         singleTarget="true" targetTypes="all" 
         defaultTargetType="host" editable="true" 
         restartable="true" suspendable="true" > 
    <credentials> 
       <credential usage="hostCreds" authTargetType="host" 
                   defaultCredentialSet="HostCredsNormal"/> 
    </credentials> 
    <paramInfo> 
         <paramSource sourceType="user" paramNames="command" 
              required="true" evaluateAtSubmission="true" /> 
         <paramSource sourceType="inline" 
                      paramNames="TargetName,TargetType" 
                    overrideUser="true" 
            evaluateAtSubmission="true"> 
             <sourceParam name="paramValues" 
                         value="%job_target_names%[1],
                           %job_target_types%[1]" /> 
         </paramSource> 
         <paramSource sourceType="properties" 
                    overrideUser="true" 
            evaluateAtSubmission="false" > 
            <sourceParam name="targetNamesParam" 
                        value="job_target_names" /> 
            <sourceParam name="targetTypesParam" 
                        value="job_target_types" /> 
         </paramSource> 
         <paramSource sourceType="substValues" 
                      paramNames="host_command,host_args,os_script" 
                    overrideUser="true" evaluateAtSubmission="false"> 
              <sourceParam name="sourceParams" 
                          value="command,args,os_script" /> 
         </paramSource> 
     </paramInfo> 
     <stepset ID="main" type="serial" > 
        <step ID="Command" command="sampleRemoteOp"> 
           <credList> 
              <cred usage="OS_CRED" reference="hostCreds"/> 
           </credList> 
           <paramList> 
              <param name="remoteCommand">%host_command%</param> 
              <param name="args">%host_args%</param> 
              <param name="input"><![CDATA[%os_script%]]></param> 
              <param name="largeInputParam">large_os_script</param> 
              <param name="substituteLargeParam">true</param> 
              <param name="targetName">%job_target_names%[1]</param> 
              <param name="targetType">%job_target_types%[1]</param> 
              <param name="executeSynchronous">false</param> 
          </paramList> 
       </step> 
    </stepset> 
</jobType> 

The first set of three lines declares a credential usage in the job type. The next set of lines binds that credential usage to the credential usage of the step. The user name and password cannot be extracted by the job system and therefore can no longer be exposed as parameters.

8.9.3 XSD Elements – Credential Usage and Credential Binding

The XSD elements credential usage (credential) and credential binding (cred) are explained in Table 8-3 and Table 8-4.

Table 8-3 Credential Usage (credential)

Attribute Required (Y/N) Description

usage

Y

Name of the credential usage by which it is referred to in the job type. All credential submissions are made against this name.

authTargetType

Y

Target type against which authentication is to be performed for any operation. For example, running "ls" on any target requires authentication against the host.

defaultCredentialSet

Y

Name of the credential set to be picked up as a credential if no submissions are found for the credential usage when required.

credentialTypes

N

Names of the credential types that can be used for specifying the credentials. This facilitates filtering of credentials in the credential selector UI component.

displayName

N

Name that is intended to be shown in the credential selector UI.

description

N

Description that is intended to be shown in the credential selector UI.


Table 8-4 Credential Binding (cred)

Attribute / sub element Required (Y/N) Description

usage

Y

Credential usage understood by the step.

reference

Y

Credential usage referred to and present in the declarations of the job type or DP metadata.


Note:

The Credential Binding element can only be used inside the step or job elements in the job type XML.

8.10 Specifying Security Information

Typically, a job type tends to perform actions that can be considered "privileged", for example, patching a production database or modifying the software installed in an Oracle home directory or the APPL_TOP directory. Such job types must be submitted by Enterprise Manager users that have the appropriate level of privileges to perform these actions.

The job system provides a section called securityInfo, which the author of a job type can use to specify the minimum level of privileges (system and target) that the submitter of a job of this type must have.

The securityInfo section enables the job type author to encapsulate the security requirements associated with submitting a job in the job type itself. No further code needs to be written to enforce security. Also, it ensures that Enterprise Manager users cannot directly submit jobs of a specific type (using the job system APIs and bypassing the application) unless they have the set of privileges defined by the job type author.

Example 1

The following example shows what a typical securityInfo section looks like. Suppose you are writing a job type that clones a database. This job type requires two targets, a source database and a destination node on which the destination database will be created. This job type requires that the user submitting a clone job have a CLONE FROM privilege on the source (database) and a MAINTAIN privilege on the destination (node).

In addition, the user requires the CREATE TARGET system privilege to introduce a new target into the system. Assuming that the job type is written so that the first target in the target list is the source and the second target in the target list is the destination, the security requirements for such a job type could be addressed as follows:

<jobType>
  <securityInfo>
    <privilege name="CREATE TARGET" type="system" />
    <privilege name="CLONE FROM" type="target" evaluateAtSubmission="false" >
        <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
    </privilege>
    <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false">
        <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
    </privilege>
  </securityInfo>
  <!-- An optional <paramInfo> section will follow here, followed by the stepset
       definition of the job
  -->
  <paramInfo>
   ....
  </paramInfo>
  <stepset ...>
  </stepset>
</jobType>

The securityInfo section is a set of <privilege> tags. Each privilege could be a system or target privilege, as indicated by the type attribute of the tag. If the privilege is a target privilege, then the targets that the privilege is attached to must be explicitly enumerated, or else the target_names_param and target_types_param attributes must be used as shown in the following example. The usual %param% notation can be used to indicate job parameter and target placeholders.

By default, all <privilege> directives in the securityInfo section are evaluated at job submission time, after all submit-time parameter sources have been evaluated. The job system throws an exception if the user lacks any of the privileges specified in the securityInfo section.

You can also direct the job system to evaluate a privilege directive at job execution time by setting the evaluateAtSubmission parameter to false. Because execution-time parameter sources are not evaluated at job submission time, take care not to use job parameters that might not have been evaluated yet in submission-time privilege directives.

The only reason you might want to do this is if the exact set of targets that the job is operating on is unknown until the job execution time (for example, it is computed using an execution-time parameter source). Execution-time privilege directives are evaluated after all execution-time parameter sources are evaluated.

Example 2

Assume that you are writing a job type that requires a MODIFY privilege on each one of its targets, but the exact number of targets is unknown at the time of writing. Use the target_names_param and target_types_param attributes for this purpose. These specify vector parameters from which the job system will get the target names and the corresponding target types. These could be any vector parameters. This example uses the job target list (job_target_names and job_target_types).

<securityInfo>
    <privilege name="MODIFY" type="target" target_names_param="job_target_names" 
                target_types_param="job_target_types" />
</securityInfo>

8.11 Specifying Lock Information

Executing jobs often need to acquire resources. For example, a job applying a patch to a database might need a mechanism to ensure that other jobs (submitted by other users in the system) on the database are prevented from running while the patch is being applied. In other words, it might want to acquire a lock on the database target so that other jobs that try to acquire the same lock block (or terminate). This allows a patch job, once it starts, to perform its work without disruption.

Sometimes, locks could be at more than one level. A hot backup of a database, for example, can allow other hot backups to proceed (because they do not bring down the database), but cannot allow cold backups or database shutdown jobs to proceed (because they shut down the database, causing the backup to fail).

A job execution indicates that it is reserving a resource on a target by acquiring a lock on the target. A lock is a proxy for reserving some part of the functionality of a target. When an execution acquires a lock, it blocks other executions that try to acquire the same lock on the target. A lock is identified by a name and a type and can be of the following types:

  • Global: These are locks that are not associated with a target. An execution that holds a global lock blocks other executions that are trying to acquire the same global lock (that is, a lock with the same name).

  • Target Exclusive: These are locks that are associated with a target. An execution that holds an exclusive lock on a target blocks executions that are trying to acquire any named lock on the target, as well as executions trying to acquire an exclusive lock on the target. Target exclusive locks have no name: there is exactly one exclusive lock per target.

  • Target Named: A named lock on a target is analogous to obtaining a lock on one particular functionality of the target. A named lock has a user-specified name. An execution that holds a named lock blocks other executions that are trying to acquire the same named lock, as well as executions that are trying to acquire an exclusive lock on the target.

Example

Locks that a job type wants to acquire can be obtained by specifying a lockInfo section in the job type. This example lists the locks that the job is to acquire, the types of locks, as well as the targets on which it wants to acquire the locks:

<lockInfo action="suspend">
    <lock type="targetExclusive">
        <targetList>
            <target name="%backup_db%" type="oracle_database" />
        </targetList>
    </lock>
    <lock type="targetNamed" name="LOCK1" >
        <targetList>
            <target name="%backup_db%" type="oracle_database" />
            <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
            <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
        </targetList>
    </lock>
    <lock type="global" name="GLOBALLOCK1" />
</lockInfo>

This example shows a job type that acquires a target-exclusive lock on a database target whose name is given by the job parameter backup_db. It also acquires a named target lock named "LOCK1" on three targets, namely, the database whose name is stored in the job parameter backup_db, and the first two targets in the target list of the job. Finally, it acquires a global lock named "GLOBALLOCK1". The "action" attribute specifies what the job system should do to the execution if any of the locks in the section cannot be obtained (because some other execution is holding them). Possible values are suspend (all locks are released and the execution state changes to "Suspended:Lock") and abort (the execution terminates). The following points can be made about executions and locks:

  • An execution can only attempt to obtain locks when it starts (although it is possible to override this by using nested jobs).

  • An execution can acquire multiple locks. Locks are always acquired in the order specified. Because of this, executions can potentially deadlock each other if they attempt to acquire locks in the wrong order.

  • Target locks are always acquired on targets in the same order as they are specified in the <targetList> tag.

  • If a target in the target list is null or does not exist, the execution terminates.

  • If an execution attempts to acquire a lock it already holds, it succeeds.

  • If an execution cannot acquire a lock (usually because another execution is holding it), it has a choice of suspending itself or terminating. If it chooses to suspend itself, all locks it has acquired so far are released, and the execution is put in the Suspended/Lock state.

  • All locks held by an execution are released when an execution finishes (whether it completes, fails, or is stopped). There might be several waiting executions for each released lock and these are sorted by time, with the earliest request getting the lock.

When jobs that have the lockInfo section are nested inside each other, the nested job's locks are obtained when the nested job first executes, not when an execution starts. If the locks are not available, the parent execution can be suspended or terminated, possibly after a few steps have executed already.

lockInfo Example 1

In this example, two job types called HOTBACKUP and COLDBACKUP perform hot backups and cold backups, respectively, on the database. The difference is that the cold backup brings the database down, but the hot backup leaves it up. Only one hot backup can execute at a time and it keeps out other hot backups as well as cold backups.

When a cold backup is executing, no other job type can execute (since it shuts down the database as part of its execution). A third job type called SQLANALYZE performs scheduled maintenance activity that results in modifications to database tuning parameters (two SQLANALYZE jobs cannot run at the same time).

Table 8-5 shows the incompatibilities between the job types. An 'X' indicates that the job types are incompatible. An 'OK' indicates that the job types are compatible.

Table 8-5 Job Type Incompatibilities

Job Type HOTBACKUP COLDBACKUP SQLANALYZE

HOTBACKUP

X

X

OK

COLDBACKUP

X

X

X

SQLANALYZE

OK

X

X


The cold backup obtains an exclusive target lock on the database. The hot backup job does not obtain an exclusive lock, but only the named lock "BACKUP_LOCK". Likewise, the SQLANALYZE job obtains a named target lock called "SQLANALYZE_LOCK".

Assuming that the database that the jobs operate on is the first target in the target list of the job, the lock section of the SQLANALYZE job type looks as follows:

<jobType name="SQLANALYZE">
    <lockInfo action="abort">
        <lock type="targetNamed" name="SQLANALYZE_LOCK" >
            <targetList>
              <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
            </targetList>
        </lock>
    </lockInfo>
    ........ Rest of the job type follows
</jobType>

Since a named target lock blocks all target exclusive locks, executing hot backups suspends cold backups, but not analyze jobs (because they try to acquire different named locks). Executing SQL analyze jobs terminates other SQL analyze jobs and suspends cold backups, but not hot backups. Executing cold backups suspends hot backups and terminates SQL analyze jobs.

lockInfo Example 2

A job type called PATCHCHECK periodically checks a patch stage area and downloads information about newly staged patches into the Management Repository. Two such jobs cannot run at the same time; however, the job is not associated with any target. The solution is for the job type to attempt to grab a global lock:

<jobType name="PATCHCHECK">
    <lockInfo>
        <lock type="global" name="PATCHCHECK_LOCK" />
    </lockInfo>
    ........ Rest of the job type follows
</jobType>

lockInfo Example 3

A job type that nests the SQLANALYZE type within itself is shown in the following example. The nested job executes after the first step (S1) executes.

<jobType name="COMPOSITEJOB">
    <stepset ID="main" type="serial">
        <step ID="S1" ...>
           ....
         </step>
         <job name="nestedsql" type="SQLANALYZE">
            ....
         </job>
    </stepset>
</jobType>

In the previous example, the nested job tries to acquire locks when it executes (because the SQLANALYZE job type has a lockInfo section). If the locks are currently held by other executions, then the nested job terminates (as specified in the lockInfo), which in turn terminates the parent job.

8.12 Suspending a Job or Step

Suspended is a special state that indicates that steps in the job will not be considered for scheduling and execution. A step in an executing job can suspend the job, through the suspend_job PL/SQL API. This suspends both the currently executing step, and the job itself.

Suspending a job means that all steps in the job that are currently in a "scheduled" state are marked as "suspended" and will thereafter not be scheduled or executed. All currently executing steps (for example, parallel stepsets) continue to execute. However, when any currently executing step completes, the next steps in the job are not scheduled; instead, they are put in the suspended state. When a job is suspended on submission, the same applies to the first steps in the job that would have been scheduled.

Suspended jobs may be restarted at any time by calling the restart_job() PL/SQL API. However, jobs that are suspended because of serialization (locking) rules are not restartable manually. The job system restarts such jobs automatically when currently executing jobs of that job type complete. Restarting a job effectively changes the state of all suspended steps to scheduled and job execution proceeds normally.

8.13 Restarting a Job

If a job is suspended, failed, or terminated, you can restart it from any given step (typically, the stepset that contains a failed or terminated step). For failed or terminated jobs, the steps that are scheduled again depend on the step from which the job is restarted.

8.13.1 Restarting Versus Resubmitting

If a step in a job is resubmitted, it means that it executes regardless of whether the original execution of the step completed or failed. If a stepset is resubmitted, then the first step, stepset, or job in the stepset is resubmitted, recursively. Therefore, when a job is resubmitted, the entire job is executed again by recursively resubmitting its initial stepset. The parameters and targets used are the same that were used when the job was first submitted. Essentially, the job executes as if it were submitted for the first time with the specified set of parameters and targets. Also, you can use the resubmit_job API in the mgmt_jobs package to resubmit a job. You can resubmit jobs even if the earlier executions completed successfully.

Restarting a job generally refers to resuming job execution from the last failed step (although the job type can control this behavior using the restartMode attribute of steps/stepsets/jobs). Usually, steps from the failed job execution that succeeded are not executed again.

To restart a failed or terminated job, call the restart_job API in the mgmt_jobs package. You cannot restart a job that completed successfully.

8.13.2 Default Restart Behavior

Restarting a job creates a new execution called the restart execution. The original failed execution of the job is called the source execution. All parameters and targets are copied over from the source execution to the restart execution. Parameter sources are not reevaluated, unless the original job terminated because of a parameter source failure.

To restart a serial or iterative stepset, the job system first examines the status of the serial stepset. If the status of the serial stepset is "Completed", then all the entries for its constituent steps are copied over from the source execution to the restart execution. If the status of the stepset is "Failed" or "Aborted", then the job system starts top down from the first step in the stepset.

If the step previously completed successfully in the source execution, it is copied to the restart execution. If the step previously failed or aborted, it is rescheduled for execution in the restart execution. After this step has finished executing, the job system determines the next steps to execute. These could be successOf or failureOf dependencies, or simply steps/stepsets/jobs that execute after the current step.

If the subsequent step completed successfully in the source execution, then it will not be scheduled for execution again and the job system copies the source execution status to the restart execution for that step. It continues in this fashion until it reaches the end of the stepset. It then recomputes the status of the stepset based on the new executions.

To restart a parallel stepset, the job system first examines the status of the parallel stepset. If the status of the stepset is "Completed", then all the entries for its constituent steps are copied over from the source execution to the restart execution. If the status of the stepset is "Failed" or "Aborted", then the job system copies over all successful steps in the steps from the source to the restart execution. It reschedules all steps that failed or terminated in the source execution, in parallel. After these steps have finished executing, the status of the stepset is recomputed.

To restart a nested job, the restart algorithm is applied recursively to the first (outer) stepset of the nested job.

In the previous paragraphs, if one of the entities is a stepset or a nested job, then the restart mechanism is applied recursively to the stepset or job. When entries for steps are copied over to the restart execution, the child execution entries point to the same output Character Large Object (CLOB) entries as the parent execution.

8.13.3 Using the restartMode Directive

A job type can affect the restart behavior of each step, stepset, or job within it by the use of the restartMode attribute. You can set this to "failure" (default) or "always".

  • When restartMode is set to "failure" (the default), the top-down copying process described in the previous section applies: the step, stepset, or job is copied without being executed again if it succeeded in the source execution. If it failed or terminated in the source execution, then it restarts recursively at the last point of failure.

  • When the restartMode attribute is set to "always" for a step, the step is always executed again in a restart, regardless of whether it succeeded or failed in the source execution. The use of this attribute is useful when certain steps in a job must always be executed again in a restart (for example, a step that shuts down a database before backing it up).

For a stepset or nested job, if the restartMode attribute is set to "always", then all steps in the stepset/nested job are restarted, even if they completed successfully in the source execution. If it is set to "failure", then restart is attempted only if the status of the stepset or nested job was set to Failed or Aborted in the source execution.

Individual steps inside a stepset or nested job might have their restartMode set to "always" and such steps are always executed again.

Restart Examples

The following sections discuss a range of scenarios related to restarting stepsets.

Example 1

Consider the serial stepset with the sequence of steps below:

<jobtype ...>
<stepset ID="main" type="serial" >
    <step ID="S1" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
    <step ID="S3" failureOf="S2"...>
     ...
    </step>
    <step ID="S4" successOf="S2"...>
     ...
    </step> 
</stepset>
</jobtype>

In this stepset, assume that in the source execution step S1 executed successfully and steps S2 and S3 (the failure dependency of S2) failed.

When the job is restarted, step S1 is copied to the restart execution from the source execution without being re-executed (because it successfully completed in the source execution). Step S2, which failed in the source execution, is rescheduled and executed.

If S2 completes successfully, then S4, its success dependency (which never executed in the source execution), is scheduled and executed. The status of the stepset (and the job) is the status of S4.

If S2 fails, then S3 (its failure dependency) is rescheduled and executed (since it had failed in the source execution), and the status of the stepset (and the job) is the status of S3.

Assume that step S1 succeeded, S2 failed, and S3 (its failure dependency) succeeded in the source execution. As a result, the stepset (and therefore the job execution) succeeded. This execution cannot be restarted because the execution completed successfully although one of its steps failed.

Finally, assume that steps S1 and S2 succeed, but S4 (S2's success dependency) failed. S3 is not scheduled in this situation. When the execution is restarted, the job system copies over the executions of S1 and S2 from the source to the restart execution, and reschedules and executes S4. The job succeeds if S4 succeeds.

Example 2

Consider the following:

<jobtype ...>
<stepset ID="main" type="serial" stepsetStatus="S2" >
    <step ID="S1" restartMode="always" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
    <step ID="S3" ...>
     ...
    </step> 
</stepset>
</jobtype>

In the previous example, assume that step S1 completes and S2 fails. S3 executes (because it does not have a dependency on S2) and succeeds. The job, however, fails, because the stepset main has its stepsetStatus set to S2.

When the job is restarted, S1 is executed again, although it completed the first time, because the restartMode of S1 was set to "always".

Step S2 is rescheduled and executed, because it failed in the source execution. After S2 executes, step S3 is not rescheduled for execution again, because it executed successfully in the source execution. If the intention is that S3 must execute in the restart execution, then its restartMode must be set to "always".

In the previous example, if S1 and S2 succeeded and S3 failed, the stepset main would still succeed (because S2 determines the status of the stepset). In this case, the job succeeds, and cannot be restarted.

Example 3

Consider the following example:

<jobtype ...>
<stepset ID="main" type="serial"  >
  <stepset type="serial" ID="SS1" stepsetStatus="S1">
    <step ID="S1" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
  </stepset>
  <stepset type="parallel" ID="PS1" successOf="S1" >
    <step ID="P1" ...>
     ...
    </step>
    <step ID="P2" ...>
     ...
    </step>
    <step ID="P3" ...>
     ...
    </step>
  </stepset>
</stepset>
</jobtype>

In this example, assume that steps S1 and S2 succeeded (and therefore, stepset SS1 completed successfully). Thereafter, the parallel stepset PS1 was scheduled, and assume that P1 completed, but P2 and P3 failed. As a result, the stepset "main" (and the job) failed.

When the execution is restarted, the steps S1 and S2 (and therefore the stepset SS1) are copied over without execution. In the parallel stepset PS1, both the steps that failed (P2 and P3) are rescheduled and executed.

Assume that S1 completed and S2 failed in the source execution. Stepset SS1 still completed successfully because the status of the stepset is determined by S1, not S2 (because of the stepsetStatus directive). Assume that PS1 was scheduled and P1 failed, and P2 and P3 executed successfully. When this job is restarted, the step S2 will not be executed again (because the stepset SS1 completed successfully). Only the failed step P1 is rescheduled and executed.

Example 4

Consider a slightly modified version of the XML in "Example 3":

<jobtype ...>
<stepset ID="main" type="serial"  >
  <stepset type="serial" ID="SS1" stepsetStatus="S1" restartMode="always" >
    <step ID="S1" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
  </stepset>
  <stepset type="parallel" ID="PS1" successOf="S1" >
    <step ID="P1" ...>
     ...
    </step>
    <step ID="P2" ...>
     ...
    </step>
    <step ID="P3" ...>
     ...
    </step>
  </stepset>
</stepset>
</jobtype>

In the previous example, assume that S1 and S2 succeeded (and therefore, stepset SS1 completed successfully). Thereafter, the parallel stepset PS1 was scheduled, and assume that P1 completed, but P2 and P3 failed. When the job is restarted, the entire stepset SS1 is restarted (since the restartMode is set to "always"). This means that steps S1 and S2 are successively scheduled and executed. Now the stepset PS1 is restarted, and because the restartMode is not specified (it is always "failure" by default), it is restarted at the point of failure, which in this case means that the failed steps P2 and P3 are executed again, but not P1.

8.14 Adding Job Types to the Job Activity and Job Library Pages

To make a new job type accessible from the Enterprise Manager Cloud Control console Job Activity or Job Library page, you must modify the following XML tag attributes.

  • To display the job type on the Job Activity page, set useDefaultCreateUI to "true" as shown in the following example.

    <displayInfo useDefaultCreateUI="true"/>
    
  • To display the job type on the Job Library page, in addition to setting the useDefaultCreateUI attribute, you must also set the jobtype editable attribute to "true".

    <jobtype name="jobType1" editable="true">
    

If you set useDefaultCreateUI="true" and editable="false", then the job type appears on the Job Activity page only and not on the Job Library page. This means you cannot edit the job definition.

8.14.1 Adding a Job Type to the Job Activity Page

Figure 8-1 shows the result of setting the useDefaultCreateUI attribute to "true", which enables users creating a job to select the newly added job type from the Create Job menu.

Figure 8-1 Available Job Types from the Job Activity Page


Making the job type available from the Job Activity page also permits access to the default Create Job user interface when a user attempts to create a job using the newly added job type.

Adding the displayInfo Tag

You can add the displayInfo tag to the job definition file at any point after the </stepset> tag and before the </jobtype> tag at the end of the job definition file, as shown in the following example.

<jobtype ...>
<stepset ID="main" type="serial"  >
  <stepset type="serial" ID="SS1" stepsetStatus="S1">
    <step ID="S1" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
  </stepset>
  <stepset type="parallel" ID="PS1" successOf="S1" >
    <step ID="P1" ...>
     ...
    </step>
    <step ID="P2" ...>
     ...
    </step>
    <step ID="P3" ...>
     ...
    </step>
  </stepset>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>

8.14.2 Adding a Job Type to the Job Library Page

To make the job type available from the Job Library page, you must set the jobType tag's editable attribute to "true" in addition to adding the displayInfo tag. This makes the newly added job type a selectable option from the Create Library Job menu.

Making the Job Type Editable

The editable attribute of the jobtype tag is set at the beginning of the job definition file, as shown in the following example.

<jobtype name="jobType1" editable="true">
<stepset ID="main" type="serial"  >
  <stepset type="serial" ID="SS1" stepsetStatus="S1">
    <step ID="S1" ...>
     ...
    </step>
    <step ID="S2" ...>
     ...
    </step>
  </stepset>
  <stepset type="parallel" ID="PS1" successOf="S1" >
    <step ID="P1" ...>
     ...
    </step>
    <step ID="P2" ...>
     ...
    </step>
    <step ID="P3" ...>
     ...
    </step>
  </stepset>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>

8.15 Examples: Specifying Job Types in XML

The following sections provide examples of specifying job types in XML.

Example 1

Example 8-1 describes a job type called jobType1 that defines four steps: S1, S2, S3, and S4. It executes S1 and S2 serially, one after another. It executes step S3 only if step S2 succeeds, and step S4 only if S2 fails. All the steps execute within an iterative stepset, so these actions are performed in parallel on all targets of type database in the job target list.

Note:

These examples use percentage (%) symbols to indicate parameters, such as %patchno%, %username%, %password%, and %job_target_name%.

The job system substitutes the value of a job parameter named "patchno" in place of the %patchno%. Likewise, it substitutes the values of the corresponding parameters for %username% and %password%. %job_target_name% and %job_target_type% are "pre-built" placeholders that substitute the name of the target that the step is currently executing against.

The steps S2, S3, and S4 illustrate how you can use the remoteOp command to execute a SQL*Plus script on the Management Agent.

The status of a job is failed if any of the following occurs:

  • S2 fails and S4 fails

  • S2 succeeds and S3 fails

Because S2 executes after S1 (regardless of whether S1 succeeds or fails), the status of S1 does not affect the status of the job.

Example 8-1 Job Type Defining Four Steps

<jobtype name="jobType1" editable="true" version="1.0">
<credentials>
    <credential usage="defaultHostCred" authTargetType="host"
           defaultCredentialSet="DBHostCreds"/>
    <credential usage="defaultDBCred" authTargetType="oracle_database"
           credentialTypes="DBCreds"
           defaultCredentialSet="DBCredsNormal"/>
    </credentials>
    <stepset ID="main" type="iterativeParallel" iterate_param="job_target_types" iterate_param_filter="oracle_database" >
    <step ID="s1" command="remoteOp"">
    <credList>
   <cred usage="defaultHostCred" reference="defaultHostCred"/>
    </credList>
    <paramList>
      <param name="remoteCommand">myprog</param>
      <param name="targetName">%job_target_names%[%job_iterate_
       index%]
         </param>
         <param name="targetType">%job_target_types%[%job_iterate_
          index%]
          </param>
          <param name="args">-id=%patchno%</param>
          <param name="successStatus">3</param>
          <param name="failureStatus">73</param>
      </paramList>
    </step>
    <step ID="s2" command="remoteOp"">
      <credList>
       <cred usage="defaultHostCred" reference="defaultHostCred"/>
      </credList>
      <paramList>
        <param name="remoteCommand">myprog2</param>
        <param name="targetName">%job_target_names%[%job_iterate_
         index%]</param>
        <param name="targetType">%job_target_types%[%job_iterate_
         index%]</param>
        <param name="args">-id=%patchno%</param>
        <param name="successStatus">3</param>
        <param name="failureStatus">73</param>
      </paramList>
    </step>
    <step ID="s3" successOf="s2" command="remoteOp">
    <credList>
     <cred usage="defaultHostCred" reference="defaultHostCred"/>
     <cred usage="defaultDBCred" reference="defaultDBCred">
       <map toParam="db_username" credColumn="DBUserName"/>
       <map toParam="db_passwd" credColumn="DBPassword"/>
       <map toParam="db_alias" credColumn="DBRole"/>
    </cred>
    </credList>
      <paramList>
        <param name="command">prog1</command>
        <param name="script">
        <![CDATA[
          select * from MGMT_METRICS where target_name=%job_target_names%[%job_iterate_param_index%]
        ]]>
        </param>
      <param name="args">%db_username%/%db_passwd%@%db_alias%</param>
      <param name="targetName">%job_target_names%[%job_iterate_
       index%]</param>
      <param name="targetType">%job_target_types%[%job_iterate_
       index%]</param>
      <param name="successStatus">0</param>
      <param name="failureStatus">1</param>
      </paramList>
    </step>
         <step ID="s4" failureOf="s2" command="remoteOp">
    <credList>
      <cred usage="defaultHostCred" reference="defaultHostCred"/>
    </credList>
    <paramList>
    <param name="input">
    <![CDATA[
       This is standard input to the executed program. You can use placeholders
       for parameters, such as
       %job_target_name%[%job_iterate_param_index%]
    ]]>
    </param>
    <param name="remoteCommand">prog2</param>
    <param name="targetName">%job_target_names%[%job_iterate_
     index%]</param>
    <param name="targetType">%job_target_types%[%job_iterate_

index%]</param>
    <param name="args"></param>
    <param name="successStatus">0</param>
    <param name="failureStatus">1</param>
   </paramList>
  </step>
  </stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>

Example 2

Example 8-2 describes a job type that has two steps, S1 and S2, that execute in parallel (within a parallel stepset ss1) and a third step, S3, that executes only after both S1 and S2 have completed successfully. This is achieved by placing the step S3 in a serial stepset ("main") that also contains the parallel stepset ss1. This job type is a "multi-node" job. The example uses %job_target_name%[1], %job_target_name%[2] in the parameters to the commands. In stepsets other than an iterative stepset, you can only refer to job targets by using their position in the targets array (which is ordered).

%job_target_name%[1] refers to the first target, %job_target_name%[2] to the second, and so on. The assumption is that most multi-node jobs expect their targets to be in some order. For example, a clone job might expect the source database to be the first target, and the target database to be the second target. This job fails if any of the following occurs:

  • The parallel stepset SS1 fails (either S1, or S2, or both fail)

  • Both S1 and S2 succeed, but S3 fails

The job type has declared itself to be Agent-bound. This means that the job is set to Suspended/Agent Down state if either Management Agent (corresponding to the first target or the second target) goes down.

Example 8-2 Job Type Defining Two Steps Followed by a Third Step

<jobtype name="jobType2" version="1.0" agentBound="true" >
  <stepset ID="main" type="serial" editable="true">
        <!-- All steps in this stepset ss1 execute in parallel -->
    <credentials>
    <credential usage="hostCreds" authTargetType="host"
            defaultCredentialSet="HostCredsNormal"/>
    </credentials>
    <stepset ID="ss1" type="parallel" >
      <step ID="s1" command="remoteOp" >
      <credList>
         <cred usage="defaultHostCred" reference="defaultHostCred"/>
      </credList>
        <paramList>
          <param name="remoteCommand">myprog</param>
          <param name="targetName">%job_target_names%[1]</param>
          <param name="targetType">%job_target_types%[1]</param>
          <param name="args">-id=%patchno%</param>
          <param name="successStatus">3</param>
          <param name="failureStatus">73</param>
        </paramList>
      </step>
    <step ID="s2" command="remoteOp" >
    <credList>
    <cred usage="defaultHostCred" reference="hostCreds"/>
    </credList>
        <paramList>
          <param name="remoteCommand">myprog</param>
          <param name="targetName">%job_target_names%[2]</param>
          <param name="targetType">%job_target_types%[2]</param>
          <param name="args">-id=%patchno%</param>
          <param name="successStatus">3</param>
          <param name="failureStatus">73</param>
        </paramList>
      </step>
    </stepset>
    <!-- This step executes after stepset ss1 has executed, since it is inside the serial stepset "main"
    -->
    <step ID="s3" successOf="ss1" command="remoteOp" >
      ...
    </step>
  </stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>

Example 3

Example 8-3 defines a new job type called jobType3 that executes a job of type jobType1 followed by a job of type jobType2. The job2 job of type jobType2 is executed only if the first job fails. To execute another job, the target list and the param list must be passed. The targetList tag has a parameter called allTargets, which, when set to true, passes along the entire target list passed to this job. By setting allTargets to false, a job type has the option of passing along a subset of its targets to the other job type.

In Example 8-3, jobType3 passes along all its targets to the instance of the job of type jobType1, but only the first two targets in its target list (in that order) to the job instance of type jobType2. There is another attribute called allParams (associated with paramList) that performs a similar function with respect to parameters. If allParams is set to true, then all parameters of the parent job are passed to the nested job. Typically the nested job has a different set of parameters (with different names).

If allParams is set to false (default), then the job type can name the nested job parameters explicitly and they do not have to have the same names as those in the parent job. Use parameter substitution to express the nested job parameters in terms of the parent job parameters, as shown in Example 8-3.

You can express the dependencies between nested jobs just as if they were steps or stepsets. In this example, a job of type jobType3 succeeds if either:

  • the nested job job1 succeeds

  • job1 fails and job2 succeeds

Example 8-3 Defining a Job Type That Executes Jobs of Other Job Types

<jobType name="jobType3" editable="true" version="1.0">
  <stepset ID="main" type="serial">
    <job type="jobType1" ID="job1" >
      <targetList allTargets="true" />
      <paramList>
        <param name="patchno">%patchno%</param>
      </paramList>
    </job>
    <job type="jobType2" ID="job2" failureOf="job1" >
      <targetList>
        <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
        <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
      </targetList>
      <paramList>
        <param name="patchno">%patchno%</param>
      </paramList>
    </job>
  </stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobType>

Example 4

Example 8-4 illustrates the use of the generateFile command. Assume that you are executing a sequence of scripts, all of which must source a common file that sets up some environment variables, which are known only at runtime. One way to do this is to generate the variables in a file with a unique name. All subsequent scripts are passed this file name as one of their command-line arguments, which they read to set the required environment or shell variables.

The first step, S1, in this job uses the generateFile command to generate a file named app-home/execution-id.env. Because the execution id of a job is always unique, this ensures a unique file name. It generates three environment variables, ENVVAR1, ENVVAR2, and ENVVAR3, which are set to the values of the job parameters param1, param2 and param3, respectively. These parameters must be set to the right values when the job is submitted.

%job_execution_id% is a placeholder provided by the job system, while %app-home% is a job parameter which must be explicitly provided when the job is submitted.

The second step, S2, executes a script called myscript. The first command-line argument to the script is the generated file name. This script must "source" the generated file, which sets the required environment variables, and then performs its other tasks, as shown in the following code:

#!/bin/ksh
ENVFILE=$1
# Execute the generated file, sets the required environment vars
. $ENVFILE
# I can now reference the variables set in the file
doSomething $ENVVAR1 $ENVVAR2 $ENVVAR3 ...

Example 8-4 provides the full job type specification. Step S3 removes the file that was created by the first step S1. It is important to clean up when using the putFile and generateFile commands to write temporary files on the Management Agent. This example performs the cleanup explicitly as a separate step, but it could also be done by one of the scripts that executes on the remote host.

Additionally, the securityInfo section specifies that the user who submits a job of this job type must have MAINTAIN privilege on both of the targets on which the job operates.

Example 8-4 Defining a Job Type That Generates Variables in a File

<jobtype name="jobType4" editable="true" version="1.0">
  <securityInfo>
    <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false">
      <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
      <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
    </privilege>
  </securityInfo>
  <credentials>
    <credential usage="hostCreds" authTargetType="host"
                defaultCredentialSet="HostCredsNormal"/>
  </credentials>
  <stepset ID="main" type="serial">
    <step ID="s1" command="putFile" >
      <paramList>
        <param name="sourceType">inline</param>
        <param name="destFile">%app-home%/%job_execution_id%.env</param>
        <param name="targetName">%job_target_names%[1]</param>
        <param name="targetType">%job_target_types%[1]</param>
        <param name=contents">
        <![CDATA[#!/bin/ksh
        export ENVVAR1=%param1%
        export ENVVAR2=%param2%
        export ENVVAR3=%param3%
        ]]>
      </param>
    </paramList>
  </step>
<step ID="s2" command="remoteOp" >
  <credList>
     <cred usage="defaultHostCred" reference="hostCreds"/>
  </credList>
    <paramList>
      <param name="remoteCommand">myscript</param>
      <param name="targetName">%job_target_names%[2]</param>
      <param name="targetType">%job_target_types%[2]</param>
      <param name="args">%app-home%/%job_execution_id%.env</param>
      <param name="successStatus">3</param>
      <param name="failureStatus">73</param>
    </paramList>
  </step>
<step ID="s3" command="remoteOp" >
  <credList>
     <cred usage="defaultHostCred" reference="hostCreds"/>
  </credList>
 
  <paramList>
    <param name="remoteCommand">rm</param>
    <param name="targetName">%job_target_names%[2]</param>
    <param name="targetType">%job_target_types%[2]</param>
    <param name="args">-f, %app-home%/%job_execution_id%.env</param>
    <param name="successStatus">0</param>
  </paramList>
</step>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>

Example 5

Example 8-5 illustrates the use of the repSQL command to execute SQL statements and anonymous PL/SQL blocks against the Management Repository. The job type specification below calls a SQL statement in the first step S1, and a PL/SQL procedure in the second step. Note the use of the variables %job_id% and %job_name%, which are special job-system placeholders. Other job parameters can be referenced in the same way. Also note the use of bind parameters in the SQL queries. The parameters sqlinparam[n] can be used to specify bind parameters. There must be one parameter of the form sqlinparam[n] for each bind parameter. Use bind parameters wherever possible to make optimum use of database resources.

Example 8-5 Defining a Job Type That Executes SQL Statements and PL/SQL Procedures

<jobtype name="repSQLJob" editable="true" version="1.0">
  <stepset ID="main" type="serial">
    <step ID="s1" command="repSQL" >
      <paramList>
        <param name="sql">update mytable set status='executed' where 
         name=?</param>
        <param name="sqlinparam1">%job_name%</param>
      </paramList>
    </step>
  <step ID="s2" command="repSQL" >
<paramList>
  <param name="sql">begin mypackage.job_done(?,?,?); end;</param>
  <param name="sqlinparam1">%job_id%</param>
  <param name="sqlinparam2">3</param><param name="sqlinparam3">mgmt_rep</param>
</paramList>
</step>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</stepset>
</jobtype>

Example 6

This example illustrates the use of the switch stepset. The main stepset of this job is a switch stepset where switchVarName is a job parameter called stepType. The possible values (switchCaseVal) that this parameter can have are "simpleStep", "parallelStep", and "OSJob", which select, respectively, the step SWITCHSIMPLESTEP, the parallel stepset SWITCHPARALLELSTEP, or the nested job J1.

<jobType version="1.0" name="SwitchSetJob" editable="true">
              <stepset ID="main" type="switch" switchVarName="stepType" >
  <credentials>
    <credential usage=”hostCreds” authTargetType=”host”
                defaultCredentialSet=”HostCredsNormal”/>
  </credentials>
 
<step ID="SWITCHSIMPLESTEP" switchCaseVal="simpleStep" command="remoteOp">
 
  <credList>
    <cred usage="defaultHostCred" reference="hostCreds"/>
  </credList>
  <paramList>
    <param name="remoteCommand">%command%</param>
    <param name="args">%args%</param>
    <param name="targetName">%job_target_names%[1]</param>
    <param name="targetType">%job_target_types%[1]</param>
  </paramList>
</step>
<stepset ID="SWITCHPARALLELSTEP" type="parallel" switchCaseVal="parallelStep">
  <step ID="P11" command="remoteOp" >
    <credList>
      <cred usage="defaultHostCred" reference="hostCreds"/>
    </credList>
    <paramList>
      <param name="remoteCommand">%command%</param>
      <param name="args">%args%</param>
      <param name="targetName">%job_target_names%[1]</param>
      <param name="targetType">%job_target_types%[1]</param>
    </paramList>
  </step>
  <step ID="P12" command="remoteOp" >
    <credList>
      <cred usage="defaultHostCred" reference="hostCreds"/>
    </credList>
  <paramList>
    <param name="remoteCommand">%command%</param>
    <param name="args">%args%</param>
    <param name="targetName">%job_target_names%[1]</param>
    <param name="targetType">%job_target_types%[1]</param>
  </paramList>
</step>
</stepset>
<job ID="J1" type="OSCommandSerial" switchCaseVal="OSJob" >
  <paramList>
    <param name="command">%command%</param>
    <param name="args">%args%</param>
  </paramList>
  <targetList>
    <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
  </targetList>
</job>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobType>

Example 7

This example shows the use of the <securityInfo> tag to ensure that only users that have CLONE FROM privilege over the first target and MAINTAIN privilege over the second target are able to submit jobs of the following type:

<jobType name="Clone" editable="true" version="1.0" >
  <securityInfo>
    <privilege name="CREATE TARGET" type="system" />
    <privilege name="CLONE FROM" type="target" evaluateAtSubmission="false" >
      <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
    </privilege>
    <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false">
      <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
    </privilege>
  </securityInfo>
  <!-- An optional <paramInfo> section will follow here, followed by the stepset definition of the job
  -->
  <paramInfo>
  ....
  </paramInfo>
  <stepset ...>
  .......
  </stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobType>

Example 8

The following shows an example of a scenario where credentials are passed to a nested job in the job type specification:

<jobType version="1.0" name="SampleJobType001" singleTarget="true" editable="true" 
 defaultTargetType="host" targetTypes="all">
 <credentials>
  <credential usage="osCreds" authTargetType="host"
   defaultCredentialSet="HostCredsNormal" credentialTypes="HostCreds">
    <displayName nlsid="LABEL_NAME">OS Credentials</displayName> 
    <description nlsid="LABEL_DESC">Please enter credentials.</description> 
  </credential>
 </credentials>
 <stepset ID="main" type="serial">
  <step ID="Step" command="remoteOp">
   <credList>
    <cred usage="defaultHostCred" reference="osCreds" /> 
   </credList>
   <paramList>
    <param name="targetName">%job_target_names%[1]</param> 
    <param name="targetType">%job_target_types%[1]</param> 
    <param name="remoteCommand">/bin/sleep</param> 
    <param name="args">1</param> 
   </paramList>
  </step>
 <job ID="Nested_Job" type="OSCommand">
  <credList>
   <cred usage="defaultHostCred" reference="osCreds" /> 
  </credList>
  <targetList allTargets="true" /> 
  <paramList>
    <param name="command">/bin/sleep</param> 
    <param name="args">1</param> 
   </paramList>
  </job>
 </stepset>
</jobType>

8.16 About Performance Issues

This section provides a brief discussion on issues to consider when designing your job type. These issues might impact the performance of your job type as well as the overall job system.

8.16.1 Using Parameter Sources

The following issues are important in relation to the use of parameter sources:

  • Parameter sources are a convenient way to obtain required parameters from known sources, such as the Management Repository or the credentials table. The parameter sources must be used only for quick queries that fetch information stored somewhere else.

  • Parameter sources that are evaluated at job execution time will, in general, affect the throughput of the job dispatcher, so use them with care. In some cases, fetching parameters at execution time might be unavoidable. However, if you do not care whether the parameters are fetched at execution time or at submission time, set evaluateAtSubmission to true so that the work is done once, at submission (see the sketch after this list).

  • When executing SQL queries to obtain parameters (using the SQL parameter source), the usual performance improvement guidelines apply. These include using indexes only where necessary and avoiding the joining of large tables.
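
As a minimal sketch of this guidance (reusing the user and properties sources shown in the OSCommandNG example earlier in this chapter), the first source below is evaluated once at submission time, while the second defers its work to the dispatcher at execution time and so should be used only when necessary:

<paramInfo>
    <!-- Evaluated once, when the job is submitted -->
    <paramSource sourceType="user" paramNames="command"
                 required="true" evaluateAtSubmission="true" />
    <!-- Evaluated by the job dispatcher at execution time; use sparingly -->
    <paramSource sourceType="properties" overrideUser="true"
                 evaluateAtSubmission="false" >
        <sourceParam name="targetNamesParam" value="job_target_names" />
        <sourceParam name="targetTypesParam" value="job_target_types" />
    </paramSource>
</paramInfo>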

8.17 Adding a Job Type to Enterprise Manager

To package a new job type with a metadata plug-in, you must adhere to the following implementation guidelines:

New job types packaged with a metadata plug-in have two new files:

  • Job type definition XML file: Used by the job system during plug-in deployment to define your new job type. There is one XML file for each job type.

  • Job type script file: Installed on selected Management Agents during plug-in deployment. A single script might be shared amongst different jobs.

The following two properties must be set to "true" in the first line of the job type definition XML file:

  • agentBound

  • singleTarget

Here is an example:

<jobType version="1.0" name="PotatoUpDown" singleTarget="true" agentBound="true" targetTypes="potatoserver_os">

Because the use of Java for a new job type is not supported for job types packaged with a plug-in, new job types are agentBound and perform their work through a script delivered to the Management Agent (the job type script file). The job type definition XML file contains a reference to the job type script file and executes it on the Management Agent whenever the job is run from the Enterprise Manager console.
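
For instance, a minimal sketch of such a job type might look as follows. The script name potato_updown.pl, the %scriptsDir% parameter, and the %action% argument are illustrative assumptions only; the remoteOp step simply runs the deployed job type script file on the Management Agent.

<jobType version="1.0" name="PotatoUpDown" singleTarget="true" agentBound="true"
         targetTypes="potatoserver_os">
    <stepset ID="main" type="serial">
        <!-- Runs the job type script file delivered by the plug-in;
             the script name and parameters here are hypothetical -->
        <step ID="s1" command="remoteOp">
            <paramList>
                <param name="remoteCommand">%scriptsDir%/potato_updown.pl</param>
                <param name="args">%action%</param>
                <param name="targetName">%job_target_names%[1]</param>
                <param name="targetType">%job_target_types%[1]</param>
            </paramList>
        </step>
    </stepset>
    <displayInfo useDefaultCreateUI="true"/>
</jobType>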

Adding a Job Type to an Oracle Plug-in Archive (OPAR)

After you have created the job type definition XML file and modified the target type definition file, add your files to an Oracle Plug-in Archive (OPAR) just as you would any other target type. See Chapter 14, "Validating, Packaging, and Deploying the Plug-in" for more information.

Release 11.1 Job Types Versus Enterprise Manager Cloud Control 12c Job Types

In Oracle Enterprise Manager Cloud Control 12c, the job type parser has moved to an XSD-based parser. However, Enterprise Manager release 11.1 job types will continue to work, because no major changes are required to enable an 11.1 job type to be parsed by the Cloud Control 12c parser.

The following are some of the known changes required by the Cloud Control 12c parser in the job type XML:

  • <jobtype> must change to <jobType>.

  • <paramInfo> must not contain <stepset>.

  • The <parameterUriSource> tag must be self-closed, as in <parameterUriSource attr1="" attr2="" />, and not written as <parameterUriSource attr1="" attr2=""> </parameterUriSource>.

  • <paramInfo/> must be removed.

  • A stepSet must not contain successOf or failureOf attributes.

  • Make sure the ID specified in the stepDisplayInfo exists in the job type (that is, a step with that ID should exist).

    In Cloud Control 12c, job types can be registered through an emctl command, as follows:

    emctl register oms metadata -service jobTypes -file <file name with absolute path> -sysman <sysman password> -pluginId <plugin id>
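
    For example, a hypothetical invocation (the file path and plug-in ID below are placeholders only, not values from this chapter) might look like this:

    emctl register oms metadata -service jobTypes -file /scratch/plugins/potato_updown_jobtype.xml -sysman <sysman password> -pluginId oracle.sysman.potato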