Get job details
get
/20260430/aiDataPlatforms/{aiDataPlatformId}/workspaces/{workspaceKey}/jobs/{jobKey}
Returns detailed information about a given job in AI Data Platform Workbench.
Request
Path Parameters
-
aiDataPlatformId(required): string
The [OCID](/iaas/Content/General/Concepts/identifiers.htm) of the AI Data Platform (Data Lake) instance.
-
jobKey(required): string
Job key.
-
workspaceKey(required): string
The key of the workspace.
Header Parameters
-
opc-request-id: string
Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. The only valid characters for request IDs are letters, numbers, underscore, and dash.
-
should-update-recent: boolean
A flag to identify whether the recent list should be updated.
Default Value: false
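As a sketch, the path and header parameters above can be assembled into a request like this. The host name is illustrative, and OCI request signing (required for real calls, typically via an SDK signer) is omitted:

```python
# Build the URL and headers for "Get job details".
# The host below is a placeholder; authentication/signing is not shown.
from urllib.parse import quote


def job_details_request(host, ai_data_platform_id, workspace_key, job_key,
                        opc_request_id=None, should_update_recent=False):
    """Return the (url, headers) pair for GET .../jobs/{jobKey}."""
    path = (
        f"/20260430/aiDataPlatforms/{quote(ai_data_platform_id, safe='')}"
        f"/workspaces/{quote(workspace_key, safe='')}"
        f"/jobs/{quote(job_key, safe='')}"
    )
    headers = {"accept": "application/json"}
    if opc_request_id:
        headers["opc-request-id"] = opc_request_id
    if should_update_recent:  # default is false, so only send when true
        headers["should-update-recent"] = "true"
    return f"https://{host}{path}", headers
```

Path segments are URL-encoded individually so that unusual characters in keys cannot break the route.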
Response
Supported Media Types
- application/json
200 Response
Successful operation. Detailed job information is retrieved.
Headers
-
etag: string
For optimistic concurrency control. See `if-match`.
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Job
Type: object
A description of a Job.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to
an administrator. If you're an administrator who needs to write policies to give users access, see
[Getting Started with Policies](/iaas/Content/Identity/policiesgs/get-started-with-policies.htm).
-
continuous:
object Continuous
The continuous property ensures that there is always one execution for this job.
-
createdBy(required):
string
The OCID of the IAM user.
-
createdByName:
string
The name of the user who created this record.
-
description:
string
Maximum Length: 1024
A description for the job.
-
gitConfig:
object GitConfig
Git configuration used when source is GIT_PROVIDER.
-
jobClusters:
array jobClusters
Maximum Number of Items: 100
List of job cluster configurations.
-
key(required):
string
The OCID of the job.
-
maxConcurrentRuns:
integer(int32)
Minimum Value: 0
Maximum Value: 1000
Default Value: 1
Indicates the number of executions of the same job that can run concurrently. The maximum value cannot exceed 1000.
-
name(required):
string
A user-friendly name. Does not have to be unique, and is changeable.
-
parameters:
array parameters
An optional list of parameters.
-
path:
string
The path to store the job definition in.
-
queue:
object Queue
Queue configuration for the job.
-
runAs:
string
The ID that the job runs as.
-
schedule:
object Schedule
The schedule configuration for the job.
-
tasks:
array tasks
List of tasks in a job.
-
timeCreated:
string(date-time)
The date and time the job was created, in the format defined by RFC 3339. Example: `2025-05-25T21:10:29.600Z`
-
timeoutSeconds:
integer(int32)
Minimum Value: 60
Maximum Value: 172800
Default Value: 0
An optional value that indicates the maximum run duration of the job, in seconds, after which the job times out. The default is zero, indicating no timeout.
-
timeUpdated:
string(date-time)
The date and time the job was updated, in the format defined by RFC 3339. Example: `2025-05-25T21:10:29.600Z`
-
updatedBy:
string
The OCID of the IAM user who most recently updated this record.
-
updatedByName:
string
The name of the user who updated this record.
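A minimal client-side check of a returned Job body against the fields marked (required) above might look like this. The sample payload is hypothetical:

```python
# Parse a Job response body and verify the schema's required fields.
import json

REQUIRED_JOB_FIELDS = {"createdBy", "key", "name"}


def parse_job(body: str) -> dict:
    """Decode a Job JSON body and fail fast if required fields are missing."""
    job = json.loads(body)
    missing = REQUIRED_JOB_FIELDS - job.keys()
    if missing:
        raise ValueError(f"Job response missing required fields: {sorted(missing)}")
    return job


# Illustrative payload using only documented field names.
sample = json.dumps({
    "key": "ocid1.job.oc1..example",
    "name": "nightly-etl",
    "createdBy": "ocid1.user.oc1..example",
    "maxConcurrentRuns": 1,
    "timeCreated": "2025-05-25T21:10:29.600Z",
})
job = parse_job(sample)
```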
Nested Schema : Continuous
Type: object
The continuous property ensures that there is always one execution for this job.
-
pauseStatus:
string
Default Value: UNPAUSED
Allowed Values: [ "PAUSED", "UNPAUSED" ]
Indicates whether the continuous execution of this job is paused.
Nested Schema : GitConfig
Type: object
Git configuration used when source is GIT_PROVIDER.
-
branch:
string
Git branch path.
-
credential:
string
Git credential to access the repository.
-
provider:
string
Allowed Values: [ "GITHUB", "BITBUCKET", "GITLAB", "OCI_DEVOPS" ]
Git provider.
-
repositoryUrl:
string
Git repository URL.
Nested Schema : jobClusters
Type: array
Maximum Number of Items: 100
List of job cluster configurations.
-
Array of:
object JobCluster
The cluster configuration that can be shared by tasks in the job.
Nested Schema : parameters
Type: array
An optional list of parameters.
-
Array of:
object Parameter
Specifies the name and value of the defined parameter.
Nested Schema : Queue
Type: object
Queue configuration for the job.
-
isEnabled(required):
boolean
Default Value: false
True if the job queue is enabled.
Nested Schema : Schedule
Type: object
The schedule configuration for the job.
-
pauseStatus:
string
Default Value: UNPAUSED
Allowed Values: [ "PAUSED", "UNPAUSED" ]
Indicates whether the schedule is paused.
-
quartzCronExpression(required):
string
A cron expression using Quartz syntax that describes the schedule for a job.
-
timezoneId(required):
string
A Java timezone ID. The schedule of the job is resolved with respect to this timezone. Example: `US/Pacific`.
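A Schedule object can be assembled from the three fields above like this. The cron expression and timezone are illustrative, and the validation mirrors only the documented allowed values:

```python
# Assemble a Schedule object (quartzCronExpression uses Quartz syntax,
# where the first field is seconds).
ALLOWED_PAUSE_STATUS = {"PAUSED", "UNPAUSED"}


def make_schedule(quartz_cron_expression, timezone_id, pause_status="UNPAUSED"):
    """Return a Schedule dict, rejecting an invalid pauseStatus."""
    if pause_status not in ALLOWED_PAUSE_STATUS:
        raise ValueError(f"pauseStatus must be one of {sorted(ALLOWED_PAUSE_STATUS)}")
    return {
        "quartzCronExpression": quartz_cron_expression,
        "timezoneId": timezone_id,  # a Java timezone ID, e.g. US/Pacific
        "pauseStatus": pause_status,
    }


# Run every day at 02:30:00 in the US/Pacific timezone.
schedule = make_schedule("0 30 2 * * ?", "US/Pacific")
```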
Nested Schema : tasks
Type: array
List of tasks in a job.
-
Array of:
object Task
Discriminator: type
Properties of a task provided by the user.
Nested Schema : JobCluster
Type: object
The cluster configuration that can be shared by tasks in the job.
-
clusterKey:
string
Minimum Length: 1
Maximum Length: 100
A unique identifier for the job cluster.
-
clusterName:
string
Minimum Length: 1
Maximum Length: 100
A unique name for the job cluster.
-
newCluster:
object NewClusterConfiguration
The cluster configuration to create a new cluster.
Nested Schema : NewClusterConfiguration
Type: object
The cluster configuration to create a new cluster.
-
autoScale:
object AutoScale
Properties required to automatically scale the clusters up and down based on load.
-
clusterName:
string
Minimum Length: 1
Maximum Length: 100
A unique name for the job cluster.
-
numWorkers:
integer(int32)
Number of worker nodes configured for this cluster.
-
sparkConf:
string
The Spark configuration in key-value pairs.
-
sparkVersion:
string
The Spark version used to run the application.
Nested Schema : AutoScale
Type: object
Properties required to automatically scale the clusters up and down based on load.
-
maxWorkers:
integer(int32)
The maximum number of workers to which the cluster can scale up when overloaded.
-
minWorkers:
integer(int32)
The minimum number of workers to which the cluster can scale down when underused.
Nested Schema : Parameter
Type: object
Specifies the name and value of the defined parameter.
-
name(required):
string
Pattern: ^[\w\-.]+$
The name of the defined parameter. May contain only alphanumeric characters, '_', '-', and '.'.
-
value:
string
Value of the defined parameter.
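The documented name pattern can be checked client-side before sending a job definition. This is a sketch; the server performs its own validation:

```python
# Validate a Parameter name against the documented pattern ^[\w\-.]+$.
import re

PARAM_NAME = re.compile(r"^[\w\-.]+$")


def make_parameter(name, value=None):
    """Return a Parameter dict; value is optional per the schema."""
    if not PARAM_NAME.match(name):
        raise ValueError(f"invalid parameter name: {name!r}")
    param = {"name": name}
    if value is not None:
        param["value"] = value
    return param
```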
Nested Schema : Task
Type: object
Discriminator: type
Properties of a task provided by the user.
-
dependsOn:
array dependsOn
Specifies the dependency graph of the task. All the tasks mentioned in this field need to be completed before executing this task.
-
isRetryOnTimeout:
boolean
Default Value: false
An optional policy that specifies whether to retry a task when it times out. The default behavior is not to retry on timeout.
-
maxRetries:
integer(int32)
Minimum Value: 0
Maximum Value: 300
Default Value: 0
The maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it fails with status FAILED or INTERNAL_ERROR.
-
minRetryIntervalMillis:
integer(int32)
Minimum Value: 0
Maximum Value: 600000
Default Value: 0
An optional minimal interval, in milliseconds, between the start of the failed run and the subsequent retry run. If no value is provided, the run is retried immediately. The maximum value is 10 minutes (600000).
-
runIf(required):
string
Default Value: ALL_SUCCESS
Allowed Values: [ "ALL_SUCCESS", "ALL_DONE", "NONE_FAILED", "AT_LEAST_ONE_SUCCESS", "ALL_FAILED", "AT_LEAST_ONE_FAILED" ]
The trigger rule that determines whether the current task executes.
-
taskKey(required):
string
Minimum Length: 1
Maximum Length: 100
The display name of the task. Users can specify a value for this.
-
type(required):
string
Allowed Values: [ "NOTEBOOK_TASK", "PYTHON_TASK", "SPARK_SUBMIT_TASK", "IF_ELSE_TASK", "JOB_TASK", "JAR_TASK" ]
The type of the task.
Nested Schema : dependsOn
Type: array
Specifies the dependency graph of the task. All the tasks mentioned in this field need to be completed before executing this task.
-
Array of:
object DependsOn
Specifies the dependency graph of the task. All the tasks mentioned in this field need to be completed before executing this task.
Nested Schema : DependsOn
Type: object
Specifies the dependency graph of the task. All the tasks mentioned in this field need to be completed before executing this task.
-
outcome:
string
Specified on condition task dependencies. The outcome of the dependent task must be met for this task to execute.
-
taskKey(required):
string
The key of the task that this task depends on.
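Since each task's `dependsOn` list references the keys of its prerequisites, a valid execution order is just a topological sort of that graph. A sketch with illustrative task payloads:

```python
# Resolve an execution order from each task's dependsOn list using a
# plain topological sort over taskKey references.
from graphlib import TopologicalSorter


def execution_order(tasks):
    """tasks: list of Task dicts with 'taskKey' and optional 'dependsOn'."""
    graph = {
        t["taskKey"]: {d["taskKey"] for d in t.get("dependsOn", [])}
        for t in tasks
    }
    return list(TopologicalSorter(graph).static_order())


# Illustrative three-task pipeline: extract -> load -> report.
tasks = [
    {"taskKey": "load", "dependsOn": [{"taskKey": "extract"}]},
    {"taskKey": "extract"},
    {"taskKey": "report", "dependsOn": [{"taskKey": "load"}]},
]
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is a useful pre-submission check.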
400 Response
Bad Request (invalid query parameters, malformed headers, and so on).
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
401 Response
Unauthorized (missing or expired credentials, and so on).
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
404 Response
Not Found. The requested resource was not found.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
429 Response
Too Many Requests. Too many requests sent to the server in a short period.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
500 Response
Internal Server Error. The server encountered an unexpected condition that prevented it from fulfilling the request.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
Default Response
Unknown Error. The error is not recognized by the system.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
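Because every non-2xx response above carries the same Error schema (`code`, `message`), a single handler covers them all. This is a sketch; treating 429 and 500 as retryable is a common convention, not something this reference mandates:

```python
# Map the documented responses to one handler: return the body on 200,
# otherwise raise with the Error schema's code and message.
RETRYABLE = {429, 500}  # assumption: commonly retried with backoff


def handle_response(status, body, opc_request_id=None):
    """body: decoded JSON; on errors it follows the Error schema."""
    if status == 200:
        return body
    code = body.get("code", "Unknown")
    message = body.get("message", "Unknown error")
    hint = " (retry with backoff)" if status in RETRYABLE else ""
    raise RuntimeError(
        f"{status} {code}: {message}{hint} [opc-request-id={opc_request_id}]"
    )
```

Logging the `opc-request-id` header alongside the error, as shown, is what lets Oracle support trace a particular failed request.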