Get default cluster
get
/20260430/aiDataPlatforms/{aiDataPlatformId}/defaultCluster
Gets information about the master catalog default cluster.
Request
Path Parameters
-
aiDataPlatformId(required): string
The [OCID](/iaas/Content/General/Concepts/identifiers.htm) of the AI Data Platform (Data Lake) instance.
Header Parameters
-
opc-request-id: string
Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. The only valid characters for request IDs are letters, numbers, underscore, and dash.
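As a sketch of how this request might be assembled in Python (the base URL below is a placeholder for your regional service endpoint, and OCI request signing is omitted entirely; in practice the OCI SDK or CLI handles both):

```python
import uuid

# Placeholder regional endpoint; substitute the real AI Data Platform endpoint
# for your region from the OCI documentation.
BASE_URL = "https://example-region.oci.example.com"

def build_get_default_cluster_request(ai_data_platform_id: str) -> tuple[str, dict]:
    """Build the URL and headers for GET .../defaultCluster.

    Returns (url, headers). Sending the request additionally requires
    OCI request signing, which this sketch does not cover.
    """
    url = f"{BASE_URL}/20260430/aiDataPlatforms/{ai_data_platform_id}/defaultCluster"
    headers = {
        # opc-request-id may only contain letters, numbers, underscore, and dash;
        # a UUID satisfies that and makes the request easy to trace.
        "opc-request-id": str(uuid.uuid4()),
        "accept": "application/json",
    }
    return url, headers

url, headers = build_get_default_cluster_request(
    "ocid1.aidataplatform.oc1..exampleuniqueid"  # hypothetical OCID
)
print(url)
```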
Response
Supported Media Types
- application/json
200 Response
Successful operation. Master catalog default cluster information is retrieved.
Headers
-
etag: string
For optimistic concurrency control. See `if-match`.
-
opc-request-id: string
Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : DefaultCluster
Type: object
The default cluster created by AI Data Platform Workbench.
Match All
-
object Cluster
Discriminator: sourceApi
-
object DefaultCluster-allOf[1]
Discriminator: DEFAULT_CLUSTER_API
Nested Schema : Cluster
Type: object
Discriminator: sourceApi
A Cluster is a compute subresource within AI Data Platform Workbench. Compute/Runtime Clusters are Spark execution environments. Spark clusters are used for notebook execution and for Spark SQL query execution over JDBC/ODBC. These clusters seamlessly process the data in the AI Data Platform Workbench. Users can also use the JDBC/ODBC endpoints for highly performant SQL execution and integration with analytics tools such as Oracle Analytics Cloud and Excel. A DEFAULT cluster is a subresource within AI Data Platform Workbench associated with the master catalog and cannot be attached to a notebook. A USER cluster is a subresource within a workspace and can be attached to a notebook.
-
activeClusterResources:
object ActiveClusterResources
Active resources of a cluster.
-
createdBy:
string
OCID of the user who created this record.
-
createdByName:
string
Name of the user who created this record.
-
description:
string
Minimum Length: 1
Maximum Length: 400
Cluster description.
-
displayName(required):
string
Cluster name.
-
driverConfig:
object DriverConfig
Driver configuration.
-
key(required):
string
Cluster key.
-
nodeType:
string
Minimum Length: 1
Maximum Length: 255
Cluster node type encodes the node shape and associated resources.
-
sourceApi:
string
Default Value: CLUSTER_API
Allowed Values: [ "CLUSTER_API", "DEFAULT_CLUSTER_API", "AGENT_FLOW_COMPUTE" ]
User-created clusters are associated with a particular workspace. The default cluster is used by all catalog operations that require compute and can be thought of as associated with the master catalog. Agent Flow Compute is used to execute Agent Flows.
-
state(required):
string
Allowed Values: [ "ACCEPTED", "CREATING", "ACTIVE", "DELETING", "DELETED", "FAILED", "STOPPING", "STOPPED", "UPDATING", "RESTARTING", "STARTING", "NETWORK_CONFIGURATION_ATTACH_IN_PROGRESS", "NETWORK_CONFIGURATION_ATTACH_SUCCESSFUL", "NETWORK_CONFIGURATION_ATTACH_FAILED", "NETWORK_CONFIGURATION_DETACH_IN_PROGRESS", "NETWORK_CONFIGURATION_DETACH_SUCCESSFUL", "NETWORK_CONFIGURATION_DETACH_FAILED" ]
Common lifecycle states for resources in a compute cluster. ACCEPTED - The resource create request has been accepted. CREATING - The resource is being created and might not be usable until the entire metadata is defined. ACTIVE - The resource is valid and available for access. DELETING - The resource is being deleted, and might require a deep clean of any children. DELETED - The resource has been deleted and isn't available. FAILED - The resource is in a failed state due to validation or other errors. STOPPING - The resource is being stopped. STOPPED - The resource has been stopped. UPDATING - The resource is being updated and might not be usable until all changes are committed. STARTING - The resource is being started. RESTARTING - The resource is being restarted.
-
stateDetails:
string
A message that describes the current state of the workspace cluster in more detail. For example, it can provide actionable information for a resource in the FAILED state.
-
stoppedBy:
string
OCID of the user who stopped the cluster. Value will be 'SYSTEM' if it was auto stopped.
-
stoppedByName:
string
Name of the user who stopped the cluster. Value will be 'SYSTEM' if it was auto stopped.
-
timeCreated(required):
string(date-time)
Date and time the cluster was created.
-
timeUpdated:
string(date-time)
Date and time the cluster was updated.
-
type:
string
Allowed Values: [ "USER", "AGENT_FLOW_COMPUTE" ]
The cluster type.
-
updatedBy:
string
OCID of the user who updated this record.
-
updatedByName:
string
Name of the user who updated this record.
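Because a cluster passes through intermediate lifecycle states (ACCEPTED, CREATING, and so on) before it becomes usable, callers typically poll the `state` field until a terminal value is reached. A minimal polling sketch, assuming a caller-supplied `fetch_cluster` callable that wraps the GET call and returns the parsed Cluster JSON (the stub below only simulates it):

```python
import time

# Terminal lifecycle states, drawn from the allowed values documented above.
TERMINAL_STATES = {"ACTIVE", "DELETED", "FAILED", "STOPPED"}

def wait_for_cluster(fetch_cluster, interval_s: float = 1.0, max_polls: int = 60) -> dict:
    """Poll until the cluster reaches a terminal lifecycle state.

    `fetch_cluster` is any callable returning the parsed Cluster JSON,
    e.g. a wrapper around the GET defaultCluster request.
    """
    for _ in range(max_polls):
        cluster = fetch_cluster()
        if cluster["state"] in TERMINAL_STATES:
            return cluster
        time.sleep(interval_s)
    raise TimeoutError("cluster did not reach a terminal state")

# Stubbed fetch simulating a CREATING -> ACTIVE transition, for illustration only.
_responses = iter([{"state": "CREATING"}, {"state": "ACTIVE"}])
result = wait_for_cluster(lambda: next(_responses), interval_s=0.0)
print(result["state"])
```

Note that a cluster returned in the FAILED state should be inspected via `stateDetails` rather than retried blindly.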
Nested Schema : DefaultCluster-allOf[1]
Type: object
Discriminator: DEFAULT_CLUSTER_API
-
autoTerminationMinutes:
integer(int32)
Minimum Value: 5
Maximum Value: 59940
Default Value: null
Optional timeout value in minutes used to automatically stop idle compute clusters.
-
clusterRuntimeConfig:
object ClusterRuntimeConfig
Discriminator: type
Cluster runtime configurations.
-
jdbcEndpointUrl:
string
Spark JDBC URL.
-
loggingConfig:
object LoggingConfig
Discriminator: type
Logging configuration.
-
logGroupId:
string
The unique OCID that identifies a specific log group within OCI Logging. This log group is exclusively associated with the AI Data Platform Workbench instance and is created in the same compartment within the customer's tenancy as the AI Data Platform Workbench instance.
Example: ocid1.loggroup.oc1.phx.amaaaaaaeq37tyqau3s5k4zp7gimmao3duxdy2y2j5x3lutjdhggjwzb7w3q
-
logId:
string
The OCID of the log where cluster logs are published and retrieved. This logId is always created within the logGroupId returned in the response payload.
Example: ocid1.log.oc1.phx.amaaaaaaeq37tyqau2j7ug7xnlhtwiubbinfhiuc2fatgwtzt3s3flivchma
-
subscription:
object SubscriptionDetails
Details of subscription.
-
workerConfig:
object WorkerConfig
Worker configuration.
-
workspaceKey:
string
The key of the AI Data Platform Workbench workspace that contains the default cluster.
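To illustrate the constraints documented for this schema, here is a small client-side validation sketch. It is illustrative only (the service enforces these rules server-side), and the field names checked are the ones documented above:

```python
def validate_default_cluster(payload: dict) -> list[str]:
    """Check a parsed DefaultCluster response against the documented constraints."""
    problems = []
    # Required fields from the Cluster schema.
    for field in ("displayName", "key", "state", "timeCreated"):
        if field not in payload:
            problems.append(f"missing required field: {field}")
    # autoTerminationMinutes is optional, but when present must be 5..59940.
    minutes = payload.get("autoTerminationMinutes")
    if minutes is not None and not (5 <= minutes <= 59940):
        problems.append("autoTerminationMinutes must be between 5 and 59940")
    return problems

# Hypothetical response payload for illustration.
sample = {
    "displayName": "default-cluster",
    "key": "example-key",
    "state": "ACTIVE",
    "timeCreated": "2026-04-30T00:00:00Z",
    "autoTerminationMinutes": 3,
}
print(validate_default_cluster(sample))
```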
400 Response
Bad Request (invalid query parameters, malformed headers, and so on).
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
401 Response
Unauthorized (missing or expired credentials, and so on).
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
404 Response
Not Found. The requested resource was not found.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
429 Response
Too Many Requests. Too many requests sent to the server in a short period.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
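A 429 response is transient, so clients usually retry with exponential backoff and record the `opc-request-id` header in case Oracle support needs it. A sketch, assuming a caller-supplied `send_request` callable that performs the signed GET and returns `(status_code, headers, body)` (the stub below only simulates it):

```python
import time

def get_with_retry(send_request, max_attempts: int = 4, base_delay_s: float = 1.0):
    """Retry the call on HTTP 429 with exponential backoff.

    Surfaces the opc-request-id of the last throttled response when
    giving up, since that ID is what Oracle asks for when you contact support.
    """
    for attempt in range(max_attempts):
        status, headers, body = send_request()
        if status != 429:
            return status, headers, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay_s * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError(
        f"still throttled after {max_attempts} attempts "
        f"(opc-request-id: {headers.get('opc-request-id')})"
    )

# Stub simulating one throttled response followed by success, for illustration only.
_replies = iter([
    (429, {"opc-request-id": "req-1"}, None),
    (200, {"opc-request-id": "req-2"}, {"displayName": "default-cluster"}),
])
status, headers, body = get_with_retry(lambda: next(_replies), base_delay_s=0.0)
print(status)
```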
500 Response
Internal Server Error. The server encountered an unexpected condition preventing fulfilment of the request.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.
Default Response
Unknown Error. The error is not recognized by the system.
Headers
-
opc-request-id: string
Unique Oracle-assigned ID for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Root Schema : Error
Type: object
Error information.
-
code(required):
string
A short error code that defines the error, meant for programmatic parsing.
-
message(required):
string
A human-readable error message.