Create a Pluggable Database
POST /database/pdbs/
Request
There are no request parameters for this operation.
- application/json
object
- admin_password: string
  The administrator password for the new PDB. This property is required when creating a PDB from PDB$SEED.
- admin_user: string
  The administrator username for the new PDB. This property is required when creating a PDB from PDB$SEED.
- as_clone: boolean
  Indicates whether the AS CLONE option should be used in the command to plug in a PDB. This property is optional when creating a PDB from an XML file.
- create_file_dest: string
  Specify NONE to disable Oracle Managed Files for the PDB, or specify either directory_path_name or diskgroup_name to enable Oracle Managed Files for the PDB. This property is optional.
- dryrun: boolean
  If set to true, the response contains a JSON object with the information of the script that was generated for execution; the database is not created.
- file_name_convert: string
  Relevant for create and plug operations, as defined in the Oracle Multitenant Database documentation. Values can be a filename convert pattern or NONE.
- keystore_password: string
  TDE keystore password, when applicable. This property is optional when creating a PDB from a source PDB or a snapshot.
- new_pdb_name (required): string
  The name of the new PDB.
- no_data: boolean
  Relevant for clone operations. Specifies that the source pluggable database data model definition is cloned but not the data. Defaults to false.
- roles: string
  Grants one or more roles to the PDB_DBA role. This property is optional when creating a PDB from PDB$SEED.
- service_name_convert: string
  Relevant for create and plug operations, as defined in the Oracle Multitenant Database documentation. Values can be an even number of strings or NONE.
- snapshot_copy: boolean
  Creates a snapshot copy PDB from a storage-managed snapshot. Storage-managed snapshots are supported only on specific file systems. It must not be used with the snapshot_name, snapshot_scn, or snapshot_timestamp parameters.
- snapshot_name: string
  The name of the PDB snapshot that the new PDB will be cloned from. Only one of snapshot_name, snapshot_scn, and snapshot_timestamp can be provided.
- snapshot_scn: string
  The SCN of the PDB snapshot that the new PDB will be cloned from. Only one of snapshot_name, snapshot_scn, and snapshot_timestamp can be provided.
- snapshot_timestamp: string
  The timestamp of the PDB snapshot that the new PDB will be cloned from. Only one of snapshot_name, snapshot_scn, and snapshot_timestamp can be provided.
- source_file_name_convert: string
  Values can be a source filename convert pattern or NONE. This property is optional when creating a PDB from an XML file.
- source_pdb_name: string
  The name of the source PDB. This property applies when cloning a PDB or cloning a PDB from a snapshot.
- storage: string
  Storage limits for the PDB. Values can be UNLIMITED, or a clause specifying MAXSIZE, MAX_AUDIT_SIZE, MAX_DIAG_SIZE, and so on.
- temp_file_reuse: boolean
  Relevant for create and plug operations. Set to true to reuse temporary files.
- xml_file_action: string
  Allowed values: COPY, NOCOPY, MOVE. Indicates which copy option should be used in the command to plug in a PDB. This property is optional when creating a PDB from an XML file.
- xml_file_name: string
  The path of the XML metadata file to use when plugging in a PDB. This property is required when creating a PDB from an XML file.
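Several of the snapshot-related properties above are mutually exclusive. As an illustration only (the server enforces these rules authoritatively, and this helper is not part of the API), a client-side sanity check might look like this:

```python
def validate_pdb_request(body):
    """Illustrative client-side check of the snapshot-related rules above.

    Raises ValueError when the body violates a documented constraint.
    """
    if "new_pdb_name" not in body:
        raise ValueError("new_pdb_name is required")
    # Only one of snapshot_name, snapshot_scn, and snapshot_timestamp may be set.
    snapshot_keys = [k for k in ("snapshot_name", "snapshot_scn", "snapshot_timestamp")
                     if k in body]
    if len(snapshot_keys) > 1:
        raise ValueError("only one of snapshot_name, snapshot_scn and "
                         "snapshot_timestamp can be provided")
    # snapshot_copy must not be combined with any of the three parameters above.
    if body.get("snapshot_copy") and snapshot_keys:
        raise ValueError("snapshot_copy must not be used with " + ", ".join(snapshot_keys))
    return body

# Example: a valid clone-from-snapshot body passes the check.
validate_pdb_request({"new_pdb_name": "pdb_new",
                      "source_pdb_name": "devpdb1",
                      "snapshot_name": "snap1"})
```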
Response
- application/json
200 Response
object
- allow_runs_in_restricted_mode: string
  Indicates whether the job is allowed to run in restricted session mode (TRUE) or not (FALSE).
- auto_drop: string
  Indicates whether the job will be dropped when it has completed (TRUE) or not (FALSE).
- client_id: string
  Client identifier of the user creating the job.
- comments: string
  Comments on the job.
- connect_credential_name: string
  Name of the connect credential.
- connect_credential_owner: string
  Owner of the connect credential.
- credential_name: string
  Name of the credential to be used for an external job.
- credential_owner: string
  Owner of the credential to be used for an external job.
- deferred_drop: string
  Indicates whether the job will be dropped when completed due to user request (TRUE) or not (FALSE).
- destination: string
  Destination that this job will run on.
- destination_owner: string
  Owner of the destination object (if used), else NULL.
- enabled: string
  Indicates whether the job is enabled (TRUE) or disabled (FALSE).
- end_date: string
  Date after which the job will no longer run (for an inline schedule).
- event_condition: string
  Boolean expression used as the subscription rule for the event on the source queue.
- event_queue_agent: string
  Name of the AQ agent used by the user on the event source queue (if it is a secure queue).
- event_queue_name: string
  Name of the source queue into which the event will be raised.
- event_queue_owner: string
  Owner of the source queue into which the event will be raised.
- event_rule: string
  Name of the rule used by the coordinator to trigger the event-based job.
- fail_on_script_error: string
  Indicates whether this job fails on script error (TRUE) or not (FALSE).
- failure_count: integer
  Number of times the job has failed to run.
- file_watcher_name: string
  Name of the file watcher on which this job is based.
- file_watcher_owner: string
  Owner of the file watcher on which this job is based.
- flags: integer
  This column is for internal use.
- global_uid: string
  Global user identifier of the user creating the job.
- has_constraints: string
  Indicates whether the job (not including the program of the job) is part of a resource constraint or incompatibility (TRUE) or not (FALSE).
- instance_id: integer
  Instance on which the user requests the job to run.
- instance_stickiness: string
  Indicates whether the job is sticky (TRUE) or not (FALSE).
- job_action: string
  Inline job action.
- job_class: string
  Name of the job class associated with the job.
- job_creator: string
  Original creator of the job.
- job_name: string
  Name of the Scheduler job.
- job_priority: integer
  Priority of the job relative to other jobs in the same class.
- job_style: string
  Job style: REGULAR, LIGHTWEIGHT, IN_MEMORY_RUNTIME, or IN_MEMORY_FULL.
- job_subname: string
  Subname of the Scheduler job (for a job running a chain step).
- job_type: string
  Inline job action type.
- job_weight: integer
  Weight of the job.
- last_run_duration: string
  Amount of time the job took to complete during the last run.
- last_start_date: string
  Last date on which the job started running.
- links: array
  Links associated with the resource.
- logging_level: string
  Amount of logging that will be done pertaining to the job.
- max_failures: integer
  Number of times the job will be allowed to fail before being marked broken.
- max_run_duration: string
  Maximum amount of time for which the job will be allowed to run.
- max_runs: integer
  Maximum number of times the job is scheduled to run.
- next_run_date: string
  Next date on which the job is scheduled to run.
- nls_env: string
  NLS environment of the job.
- number_of_arguments: integer
  Inline number of job arguments.
- number_of_destinations: integer
  Number of destinations associated with this job.
- owner: string
  Owner of the Scheduler job.
- program_name: string
  Name of the program associated with the job.
- program_owner: string
  Owner of the program associated with the job.
- raise_events: string
  List of job events to raise for the job.
- repeat_interval: string
  Inline schedule PL/SQL expression or calendar string.
- restart_on_failure: string
  Indicates whether the step should be restarted on application failure (TRUE) or not (FALSE).
- restart_on_recovery: string
  Indicates whether the step should be restarted on database recovery (TRUE) or not (FALSE).
- restartable: string
  Indicates whether the job can be restarted (TRUE) or not (FALSE).
- retry_count: integer
  Number of times the job has retried, if it is retrying.
- run_count: integer
  Number of times the job has run.
- schedule_limit: string
  Time after which a job that has not run yet will be rescheduled.
- schedule_name: string
  Name of the schedule that the job uses (can be a window or a window group).
- schedule_owner: string
  Owner of the schedule that the job uses (can be a window or a window group).
- schedule_type: string
  Type of the schedule that the job uses.
- source: string
  Source global database identifier.
- start_date: string
  Original scheduled start date of the job (for an inline schedule).
- state: string
  Current state of the job.
- stop_on_window_close: string
  Indicates whether the job will stop if a window associated with the job closes (TRUE) or not (FALSE).
- store_output: string
  Indicates whether all job output messages for the job are stored in the OUTPUT column of the *_JOB_RUN_DETAILS views for job runs that are logged.
- system: string
  Indicates whether the job is a system job (TRUE) or not (FALSE).
- uptime_failure_count: integer
  Number of failures since the database last restarted. For in-memory jobs, this column is populated, but the FAILURE_COUNT column is not. For all other jobs, this column is NULL.
- uptime_run_count: integer
  Number of runs since the database last restarted. For in-memory jobs, this column is populated, but the RUN_COUNT column is not. For all other jobs, this column is NULL.
Examples
The following example shows how to create a new Pluggable Database by submitting a POST request on the REST resource using cURL.
curl -i -X POST -u username:password \
-d @request_body.json \
-H "Content-Type: application/json" https://rest_server_url/ords/_/db-api/stable/database/pdbs/
Example of Request Body
Note:
https://rest_server_url/resource-path, used in the preceding command, has the following components:
- rest_server_url is the REST server where Oracle REST Data Services is running.
- The remainder of the URL includes the ORDS context root, the version of the ORDS Database API to use, and the path for this operation.
The PDB Lifecycle Management service requires db.cdb.adminUser credentials to be set in the pool configuration; in this example, the default pool is configured for the container database.
The following is an example request body that creates a pluggable database called pdb_sample from PDB$SEED with unlimited storage. In this example, the file_name_convert parameter is also provided, which results in a FILE_NAME_CONVERT clause being included in the CREATE PLUGGABLE DATABASE statement executed in the container database:
{
"new_pdb_name": "pdb_sample",
"admin_user": "pdbadmin",
"admin_password": "W3lc0m31",
"file_name_convert": "('/disk1/oracle/dbs/pdbseed/','/disk1/oracle/dbs/pdb_sample/')",
"storage": "UNLIMITED",
"temp_file_reuse": true
}
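The same request can also be submitted from a short script. The sketch below builds the request body shown above and prepares the POST using only Python's standard library; the server URL and credentials are placeholders, and the network call is left commented out:

```python
import base64
import json
import urllib.request

# Request body matching the example above (create pdb_sample from PDB$SEED).
payload = {
    "new_pdb_name": "pdb_sample",
    "admin_user": "pdbadmin",
    "admin_password": "W3lc0m31",
    "file_name_convert": "('/disk1/oracle/dbs/pdbseed/','/disk1/oracle/dbs/pdb_sample/')",
    "storage": "UNLIMITED",
    "temp_file_reuse": True,
}
body = json.dumps(payload).encode("utf-8")

# Placeholder URL and credentials -- substitute your own values.
url = "https://rest_server_url/ords/_/db-api/stable/database/pdbs/"
auth = base64.b64encode(b"username:password").decode("ascii")
req = urllib.request.Request(
    url, data=body, method="POST",
    headers={"Content-Type": "application/json",
             "Authorization": "Basic " + auth},
)
# Uncomment to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```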
The following is an example request body that gets the generated script for creating a pluggable database from PDB$SEED with custom storage settings. Note that the script is not executed in the database. In this example, the file_name_convert parameter has the value NONE, which results in a FILE_NAME_CONVERT=NONE clause being included in the generated CREATE PLUGGABLE DATABASE statement.
{
"new_pdb_name": "pdb_sample",
"admin_user": "pdbadmin",
"admin_password": "W3lc0m31",
"file_name_convert": "NONE",
"temp_file_reuse": true,
"storage": "(MAXSIZE 2G MAX_SHARED_TEMP_SIZE 800M)",
"dryrun": true
}
The following is an example request body that creates a new pluggable database by cloning the pluggable database specified by the source_pdb_name parameter in the JSON payload. In this example, the file_name_convert parameter is also provided, which results in a FILE_NAME_CONVERT clause being included in the CREATE PLUGGABLE DATABASE statement executed in the container database:
{
"new_pdb_name": "pdb_new",
"source_pdb_name": "devpdb1",
"file_name_convert": "('/disk1/oracle/dbs/devpdb1/','/disk1/oracle/dbs/pdb_new/')",
"storage": "UNLIMITED"
}
The following is an example request body that plugs a pluggable database called sales_pdb into the container database. In this example, the pluggable database definition is specified in the sales_pdb.xml file:
{
"new_pdb_name": "sales_pdb",
"xml_file_name": "/disk1/oracle/dbs/sales_pdb.xml",
"source_file_name_convert": "NONE",
"file_name_convert": "NONE",
"storage": "UNLIMITED",
"xml_file_action": "NOCOPY",
"temp_file_reuse": true,
"dryrun":true
}
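Any of the bodies above can be saved to the request_body.json file that the curl command reads with -d @request_body.json. A minimal sketch, writing the plug-in example to disk:

```python
import json

# The plug-in example above, written to the file curl reads with -d @request_body.json.
body = {
    "new_pdb_name": "sales_pdb",
    "xml_file_name": "/disk1/oracle/dbs/sales_pdb.xml",
    "source_file_name_convert": "NONE",
    "file_name_convert": "NONE",
    "storage": "UNLIMITED",
    "xml_file_action": "NOCOPY",
    "temp_file_reuse": True,
    "dryrun": True,
}
with open("request_body.json", "w") as f:
    json.dump(body, f, indent=2)
```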
Example of Response Body when dryrun is true
The following example shows the response body returned with status 200 in JSON format:
{"response":" BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'DBAPI_N6765OAGAO8NCPU120240326210214',
job_type => 'PLSQL_BLOCK',
comments => 'ORDS_PDB_Lifecycle_API',
job_action => 'BEGIN
EXECUTE IMMEDIATE ''CREATE PLUGGABLE DATABASE sales_pdb
USING ''''/disk1/oracle/dbs/sales_pdb.xml''''
NOCOPY
FILE_NAME_CONVERT = NONE
TEMPFILE REUSE
STORAGE UNLIMITED'';
EXECUTE IMMEDIATE ''ALTER PLUGGABLE DATABASE sales_pdb OPEN READ WRITE'';
END;',
start_date => null,
enabled => TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE (
name => 'DBAPI_N6765OAGAO8NCPU120240326210214',
attribute => 'job_priority',
value => 1);
END;"}
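When dryrun is true, the generated script is returned as a single JSON string in the response field. A small sketch for extracting it client-side, using an abbreviated sample response rather than the full payload above:

```python
import json

# Abbreviated sample of a dryrun response body (the real "response" string
# contains the full DBMS_SCHEDULER block shown above).
raw = '{"response": "BEGIN\\n  DBMS_SCHEDULER.CREATE_JOB (...);\\nEND;"}'

# json.loads expands the escaped newlines, yielding readable PL/SQL.
script = json.loads(raw)["response"]
print(script)
```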