Using the console to insert table data
Learn how to insert data into Oracle NoSQL Database Cloud Service tables from the NoSQL console.
- Simple Input Mode: You can use this mode to provide the values for the new rows declaratively.
- Advanced JSON Input Mode: You can use this mode to provide the values for the new rows in JSON format.
- You can use the INSERT statement to add a row. Under NoSQL>>Tables, choose the table to which you want to add a row. Under Explore Data, enter the INSERT statement in the SQL statement text box and execute it.
- You can also bulk upload the data from a local file into the table, via the browser.
Note: If the table is a Global Active table, then adding data in one region also adds the data to all the regional table replicas.
Inserting Data Into Tables: Simple Input Mode
Learn how to insert data into Oracle NoSQL Database Cloud Service tables by using the Simple Input insertion mode.
Inserting Data Into Tables: Advanced JSON Input Mode
Learn how to insert data into Oracle NoSQL Database Cloud Service tables by using the Advanced JSON input mode.
Using a SQL command to insert data
The INSERT statement is used to construct a new row and add it to a specified table.
An optional list of columns may be specified after the table name. This list names a subset of the table's columns and must include all the primary key columns. If no column list is present, the default list contains all the columns of the table, in the order they are specified in the CREATE TABLE statement.
The columns in the column list correspond one-to-one to the expressions (or DEFAULT keywords) listed after the VALUES clause; an error is raised if the number of expressions/DEFAULTs does not match the number of columns. Each expression or DEFAULT computes the value of its associated column in the new row. An error is raised if an expression returns more than one item; if an expression returns no result, NULL is used as the value of that expression. If the DEFAULT keyword appears in the VALUES list instead of an expression, the default value of the associated column is used as the value of that column in the new row. The default value is also used for any column omitted from the column list when it contains fewer columns than the table.
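As a compact sketch of these rules (using a hypothetical users table, not one defined elsewhere in this guide):

```sql
CREATE TABLE users (id INTEGER, name STRING, age INTEGER, PRIMARY KEY(id));

-- No column list: the values map to id, name, age in CREATE TABLE order.
INSERT INTO users VALUES (1, "Ann", 30);

-- Explicit column list: it must include the primary key column id.
-- age is omitted from the list, so its default value is used.
INSERT INTO users(id, name) VALUES (2, "Bob");

-- The DEFAULT keyword in the VALUES list also selects the column default.
INSERT INTO users VALUES (3, "Cam", DEFAULT);
```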
Example 1: Inserting a row into the BaggageInfo table, providing all column values:
INSERT INTO BaggageInfo VALUES(
1762392196147,
"Birgit Naquin",
"M",
"165-742-5715",
"QD1L0T",
[ {
"id" : "7903989918469",
"tagNum" : "17657806240229",
"routing" : "JFK/MAD",
"lastActionCode" : "OFFLOAD",
"lastActionDesc" : "OFFLOAD",
"lastSeenStation" : "MAD",
"flightLegs" : [ {
"flightNo" : "BM495",
"flightDate" : "2019-03-07T07:00:00Z",
"fltRouteSrc" : "JFK",
"fltRouteDest" : "MAD",
"estimatedArrival" : "2019-03-07T14:00:00Z",
"actions" : [ {
"actionAt" : "MAD",
"actionCode" : "Offload to Carousel at MAD",
"actionTime" : "2019-03-07T13:54:00Z"
}, {
"actionAt" : "JFK",
"actionCode" : "ONLOAD to MAD",
"actionTime" : "2019-03-07T07:00:00Z"
}, {
"actionAt" : "JFK",
"actionCode" : "BagTag Scan at JFK",
"actionTime" : "2019-03-07T06:53:00Z"
}, {
"actionAt" : "JFK",
"actionCode" : "Checkin at JFK",
"actionTime" : "2019-03-07T05:03:00Z"
} ]
} ],
"lastSeenTimeGmt" : "2019-03-07T13:51:00Z",
"bagArrivalDate" : "2019-03-07T13:51:00Z"
} ]
)
Example 2: Skipping some column values in an INSERT statement by specifying the DEFAULT keyword.
INSERT INTO BaggageInfo VALUES(
1762397286805,
"Bonnie Williams",
DEFAULT,
DEFAULT,
"CZ1O5I",
[ {
"id" : "79039899129693",
"tagNum" : "17657806216554",
"routing" : "SFO/ORD/FRA",
"lastActionCode" : "OFFLOAD",
"lastActionDesc" : "OFFLOAD",
"lastSeenStation" : "FRA",
"flightLegs" : [ {
"flightNo" : "BM572",
"flightDate" : "2019-03-02T05:00:00Z",
"fltRouteSrc" : "SFO",
"fltRouteDest" : "ORD",
"estimatedArrival" : "2019-03-02T09:00:00Z",
"actions" : [ {
"actionAt" : "SFO",
"actionCode" : "ONLOAD to ORD",
"actionTime" : "2019-03-02T05:24:00Z"
}, {
"actionAt" : "SFO",
"actionCode" : "BagTag Scan at SFO",
"actionTime" : "2019-03-02T04:52:00Z"
}, {
"actionAt" : "SFO",
"actionCode" : "Checkin at SFO",
"actionTime" : "2019-03-02T03:28:00Z"
} ]
}, {
"flightNo" : "BM582",
"flightDate" : "2019-03-02T05:24:00Z",
"fltRouteSrc" : "ORD",
"fltRouteDest" : "FRA",
"estimatedArrival" : "2019-03-02T13:24:00Z",
"actions" : [ {
"actionAt" : "FRA",
"actionCode" : "Offload to Carousel at FRA",
"actionTime" : "2019-03-02T13:20:00Z"
}, {
"actionAt" : "ORD",
"actionCode" : "ONLOAD to FRA",
"actionTime" : "2019-03-02T12:54:00Z"
}, {
"actionAt" : "ORD",
"actionCode" : "OFFLOAD from ORD",
"actionTime" : "2019-03-02T12:30:00Z"
} ]
} ],
"lastSeenTimeGmt" : "2019-03-02T13:18:00Z",
"bagArrivalDate" : "2019-03-02T13:18:00Z"
} ]
)
Example 3: Specifying column names and skipping some columns in the INSERT statement.
INSERT INTO BaggageInfo(ticketNo, fullName, confNo, bagInfo) VALUES(
1762355349471,
"Bryant Weber",
"LI7N1W",
[ {
"id" : "79039899149056",
"tagNum" : "17657806234185",
"routing" : "MEL/LAX/MIA",
"lastActionCode" : "OFFLOAD",
"lastActionDesc" : "OFFLOAD",
"lastSeenStation" : "MIA",
"flightLegs" : [ {
"flightNo" : "BM114",
"flightDate" : "2019-03-01T12:00:00Z",
"fltRouteSrc" : "MEL",
"fltRouteDest" : "LAX",
"estimatedArrival" : "2019-03-02T02:00:00Z",
"actions" : [ {
"actionAt" : "MEL",
"actionCode" : "ONLOAD to LAX",
"actionTime" : "2019-03-01T12:20:00Z"
}, {
"actionAt" : "MEL",
"actionCode" : "BagTag Scan at MEL",
"actionTime" : "2019-03-01T11:52:00Z"
}, {
"actionAt" : "MEL",
"actionCode" : "Checkin at MEL",
"actionTime" : "2019-03-01T11:43:00Z"
} ]
}, {
"flightNo" : "BM866",
"flightDate" : "2019-03-01T12:20:00Z",
"fltRouteSrc" : "LAX",
"fltRouteDest" : "MIA",
"estimatedArrival" : "2019-03-02T16:21:00Z",
"actions" : [ {
"actionAt" : "MIA",
"actionCode" : "Offload to Carousel at MIA",
"actionTime" : "2019-03-02T16:18:00Z"
}, {
"actionAt" : "LAX",
"actionCode" : "ONLOAD to MIA",
"actionTime" : "2019-03-02T16:12:00Z"
}, {
"actionAt" : "LAX",
"actionCode" : "OFFLOAD from LAX",
"actionTime" : "2019-03-02T16:02:00Z"
} ]
} ],
"lastSeenTimeGmt" : "2019-03-02T16:09:00Z",
"bagArrivalDate" : "2019-03-02T16:09:00Z"
} ]
)
Example 4: Inserting a row into the stream_acct table, providing all column values:
INSERT INTO stream_acct VALUES(
1,
"AP",
"2023-10-18",
{
"firstName": "Adam",
"lastName": "Phillips",
"country": "Germany",
"contentStreamed": [{
"showName": "At the Ranch",
"showId": 26,
"showtype": "tvseries",
"genres": ["action", "crime", "spanish"],
"numSeasons": 4,
"seriesInfo": [{
"seasonNum": 1,
"numEpisodes": 2,
"episodes": [{
"episodeID": 20,
"episodeName": "Season 1 episode 1",
"lengthMin": 85,
"minWatched": 85,
"date": "2022-04-18"
},
{
"episodeID": 30,
"lengthMin": 60,
"episodeName": "Season 1 episode 2",
"minWatched": 60,
"date": "2022-04-18"
}]
},
{
"seasonNum": 2,
"numEpisodes": 2,
"episodes": [{
"episodeID": 40,
"episodeName": "Season 2 episode 1",
"lengthMin": 50,
"minWatched": 50,
"date": "2022-04-25"
},
{
"episodeID": 50,
"episodeName": "Season 2 episode 2",
"lengthMin": 45,
"minWatched": 30,
"date": "2022-04-27"
}]
},
{
"seasonNum": 3,
"numEpisodes": 2,
"episodes": [{
"episodeID": 60,
"episodeName": "Season 3 episode 1",
"lengthMin": 50,
"minWatched": 50,
"date": "2022-04-25"
},
{
"episodeID": 70,
"episodeName": "Season 3 episode 2",
"lengthMin": 45,
"minWatched": 30,
"date": "2022-04-27"
}]
}]
},
{
"showName": "Bienvenu",
"showId": 15,
"showtype": "tvseries",
"genres": ["comedy", "french"],
"numSeasons": 2,
"seriesInfo": [{
"seasonNum": 1,
"numEpisodes": 2,
"episodes": [{
"episodeID": 20,
"episodeName": "Bonjour",
"lengthMin": 45,
"minWatched": 45,
"date": "2022-03-07"
},
{
"episodeID": 30,
"episodeName": "Merci",
"lengthMin": 42,
"minWatched": 42,
"date": "2022-03-08"
}]
}]
}]
});
Upload of table rows
The Upload Data button in the Table details page allows bulk uploading of data from a local file into the table, via the browser.
The Upload feature is intended for loading fewer than a few thousand rows. It is well suited to a proof of concept (POC) or basic testing of the service, and it is a convenient way to populate a small table. If you want to write tens of thousands of rows, then for performance reasons you are better off using the Oracle NoSQL Database Migrator or writing your own program with one of the NoSQL SDKs. If, however, you want to quickly insert a few hundred or a few thousand rows, this upload method is an expeditious approach.
The file to be uploaded must contain a series of JSON objects. The objects can be expressed as comma-separated items of a single array or as a sequence of simple objects bounded by curly braces, with no syntactic delimiters between them. The contents of each object must be correctly formatted JSON and must conform to the schema of the table to which they will be uploaded.
For example, given the table
CREATE TABLE Simple ( id integer, val string, PRIMARY KEY ( id ) )
the upload file can contain either a single array:
[
{
"id": 0,
"val": "0"
},
{
"id": 1,
"val": "2"
}, ...
]
or a sequence of objects:
{
"id": 0,
"val": "0"
}
{
"id": 1,
"val": "2"
}, ...
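Either layout can be parsed the same way before upload. A minimal Python sketch (illustrative only, not the console's implementation) that accepts both forms:

```python
import json

def parse_rows(text):
    """Parse an upload file in either supported layout: a single JSON
    array of objects, or a bare sequence of objects with no delimiters."""
    text = text.strip()
    if text.startswith("["):
        return json.loads(text)  # array layout
    decoder = json.JSONDecoder()
    rows, idx = [], 0
    while idx < len(text):
        obj, end = decoder.raw_decode(text, idx)  # one object at a time
        rows.append(obj)
        while end < len(text) and text[end].isspace():
            end += 1  # skip whitespace between concatenated objects
        idx = end
    return rows

array_form = '[{"id": 0, "val": "0"}, {"id": 1, "val": "2"}]'
sequence_form = '{"id": 0, "val": "0"}\n{"id": 1, "val": "2"}'
assert parse_rows(array_form) == parse_rows(sequence_form)
```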
- If a column value is not required by the table's schema, then the corresponding JSON property may be left out.
- If a column value is GENERATED ALWAYS, then the corresponding JSON property must be left out.
- If a JSON object contains properties with names that do not match any column names, those properties are ignored.
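The three rules above can be modeled in a few lines of Python (an illustrative sketch of the rules, not the service's actual code; the column names are hypothetical):

```python
def conform(row, columns, generated=()):
    """Model of the upload rules: unknown properties are ignored,
    missing optional columns become null/None, and GENERATED ALWAYS
    columns must be absent from the input object."""
    for col in generated:
        if col in row:
            raise ValueError(f"{col} is GENERATED ALWAYS; omit it")
    return {col: row.get(col) for col in columns if col not in generated}

# Hypothetical table with columns id and val, no generated columns:
print(conform({"id": 1, "extra": "x"}, ["id", "val"]))
# {'id': 1, 'val': None}  -- "extra" ignored, missing "val" becomes None
```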
To use the upload feature, click the Upload Data button and select the file to be uploaded. The upload begins immediately, and progress will be shown on the page. Upon successful completion, the total number of rows inserted will be shown. You can interrupt the upload by clicking the Stop Uploading button. The number of rows that were successfully committed to the database will be shown.
If an error in the input file is detected, then uploading will stop and an error message with an approximate line number will be shown. Input errors might be caused by incorrect JSON syntax or schema nonconformance. Errors can also occur during requests for the service. Such errors also stop the uploading and display a message.
The following error-handling examples assume this table:
CREATE TABLE IF NOT EXISTS NDCStable (id INTEGER, name STRING, email STRING, age INTEGER, income DOUBLE, PRIMARY KEY(SHARD(id)))
Table - Error Handling During Upload

| Issue in the input file | Error handling |
|---|---|
| Non-JSON text in one of the rows | The system ignores the text and continues the upload. A success message is displayed after all the rows are successfully loaded into the table. For example, if the file contains the stray text "missed a line" between the rows with id 1 and 2, the text is skipped and both rows are loaded. |
| Duplicate primary key entries | The system skips the duplicated row entry and continues the upload. A success message is displayed after successfully loading all the rows. For example, if two rows share the id value 2, the first is loaded into the table, the second is skipped, and all other rows are uploaded successfully. |
| Incorrect JSON format | The system stops the upload when it detects the erroneous row, displaying an error message that indicates an unexpected end of file and an approximate line number. For example, if the row with id 8 is missing a closing brace and the row with id 9 is missing a comma between two JSON fields, neither entry is valid JSON and the upload fails; only the preceding row with id 7 is loaded into the table. Note: The upload stops at the first error or inconsistency detected in the input file. |
| Primary key missing in a row | The system stops the upload when it detects the erroneous row, displaying an error message indicating that the primary key must have a value. For example, if the second row omits the primary key field, the upload stops there and only the first row (id 2) is loaded into the table. |
| Schema non-conformance (missing, incorrect, or additional fields) | If fields other than the primary key are missing from a row, the data is uploaded successfully and a success message is displayed; the missing fields are populated with a null value. Likewise, if a field name is misspelled (for example, in the row with id 7), the unrecognized property is ignored and the corresponding column is set to null. |
If an upload stops partway through and you want to start over:
- If there are no columns with generated key values (that is, if the keys are entirely dictated by the JSON file), you can simply start over with the same file. The rows already written will be written again.
- If there are generated key values, starting over would write new records instead of overwriting the existing ones. The easiest path is to drop the table and create it again.
- Alternatively, you can remove all records from the table by executing the statement DELETE FROM tablename in the Explore data form.
If the provisioned write limit is exceeded during the upload, a transient message is displayed and the upload is slowed down to avoid exceeding the limit again.
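If you load data with your own program instead of the console, a simple client-side throttle achieves the same effect. This Python sketch is illustrative only (the write function is a stand-in, not an SDK call):

```python
import time

def rate_limited_writes(rows, write_row, max_per_sec):
    """Spread writes so that at most max_per_sec rows are written per
    second. Illustrative client-side throttle, not the console's code."""
    interval = 1.0 / max_per_sec
    for row in rows:
        start = time.monotonic()
        write_row(row)  # e.g. a put() call through a NoSQL SDK
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)

# Usage with a stand-in writer that just collects rows:
written = []
rate_limited_writes([{"id": i} for i in range(3)], written.append, 1000)
```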