You can access KVStore commands either through the Command Line Interface (CLI) or through "java -Xmx256m -Xms256m -jar <kvhome>/lib/kvstore.jar <command>".
The following sections describe both the CLI commands and the utility commands accessed through "java -jar".
The Command Line Interface (CLI) is run interactively or used to run single commands. The general usage to start the CLI is:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -host <hostname> -port <port> [single command and arguments]
If you want to run a script file, you can use the "load" command on the command line:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -host <hostname> -port <port> load -file <path-to-script>
If no optional arguments are passed, the CLI starts interactively. If additional arguments are passed, they are interpreted as a single command; the CLI runs that command and then returns. The interactive prompt for the CLI is:
"kv-> "
Upon successful completion of the command, the CLI's process exit code is zero. If there is an error, the exit code will be non-zero.
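For example, a minimal sketch of running a single command non-interactively and checking the exit code from a Unix shell; the host name, port, and store contents shown are hypothetical:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000 \
show topology
echo $?
# prints 0 on success, non-zero on error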
The CLI comprises a number of commands, some of which have subcommands. Complex commands are grouped by general function, such as "show" for displaying information or "ddl" for manipulating schema. All commands accept the following flags:
-help
Displays online help for the command or subcommand.
?
Synonymous with -help. Displays online help for the command or subcommand.
-verbose
Enables verbose output for the command.
CLI commands have the following general format:
kv-> command [sub-command] [arguments]
All arguments are specified using flags, which start with "-". Commands and subcommands are case-insensitive and match on partial strings (prefixes) where possible. Arguments, however, are case-sensitive.
Inside a CLI script file, you can use # to designate a comment. You can also terminate a line with a backslash (\) to continue a command onto the next line.
This appendix contains the following information on the commands and subcommands:
Performs simple data aggregation operations (count, sum, and average) on numeric fields of matching keys or rows. The aggregate command iterates over the matching keys or rows in the store, so depending on how many records match the specified key or row, it may take a very long time to complete.
The aggregate subcommands are:
aggregate kv [-count] [-sum <field[,field,..]>] [-avg <field[,field,..]>] [-key <key>] [-schema <name>] [-start <prefixString>] [-end <prefixString>]
Performs simple data aggregation operations using the specified key.
where:
-count
Returns the count of matching records.
-sum
Returns the sum of the values of matching fields. All records with a schema with the named field are matched. Unmatched records are ignored.
-avg
Returns the average of the values of matching fields. All records with a schema with the named field are matched. Unmatched records are ignored.
-key
Specifies the key (prefix) to use.
-schema
Specifies the Avro schema to use.
-start <prefixString> and -end <prefixString>
Restricts the range used for iteration. This is particularly helpful when operating on a range of records based on a key component, such as a well-formatted string. Both the -start and -end arguments are inclusive.
For example, a simple count of all records in the store:
kv-> aggregate kv -count
count: 33508
Sum and average operate on specific field names in matching records, which means that only Avro records containing the named fields are used. Sum and average operate only on numeric fields of Avro types INT, LONG, FLOAT, and DOUBLE.
For example, with the following schema:
{ "type" : "record", "name" : "Cookie", "fields" : [ { "name" : "id", "type" : "string", "default" : "" }, { "name" : "frequency", "type" : "int", "default" : 0 }, { "name" : "lastVisit", "type" : "string", "default" : "" }, { "name" : "segments", "type" : { "type" : "array", "items" : "string" }, "default" : [ ] } ] }
An example of sum on a field named frequency:
kv-> aggregate kv -sum frequency -key /visits/charitable_donors/date
sum(frequency): 2068
An example of average on a field named frequency:
kv-> aggregate kv -avg frequency -key /visits/charitable_donors/date
avg(frequency): 2.494571773220748
aggregate table -name <name> [-count] [-sum <field[,field,..]>] [-avg <field[,field,..]>] [-index <name>] [-field <name> -value <value>]* [-field <name> [-start <value>] [-end <value>]] [-json <string>]
Performs simple data aggregation operations on numeric fields of the table.
where:
-name
Specifies the name of the table for the operation.
-count
Returns the count of matching records.
-sum
Returns the sum of the values of matching fields.
-avg
Returns the average of the values of matching fields.
-index
Specifies the name of the index to use. When an index is used, the fields named must belong to the specified index and the aggregation is performed over rows with matching index entries.
-field and -value pairs are used to specify the field values of the primary key to match for the aggregation, or you can use an empty key to match the entire table.
The -field flag, along with its -start and -end flags, can be used to restrict the range used to match rows.
-json
Specifies the fields and values to use for the aggregation as a JSON input string.
See the example below:
# Create a table 'user_test' with an index on user_test(age):
kv-> execute 'CREATE TABLE user_test (id INTEGER, firstName STRING, lastName STRING, age INTEGER, PRIMARY KEY (id))'
Statement completed successfully
kv-> execute 'CREATE INDEX idx1 on user_test (age)'
Statement completed successfully
# Insert 3 rows:
kv-> put table -name user_test -json '{"id":1,"firstName":"joe","lastName":"wang","age":21}'
Operation successful, row inserted.
kv-> put table -name user_test -json '{"id":2,"firstName":"jack","lastName":"zhao","age":32}'
Operation successful, row inserted.
kv-> put table -name user_test -json '{"id":3,"firstName":"john","lastName":"gu","age":43}'
Operation successful, row inserted.
# Get count(*), sum(age) and avg(age) of rows in table:
kv-> aggregate table -name user_test -count -sum age -avg age
Row count: 3
Sum: age(3 values): 96
Average: age(3 values): 32.00
# Get count(*), sum(age) and avg(age) of rows where age >= 30, idx1 is utilized to filter the rows:
kv-> aggregate table -name user_test -count -sum age -avg age -index idx1 -field age -start 30
Row count: 2
Sum: age(2 values): 75
Average: age(2 values): 37.50
change-policy [-dry-run] -params [name=value]*
Modifies store-wide policy parameters that apply to services that have not yet been deployed. The parameters to change follow the -params flag and are separated by spaces.
Parameter values with embedded spaces must be quoted, for example, name="value with spaces". If -dry-run is specified, the new parameters are returned without changing them.
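For example, a sketch of previewing and then applying a policy change; the parameter name shown here is illustrative only and must be one of the policy parameters supported by your release:
kv-> change-policy -dry-run -params adminLogFileCount=10
kv-> change-policy -params adminLogFileCount=10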
For more information on setting policy parameters, see Setting Store Wide Policy Parameters.
configure -name <storename>
Configures a new store. This call must be made before any other administration can be performed.
Use the -name option to specify the name of the KVStore that you want to configure. The name is used to form a path to records kept in the store. For this reason, you should avoid using characters in the store name that might interfere with its use within a file path. The command line interface does not allow an invalid store name. Valid characters are alphanumeric, '-', '_', and '.'.
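For example, assuming runadmin has been started against a not-yet-configured Storage Node, a store with the hypothetical name mystore could be configured with:
kv-> configure -name mystore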
Encapsulates commands that connect to the specified host and registry port to perform administrative functions or connect to the specified store to perform data access functions.
The current store, if any, will be closed before connecting to another store. If there is a failure opening the specified KVStore, the following warning is displayed: "Warning: You are no longer connected to KVStore".
The subcommands are as follows:
connect admin -host <hostname> -port <registry port> [-username <user>] [-security <security-file-path>]
Connects to the specified host and registry port to perform administrative functions. An Admin service must be active on the target host. If the instance is secured, you may need to provide login credentials.
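For example, a sketch of connecting to an Admin on a hypothetical host and registry port (the -username and -security flags are only required for a secure store):
kv-> connect admin -host node01 -port 5000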
Encapsulates commands that delete key/value pairs from the store or rows from a table. The subcommands are as follows:
delete kv [-key <key>] [-start prefixString] [-end prefixString] [-all]
Deletes one or more keys. If -all is specified, deletes all keys starting at the specified key. If no key is specified, deletes all keys in the store. The -start and -end flags can be used to restrict the range used for deletion.
For example, to delete all keys in the store starting at root:
kv-> delete kv -all
301 Keys deleted starting at root
delete table -name <name> [-field <name> -value <value>]* [-field <name> [-start <value>] [-end <value>]] [-ancestor <name>]* [-child <name>]* [-json <string>] [-delete-all]
Deletes one or more rows from the named table. The table name is a dot-separated name with the format tableName[.childTableName]*.
-field and -value pairs are used to specify the field values of the primary key, or you can use an empty key to delete all rows from the table.
The -field flag, along with its -start and -end flags, can be used to restrict the subrange for deletion associated with the parent key.
The -ancestor and -child flags are used to delete rows from specific ancestor and/or descendant tables as well as the target table.
-json indicates that the key field values are in JSON format.
-delete-all is used to delete all rows in a table.
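For example, a sketch of deleting a single row by primary key, and then all rows, from the hypothetical user_test table used elsewhere in this appendix:
kv-> delete table -name user_test -field id -value 1
kv-> delete table -name user_test -delete-all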
Encapsulates operations that manipulate schemas in the store. The subcommands are as follows:
For details on managing schema in the store, see Managing Avro Schema.
ddl add-schema <-file <file> | -string <schema string>> [-evolve] [-force]
Adds a new schema or changes (evolves) an existing schema with the same name. Use the -evolve flag to indicate that the schema is changing. Use the -force flag to add the schema in spite of evolution warnings.
ddl enable-schema -name <name>.<ID>
Enables an existing, previously disabled schema.
execute <statement>
Oracle NoSQL Database provides a Data Definition Language (DDL) that you use to form table and index statements. Use the execute command to run the specified statement synchronously. The statement must be enclosed in single or double quotes. Before using this command, you must first connect to a store.
For example:
kv-> execute 'CREATE TABLE users (id INTEGER, name STRING, pets ARRAY(STRING), primary key (id))'
Statement completed successfully
kv-> execute 'DESCRIBE AS JSON TABLE users'
{
  "type" : "table",
  "name" : "users",
  "comment" : null,
  "shardKey" : [ "id" ],
  "primaryKey" : [ "id" ],
  "fields" : [ {
    "name" : "id",
    "type" : "INTEGER",
    "nullable" : true,
    "default" : null
  }, {
    "name" : "name",
    "type" : "STRING",
    "nullable" : true,
    "default" : null
  }, {
    "name" : "pets",
    "type" : "ARRAY",
    "collection" : { "type" : "STRING" },
    "nullable" : true,
    "default" : null
  } ]
}
For more information on using the Data Definition Language (DDL) to perform table operations, see Getting Started with NoSQL Database Table API.
Encapsulates commands that get key/value pairs from the store or rows from a table. The subcommands are as follows:
get kv [-key <keyString>] [-json] [-file <output>] [-all] [-keyonly] [-valueonly] [-start <prefixString>] [-end <prefixString>]
Performs a simple get operation using the specified key. The obtained value is printed if it contains displayable characters; otherwise the byte array is encoded using Base64 for display purposes, and "[Base64]" is appended to indicate this transformation. The arguments for the get command are:
-key <keyString>
Indicates the full or prefix key path to use. If <keyString> is a full key path, a single value is returned; the format of this get command is: get -key <keyString>. If <keyString> is a prefix key path, multiple key/value pairs are returned; the format of this get command is: get -key <keyString> -all.
A key can be composed of both major and minor key paths, or a major key path only. The <keyString> format is: "major-key-path/-/minor-key-path". Additionally, in the case of a prefix key path, a key can be composed of the prefix part of a major key path.
For example, with some sample keys in the KVStore:
/group/TC/-/user/bob
/group/TC/-/user/john
/group/TC/-/dep/IT
/group/SZ/-/user/steve
/group/SZ/-/user/diana
A get command with a key containing only the prefix part of the major key path results in:
kv-> get kv -key /group -all -keyonly
/group/TC/-/user/bob
/group/TC/-/user/john
/group/TC/-/dep/IT
/group/SZ/-/user/steve
/group/SZ/-/user/diana
A get command with a key containing a major key path results in:
kv-> get kv -key /group/TC -all -keyonly
/group/TC/-/user/bob
/group/TC/-/user/john
/group/TC/-/dep/IT
A get command with a key containing both major and minor key paths results in:
kv-> get kv -key /group/TC/-/user -all -keyonly
/group/TC/-/user/bob
/group/TC/-/user/john
kv-> get kv -key /group/TC/-/user/bob
{
  "name" : "bob.smith",
  "age" : 20,
  "email" : "bob.smith@gmail.com",
  "phone" : "408 555 5555"
}
-json
Should be specified if the record is JSON.
-file <output>
Specifies an output file, which is truncated, replacing all existing content with new content.
In the following example, records from the key /Smith/Bob are written to the file "data.out".
kv-> get kv -key /Smith/Bob -all -file ./data.out
In the following example, the contents of the file "data.out" are replaced with records from the key /Wong/Bill.
kv-> get kv -key /Wong/Bill -all -file ./data.out
-all
Specified for iteration starting at the specified key. If the key argument is not specified, the entire store will be iterated.
-keyonly
Specified with -all to return only keys.
-valueonly
Specified with -all to return only values.
-start <prefixString> and -end <prefixString>
Restricts the range used for iteration. This is particularly helpful when getting a range of records based on a key component, such as a well-formatted string. Both the -start and -end arguments are inclusive.
-start and -end only work on the key component specified by -key <keyString>. The value of <keyString> should be composed of simple strings and cannot have multiple key components specified.
For example, consider a log whose key structure is:
/log/<year>/<month>/-/<day>/<time>
This puts all log entries for the same day in the same partition, but splits the days across shards. The time format is "hour.minute". In this way, you can do a get of all log entries in February and March, 2013 by specifying:
kv-> get kv -all -keyonly -key /log/2013 -start 02 -end 03
/log/2013/02/-/01/1.45
/log/2013/02/-/05/3.15
/log/2013/02/-/15/10.15
/log/2013/02/-/20/6.30
/log/2013/02/-/28/8.10
/log/2013/03/-/01/11.13
/log/2013/03/-/15/2.28
/log/2013/03/-/22/4.52
/log/2013/03/-/31/11.55
You can be more specific with the get command by specifying a more complete key path. For example, to display all log entries from April 1st to April 4th:
kv-> get kv -all -keyonly -key /log/2013/04 -start 01 -end 04
/log/2013/04/-/01/1.03
/log/2013/04/-/01/4.05
/log/2013/04/-/02/7.22
/log/2013/04/-/02/9.40
/log/2013/04/-/03/4.15
/log/2013/04/-/03/6.30
/log/2013/04/-/03/10.25
/log/2013/04/-/04/4.10
/log/2013/04/-/04/8.35
See the subcommand get table.
get table -name <name> [-index <name>] [-field <name> -value <value>]* [-field <name> [-start <value>] [-end <value>]] [-ancestor <name>]* [-child <name>]* [-json <string>] [-file <output>] [-pretty]
Performs a get operation to retrieve row(s) from a named table. The table name is a dot-separated name with the format tableName[.childTableName]*.
-field and -value pairs are used to specify the field values of the primary key, or of the index key if an index is specified with -index; or you can use an empty key to iterate the entire table.
The -field flag, along with its -start and -end flags, can be used to restrict the subrange for retrieval associated with the parent key.
The -ancestor and -child flags are used to return results from specific ancestor and/or descendant tables as well as the target table.
-file is used to specify an output file, which is truncated.
-pretty is used to produce a nicely formatted JSON string with indentation and carriage returns.
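For example, a sketch of retrieving a single row by primary key, and then a range of rows through an index, from the hypothetical user_test table and idx1 index used elsewhere in this appendix:
kv-> get table -name user_test -field id -value 1 -pretty
kv-> get table -name user_test -index idx1 -field age -start 30 -end 40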
help [command [sub-command]] [-include-deprecated]
Prints help messages. With no arguments the top-level shell commands are listed. With additional commands and sub-commands, additional detail is provided.
kv-> help load
load -file <path to file>
Loads the named file and interpret its contents as a script of commands to be executed. If any of the commands in the script fail, execution will stop at that point.
Use -include-deprecated to show deprecated commands.
For example:
kv-> help show -include-deprecated
Usage: show admins datacenters events faults indexes parameters perf plans pools schemas snapshots tables topology upgrade-order users zones
Toggles visibility and setting of parameters that are normally hidden. Use these parameters only if advised to do so by Oracle Support.
history [-last <n>] [-from <n>] [-to <n>]
Displays command history. By default all history is displayed. Optional flags are used to choose ranges for display.
load -file <path to file>
Loads the named file and interprets its contents as a script of commands to be executed. If any of the commands in the script fail, execution will stop at that point.
For example, suppose the following commands are collected in the script file load-contacts-5.txt:
### Begin Script ###
put -key /contact/Bob/Walker -value "{\"phone\":\"857-431-9361\", \
\"email\":\"Nunc@Quisque.com\",\"city\":\"Turriff\"}" \
-json example.ContactInfo
put -key /contact/Craig/Cohen -value "{\"phone\":\"657-486-0535\", \
\"email\":\"sagittis@metalcorp.net\",\"city\":\"Hamoir\"}" \
-json example.ContactInfo
put -key /contact/Lacey/Benjamin -value "{\"phone\":\"556-975-3364\", \
\"email\":\"Duis@laceyassociates.ca\",\"city\":\"Wasseiges\"}" \
-json example.ContactInfo
put -key /contact/Preston/Church -value "{\"phone\":\"436-396-9213\", \
\"email\":\"preston@mauris.ca\",\"city\":\"Helmsdale\"}" \
-json example.ContactInfo
put -key /contact/Evan/Houston -value "{\"phone\":\"028-781-1457\", \
\"email\":\"evan@texfoundation.org\",\"city\":\"Geest-G\"}" \
-json example.ContactInfo
exit
### End Script ###
Then, the script can be run by using the load command in the data command line interface:
A schema must be loaded to the store before this script can successfully run. For more information on adding schema, see "Adding Schema" section in Oracle NoSQL Database Getting Started.
> java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000 \
-store mystore
kv-> load -file ./load-contacts-5.txt
Operation successful, record inserted.
Operation successful, record inserted.
Operation successful, record inserted.
Operation successful, record inserted.
Operation successful, record inserted.
The following schema was previously added to the store:
{ "type": "record", "name": "ContactInfo", "namespace": "example", "fields": [ {"name": "phone", "type": "string", "default": ""}, {"name": "email", "type": "string", "default": ""}, {"name": "city", "type": "string", "default": ""} ] }
For more information on using the load command, see Using a Script to Configure the Store.
ping [-json]
Pings the runtime components of a store. Components available from the Topology are contacted, as well as Admin services.
where:
-json
Displays output in JSON format.
Encapsulates operations, or jobs, that modify store state. All subcommands, with the exception of interrupt and wait, change persistent state. Plans are asynchronous jobs, so they return immediately unless -wait is used. Plan status can be checked using show plans. The optional arguments for all plans include:
-wait
Wait for the plan to complete before returning.
-plan-name
The name for a plan. These are not unique.
-noexecute
Do not execute the plan. If specified, the plan can be run later using plan execute.
-force
Used to force plan execution and plan retry.
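For example, a sketch of creating a plan without executing it and then running it later; the table and plan names are hypothetical:
kv-> plan add-table -name user -plan-name add-user-table -noexecute
kv-> plan execute -last -wait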
The subcommands are as follows:
plan add-index -name <name> -table <name> [-field <name>]* [-desc <description>] [-plan-name <name>] [-wait] [-noexecute] [-force]
Adds an index to a table in the store. The table name is a dot-separated name with the format tableName[.childTableName]*.
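For example, a sketch of adding an index on the lastName field of the user table defined under plan add-table below (the index name is illustrative):
kv-> plan add-index -name idx_lastName -table user -field lastName -wait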
plan add-table -name <name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Adds a new table to the store. The table name is a dot-separated name with the format tableName[.childTableName]*.
Before adding a table, first use the table create command to create the named table. The following example defines a table (creates a table by name, adds fields and other table metadata).
## Enter into table creation mode
table create -name user -desc "A sample user table"
user->
user-> help
Usage: add-array-field | add-field | add-map-field | add-record-field |
  add-schema | cancel | exit | primary-key | remove-field |
  set-description | shard-key | show
## Now add the fields
user-> help add-field
Usage: add-field -type <type> [-name <field-name> ] [-not-required]
  [-nullable] [-default <value>] [-max <value>] [-min <value>]
  [-max-exclusive] [-min-exclusive] [-desc <description>]
  [-size <size>] [-enum-values <value[,value[,...]]
    <type>: INTEGER, LONG, DOUBLE, FLOAT, STRING, BOOLEAN, DATE,
      BINARY, FIXED_BINARY, ENUM
## Adds a field. Ranges are inclusive with the exception of String,
## which will be set to exclusive.
user-> add-field -type Integer -name id
user-> add-field -type String -name firstName
user-> add-field -type String -name lastName
user-> help primary-key
Usage: primary-key -field <field-name> [-field <field-name>]*
## Sets primary key.
user-> primary-key -field id
## Exit table creation mode
user-> exit
## Table User built.
Use table list -create to see the list of tables that can be added. The following example lists and displays tables that are ready for deployment.
kv-> table list
## Tables to be added:
## User -- A sample user table
kv-> table list -name user
## Add table User:
{
  "type" : "table",
  "name" : "User",
  "id" : "User",
  "description" : "A sample user table",
  "shardKey" : [ "id" ],
  "primaryKey" : [ "id" ],
  "fields" : [ {
    "name" : "id",
    "type" : "INTEGER"
  }, {
    "name" : "firstName",
    "type" : "STRING"
  }, {
    "name" : "lastName",
    "type" : "STRING"
  } ]
}
The following example adds the table to the store.
## Add the table to the store.
kv-> help plan add-table
kv-> plan add-table -name user -wait
Executed plan 5, waiting for completion...
Plan 5 ended successfully
kv-> show tables -name user
{
  "type" : "table",
  "name" : "User",
  "id" : "r",
  "description" : "A sample user table",
  "shardKey" : [ "id" ],
  "primaryKey" : [ "id" ],
  "fields" : [ {
    "name" : "id",
    "type" : "INTEGER"
  }, {
    "name" : "firstName",
    "type" : "STRING"
  }, {
    "name" : "lastName",
    "type" : "STRING"
  } ]
}
For more information and examples on table design, see Introducing Oracle NoSQL Database Tables and Indexes.
plan cancel -id <plan id> | -last
Cancels a plan that is not running. A running plan must be interrupted before it can be canceled.
Use the -last option to reference the most recently created plan.
plan change-parameters -security | -service <id> | -all-rns [-zn <id> | -znname <name>] | -all-admins [-zn <id> | -znname <name>] [-dry-run] [-plan-name <name>] [-wait] [-noexecute] [-force] -params [name=value]*
Changes parameters for either the specified service, or for all service instances of the same type that are deployed to the specified zone or all zones.
The -security flag allows changing store-wide global security parameters, and should never be used with other flags.
The -service flag allows a single instance to be affected; it should never be used with either the -zn or -znname flag.
The -all-* flags can be used to change all instances of the service type. The parameters to change follow the -params flag and are separated by spaces. Parameter values with embedded spaces must be quoted; for example, name="value with spaces".
One of the -all-* flags can be combined with the -zn or -znname flag to change all instances of the service type deployed to the specified zone, leaving unchanged any instances of the specified type deployed to other zones. If one of the -all-* flags is used without also specifying the zone, then the desired parameter change is applied to all instances of the specified type within the store, regardless of zone.
If -dry-run is specified, the new parameters are returned without changing them. Use the command show parameters to see which parameters can be modified. For more information, see show parameters.
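For example, a sketch of changing a parameter on a single hypothetical Replication Node; cacheSize is assumed here to be a Replication Node parameter that is valid in your release:
kv-> plan change-parameters -service rg1-rn1 -wait -params cacheSize=10000000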
For more information on changing parameters in the store, see Setting Store Parameters.
plan change-storagedir -sn <id> -storagedir <path> -add | -remove [-plan-name <name>] [-wait] [-noexecute] [-force]
Adds or removes a storage directory on a Storage Node, for storing a Replication Node.
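For example, a sketch of registering an additional storage directory on a hypothetical Storage Node sn2; the path is illustrative:
kv-> plan change-storagedir -sn sn2 -storagedir /disk2/ondb/data -add -wait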
plan change-user -name <user name> [-disable | -enable] [-set-password [-password <new password>] [-retain-current-password]] [-clear-retained-password] [-plan-name <name>] [-wait] [-noexecute] [-force]
Change a user with the specified name in the store. The -retain-current-password argument causes the current password to be remembered during the -set-password operation as a valid alternate password for the configured retention time, or until it is cleared using -clear-retained-password. If a retained password has already been set for the user, setting the password again causes an error to be reported.
This command is deprecated. For more information see User Modification in the Oracle NoSQL Database Administrator's Guide.
plan create-user -name <user name> [-admin] [-disable] [-password <new password>] [-plan-name <name>] [-wait] [-noexecute] [-force]
Create a user with the specified name in the store. The -admin argument indicates that the created user has full administrative privileges.
This command is deprecated. For more information see User Creation in the Oracle NoSQL Database Administrator's Guide.
plan deploy-admin -sn <id> -port <http port> [-plan-name <name>] [-wait] [-noexecute] [-force]
Deploys an Admin to the specified Storage Node. The admin type (PRIMARY/SECONDARY) is the same type as the zone the Storage Node is in. Its graphical interface listens on the specified port.
For more information on deploying an admin, see Create an Administration Process on a Specific Host.
Deprecated. See plan deploy-zone instead.
plan deploy-sn -zn <id> | -znname <name> -host <host> -port <port> [-plan-name <name>] [-wait] [-noexecute] [-force]
Deploys the Storage Node at the specified host and port into the specified zone.
For more information on deploying your Storage Nodes, see Create the Remainder of your Storage Nodes.
plan deploy-topology -name <topology name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Deploys the specified topology to the store. This operation can take a while, depending on the size and state of the store.
For more information on deploying a satisfactory topology candidate, see Deploy the Topology Candidate.
plan deploy-zone -name <zone name> -rf <replication factor> [-type [primary | secondary]] [-plan-name <name>] [-wait] [-noexecute] [-force]
Deploys the specified zone to the store and creates a primary zone if -type is not specified.
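For example, a sketch of deploying a primary zone and then deploying a Storage Node into it; the zone name, host, and port are hypothetical:
kv-> plan deploy-zone -name "Boston" -rf 3 -wait
kv-> plan deploy-sn -znname "Boston" -host node02 -port 5000 -wait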
For more information on creating a zone, see Create a Zone.
plan drop-user -name <user name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Drop a user with the specified name in the store. A logged-in user may not drop itself.
This command is deprecated. For more information see User Removal in the Oracle NoSQL Database Administrator's Guide.
plan evolve-table -name <name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Evolves a table in the store. The table name is
a dot-separate with the format
tableName[.childTableName]*
.
Use the table evolve command to evolve the named table. The following example evolves a table.
## Enter into table evolution mode
kv-> table evolve -name User
kv-> show
{
  "type" : "table",
  "name" : "User",
  "id" : "r",
  "description" : "A sample user table",
  "shardKey" : [ "id" ],
  "primaryKey" : [ "id" ],
  "fields" : [ {
    "name" : "id",
    "type" : "INTEGER"
  }, {
    "name" : "firstName",
    "type" : "STRING"
  }, {
    "name" : "lastName",
    "type" : "STRING"
  } ]
}
## Add a field
kv-> add-field -type String -name address
## Exit table creation mode
kv-> exit
## Table User built.
kv-> plan evolve-table -name User -wait
## Executed plan 6, waiting for completion...
## Plan 6 ended successfully
kv-> show tables -name User
{
  "type" : "table",
  "name" : "User",
  "id" : "r",
  "description" : "A sample user table",
  "shardKey" : [ "id" ],
  "primaryKey" : [ "id" ],
  "fields" : [ {
    "name" : "id",
    "type" : "INTEGER"
  }, {
    "name" : "firstName",
    "type" : "STRING"
  }, {
    "name" : "lastName",
    "type" : "STRING"
  }, {
    "name" : "address",
    "type" : "STRING"
  } ]
}
Use table list -evolve to see the list of tables that can be evolved. For more information, see plan add-table.
plan execute -id <id> | -last [-wait] [-force]
Executes a created, but not yet executed, plan. The plan must have been previously created using the -noexecute flag.
Use the -last option to reference the most recently created plan.
plan grant [-role <role name>]* -user <user_name>
Allows granting roles to users.
where:
-role <role name>
Specifies the roles that will be granted. The role names should be the system-defined roles (except public) listed in the Oracle NoSQL Database Security Guide.
-user <user_name>
Specifies the user to whom the role will be granted.
This command is deprecated. For more information see Grant Role or Privilege in the Oracle NoSQL Database Administrator's Guide.
plan interrupt -id <plan id> | -last
Interrupts a running plan. An interrupted plan can only be re-executed or canceled. Use -last to reference the most recently created plan.
plan migrate-sn -from <id> -to <id> [-admin-port <admin port>] [-plan-name <name>] [-wait] [-noexecute] [-force]
Migrates the services from one Storage Node to another. The old node must not be running.
The -admin-port option is required if the old node hosted an admin service.
Before executing the plan migrate-sn command, you can stop any running old Storage Node by using:
java -Xmx256m -Xms256m -jar KVHOME/lib/kvstore.jar stop -root KVROOT
plan remove-admin -admin <id> | -zn <id> | -znname <name> [-force] [-plan-name <name>] [-wait] [-noexecute] [-force]
Removes the desired Admin instances; either the single specified instance, or all instances deployed to the specified zone.
If you use the -admin flag and there are 3 or fewer Admins running in the store, or if you use the -zn or -znname flag and the removal of all Admins from the specified zone would result in only one or two Admins in the store, then the desired Admins will be removed only if you specify the -force flag.
Also, if you use the -admin flag and there is only one Admin in the store, or if you use the -zn or -znname flag and the removal of all Admins from the specified zone would result in the removal of all Admins from the store, then the desired Admins will not be removed.
plan remove-index -name <name> -table <name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Removes an index from a table. The table name is a dot-separated name with the format tableName[.childTableName]*.
plan remove-sn -sn <id> [-plan-name <name>] [-wait] [-noexecute] [-force]
Removes the specified Storage Node from the topology.
This command is useful when removing unused, old Storage Nodes from the store. To do this, see Replacing a Failed Storage Node.
plan remove-table -name <name> [-keep-data] [-plan-name <name>] [-wait] [-noexecute] [-force]
Removes a table from the store. The named table must exist and must not have any child tables. Indexes on the table are automatically removed. By default data stored in this table is also removed. Table data may be optionally saved by specifying the -keep-data flag. Depending on the indexes and amount of data stored in the table this may be a long-running plan.
The following example removes a table.
## Remove a table.
kv-> plan remove-table -name User
## Started plan 7. Use show plan -id 7 to check status.
## To wait for completion, use plan wait -id 7.
kv-> show tables
## No table found.
For more information, see Introducing Oracle NoSQL Database Tables and Indexes.
plan remove-zone -zn <id> | -znname <name> [-plan-name <name>] [-wait] [-noexecute] [-force]
Removes the specified zone from the store.
Before running this command, all Storage Nodes that belong to the specified zone must first be removed using the plan remove-sn command.
plan repair-topology [-plan-name <name>] [-wait] [-noexecute] [-force]
Inspects the store's deployed, current topology for inconsistencies in location metadata that may have arisen from the interruption or cancellation of previous deploy-topology or migrate-sn plans. Where possible, inconsistencies are repaired. This operation can take a while, depending on the size and state of the store.
plan revoke [-role <role name>]* -user <user_name>
Allows revoking roles from users.
where:
-role <role name>
Specifies the roles that will be revoked. The role names should be the system-defined roles (except public) listed in the Oracle NoSQL Database Security Guide.
-user <user_name>
Specifies the user from whom the role will be revoked.
This command is deprecated. For more information see Revoke Role or Privilege in the Oracle NoSQL Database Administrator's Guide.
plan start-service -service <id> | -all-rns [-plan-name <name>] [-wait] [-noexecute] [-force]
Starts the specified service(s).
plan stop-service -service <id> | -all-rns [-plan-name <name>] [-wait] [-noexecute] [-force]
Stops the specified service(s).
Use this command to stop any affected services so that any attempts by the system to communicate with them are no longer made, reducing the amount of error output related to a failure you are already aware of.
This command is useful during the disk replacement process. Use the plan stop-service command to stop the affected service prior to removing the failed disk. For more information, see Replacing a Failed Disk.
plan wait -id <id> | -last [-seconds <timeout in seconds>]
Waits indefinitely for the specified plan to complete, unless the optional timeout is specified.
Use the -seconds option to specify the time to wait for the plan to complete. The -last option references the most recently created plan.
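For example, a sketch of starting a plan without -wait and then waiting up to 600 seconds for it to complete; the topology name is hypothetical:
kv-> plan deploy-topology -name newTopo
kv-> plan wait -last -seconds 600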
Encapsulates commands that manipulate Storage Node pools, which are used for resource allocations. The subcommands are as follows:
pool create -name <name>
Creates a new Storage Node pool to be used for resource distribution when creating or modifying a store.
For more information on creating a Storage Node pool, see Create a Storage Node Pool.
pool join -name <name> [-service] <snX>*
Adds Storage Nodes to an existing Storage Node pool.
Encapsulates commands that put key/value pairs to the store or put rows to a table. The subcommands are as follows:
put kv -key <keyString> -value <valueString> [-file] [-hex | -json <schemaName>] [-if-absent] [-if-present]
Put the specified key/value pair into the store. The following arguments apply to the put command:
-key <keyString>
Specifies the name of the key to be put into the store. Key can be composed of both major and minor key paths, or a major key path only. The <keyString> format is: "major-key-path/-/minor-key-path".
For example, a key containing major and minor key paths:
kv-> put -key /Smith/Bob/-/email -value "{\"id\": 1,\"email\":\"bob.smith@gmail.com\"}" -json schema.EmailInfo
For example, a key containing only a major key path:
kv-> put -key /Smith/Bob -value"{\"name\": \"bob.smith\", \"age\": 20, \"phone\":\"408 555 5555\", \"email\": \"bob.smith@gmail.com\"}" -json schema.UserInfo
-value <valueString>
If neither -json nor -file is specified, the <valueString> is treated as a raw byte array.
For example:
kv-> put -key /Smith/Bob/-/phonenumber -value "408 555 5555"
The mapping of the raw arrays to data structures (serialization and deserialization) is left entirely to the application. This is not the recommended approach. Instead, you should use Avro even for very simple values.
If used with -json to specify a JSON string, the valueString should be enclosed in quotation marks, and its internal field names and string values should also be enclosed in (escaped) quote characters.
For example:
kv-> put -key /Smith/John/-/email -value "{\"id\": 1,\"email\":\"john.smith@gmail.com\"}" -json schema.EmailInfo
-file
Indicates that the value is obtained from a file. The file to use is identified by the value parameter.
For example:
kv-> put -key /Smith/Bob -value ./smith-bob-info.txt -file -json schema.UserInfo
-hex
Indicates that the value is a BinHex encoded byte value with base64 encoding.
-json <schemaName>
Indicates that the value is a JSON string. Can be specified along with -file.
-if-absent
Indicates that a key/value pair is put only if no value for the given key is present.
-if-present
Indicates that a key/value pair is put only if a value for the given key is present.
put table -name <name> [-if-absent | -if-present] [-json <string>] [-file <file>] [-update]
Puts a row into the named table. The table name is a dot-separated name with the format tableName[.childTableName]*.
where:
-if-absent
Indicates to put a row only if the row does not exist.
-if-present
Indicates to put a row only if the row already exists.
-json
Indicates that the value is a JSON string.
-file
Can be used to load JSON strings from a file.
-update
Can be used to partially update the existing record.
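For example, a sketch of inserting a row into the hypothetical user_test table used elsewhere in this appendix, and then partially updating it:
kv-> put table -name user_test -json '{"id":4,"firstName":"mary","lastName":"chen","age":25}'
kv-> put table -name user_test -json '{"id":4,"age":26}' -update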
Encapsulates commands that display the state of the store and its components or schemas. The subcommands are as follows:
show events [-id <id>] | [-from <date>] [-to <date> ] [-type <stat | log | perf>]
Displays event details or a list of store events. Status events indicate changes in service status.
Log events are noted if they require attention.
Performance events are not usually critical but may merit investigation. Events marked "SEVERE" should be investigated.
The following date/time formats are accepted. They are interpreted in the local time zone.
MM-dd-yy HH:mm:ss:SS |
MM-dd-yy HH:mm:ss |
MM-dd-yy HH:mm |
MM-dd-yy |
HH:mm:ss:SS |
HH:mm:ss |
HH:mm
For more information on events, see Events.
show faults [-last] [-command <command index>]
Displays faulting commands. By default all available faulting commands are displayed. Individual fault details can be displayed using the -last and -command flags.
show indexes [-table <name>] [-name <name>]
Displays index metadata. By default, the index metadata of all tables is listed.
If a specific table is named, its index metadata is displayed. If a specific index of the table is named, that index's metadata is displayed. For more information, see plan add-index.
show parameters -policy | -service <name>
Displays service parameters and state for the specified service. The service may be a Replication Node, Storage Node, or Admin service, as identified by any valid string, for example rg1-rn1, sn1, admin2, etc. Use the -policy flag to show global policy parameters.
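For example, a sketch of displaying the policy defaults and then the parameters of a hypothetical Replication Node:
kv-> show parameters -policy
kv-> show parameters -service rg1-rn1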
show plans [-id <id> | -last]
Shows details of the specified plan or lists all plans that have been created, along with their corresponding plan IDs and status.
Use the -id option to specify the plan for which you want to show additional detail and status. The -last option shows details of the most recently created plan.
For more information on plan review, see Reviewing Plans.
show schemas [-disabled] | [-name <name>]
Displays schema details of the named schema or a list of schemas registered with the store.
Use the -name option to specify the name of the schema that you want to check to see whether it is currently enabled in the store. Use the -disabled option to see all schemas, including those which are currently disabled.
show snapshots [-sn <id>]
Lists snapshots on the specified Storage Node. If no Storage Node is specified, one is chosen from the store. You can use this command to view the existing snapshots.
show tables -name <name>
Displays the table information. Use the -original flag to show the original table information if you are building a table for evolution. The flag is ignored when building a table for addition. For more information, see plan add-table and plan evolve-table.
show topology [-zn] [-rn] [-sn] [-store] [-status] [-perf]
Displays the current, deployed topology. By default it shows the entire topology. The optional flags restrict the display to one or more of Zones, Replication Nodes, Storage Nodes and store name, or specify service status or performance.
With this command you can obtain the ID of the zone to which Storage Nodes can be deployed.
show upgrade-order
Lists the Storage Nodes which need to be upgraded in an order that prevents disruption to the store's operation.
This command displays one or more Storage Nodes on a line. Multiple Storage Nodes on a line are separated by a space. If multiple Storage Nodes appear on a single line, then those nodes can be safely upgraded at the same time. When multiple nodes are upgraded at the same time, the upgrade must be completed on all nodes before the nodes next on the list can be upgraded.
If at some point you lose track of which group of nodes should be upgraded next, you can always run the show upgrade-order command again.
show users [-name <name>]
Lists the names of all users, or displays information about a specific user. If no user is specified, the names of all users are listed. If a user is specified using the -name option, detailed information about that user is displayed.
show zones [-zn <id>] | -znname <name>
Lists the names of all zones, or displays information about a specific zone.
Use the -zn or the -znname flag to specify the zone for which you want to show additional information, including the names of all of the Storage Nodes in the specified zone and whether that zone is a primary or secondary zone.
Encapsulates commands that create and delete snapshots, which are used for backup and restore. The subcommands are as follows:
snapshot create -name <name>
Creates a new snapshot using the specified name as the prefix.
Use the -name option to specify the name of the snapshot that you want to create.
Snapshots should not be taken while any configuration (topological) changes are being made, because the snapshot might be inconsistent and not usable.
snapshot remove -name <name> | -all
Removes the named snapshot. If -all is specified, remove all snapshots.
Use the -name option to specify the name of the snapshot that you want to remove. If the -all option is specified, all snapshots are removed.
To create a backup of your store using a snapshot see Taking a Snapshot.
To recover your store from a previously created snapshot you can use the load utility or restore directly from a snapshot. For more information, see Using the Load Program or Restoring Directly from a Snapshot.
Deprecated, with the exception of table size. See execute instead. For more information, see table size.
table size -name <name> -json <string> [-rows <num> [[-primarykey | -index <name>] -keyprefix <size>]]
Calculates key and data sizes for the specified table using the row input, optionally estimating the NoSQL DB cache size required for a specified number of rows of the same format. Running this command on multiple sample rows can help determine the necessary cache size for desired store performance.
-json specifies a sample row used for the calculation.
-rows specifies the number of rows to use for the cache size calculation.
Use -primarykey or -index <name>, together with -keyprefix, to specify the expected commonality of keys in terms of the number of bytes.
It mainly does the following:
Calculates the key and data size based on the input row in JSON format.
Estimates the DB Cache size required for a specified number of rows in the same JSON format.
The output contains both detailed size information for the primary key and indexes and the total size. Internally, the command calls JE's DbCacheSize utility to calculate the cache size required for the primary key and indexes, with the input parameters:
java -jar $KVHOME/dist/lib/je.jar DbCacheSize -records <num> -key <size> -data <size> -keyprefix <size> -outputproperties -replicated <JE properties...> [-duplicates]
where:
-records <num>: The number of rows specified by -rows <num>.
-key <size>: The key size obtained from step 1.
-data <size>: The data size obtained from step 1.
-keyprefix <size>: The expected commonality of keys, specified using -primarykey | -index <name> -keyprefix <size>.
-duplicates: Used only for a table index.
<JE properties...>: The JE configuration parameters used in kvstore.
For example:
Create table user (id integer, address string, zip_code string) and idx1 on user (zip_code)
kv-> execute "create table user (id integer, address string, zip_code string, primary key(id))"
kv-> execute "create index idx1 on user (zip_code)"
See the following cases:
Calculates the key size and data size based on the input row in JSON.
kv-> table size -name user -json '{"id":1, "address": "Oracle Building ZPark BeiJing China","zip_code":"100000"}'
=== Key and Data Size ===
Name                 Number of Bytes
-----------------    ---------------
Primary Key                        8
Data                              47
Index Key of idx1                  7
Calculates the key/data size and the cache size of the table with 10000 rows.
kv-> table size -name user -json '{"id":1, "address": "Oracle Building ZPark BeiJing China","zip_code":"100000"}' -rows 10000
=== Key and Data Size ===
Name                 Number of Bytes
-----------------    ---------------
Primary Key                        8
Data                              47
Index Key of idx1                  7
=== Environment Cache Overhead ===
16,798,797 minimum bytes
=== Database Cache Sizes ===
Name     Number of Bytes    Description
-----    ---------------    ----------------------------------
               1,024,690    Internal nodes only
Table          1,024,690    Internal nodes and record versions
               1,024,690    Internal nodes and leaf nodes
-----    ---------------    ----------------------------------
                 413,728    Internal nodes only
idx1             413,728    Internal nodes and record versions
                 413,728    Internal nodes and leaf nodes
-----    ---------------    ----------------------------------
               1,438,418    Internal nodes only
Total          1,438,418    Internal nodes and record versions
               1,438,418    Internal nodes and leaf nodes
For more information, see the DbCacheSize javadoc: http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/util/DbCacheSize.html
The cache sizes are calculated as follows:
Cache size of table
java -jar KVHOME/lib/je.jar DbCacheSize -records 10000 -key 8 -data 47 -outputproperties -replicated <JE properties...>
The parameters are as follows:
Record number: 10000
Primary key size: 8
Data size: 47
Cache size of idx1
java -jar KVHOME/lib/je.jar DbCacheSize -records 10000 -key 7 -data 8 -outputproperties -replicated <JE properties...> -duplicates
The parameters are as follows:
Record number: 10000
Index key size: 7
Data size: 8. The primary key size is used here, since the data of a secondary index is the primary key.
Use -duplicates for index.
Total size = cache size of table + cache size of idx1.
Calculates the cache size with a key prefix size for idx1.
kv-> table size -name user -json '{"id":1, "address":"Oracle Building ZPark BeiJing China", "zip_code":"100000"}' -rows 10000 -index idx1 -keyprefix 3
=== Key and Data Size ===
Name                 Number of Bytes
-----------------    ---------------
Primary Key                        8
Data                              47
Index Key of idx1                  7
=== Environment Cache Overhead ===
16,798,797 minimum bytes
=== Database Cache Sizes ===
Name     Number of Bytes    Description
-----    ---------------    ----------------------------------
               1,024,690    Internal nodes only
Table          1,024,690    Internal nodes and record versions
               1,024,690    Internal nodes and leaf nodes
-----    ---------------    ----------------------------------
                 413,691    Internal nodes only
idx1             413,691    Internal nodes and record versions
                 413,691    Internal nodes and leaf nodes
-----    ---------------    ----------------------------------
               1,438,381    Internal nodes only
Total          1,438,381    Internal nodes and record versions
               1,438,381    Internal nodes and leaf nodes
For more information, see the DbCacheSize javadoc: http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/util/DbCacheSize.html
When a key prefix size is provided for idx1, the idx1 cache size is calculated like this:
java -jar KVHOME/lib/je.jar DbCacheSize -records 10000 -key 7 -data 8 -keyprefix 3 -outputproperties -replicated <JE properties...> -duplicates
The above examples show that the cache size of idx1 is 413,691 bytes, which is smaller than the 413,728 bytes of case 2. For more information about the usage of keyprefix, see the JE DbCacheSize documentation: http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/util/DbCacheSize.html
time command [sub-command]
The time command runs the specified command with the given arguments, and prints the elapsed time of the execution.
For example, to display the time taken to retrieve records and write them to a file:
kv-> time get -all -file ./data.out
209 Records returned.
Wrote value to file ./data.out.
Time: 265 ms.
For example, to display the time taken to delete all existing keys:
kv-> time delete -all
210 Keys deleted starting at root
Time: 265 ms.
Encapsulates commands that manipulate store topologies. Examples are redistribution/rebalancing of nodes or changing the replication factor. Topologies are created and modified using this command. They are then deployed by using the plan deploy-topology command. For more information, see plan deploy-topology. The subcommands are as follows:
topology change-repfactor -name <name> -pool <pool name> -zn <id> | -znname <name> -rf <replication factor>
Modifies the topology to change the replication factor of the specified zone to a new value. The replication factor may not be decreased at this time.
For more information on modifying the replication factor, see Increase Replication Factor.
topology clone -from <from topology> -name <to topology>
or
topology clone -current -name <to topology>
Clones an existing topology so as to create a new candidate topology to be used for topology change operations.
topology create -name <candidate name> -pool <pool name> -partitions <num>
Creates a new topology with the specified number of partitions using the specified storage pool.
For more information on creating the first topology candidate, see Make the Topology Candidate.
topology preview -name <name> [-start <from topology>]
Describes the actions that would be taken to transition from the starting topology to the named, target topology. If -start is not specified, the current topology is used. This command should be used before deploying a new topology.
topology rebalance -name <name> -pool <pool name> [-zn <id> | -znname <name>]
Modifies the named topology to create a balanced topology. If the optional -zn flag is used, only Storage Nodes from the specified zone are used for the operation.
For more information on balancing a non-compliant topology, see Balance a Non-Compliant Topology.
topology redistribute -name <name> -pool <pool name>
Modifies the named topology to redistribute resources to more efficiently use those available.
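For example, a sketch of a typical workflow after adding new Storage Nodes: clone the current topology, redistribute it, preview the changes, and then deploy it. The candidate name is hypothetical, and AllStorageNodes is assumed to be an existing Storage Node pool:
kv-> topology clone -current -name newTopo
kv-> topology redistribute -name newTopo -pool AllStorageNodes
kv-> topology preview -name newTopo
kv-> plan deploy-topology -name newTopo -wait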
For more information on redistributing resources to enhance write throughput, see Increase Data Distribution.
topology validate [-name <name>]
Validates the specified topology. If no topology is specified, the current topology is validated. Validation generates violations and notes.
Violations are issues that can cause problems and should be investigated.
Notes are informational and highlight configuration oddities that can be potential issues or may be expected.
For more information, see Validate the Topology Candidate.
Toggles the global verbosity setting. This property can also be set on a per-command basis using the -verbose flag.
Encapsulates commands that check various parameters of the store. The subcommands are as follows:
verify configuration [-silent] [-json]
Verifies the store configuration by iterating the components and checking their state against that expected in the Admin database. This call may take a while on a large store.
The -json option specifies that output should be displayed in JSON format.
The -silent option suppresses verbose verification messages that are displayed as the verification is proceeding. Instead, only the initial startup messages and the final verification message are displayed. This option has no effect when the -json option is specified.
verify prerequisite [-silent] [-sn snX]*
Verifies that the storage nodes are at or above the prerequisite software version needed to upgrade to the current version. This call may take a while on a large store.
As part of the verification process, this command displays the components which do not meet the prerequisites or cannot be contacted. It also checks for illegal downgrade situations where the installed software is of a newer minor release than the current version.
When using this command, the current version is the version of the software running the command line interface.
Use the -sn option to specify the storage nodes that you want to verify. If no storage nodes are specified, all the nodes in the store are checked.
The -silent option suppresses verbose verification messages that are displayed as the verification is proceeding. Instead, only the initial startup messages and the final verification message are displayed.
verify upgrade [-silent] [-sn snX]*
Verifies that the storage nodes (and their managed components) are at or above the current version. This call may take a while on a large store.
As part of the verification process, this command displays the components which have not yet been upgraded or cannot be contacted.
When using this command, the current version is the version of the software running the command line interface.
Use the -sn option to specify the storage nodes that you want to verify. If no storage nodes are specified, all the nodes in the store are checked.
The -silent option suppresses verbose verification messages that are displayed as the verification is proceeding. Instead, only the initial startup messages and the final verification message are displayed.