This chapter provides information about the SQL statements available in TimesTen.
SQL statements are generally considered to be either data manipulation language (DML) statements or data definition language (DDL) statements.
DML statements modify the data in database objects. INSERT, UPDATE, and DELETE are examples of DML statements. The SELECT statement retrieves data from one or more tables or views.
DDL statements modify the database schema. CREATE TABLE and DROP TABLE are examples of DDL statements.
In addition to an alphabetical listing of all statements, this chapter also contains:
Table 6-1, "SQL statements supported in TimesTen" shows a summary of the SQL statements in TimesTen. The second column indicates if the statement is supported in TimesTen Scaleout. Every statement except ALTER SEQUENCE
is supported in TimesTen Classic.
Table 6-1 SQL statements supported in TimesTen
SQL statement | Supported in TimesTen Scaleout? |
---|---|
A comment can appear between keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:
Begin the comment with a slash and an asterisk (/*). Proceed with the text of the comment, which can span multiple lines. End the comment with an asterisk and a slash (*/). You do not need to separate the opening and terminating characters from the text by a space or line break.
Begin the comment with two hyphens (--). Proceed with the text of the comment, which cannot extend to a new line. End the comment with a line break.
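For illustration, the two comment styles might look like this in a simple query against the built-in DUAL table:
Command> SELECT /* block comment between keywords */ 1 FROM dual;
Command> SELECT 1 FROM dual; -- comment that ends at the line break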
Optimizer hints are instructions that are passed to the TimesTen query optimizer. The optimizer considers these hints when choosing the best execution plan for your query. Most of the hints are supported both in TimesTen Scaleout and in TimesTen Classic. There are also hints that are supported only in TimesTen Scaleout. See "Optimizer hints supported in TimesTen Scaleout only" for information.
TimesTen supports three levels of optimizer hints:
Statement level optimizer hints: When specified, the optimizer considers the hint for the particular statement. See "Statement level optimizer hints" for details.
Transaction level optimizer hints: When specified (by calling the appropriate built-in procedure), the optimizer considers the hint for the entire transaction. See "Use optimizer hints to modify the execution plan" in the Oracle TimesTen In-Memory Database Operations Guide.
Connection level optimizer hints: When specified, the optimizer considers the hint for the entire connection. See "Use optimizer hints to modify the execution plan" in the Oracle TimesTen In-Memory Database Operations Guide and "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for details.
The order of precedence for optimizer hints is statement level hints, transaction level hints and then connection level hints. Table 6-2, "Summary of statement, transaction, and connection level optimizer hints" provides a summary of the statement, transaction, and connection level optimizer hints.
Table 6-2 Summary of statement, transaction, and connection level optimizer hints
Statement level optimizer hint | Transaction level optimizer hint | Connection level optimizer hint |
---|---|---|
You specify the hint within comment syntax immediately after the SQL VERB. | You specify the hint by calling the appropriate built-in procedure (for example, ttOptSetFlag). | You specify the hint in the OptimizerHint connection attribute. |
The hint is scoped to the SQL statement. | The hint is scoped to the transaction. | The hint is scoped to the connection. |
The optimizer considers the hint for the statement only. | The optimizer considers the hint for all statements in the transaction. | The optimizer considers the hint for all statements in the connection. |
If you specify the hint in a transaction in which transaction level or connection level optimizer hints are specified, the statement level optimizer hint overrides the transaction level hint or the connection level hint for the SQL statement. After TimesTen executes the SQL statement, the transaction level or connection level hint takes effect again. | The hint is in effect for the duration of the transaction. If you specify a statement level optimizer hint in a SQL statement, the statement level optimizer hint is in effect for the statement and the optimizer does not use the transaction level hint for the statement. After TimesTen executes the statement, the original transaction level optimizer hint remains in effect for the duration of the transaction. A hint specified at this level overrides the same hint specified at the connection level. | The hints are in effect for the duration of the connection. The order of precedence is statement level, transaction level, and then connection level. |
You use statement level optimizer hints if you want to influence the optimizer for a specific statement. You must specify the hint for each statement in which you want to influence the optimizer, which can result in multiple alterations to your statements. | You use transaction level optimizer hints to influence the optimizer for all statements in a transaction. You do not have to specify a hint for each statement; the hint applies to all statements in the transaction. The hint can be overridden by specifying the hint at the statement level. | You use the connection level optimizer hint to influence the optimizer for all statements in the connection. The hint can be overridden by specifying the hint at the transaction or at the statement level. |
Statement level optimizer hints are comments in a SQL statement that pass instructions to the TimesTen query optimizer. The optimizer considers these hints when choosing the best execution plan for your query. It analyzes the SQL statements and generates a query plan which is then used by the SQL execution engine to execute the query and return the data.
See "Use optimizer hints to modify the execution plan" in Oracle TimesTen In-Memory Database Operations Guide for information about statement level optimizer hints.
A SQL statement can have one comment that includes one or more statement level optimizer hints.
The statements in which the hints are supported vary:
TT_CommitDMLOnSuccess is supported in the DELETE, INSERT, and UPDATE statements. It is also valid in the INSERT...SELECT statement and must follow the SELECT keyword. This hint is supported in TimesTen Scaleout only.
The TT_GridQueryExec and TT_PartialResult hints are supported in the SELECT, INSERT...SELECT, and CREATE TABLE...AS SELECT SQL statements only, and these hints must follow the SELECT keyword (see the example following this list). These hints are supported in TimesTen Scaleout only.
The remaining hints are supported in the DELETE, INSERT, MERGE, SELECT, UPDATE, INSERT...SELECT, and CREATE TABLE...AS SELECT SQL statements, and these hints must follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword.
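For example, in an INSERT...SELECT statement a Scaleout-only hint such as TT_GridQueryExec follows the SELECT keyword (the table names t1 and t2 are hypothetical):
Command> INSERT INTO t1 SELECT /*+TT_GridQueryExec(GLOBAL)*/ * FROM t2;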
Table 6-3, "Placement of statement level hints in SQL statements" shows the proper placement of hints in a SQL statement.
You embed statement level optimizer hints in comment syntax. TimesTen supports hints in comments that span one line and in comments that span more than one line. If the comment that contains the hint spans one or more lines, use the /*+...*/ comment syntax. If the comment spans only one line, you can use the --+ comment syntax.
Syntax:
SQL VERB {/*+ [CommentText] hint [{hint|CommentText} [...]] */ |
          --+ [CommentText] hint [{hint|CommentText} [...]] }
hint::= ScaleoutHint | JoinOrderHint | IndexHint | FlagHint
ScaleoutHint::= TT_CommitDMLOnSuccess({0|1}) | TT_GridQueryExec({LOCAL|GLOBAL}) | TT_PartialResult(0|1)
JoinOrderHint::= TT_JoinOrder (CorrelationName CorrelationName [...])
IndexHint::= TT_Index (CorrelationName,IndexName,{0|1} [;...])
FlagHint::= FlagName (0|1)
FlagName::= TT_BranchAndBound | TT_DynamicLoadEnable | TT_DynamicLoadErrorMode |
            TT_FastPrepare | TT_FirstRow | TT_ForceCompile | TT_GenPlan |
            TT_HashGb | TT_HashScan | TT_IndexedOr | TT_MergeJoin |
            TT_NestedLoop | TT_NoRemRowIdOpt | TT_Range |
            TT_Rowid | TT_RowLock | TT_ShowJoinOrder | TT_TblLock | TT_TblScan |
            TT_TmpHash | TT_TmpRange | TT_TmpTable | TT_UseBoyerMooreStringSearch
Parameter | Description |
---|---|
SQL VERB |
SQL VERB refers to one of the keywords: DELETE, INSERT, MERGE, SELECT, or UPDATE. You embed a statement level optimizer hint in comment syntax, and if the comment syntax contains a statement level optimizer hint, the comment syntax must follow the SQL VERB. |
/*+ hint */ |
One or more hints that are embedded in comment syntax. The comment syntax can span one or more lines. The plus sign (+) denotes the start of a statement level optimizer hint. Make sure there is no space between the star (*) and the plus sign (+). |
--+ hint |
One or more hints that are embedded in comment syntax. The comment syntax can only span one line. The plus sign (+) denotes the start of a statement level optimizer hint. Make sure there is no space between the dash (-) and the plus sign (+). |
hint |
A statement level optimizer hint. A SQL statement supports one or more statement level optimizer hints as one comment string. For one SQL statement, you can specify one comment that contains one or more hints, and that comment must follow a DELETE, INSERT, MERGE, SELECT, or UPDATE keyword (or, for TT_GridQueryExec and TT_PartialResult, the SELECT keyword). TT_CommitDMLOnSuccess must follow a DELETE, INSERT, or UPDATE keyword and, in the INSERT...SELECT statement, it must follow the SELECT keyword. If you specify more than one hint within the comment, make sure there is a space between the hints. Statement level optimizer hints are scoped to the SQL statement and have per query semantics. |
CommentText |
Text within a comment string. You can use both statement level optimizer hints and commenting text within one comment. Make sure to include a space between the hint and the commenting text. |
FlagHint |
FlagHint refers to statement level optimizer flags that you enable or disable to influence the execution plan of the TimesTen query optimizer. These flags map to the flags used in the ttOptSetFlag built-in procedure.
Statement level optimizer hint flags are in effect for the statement only whereas transaction level optimizer hint flags are in effect for the duration of your transaction. |
ScaleoutHint |
ScaleoutHint refers to the TT_CommitDMLOnSuccess statement level hint as well as the TT_GridQueryExec and the TT_PartialResult statement level optimizer hints. These hints are supported in TimesTen Scaleout only.
SELECT /*+TT_GridQueryExec(LOCAL)*/ COUNT(*), elementId# FROM t GROUP BY elementId#;
SELECT /*+TT_GridQueryExec(GLOBAL)*/ COUNT(*), elementId# FROM t GROUP BY elementId#;
SELECT /*+TT_PartialResult(0)*/ COUNT(*), elementId# FROM t GROUP BY elementId#;
SELECT /*+TT_PartialResult(1)*/ COUNT(*), elementId# FROM t GROUP BY elementId#; |
TT_BranchAndBound |
Flag that maps to the flag BranchAndBound in the ttOptSetFlag built-in procedure. |
TT_DynamicLoadEnable |
Flag that maps to the flag DynamicLoadEnable in the ttOptSetFlag built-in procedure. |
TT_DynamicLoadErrorMode |
Flag that maps to the flag DynamicLoadErrorMode in the ttOptSetFlag built-in procedure. |
TT_FastPrepare |
Flag that maps to the flag FastPrepare in the ttOptSetFlag built-in procedure. Default is 1. |
TT_FirstRow |
Flag that maps to the flag FirstRow in the ttOptSetFlag built-in procedure. |
TT_ForceCompile |
Flag that maps to the flag ForceCompile in the ttOptSetFlag built-in procedure. |
TT_GenPlan |
Flag that maps to the flag GenPlan in the ttOptSetFlag built-in procedure. |
TT_HashGb |
Flag that maps to the flag HashGb in the ttOptSetFlag built-in procedure. |
TT_HashScan |
Flag that maps to the flag Hash in the ttOptSetFlag built-in procedure. |
TT_IndexedOr |
Flag that maps to the flag IndexedOr in the ttOptSetFlag built-in procedure. |
TT_MergeJoin |
Flag that maps to the flag MergeJoin in the ttOptSetFlag built-in procedure. |
TT_NestedLoop |
Flag that maps to the flag NestedLoop in the ttOptSetFlag built-in procedure. |
TT_NoRemRowIdOpt |
Flag that maps to the flag NoRemRowIdOpt in the ttOptSetFlag built-in procedure. |
TT_Range |
Flag that maps to the flag Range in the ttOptSetFlag built-in procedure. |
TT_Rowid |
Flag that maps to the flag Rowid in the ttOptSetFlag built-in procedure. |
TT_RowLock |
Flag that maps to the flag Rowlock in the ttOptSetFlag built-in procedure. |
TT_ShowJoinOrder |
Flag that maps to the flag ShowJoinOrder in the ttOptSetFlag built-in procedure. |
TT_TblLock |
Flag that maps to the flag TblLock in the ttOptSetFlag built-in procedure. |
TT_TblScan |
Flag that maps to the flag Scan in the ttOptSetFlag built-in procedure. |
TT_TmpHash |
Flag that maps to the flag TmpHash in the ttOptSetFlag built-in procedure. |
TT_TmpRange |
Flag that maps to the flag TmpRange in the ttOptSetFlag built-in procedure. |
TT_TmpTable |
Flag that maps to the flag TmpTable in the ttOptSetFlag built-in procedure. |
TT_UseBoyerMooreStringSearch |
Flag that maps to the flag UseBoyerMooreStringSearch in the ttOptSetFlag built-in procedure. |
JoinOrderHint ::= TT_JoinOrder (CorrelationName CorrelationName [...]) |
JoinOrderHint refers to the syntax for the TT_JoinOrder statement level optimizer hint. The TT_JoinOrder hint instructs the optimizer to join your tables in a specified order. The join order is in effect for the statement only. Specify the correlation names of the tables in the order in which you want them joined. For example, to join the tables with correlation names EMPS and DEPTS in that order:
Command> SELECT /*+ TT_JoinOrder (EMPS DEPTS)*/...
You can execute the ttOptSetOrder built-in procedure to specify a join order at the transaction level. |
IndexHint ::= TT_Index (CorrelationName, IndexName, {0|1} [; ...]) |
IndexHint refers to the syntax for the TT_Index statement level optimizer hint. Use the TT_Index hint to direct the optimizer to use or not use an index for your table. The index hint is in effect for the statement only. Specify a value of 0 to ask the optimizer not to consider the index. Specify a value of 1 to ask the optimizer to consider the index. For example, to direct the optimizer to use the index EMP_NAME_IX for the correlation name E:
Command> SELECT /*+ TT_INDEX (E,EMP_NAME_IX,1) */ ...
Use a semicolon (;) to separate multiple index specifications within one TT_Index hint. |
Note:
For descriptions of the flags discussed in the preceding table, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference.
Embed statement level optimizer hints in comment syntax. Begin the comment with either /* or --. Follow the beginning comment syntax with a plus sign (+). The plus sign (+) signals TimesTen to interpret the comment as a list of hints. The plus sign (+) must follow immediately after the comment delimiter (for example, after /* or after --). No space is permitted between the comment delimiter and the plus sign (+).
In the following example, there is a space between the star (*) and the plus sign (+), so the hint is ignored:
Command> SELECT /* + TT_TblScan (1) This hint is ignored because there is a space between the star (*) and the plus (+) sign. */ ...
A hint is one of the statement level optimizer hints supported by TimesTen. There can be a space between the plus sign (+) and the hint. If the comment contains multiple hints, separate the hints by at least one space. For example, to specify two hints on one line:
Command> SELECT --+ TT_MergeJoin (0) TT_NestedLoop (1) ...
You can intersperse commenting text with hints in a comment. For example,
Command> SELECT /*+ TT_HashScan (1) This demonstrates a hint followed by a comment string. */ ...
TimesTen ignores hints and does not return an error if:
Your hint does not follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword (or, for TT_GridQueryExec or TT_PartialResult, the SELECT keyword). TT_CommitDMLOnSuccess must follow the DELETE, INSERT, or UPDATE keyword and, for INSERT...SELECT, it must follow the SELECT keyword.
Your hint contains misspellings or syntax errors. If you have hints that are within the same comment and some hints are correct syntactically and some hints are incorrect syntactically, TimesTen ignores the incorrect hints and accepts the correct hints.
You use either the TT_JoinOrder or TT_Index hint and you do not supply a closing parenthesis; in this case, the remainder of the hint string is ignored.
For hints that conflict with each other, TimesTen uses the rightmost hint in the comment. For example, if the comment string is /*+TT_TblScan (0)...TT_TblScan (1) */
, the rightmost hint, TT_TblScan(1)
, is used.
Statement level optimizer hints override conflicting transaction level optimizer hints. For example, if you call ttOptSetFlag to enable the Range flag, and then issue a SQL query in which the statement level optimizer flag TT_Range is disabled, TimesTen disables the range flag for that query. After the query is executed, the original range flag setting that was in place in the transaction before the query was executed remains in effect for the duration of the transaction. For more information, see Example 6-1, "Using statement level optimizer hints for a SELECT query". The TT_GridQueryExec, TT_PartialResult, and TT_CommitDMLOnSuccess hints are not supported at the transaction level.
Do not use statement level optimizer hints in a subquery.
The TimesTen query optimizer does not recognize statement level optimizer hints for passthrough statements. TimesTen passes the SQL text for passthrough statements to the Oracle database and the SQL text is processed according to the SQL rules of the Oracle database. Passthrough statements are not supported in TimesTen Scaleout.
SQL statements that support statement level optimizer hints
You can specify statement level optimizer hints in SQL statements. Not all hints are supported in all statements. You must specify the hint within comment syntax and the comment syntax must immediately follow the SQL VERB (for example, SELECT /*+ hint */ ...). Table 6-3, "Placement of statement level hints in SQL statements" shows the correct placement of the statement level hint. It also indicates if a hint is not supported in the statement.
Table 6-3 Placement of statement level hints in SQL statements
SQL statement | Placement of hint |
---|---|
Use optimizer hints to influence the TimesTen query optimizer in determining the choice of the execution plan for your query.
TT_GridQueryExec, TT_PartialResult, and TT_CommitDMLOnSuccess are supported at the connection and statement levels only. This section is not valid for these hints.
To view transaction level optimizer hint settings, execute the ttOptGetFlag built-in procedure. For more information on the ttOptGetFlag built-in procedure, see "ttOptGetFlag" in Oracle TimesTen In-Memory Database Reference.
For TT_CommitDMLOnSuccess examples, see "TT_CommitDMLOnSuccess optimizer hint".
For TT_GridQueryExec and TT_PartialResult examples:
See "TT_GridQueryExec" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.
See "TT_PartialResult" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.
The following examples illustrate usage of statement level and transaction level optimizer hints. The TimesTen optimizer is a cost-based query optimizer and generates what it estimates to be the best execution plan for your statement. This plan can differ from release to release. The plan is based on the indexes that exist on the referenced tables as well as the column and table statistics that are available. When you recompute statistics or change indexes, the TimesTen optimizer may change the execution plan based on the recomputed statistics and index changes. Because the execution plan may vary, these examples are included for demonstration purposes only. Examples include:
Example 6-1, "Using statement level optimizer hints for a SELECT query"
Example 6-4, "Using the statement level optimizer hint TT_INDEX"
Example 6-1 Using statement level optimizer hints for a SELECT query
View the execution plan for a query. Then use statement level optimizer hints to influence the optimizer to choose a different execution plan. Consider the query:
Command> SELECT r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id
> ORDER BY c.region_id;
Use the ttIsql
EXPLAIN
command to view the plan generated by the optimizer. Note:
The optimizer performs two range scans using table level locking for both scans.
The optimizer uses the MergeJoin
operation to join the two tables.
Command> EXPLAIN SELECT r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: R.REGION_ID >= C.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL>
Now use statement level optimizer hints to direct the optimizer to perform the scans using row level locking and to use a NestedLoop
operation to join the tables. Set autocommit to on to illustrate that the autocommit setting has no effect because statement level optimizer hints are scoped to the SQL statement.
Command> autocommit on; Command> EXPLAIN SELECT /*+ TT_RowLock (1), TT_TblLock (0), TT_MergeJoin (0), TT_NestedLoop (1) */ r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: RowLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: RowLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 4 LEVEL: 1 OPERATION: OrderBy TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Prepare the query again without statement level optimizer hints. The optimizer reverts back to the original execution plan because statement level optimizer hints are scoped to the SQL statement.
Command> EXPLAIN SELECT r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: R.REGION_ID >= C.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL>
Example 6-2 Using on and off hinting
This example illustrates the importance of directing the optimizer to specifically enable or disable hints that perform a similar function. For example, the hash and range hints direct the optimizer to use either a hash or range access path for the table. In order to ensure the optimizer chooses the specific access path, enable one hint and disable all other related hints.
Create a table and create a hash index on the first column of the table and a range index on the second column.
Command> CREATE TABLE test (col1 NUMBER, col2 NUMBER); Command> CREATE HASH INDEX h_index ON test (col1); Command> CREATE INDEX hr_index ON test (col2);
Set autocommit to off and execute the built-in procedure, ttOptGetFlag
, to review the current transaction level optimizer hint settings for the transaction. A setting of 1 means the flag is enabled.
Command> autocommit off; Command> CALL ttOptGetFlag ('Hash'); < Hash, 1 > 1 row found. Command> CALL ttOptGetFlag ('Scan'); < Scan, 1 > 1 row found.
Use the ttIsql
EXPLAIN
command to review the plan for a SELECT
query using a WHERE
clause and dynamic parameters. The optimizer uses a hash scan.
Command> EXPLAIN SELECT * FROM test WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2
Use the statement level optimizer hint TT_Range
to direct the optimizer to use a range scan. Note that the optimizer ignores the TT_Range
hint and uses a hash scan because you did not direct the optimizer to disable the hash scan. Alter the statement and direct the optimizer to use a range scan and not use a hash scan. To accomplish this, enable the statement level optimizer hint TT_Range
and disable the statement level optimizer hint TT_HashScan
. The optimizer no longer ignores the TT_Range
hint.
Command> EXPLAIN SELECT --+ TT_Range (1) Single line comment to set TT_Range
> * FROM TEST WHERE col1 = ? and col2 = ?;
Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2
Command> EXPLAIN SELECT /*+ TT_Range (1) TT_HashScan (0) Multiple line comment to enable TT_Range and disable TT_HashScan */ * FROM TEST WHERE col1 = ? and col2 = ?;
Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkRangeScan TBLNAME: TEST IXNAME: HR_INDEX INDEXED CONDITION: TEST.COL2 = _QMARK_2 NOT INDEXED: TEST.COL1 = _QMARK_1
Prepare the query again without using statement level optimizer hints and without issuing a commit or rollback. The optimizer uses the transaction level optimizer hints settings that were in effect before executing the query. The optimizer uses transaction level optimizer hints because statement level optimizer hints are scoped to the SQL statement.
Command> EXPLAIN SELECT * FROM TEST WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2
Example 6-3 Using TT_JoinOrder to specify a join order
Use the statement level optimizer hint TT_JoinOrder
to direct the optimizer to use a specific join order. First use a transaction level optimizer hint to direct the optimizer to use a specific join order for the transaction. Then use a statement level optimizer hint to direct the optimizer to change the join order for the statement only.
Command> CALL ttOptSetOrder ('e d j'); Command> EXPLAIN SELECT * FROM employees e, departments d, job_history j WHERE e.department_id = d.department_id AND e.hire_date = j.start_date; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 5 LEVEL: 1 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Use the statement level optimizer hint, TT_JoinOrder
, to direct the optimizer to override the transaction level join order optimizer hint for the SQL statement only.
Command> EXPLAIN SELECT --+ TT_JoinOrder (e j d)
> * FROM employees e, departments d, job_history j WHERE e.department_id = d.department_id AND e.hire_date = j.start_date;
Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 3 LEVEL: 2 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 5 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL>
Prepare the query again to verify that the join order that was in effect for the transaction remains in effect.
Command> EXPLAIN SELECT * FROM employees e, departments d, job_history j WHERE e.department_id = d.department_id AND e.hire_date = j.start_date; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 5 LEVEL: 1 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Example 6-4 Using the statement level optimizer hint TT_INDEX
Perform a query on the employees
table that uses the index, emp_name_ix
. Then use the statement level optimizer hint TT_INDEX
to direct the optimizer not to use this index. First run the ttIsql
command, indexes
, to view the indexes for the employees
table.
Command> indexes employees; Indexes on table TESTUSER.EMPLOYEES: EMPLOYEES: unique range index on columns: EMPLOYEE_ID (referenced by foreign key index JHIST_EMP_FK on table TESTUSER.JOB_HISTORY) TTUNIQUE_0: unique range index on columns: EMAIL EMP_DEPT_FK: non-unique range index on columns: DEPARTMENT_ID (foreign key index references table TESTUSER.DEPARTMENTS(DEPARTMENT_ID)) EMP_JOB_FK: non-unique range index on columns: JOB_ID (foreign key index references table TESTUSER.JOBS(JOB_ID)) EMP_NAME_IX: non-unique range index on columns: LAST_NAME FIRST_NAME 5 indexes found. 5 indexes found on 1 table.
Use the ttIsql
command, EXPLAIN
, to view the execution plan for a SELECT
query on the employees
table that uses a WHERE
clause on the last_name
column.
Command> EXPLAIN SELECT e.first_name FROM employees e WHERE e.last_name BETWEEN 'A' AND 'B'; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_NAME_IX INDEXED CONDITION: E.LAST_NAME >= 'A' AND E.LAST_NAME <= 'B' NOT INDEXED: <NULL>
Use the statement level optimizer hint, TT_INDEX
, to direct the optimizer not to use the index, emp_name_ix
.
Command> EXPLAIN SELECT --+ TT_INDEX (E,EMP_NAME_IX,0)
> e.first_name FROM employees e WHERE e.last_name BETWEEN 'A' AND 'B';
Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMPLOYEES INDEXED CONDITION: <NULL> NOT INDEXED: E.LAST_NAME <= 'B' AND E.LAST_NAME >= 'A'
These optimizer hints are only supported in TimesTen Scaleout. They are valid at the statement and at the connection levels.
See "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on hints at the connection level and "Statement level optimizer hints" in this book for information on statement level optimizer hints.
The TT_GridQueryExec
optimizer hint enables you to specify whether the query should return data from the local element or from all elements, including the elements in a replica set when K-safety is set to 2.
If you do not specify this hint, the query is executed in one logical data space. It is neither local nor global. This means that exactly one full copy of the data is used to compute the query. Use this hint in cases where obtaining some result is more important than obtaining the correct result (for example, where one or more replica sets are unavailable). Valid options for this hint are LOCAL
and GLOBAL
.
For more information, see:
"TT_GridQueryExec" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for information on using this hint.
"OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using this hint at the connection level.
"Statement level optimizer hints" for information on using this hint at the statement level.
Example 6-5 Use hint to determine elementId#, replicasetId#, and dataspaceId#
You can use the TT_GridQueryExec(GLOBAL)
hint on the dual
table to determine the ids of all elements, replica sets, and dataspaces.
Command> SELECT /*+TT_GridQueryExec(GLOBAL)*/ elementId#, replicasetId#, dataspaceId# FROM dual ORDER BY elementId#,replicasetId#,dataspaceId#; ELEMENTID#, REPLICASETID#, DATASPACEID# < 1, 1, 1 > < 2, 1, 2 > < 3, 2, 1 > < 4, 2, 2 > < 5, 3, 1 > < 6, 3, 2 > 6 rows found.
See "TT_GridQueryExec" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more examples.
The TT_PartialResult optimizer hint enables you to specify whether the query returns partial results if some data is not available.
Use TT_PartialResult(1) to direct the query to return partial results if all elements in a replica set are unavailable.
Use TT_PartialResult(0) to direct the query to return an error if the required data is not available because all elements in a replica set are unavailable. If at least one element from each replica set is available, or the data required by the query is available, the query result is returned correctly without error.
The default is TT_PartialResult(0).
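For illustration (the table name t is hypothetical), the following query asks TimesTen Scaleout to return whatever rows are available even if all elements in a replica set are down:
Command> SELECT /*+TT_PartialResult(1)*/ * FROM t;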
For more information, see:
"TT_PartialResult" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for information on using this hint and for examples.
"OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using this hint at the connection level.
"Statement level optimizer hints" for information on using this hint at the statement level.
Use the TT_CommitDMLOnSuccess hint to enable or disable a commit operation as part of DML execution.
At the statement level, TT_CommitDMLOnSuccess
is used in a DML statement (DELETE
, INSERT
, INSERT... SELECT
, and UPDATE
) to enable or disable the commit behavior of the transaction when the DML operation is executed. For the INSERT...SELECT
statement, specify TT_CommitDMLOnSuccess
after the SELECT
keyword.
TT_CommitDMLOnSuccess
is valid in DML operations only. It is not valid for queries or DDL operations and, if specified in a non-DML statement, is ignored and no error is returned. See "Statement level optimizer hints" for information on the syntax and semantics.
At the connection level, TT_CommitDMLOnSuccess
is also used to enable or disable the commit behavior of the transaction when a DML operation is executed. However, you specify TT_CommitDMLOnSuccess
as a parameter to the OptimizerHint
connection attribute. See "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using TT_CommitDMLOnSuccess
at the connection level.
At both levels, valid options are 0
and 1
. If you do not specify TT_CommitDMLOnSuccess
, there are no changes to the normal commit behavior. The order of precedence is statement level followed by connection level.
The TT_CommitDMLOnSuccess
commit behavior at the statement level is:
TT_CommitDMLOnSuccess(1)
commits the current transaction if the DML statement in which the hint is specified is executed successfully. If there are open cursors at commit time, all cursors are closed and the transaction is committed. If the statement with this hint fails, the transaction is not committed.
TT_CommitDMLOnSuccess(0)
disables the commit of the current transaction if the DML statement in which the hint is specified is executed successfully.
Table 6-4, "TT_CommitDMLOnSuccess commit behavior: Autocommit 0" shows the commit behavior when not setting TT_CommitDMLOnSuccess
as well as setting TT_CommitDMLOnSuccess
to 0
and 1
at the statement and connection levels. The table shows the commit behavior when autocommit
is set to 0
.
Table 6-5, "TT_CommitDMLOnSuccess commit behavior: Autocommit 1" shows the commit behavior when not setting TT_CommitDMLOnSuccess
as well as setting TT_CommitDMLOnSuccess
to 0
and 1
at the statement and connection levels. The table shows the commit behavior when autocommit
is set to 1
.
Table 6-4 TT_CommitDMLOnSuccess commit behavior: Autocommit 0
 | Not set at connection level | Set to 0 at connection level | Set to 1 at connection level |
---|---|---|---|
Not set at statement level | | | |
Set to 0 at statement level | | | |
Set to 1 at statement level | | | |
Table 6-5 TT_CommitDMLOnSuccess commit behavior: Autocommit 1
 | Not set at connection level | Set to 0 at connection level | Set to 1 at connection level |
---|---|---|---|
Not set at statement level | | | |
Set to 0 at statement level | | | |
Set to 1 at statement level | | | |
For more information, see:
"Using the TT_CommitDMLOnSuccess hint" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for additional information.
"OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using TT_CommitDMLOnSuccess
at the connection level.
"Statement level optimizer hints" for information on the syntax for TT_CommitDMLOnSuccess
at the statement level.
Example 6-6 Setting TT_CommitDMLOnSuccess to 1
This example first creates the mytable
table. It then sets autocommit
to 0
and inserts a row into the mytable
table. A second connection (conn2
) connects to the database and issues a SELECT
query against the mytable
table. The query returns 0 rows. The ttIsql
use
command returns the application to the first connection (database1
) and issues a second INSERT
operation, setting TT_CommitDMLOnSuccess
to 1
at the statement level. A second ttIsql
use
command returns the application to the conn2
connection. A SELECT
query shows two rows have been inserted into the mytable
table. This example illustrates that issuing TT_CommitDMLOnSuccess(1)
commits the transaction after the successful execution of the second INSERT
operation (which set the hint).
Command> CREATE TABLE mytable (col1 TT_INTEGER, col2 VARCHAR2(4000)); Command> autocommit 0; Command> INSERT INTO mytable VALUES (10, 'ABC'); 1 row inserted.
Establish a second connection (conn2
)
Command> connect as conn2; Using the connection string of connection database1 to connect... ... (Default setting AutoCommit=1)
Issue a SELECT
query and expect 0
rows due to autocommit
set to 0
.
conn2: Command> SELECT * FROM mytable; 0 rows found.
Return to the first connection (database1
) and issue an INSERT
operation with TT_CommitDMLOnSuccess
set to 1
.
conn2: Command> use database1; database1: Command> INSERT /*+TT_CommitDMLOnSuccess(1)*/ INTO mytable VALUES (10, 'ABC'); 1 row inserted.
Return to the second connection (conn2) and issue a SELECT query. Expect two rows due to the two INSERT statements. (The transaction is committed due to the TT_CommitDMLOnSuccess statement level hint set to 1 and the successful execution of the two INSERT operations.)
database1: Command> use conn2 conn2: Command> SELECT * FROM mytable; < 10, ABC > < 10, ABC > 2 rows found.
Example 6-7 Using TT_CommitDMLOnSuccess at connection level
This example first creates the mytable
table. It then uses PL/SQL to insert 1000 rows into the table. There is a second connection to the database (conn2
) and this connection connects with TT_CommitDMLOnSuccess
set to 1
at the connection level. Various operations are performed to illustrate the behavior of TT_CommitDMLOnSuccess
at both the statement and connection levels.
Command> CREATE TABLE mytable (col1 TT_INTEGER NOT NULL PRIMARY KEY, col2 VARCHAR2 (4000)); Command> BEGIN > FOR i in 1..1000 > LOOP > INSERT INTO mytable VALUES (i,i); > END LOOP; > END; > / PL/SQL procedure successfully completed.
Establish a second connection (conn2
) and connect setting TT_CommitDMLOnSuccess
at the connection level to 1
.
Command> CONNECT adding "OptimizerHint=TT_CommitDMLOnSuccess(1)" as conn2; Connection successful: ...
Set autocommit
to 0
and issue a DELETE
operation.
conn2: Command> autocommit 0; conn2: Command> DELETE FROM mytable WHERE col1=1000; 1 row deleted.
Return to the original connection (database1
) and issue a SELECT
query to see if the DELETE
operation was committed. The operation was committed due to the TT_CommitDMLOnSuccess
setting of 1
at the connection level.
conn2: Command> use database1; database1: Command> SELECT * FROM mytable WHERE col1=1000; 0 rows found.
Return to the second connection (conn2
) and issue an INSERT
operation. Then return to the original connection (database1
). The transaction containing the INSERT
operation was committed.
database1: Command> use conn2; conn2: Command> INSERT INTO mytable VALUES (1000,1000); 1 row inserted. conn2: Command> use database1 database1: Command> SELECT * FROM mytable WHERE col1=1000; < 1000, 1000 > 1 row found.
Return to the second connection (conn2
) and issue a DELETE
operation, followed by an INSERT
operation, and then a second INSERT
operation where TT_CommitDMLOnSuccess
is set to 0
at the statement level (the second INSERT
).
database1: Command> use conn2; conn2: Command> DELETE FROM mytable WHERE col1=1000; 1 row deleted. conn2: Command> INSERT INTO mytable VALUES (1001,1001); 1 row inserted. conn2: Command> INSERT /*+TT_CommitDMLOnSuccess(0)*/ INTO mytable VALUES (1002,1002); 1 row inserted.
Issue a SELECT
query and notice the results of the query. The one DELETE
operation and the two INSERT
operations were successful.
conn2: Command> SELECT * FROM mytable where col1 >= 1000; < 1001, 1001 > < 1002, 1002 > 2 rows found.
Return to the original connection (database1
) and issue the same SELECT
query. Observe that the one DELETE
statement and the first INSERT
operation were committed. This is due to the TT_CommitDMLOnSuccess
setting of 1
at the connection level. The second INSERT
statement was not committed due to the TT_CommitDMLOnSuccess
setting of 0
for this second INSERT
statement.
conn2: Command> use database1; database1: Command> SELECT * FROM mytable where col1 >= 1000; < 1001, 1001 > 1 row found.
Return to the second connection (conn2
) and issue a third INSERT
operation. Then issue a SELECT
query and observe the results.
database1: Command> use conn2; conn2: Command> INSERT INTO mytable VALUES (1003,1003); 1 row inserted. conn2: Command> SELECT * FROM mytable where col1 >= 1000 ORDER BY col1; < 1001, 1001 > < 1002, 1002 > < 1003, 1003 > 3 rows found.
Return to the original connection (database1
) and issue the same SELECT
query. Note the results are the same as in the conn2
connection. The transaction is committed due to the TT_CommitDMLOnSuccess
setting of 1 at the connection level and the successful execution of the second and third INSERT
operations.
conn2: Command> use database1 database1: Command> SELECT * FROM mytable where col1 >= 1000 ORDER BY col1; < 1001, 1001 > < 1002, 1002 > < 1003, 1003 > 3 rows found.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
You can change an active standby pair by:
Adding or dropping a subscriber database
Altering store attributes. Only the PORT and TIMEOUT attributes can be set for subscribers.
Including tables, sequences or cache groups in the replication scheme
Excluding tables, sequences or cache groups from the replication scheme
See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide.
ALTER ACTIVE STANDBY PAIR { SubscriberOperation | StoreOperation | InclusionOperation | NetworkOperation } [...]
Syntax for SubscriberOperation
:
{ADD | DROP } SUBSCRIBER FullStoreName
Syntax for StoreOperation
:
ALTER STORE FullStoreName SET StoreAttribute
Syntax for InclusionOperation
:
[{ INCLUDE | EXCLUDE }{TABLE [[Owner.]TableName [,...]]| CACHE GROUP [[Owner.]CacheGroupName [,...]]| SEQUENCE [[Owner.]SequenceName [,...]]} [,...]]
Syntax for NetworkOperation
:
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...]
Parameter | Description |
---|---|
ADD SUBSCRIBER FullStoreName |
Indicates a subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
DROP SUBSCRIBER FullStoreName |
Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
ALTER STORE FullStoreName SET StoreAttribute |
Indicates changes to the attributes of a database. Only the PORT and TIMEOUT attributes can be set for subscribers. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.
For information on |
FullStoreName |
The database to which the operation applies. This is the database file name specified in the DataStore attribute of the DSN description, without the path name. |
{INCLUDE|EXCLUDE}
|
Includes in or excludes from replication the tables, sequences or cache groups listed.
You cannot use the |
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Adds NetworkOperation to replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ADD ROUTE clause, each master database is a subscriber of the other master database and each read-only subscriber is a subscriber of both master databases.
Can be specified more than once. For |
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Drops NetworkOperation from replication scheme.
Can be specified more than once. For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. Valid for both |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required syntax of |
You must stop the replication agent before altering an active standby pair. The exceptions are for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL
and DDL_REPLICATION_ACTION
attributes, as described in "ALTER SESSION".
You may only alter the active standby pair replication scheme on the active database. See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide for more information.
You may not use ALTER ACTIVE STANDBY PAIR
when using Oracle Clusterware with TimesTen. See "Restricted commands and SQL statements" in Oracle TimesTen In-Memory Database Replication Guide for more information.
Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.
Use ADD SUBSCRIBER
FullStoreName
to add a subscriber to the replication scheme.
Use DROP SUBSCRIBER
FullStoreName
to drop a subscriber from the replication scheme.
Use the INCLUDE
or EXCLUDE
clause to include the listed tables, sequences or cache groups in the replication scheme or to exclude them from the replication scheme. Use one INCLUDE
or EXCLUDE
clause for each object type (table, sequence or cache group). The ALTER ACTIVE STANDBY
statement is not necessary for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL
and DDL_REPLICATION_ACTION
attributes, as described in "ALTER SESSION". However, if DDL_REPLICATION_LEVEL
is 2 or greater and DDL_REPLICATION_ACTION
="EXCLUDE
", use the INCLUDE
clause to include replicated objects into the replication scheme.
Do not use the EXCLUDE
clause for AWT cache groups.
When DDL_REPLICATION_LEVEL
is 2 or greater, the INCLUDE
clause can only be used with empty tables on the active database. The contents of the corresponding tables on the standby and any subscribers will be truncated before the table is added to the replication scheme.
Add a subscriber to the replication scheme.
ALTER ACTIVE STANDBY PAIR ADD SUBSCRIBER rep4;
Drop two subscribers from the replication scheme.
ALTER ACTIVE STANDBY PAIR DROP SUBSCRIBER rep3 DROP SUBSCRIBER rep4;
Alter the store attributes of the rep3
and rep4
databases.
ALTER ACTIVE STANDBY PAIR ALTER STORE rep3 SET PORT 23000 TIMEOUT 180 ALTER STORE rep4 SET PORT 23500 TIMEOUT 180;
Add a table, a sequence and two cache groups to the replication scheme.
ALTER ACTIVE STANDBY PAIR INCLUDE TABLE my.newtab INCLUDE SEQUENCE my.newseq INCLUDE CACHE GROUP my.newcg1, my.newcg2;
Add NetworkOperation
clause to active standby pair:
ALTER ACTIVE STANDBY PAIR ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1;
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The ALTER CACHE GROUP
statement enables changes to the state, interval and mode of AUTOREFRESH
.
Updates on the Oracle Database tables can be propagated back to the TimesTen cache group with the use of AUTOREFRESH
. AUTOREFRESH
can be enabled when the cache group is a user managed cache group or is defined as READONLY
with an AUTOREFRESH
clause.
Any values or states set by ALTER CACHE GROUP
are persistent. They are stored in the database and survive daemon and cache agent restarts.
For a description of cache group types, see "User managed and system managed cache groups".
No privilege is required for the cache group owner.
ALTER ANY CACHE GROUP
for another user's cache group.
This statement changes the AUTOREFRESH
mode of the cache group, which determines which rows are updated during an autorefresh operation:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH MODE {INCREMENTAL | FULL}
This statement changes the AUTOREFRESH
interval on the cache group:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S]}
This statement alters the AUTOREFRESH
state:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH STATE {ON | OFF | PAUSED}
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name of the cache group to be altered. |
AUTOREFRESH |
Indicates that changes to the Oracle Database tables should be automatically propagated to TimesTen. For details, see "AUTOREFRESH in cache groups". |
MODE |
Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle Database since the last propagation. If the FULL clause is specified or if there is neither FULL nor INCREMENTAL clause specified, TimesTen updates all rows in the cache with each autorefresh. The default mode is INCREMENTAL . |
INTERVAL
|
An integer value that specifies how often AUTOREFRESH should be scheduled, in minutes, seconds or milliseconds. The default value is five minutes. An autorefresh interval set to 0 milliseconds enables continuous autorefresh, where the next autorefresh cycle is scheduled immediately after the last autorefresh cycle has ended. See "AUTOREFRESH cache group attribute" in the Oracle TimesTen Application-Tier Database Cache User's Guide for more information.
If the specified interval is not long enough for an |
STATE |
Specifies whether AUTOREFRESH should be changed to on, off or paused. By default, the AUTOREFRESH STATE is ON . |
ON |
AUTOREFRESH is scheduled to occur at the specified interval. |
OFF |
A scheduled AUTOREFRESH is canceled, and TimesTen does not try to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, the first refresh is FULL . |
PAUSED |
A scheduled AUTOREFRESH is canceled, but TimesTen tries to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, a full refresh may not be necessary. |
A refresh does not occur immediately after issuing ALTER CACHE GROUP...SET AUTOREFRESH STATE
. This statement only changes the state of AUTOREFRESH
. When the transaction that contains the ALTER CACHE GROUP
statement is committed, the cache agent is notified to schedule an AUTOREFRESH
immediately, but the commit goes through without waiting for the completion of the refresh. The scheduling of the autorefresh operation is part of the transaction, but the refresh itself is not.
If you issue an ALTER CACHE GROUP... SET AUTOREFRESH STATE OFF
statement and there is an autorefresh operation currently running, then:
If LockWait
interval is 0, the ALTER
statement fails with a lock timeout error.
If LockWait
interval is nonzero, then the current autorefresh transaction is rolled back, and the ALTER
statement continues. This affects all cache groups with the same autorefresh interval.
Replication cannot occur between cache groups with AUTOREFRESH
and cache groups without AUTOREFRESH
.
If the ALTER CACHE GROUP
statement is part of a transaction that is being replicated, and if the replication scheme has the RETURN TWOSAFE
attribute, the transaction may fail.
You cannot execute the ALTER CACHE GROUP statement under the serializable isolation level. An error message is returned if you attempt to do so.
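For illustration, assuming a cache group named cacheadm.customers_cg, the following statements change the autorefresh mode, shorten the autorefresh interval, and then pause autorefresh:
Command> ALTER CACHE GROUP cacheadm.customers_cg SET AUTOREFRESH MODE FULL;
Command> ALTER CACHE GROUP cacheadm.customers_cg SET AUTOREFRESH INTERVAL 30 SECONDS;
Command> ALTER CACHE GROUP cacheadm.customers_cg SET AUTOREFRESH STATE PAUSED;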
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The ALTER FUNCTION
statement recompiles a standalone stored function. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.
To recompile a function that is part of a package, recompile the package using the ALTER PACKAGE
statement.
No privilege is required for the PL/SQL function owner.
ALTER ANY PROCEDURE
for another user's function.
ALTER FUNCTION [Owner.]FunctionName COMPILE [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] FunctionName |
Name of the function to be recompiled. |
COMPILE |
Required keyword that causes recompilation of the function. If the function does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages. |
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
The ALTER FUNCTION
statement does not change the declaration or definition of an existing function. To redeclare or redefine a function, use the CREATE FUNCTION
statement.
TimesTen first recompiles objects upon which the function depends, if any of those objects are invalid.
TimesTen also invalidates any objects that depend on the function, such as functions that call the recompiled function or package bodies that define functions that call the recompiled function.
If TimesTen recompiles the function successfully, then the function becomes valid. If recompiling the function results in compilation errors, then TimesTen returns an error and the function remains invalid. Use the ttIsql
command SHOW ERRORS
to display compilation errors.
During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS
clause.
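For example, a minimal sketch (the function name get_bonus is hypothetical) that recompiles a standalone function while preserving its current compiler settings:
ALTER FUNCTION get_bonus COMPILE REUSE SETTINGS;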
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The ALTER PACKAGE
statement explicitly recompiles a package specification, package body, or both. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors.
This statement recompiles all package objects together. You cannot use the ALTER PROCEDURE
or ALTER FUNCTION
statement to individually recompile a procedure or function that is part of a package.
No privilege is required for the package owner.
ALTER ANY PROCEDURE
for another user's package.
ALTER PACKAGE [Owner.]PackageName COMPILE [PACKAGE|SPECIFICATION|BODY] [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] PackageName |
Name of the package to be recompiled. |
COMPILE |
Required clause used to force the recompilation of the package specification, package body, or both. |
[PACKAGE|SPECIFICATION|BODY ] |
Specify PACKAGE to recompile both the package specification and the body. Specify SPECIFICATION to recompile the package specification. Specify BODY to recompile the package body.
|
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify REUSE SETTINGS , the existing setting for that parameter is reused. If you omit a parameter and do not specify REUSE SETTINGS , the value for that parameter is obtained from the session. |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
When you recompile a package specification, TimesTen invalidates local objects that depend on the specification, such as procedures that call procedures or functions in the package. The body of the package also depends on the specification. If you subsequently reference one of these dependent objects without first explicitly recompiling it, then TimesTen recompiles it implicitly at runtime.
When you recompile a package body, TimesTen does not invalidate objects that depend on the package specification. TimesTen first recompiles objects upon which the body depends, if any of those objects are invalid. If TimesTen recompiles the body successfully, then the body becomes valid.
When you recompile a package, both the specification and the body are explicitly recompiled. If there are no compilation errors, then the specification and body become valid. If there are compilation errors, then TimesTen returns an error and the package remains invalid.
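As an illustrative sketch (the package name emp_pkg is hypothetical), the following recompiles only the package body, so objects that depend on the package specification remain valid:
ALTER PACKAGE emp_pkg COMPILE BODY;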
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The ALTER PROCEDURE
statement recompiles a standalone stored procedure. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.
To recompile a procedure that is part of a package, recompile the package using the ALTER PACKAGE
statement.
No privilege is required for the procedure owner.
ALTER ANY PROCEDURE
for another user's procedure.
ALTER PROCEDURE [Owner.]ProcedureName COMPILE [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] ProcedureName |
Name of the procedure to be recompiled. |
COMPILE |
Required keyword that causes recompilation of the procedure. If the procedure does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages. |
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify REUSE SETTINGS , the existing setting for that parameter is reused. If you omit a parameter and do not specify REUSE SETTINGS , the value for that parameter is obtained from the session. |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
The ALTER PROCEDURE
statement does not change the declaration or definition of an existing procedure. To redeclare or redefine a procedure, use the CREATE PROCEDURE
statement.
TimesTen first recompiles objects upon which the procedure depends, if any of those objects are invalid.
TimesTen also invalidates any objects that depend on the procedure, such as procedures that call the recompiled procedure or package bodies that define procedures that call the recompiled procedure.
If TimesTen recompiles the procedure successfully, then the procedure becomes valid. If recompiling the procedure results in compilation errors, then TimesTen returns an error and the procedure remains invalid. Use the ttIsql
command SHOW ERRORS
to display compilation errors.
During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS
clause.
Query the system view USER_PLSQL_OBJECT_SETTINGS
to check PLSQL_OPTIMIZE_LEVEL
and PLSCOPE_SETTINGS
for procedure query_emp
. Alter query_emp
by changing PLSQL_OPTIMIZE_LEVEL
to 3. Verify results.
Command> SELECT PLSQL_OPTIMIZE_LEVEL, PLSCOPE_SETTINGS FROM user_plsql_object_settings WHERE name = 'QUERY_EMP';
< 2, IDENTIFIERS:NONE >
1 row found.
Command> ALTER PROCEDURE query_emp COMPILE PLSQL_OPTIMIZE_LEVEL = 3;
Procedure altered.
Command> SELECT PLSQL_OPTIMIZE_LEVEL, PLSCOPE_SETTINGS FROM user_plsql_object_settings WHERE name = 'QUERY_EMP';
< 3, IDENTIFIERS:NONE >
1 row found.
The ALTER
PROFILE
statement adds, modifies, or removes one or more password parameters in a profile.
ALTER PROFILE profile LIMIT password_parameters

password_parameters::=
  [FAILED_LOGIN_ATTEMPTS password_parameter_options]
  [PASSWORD_LIFE_TIME password_parameter_options]
  [PASSWORD_REUSE_TIME password_parameter_options]
  [PASSWORD_REUSE_MAX password_parameter_options]
  [PASSWORD_LOCK_TIME password_parameter_options]
  [PASSWORD_GRACE_TIME password_parameter_options]
  [PASSWORD_COMPLEXITY_CHECKER password_checker_options]

password_parameter_options::= UNLIMITED | DEFAULT | constant

password_checker_options::= NULL | DEFAULT
Parameter | Description |
---|---|
profile |
Name of the profile. |
LIMIT password_parameters |
The LIMIT clause sets the limits for the password parameters. The LIMIT keyword is required.
The password parameters consist of the name of the password parameter and the value (or limit) for the password parameter. All the parameters (with the exception of PASSWORD_COMPLEXITY_CHECKER , which accepts only NULL or DEFAULT ) accept a value of UNLIMITED , DEFAULT , or a constant. You must specify at least one password parameter after the LIMIT keyword. |
FAILED_LOGIN_ATTEMPTS |
Specifies the number of consecutive failed attempts to connect to the database by a user before that user's account is locked. |
PASSWORD_LIFE_TIME |
Specifies the number of days that a user can use the same password for authentication. If you also set a value for PASSWORD_GRACE_TIME , then the password expires if it is not changed within the grace period. In such a situation, future connections to the database are rejected. |
PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX |
These two parameters must be used together.
You must specify a value for both parameters for them to have any effect. Specifically:
PASSWORD_REUSE_TIME specifies the number of days that must pass before a password can be reused. PASSWORD_REUSE_MAX specifies the number of password changes required before the current password can be reused. |
PASSWORD_LOCK_TIME |
Specifies the number of days the user account is locked after the specified number of consecutive failed connection attempts. |
PASSWORD_GRACE_TIME |
Specifies the number of days after the grace period begins during which TimesTen issues a warning, but allows the connection to the database. If the password is not changed during the grace period, the password expires. This parameter is associated with the PASSWORD_LIFE_TIME parameter. |
PASSWORD_COMPLEXITY_CHECKER {NULL |DEFAULT } |
Indicates the complexity verification that is done on passwords. Valid values are NULL or DEFAULT .
A NULL value means there is no complexity verification done on the passwords. A DEFAULT value means the limit defined in the DEFAULT profile is used. |
UNLIMITED |
Indicates that there is no limit for the password parameter. If you specify UNLIMITED , it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS UNLIMITED . |
DEFAULT |
Indicates that you want to omit a limit for the password parameter in this profile. A user that is assigned this profile is subject to the limit defined in the DEFAULT profile for this password parameter.
If you specify DEFAULT , it must follow the password parameter. For example, PASSWORD_LIFE_TIME DEFAULT . |
constant |
Indicates the value of the password parameter if you do not specify UNLIMITED or DEFAULT . If specified, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS 3 . |
Use the ALTER
PROFILE
statement to modify a previously created profile. See "CREATE PROFILE" for information on creating a profile.
If you make a change to a profile (by using the ALTER
PROFILE
statement), and the profile is assigned to users, the change does not affect the users that are currently connected to the database. However, the change does affect the users that subsequently connect to the database.
You can alter the DEFAULT
profile. See "Example 1: Alter the DEFAULT profile" for an example of altering the DEFAULT
profile.
You cannot alter the instance administrator's profile.
Example 1: Alter the DEFAULT profile
This example verifies the values of the password parameters in the DEFAULT
profile. It then alters the profile with different values. Users that are assigned the DEFAULT
profile will inherit the modified values at the user's next connection to the database.
Command> SELECT * FROM dba_profiles WHERE profile='DEFAULT' AND resource_type='PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.
Create the user1
user and do not specify a profile. User1
is assigned the DEFAULT
profile. Use the ALTER
PROFILE
statement to change the value of the FAILED_LOGIN_ATTEMPTS
password parameter to 5
and the value of the PASSWORD_LOCK_TIME
password parameter to 1
for the DEFAULT
profile. Enclose DEFAULT
in double quotation marks as DEFAULT
is a reserved word. Connect to the database five times as user1
supplying an incorrect password each time. On the sixth attempt, the user1
account is locked.
Command> CREATE USER user1 IDENTIFIED BY user1;
User created.
Command> GRANT CONNECT TO user1;
Query the dba_users
system view to verify that user1
is assigned the DEFAULT
profile.
Command> SELECT profile FROM dba_users WHERE username='USER1';
< DEFAULT >
1 row found.
Use the ALTER
PROFILE
statement to modify the DEFAULT
profile.
Command> ALTER PROFILE "DEFAULT" LIMIT FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LOCK_TIME 1; Profile altered.
Query the dba_profiles
system view to verify the values are changed (represented in bold).
Command> SELECT * FROM dba_profiles WHERE profile='DEFAULT' AND resource_type='PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 5 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, 1 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.
Attempt to connect to the database as user1
. Supply an incorrect password. On the sixth attempt, the user1
account is locked.
Command> connect adding "uid=user1;pwd=user1_test1" as user1; 7001: User authentication failed The command failed. none: Command> connect adding "uid=user1;pwd=user1_test2" as user1; 7001: User authentication failed The command failed. none: Command> connect adding "uid=user1;pwd=user1_test3" as user1; 7001: User authentication failed The command failed. none: Command> connect adding "uid=user1;pwd=user1_test4" as user1; 7001: User authentication failed The command failed. none: Command> connect adding "uid=user1;pwd=user1_test5" as user1; 7001: User authentication failed The command failed. none: Command> connect adding "uid=user1;pwd=user1_test6" as user1; 15179: the account is locked The command failed.
Example 2: Create a profile then alter the profile
This example creates the profile1
profile and specifies values for the FAILED_LOGIN_ATTEMPTS
, the PASSWORD_LIFE_TIME
, the PASSWORD_LOCK_TIME
, and the PASSWORD_GRACE_TIME
password parameters. It then alters the profile1
profile to modify the PASSWORD_REUSE_TIME
and the PASSWORD_REUSE_MAX
password parameters.
Command> CREATE PROFILE profile1 LIMIT FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 90 PASSWORD_LOCK_TIME 30 PASSWORD_GRACE_TIME 10;
Profile created.
Query the dba_profiles
system view to verify the values for the password parameters. Note that the PASSWORD_REUSE_TIME
and the PASSWORD_REUSE_MAX
password parameters each have a value of DEFAULT
(represented in bold). These password parameters were not specified in the CREATE
PROFILE
definition, so TimesTen assigns a value of DEFAULT
to each parameter. The values for these parameters are derived from the values in the DEFAULT
profile.
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND resource_type= 'PASSWORD';
< PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 3 >
< PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 90 >
< PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 30 >
< PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 >
7 rows found.
Alter the profile1
profile, specifying a value of 20
for the PASSWORD_REUSE_TIME
password and a value of 15
for the PASSWORD_REUSE_MAX
password parameter (represented in bold). A user assigned this profile can reuse the same password after 20
days if the password has been changed 15
times.
Command> ALTER PROFILE profile1 LIMIT PASSWORD_REUSE_TIME 20 PASSWORD_REUSE_MAX 15;
Profile altered.
Query the dba_profiles
system view to verify the values for the password parameters are changed (represented in bold).
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND resource_type= 'PASSWORD';
< PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 3 >
< PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 90 >
< PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, 20 >
< PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, 15 >
< PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 30 >
< PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 >
7 rows found.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The ALTER REPLICATION
statement adds, alters, or drops replication elements and changes the replication attributes of participating databases involved in a classic replication scheme.
Most ALTER REPLICATION
operations are supported only when the replication agent is stopped (ttAdmin
-repStop
). However, it is possible to dynamically add a subscriber database to a replication scheme while the replication agent is running. See "Altering a Classic Replication Scheme" in Oracle TimesTen In-Memory Database Replication Guide for more information.
The ALTER REPLICATION
statement has the syntax:
ALTER REPLICATION [Owner.]ReplicationSchemeName ElementOperation [...] | StoreOperation | NetworkOperation [...]
Specify ElementOperation
one or more times:
ADD ELEMENT ElementName
    { DATASTORE |
      { TABLE [Owner.]TableName [CheckConflicts] } |
      SEQUENCE [Owner.]SequenceName }
    { MASTER | PROPAGATOR } FullStoreName
    { SUBSCRIBER FullStoreName [,... ] [ReturnServiceAttribute] } [ ... ]
    { INCLUDE | EXCLUDE }
      { TABLE [[Owner.]TableName[,...]] |
        SEQUENCE [[Owner.]SequenceName[,...]] } [,...]

ALTER ELEMENT { ElementName | * IN FullStoreName }
    ADD SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] |
    ALTER SUBSCRIBER FullStoreName [,...]
          SET [ReturnServiceAttribute] |
    DROP SUBSCRIBER FullStoreName [,... ]

ALTER ELEMENT * IN FullStoreName
    SET { MASTER | PROPAGATOR } FullStoreName

ALTER ELEMENT ElementName
    { SET NAME NewElementName | SET CheckConflicts }

ALTER ELEMENT ElementName
    { INCLUDE | EXCLUDE }
    { TABLE [Owner.]TableName | SEQUENCE [Owner.]SequenceName } [,...]

DROP ELEMENT { ElementName | * IN FullStoreName }
CheckConflicts
can only be set when replicating TABLE
elements. The syntax is described in "CHECK CONFLICTS".
Syntax for ReturnServiceAttribute
is:
{ RETURN RECEIPT [BY REQUEST] | NO RETURN }
StoreOperation
clauses:
ADD STORE FullStoreName [StoreAttribute [... ]] ALTER STORE FullStoreName SET StoreAttribute [... ]
Syntax for the StoreAttribute
is:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED
DURABLE COMMIT {ON | OFF}
RESUME RETURN Milliseconds
LOCAL COMMIT ACTION {NO ACTION | COMMIT}
RETURN WAIT TIME Seconds
COMPRESS TRAFFIC {ON | OFF}
PORT PortNumber
TIMEOUT Seconds
FAILTHRESHOLD Value
CONFLICT REPORTING SUSPEND AT Value
CONFLICT REPORTING RESUME AT Value
TABLE DEFINITION CHECKING {EXACT|RELAXED}
Specify NetworkOperation
one or more times:
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
    { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
    { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...]
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the classic replication scheme. |
ADD ELEMENT ElementName |
Adds a new element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 characters.
If the element is a |
ADD ELEMENT ElementName DATASTORE
|
Adds a new DATASTORE element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 characters.
If the element is a sequence, |
ADD SUBSCRIBER FullStoreName |
Indicates an additional subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
ALTER ELEMENT * IN FullStoreName
|
Makes a change to all elements for which FullStoreName is the MASTER or PROPAGATOR . FullStoreName is the database file name specified in the DataStore attribute of the DSN description.
This syntax can be used on a set of element names to:
|
ALTER ELEMENT ElementName |
Name of the element to which a subscriber is to be added or dropped. |
ALTER ELEMENT
|
Renames ElementName1 with the name ElementName2 . You can only rename elements of type TABLE . |
ALTER ELEMENT ElementName
|
ElementName is the name of the element to be altered.
If the element is a sequence, |
ALTER SUBSCRIBER FullStoreName
|
Indicates an alteration to a subscriber database to enable, disable, or change the return receipt service. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
CheckConflicts |
Check for replication conflicts when simultaneously writing to bidirectionally replicating TABLE elements between databases. You cannot check for conflicts when replicating elements of type DATASTORE . See "CHECK CONFLICTS". |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
CONFLICT REPORTING SUSPEND AT Value |
Suspends conflict resolution reporting.
This clause is valid for table level replication. |
CONFLICT REPORTING RESUME AT Value |
Resumes conflict resolution reporting.
This clause is valid for table level replication. |
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures . Selecting SUBSCRIBER applies this policy only to the subscriber that fails to acknowledge replicated updates within the set timeout period. ALL applies this policy to all subscribers should any of the subscribers fail to respond. This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.
If |
DURABLE COMMIT {ON | OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. |
DROP ELEMENT * IN FullStoreName |
Deletes the replication description of all elements for which FullStoreName is the MASTER . FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
DROP ELEMENT ElementName |
Deletes the replication description of ElementName . |
DROP SUBSCRIBER FullStoreName |
Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the classic replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state.
The value 0 means "No Limit." This is the default. See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
FullStoreName |
The database, specified as one of the following:
The prefix of the database file name. For example, if the database path is directory/subdirectory/data.ds0, then data is the file base name. This is the database file name specified in the DataStore attribute of the DSN description.
|
LOCAL COMMIT ACTION {NO ACTION | COMMIT} |
Specifies the default action to be taken for a RETURN TWOSAFE transaction in the event of a timeout.
This setting can be overridden for specific transactions by calling the ttRepSyncSet built-in procedure with the localAction parameter. |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
PORT PortNumber |
The TCP/IP port number on which the replication agent on this database listens for connections. If not specified, the replication agent allocates a port number automatically.
All TimesTen databases that replicate to each other must use the same port number. |
PROPAGATOR FullStoreName |
The database that receives replicated updates and passes them on to other databases. |
RESUME RETURN Milliseconds |
If return service blocking has been disabled by DISABLE RETURN , this attribute sets the policy on when to re-enable return service blocking. Return service blocking is re-enabled as soon as the failed subscriber acknowledges the replicated update in a period of time that is less than the specified Milliseconds .
If |
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to a master database are blocked until the transaction is received by all subscribers.
|
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
|
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to a master database are blocked until the transaction is committed on all subscribers.
|
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. The default value is 10 seconds. A value of 0 (zero) means there is no timeout. Your application can override this timeout setting by calling the ttRepSyncSet procedure with the returnWait parameter. |
SET {MASTER | PROPAGATOR} FullStoreName |
Sets the given database to be the MASTER or PROPAGATOR of the given elements. The FullStoreName must be the database's file base name. |
SUBSCRIBER FullStoreName |
A database that receives updates from the MASTER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
Specifies type of table definition checking that occurs on the subscriber:
The default is Note: If you use |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from remote replication agents. The default is 120 seconds.
Note: For large transactions that may cause a delayed response from the remote replication agent, the agent scales the timeout to increasingly larger values, as needed, based on the size of the transaction. This scaling will not occur, and the agent may time out waiting for responses, if you set |
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Adds NetworkOperation to replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores.
Can be specified more than once. For |
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Drops NetworkOperation from the classic replication scheme.
Can be specified more than once. For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. Valid for both |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required syntax of |
ALTER ELEMENT DROP SUBSCRIBER
deletes a subscriber for a particular replication element.
ALTER ELEMENT SET NAME
may be used to change the name of a replication element when it conflicts with one already defined at another database. SET NAME
does not admit the use of * IN
FullStoreName
. The FullStoreName
must be the database's file base name. For example, if the database file name is data.ds0
, then data
is the file base name.
ALTER ELEMENT SET MASTER
may be used to change the master database for replication elements. The * IN
FullStoreName
option must be used for the MASTER
operation. That is, a master database must transfer ownership of all of its replication elements, thereby giving up its master role entirely. Typically, this option is used in ALTER REPLICATION
statements requested at SUBSCRIBER
databases after the failure of a (common) MASTER
.
To transfer ownership of the master elements to the subscriber:
Manually drop the replicated elements by executing an ALTER REPLICATION DROP ELEMENT
statement for each replicated table.
Use ALTER REPLICATION ADD ELEMENT
to add each table back to the replication scheme, with the newly designated MASTER
/ SUBSCRIBER
roles.
ALTER REPLICATION ALTER ELEMENT SET MASTER
does not automatically retain the old master as a subscriber in the scheme. If this is desired, execute an ALTER REPLICATION ALTER ELEMENT ADD SUBSCRIBER
statement.
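For example, a minimal sketch (the scheme r1, element e1, and store west are hypothetical names) that keeps the old master as a subscriber after ownership has been transferred:
ALTER REPLICATION r1 ALTER ELEMENT e1 ADD SUBSCRIBER west ON "westcoast";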
Note:
There is no ALTER ELEMENT DROP MASTER
. Each replication element must have exactly one MASTER
database, and the currently designated MASTER
cannot be deleted from the replication scheme. Stop the replication agent before you use the NetworkOperation
clause.
You cannot alter the following replication schemes with the ALTER REPLICATION
statement:
Any active standby pair. Instead, use ALTER ACTIVE STANDBY PAIR
.
A Clusterware-managed active standby pair. Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.
This example sets up a classic replication scheme for an additional table westleads
that is updated on database west
and replicated to database east
.
ALTER REPLICATION r1 ADD ELEMENT e3 TABLE westleads MASTER west ON "westcoast" SUBSCRIBER east ON "eastcoast";
This example adds an additional subscriber (backup
) to table westleads
.
ALTER REPLICATION r1 ALTER ELEMENT e3 ADD SUBSCRIBER backup ON "backupserver";
This example changes the element name of table westleads
from e3
to newelementname
.
ALTER REPLICATION r1 ALTER ELEMENT e3 SET NAME newelementname;
This example makes newwest
the master for all elements for which west
currently is the master.
ALTER REPLICATION r1 ALTER ELEMENT * IN west SET MASTER newwest;
This example changes the port number for east
.
ALTER REPLICATION r1 ALTER STORE east ON "eastcoast" SET PORT 22251;
This example adds my.tab1
table to the ds1
database element in my.rep1
replication scheme.
ALTER REPLICATION my.rep1 ALTER ELEMENT ds1 DATASTORE INCLUDE TABLE my.tab1;
This example adds ds1
database to my.rep1
replication scheme. Include my.tab2
table in the database.
ALTER REPLICATION my.rep1 ADD ELEMENT ds1 DATASTORE MASTER rep2 SUBSCRIBER rep1, rep3 INCLUDE TABLE my.tab2;
This example adds ds2
database to a replication scheme but excludes my.tab1
table.
ALTER REPLICATION my.rep1 ADD ELEMENT ds2 DATASTORE MASTER rep2 SUBSCRIBER rep1 EXCLUDE TABLE my.tab1;
Add NetworkOperation
clause:
ALTER REPLICATION r ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4" PRIORITY 2;
Drop NetworkOperation
clause:
ALTER REPLICATION r DROP ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" SUBSCRIBERIP "2.2.2.2" MASTERIP "3.3.3.3" SUBSCRIBERIP "4.4.4.4";
ALTER ACTIVE STANDBY PAIR
CREATE ACTIVE STANDBY PAIR
CREATE REPLICATION
DROP ACTIVE STANDBY PAIR
DROP REPLICATION
To drop a table from a database, see "Altering a replicated table in a classic replication scheme" in Oracle TimesTen In-Memory Database Replication Guide.
This statement is supported in TimesTen Scaleout only.
Use the ALTER SEQUENCE
statement to change the batch value of a sequence.
No privilege is required for the sequence owner.
ALTER ANY SEQUENCE
privilege for another user's sequence.
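A sketch of the syntax, inferred from the parameters and the example that follow:
ALTER SEQUENCE [Owner.]SequenceName BATCH BatchValue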
Parameter | Description |
---|---|
SEQUENCE [ Owner .] SequenceName |
Name of the sequence to be altered. |
BATCH BatchValue |
Valid with TimesTen Scaleout only. Configures the range of unique sequence values that are stored at each element of the grid. The default value is 10 million. |
Use this statement to change the batch value for a sequence in TimesTen Scaleout. The change affects future sequence numbers.
This statement cannot be used to alter any other values supported in the CREATE
SEQUENCE
statement. In this case, use the DROP SEQUENCE
statement and then create a new sequence with the same name. For example, to change the MINVALUE
, drop the sequence and recreate it with the same name and with the desired MINVALUE
.
See "Using sequences" in Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
To change the batch value for the sequence:
ALTER SEQUENCE myseq BATCH 2000;
Sequence altered.
The ALTER SESSION
statement changes session parameters dynamically. This overrides the setting of the equivalent connection attribute for the current session, as applicable.
This statement is supported with TimesTen Scaleout. However, these parameters are not supported:
DDL_REPLICATION_ACTION
DDL_REPLICATION_LEVEL
REPLICATION_TRACK
ALTER SESSION SET
{ COMMIT_BUFFER_SIZE_MAX = n |
  DDL_REPLICATION_ACTION = {'INCLUDE'|'EXCLUDE'} |
  DDL_REPLICATION_LEVEL = {1|2|3} |
  ISOLATION_LEVEL = {SERIALIZABLE | READ COMMITTED} |
  NLS_SORT = {BINARY| SortName} |
  NLS_LENGTH_SEMANTICS = {BYTE|CHAR} |
  NLS_NCHAR_CONV_EXCP = {TRUE|FALSE} |
  PLSQL_TIMEOUT = n |
  PLSQL_OPTIMIZE_LEVEL = {0|1|2|3} |
  PLSCOPE_SETTINGS = {'IDENTIFIERS:ALL'|'IDENTIFIERS:NONE'} |
  PLSQL_CONN_MEM_LIMIT = n |
  PLSQL_CCFLAGS = 'name1:value1, name2:value2,..., nameN:valueN' |
  PLSQL_SESSION_CACHED_CURSORS = n |
  REPLICATION_TRACK = TrackNumber
}
Parameter | Description |
---|---|
COMMIT_BUFFER_SIZE_MAX= n |
Changes the maximum size of the commit buffer when a connection is in progress. n is expressed as an integer and represents the maximum size of the commit buffer (in MB).
Change takes effect starting with the next transaction. Call the ttCommitBufferStats built-in procedure to view commit buffer usage. For more information on the commit buffer and transaction reclaim operations, see "Transaction reclaim operations" in the Oracle TimesTen In-Memory Database Operations Guide and "CommitBufferSizeMax" in the Oracle TimesTen In-Memory Database Reference. Note: The equivalent connection attribute is CommitBufferSizeMax . |
DDL_REPLICATION_ACTION={'INCLUDE'|'EXCLUDE'} |
To include a table or sequence in the active standby pair when either is created, set DDL_REPLICATION_ACTION to INCLUDE . If you do not want to include a table or sequence in the active standby pair when either is created, set DDL_REPLICATION_ACTION to EXCLUDE . The default is INCLUDE .
If set to EXCLUDE , you can add the object to the active standby pair later with the ALTER ACTIVE STANDBY PAIR ... INCLUDE statement.
This attribute is valid only if DDL_REPLICATION_LEVEL is 2 or greater. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information. Note: The equivalent connection attribute is DDLReplicationAction . |
DDL_REPLICATION_LEVEL={1|2|3} |
Indicates whether DDL is replicated across all databases in an active standby pair. The value can be one of the following:
See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information. Note: The equivalent connection attribute is |
ISOLATION_LEVEL = {SERIALIZABLE|READ COMMITTED} |
Sets isolation level. Change takes effect starting with the next transaction.
For a description of the isolation levels, see "Transaction isolation levels" in the Oracle TimesTen In-Memory Database Operations Guide. Note: The equivalent connection attribute is Isolation . |
NLS_SORT= {BINARY| SortName } |
Indicates which collation sequence to use for linguistic comparisons.
Append _CI to a sort name for a case-insensitive sort, or _AI for an accent-insensitive and case-insensitive sort. If you do not specify this parameter, the default collation is BINARY . For a complete list of supported values for SortName , see "NLS_SORT" in the Oracle TimesTen In-Memory Database Reference. For more information on case-insensitive or accent-insensitive sorting, see "Case-insensitive and accent-insensitive linguistic sorts" in Oracle TimesTen In-Memory Database Operations Guide. |
NLS_LENGTH_ SEMANTICS ={BYTE|CHAR} |
Sets the default length semantics configuration. BYTE indicates byte length semantics. CHAR indicates character length semantics. The default is BYTE .
For more information on length semantics, see "Length semantics and data storage" in Oracle TimesTen In-Memory Database Operations Guide. |
NLS_NCHAR_CONV_EXCP = {TRUE|FALSE} |
Determines whether an error should be reported when there is data loss during an implicit or explicit character type conversion between NCHAR /NVARCHAR2 data and CHAR /VARCHAR2 data. Specify TRUE to enable error reporting. Specify FALSE to not report errors. The default is FALSE . |
PLSQL_TIMEOUT= n |
Controls how long PL/SQL procedures run before being automatically terminated. n represents the time, in seconds. Specify 0 for no time limit or any positive integer. The default is 30.
When you modify this value, the new value impacts PL/SQL program units that are currently running as well as any other program units subsequently executed in the same connection. See "Choose SQL and PL/SQL timeout values" in the Oracle TimesTen In-Memory Database Operations Guide for information on setting timeout values. |
PLSQL_OPTIMIZE_LEVEL = {0|1|2|3} |
Specifies the optimization level used to compile PL/SQL library units. The higher the setting, the more effort the compiler makes to optimize PL/SQL library units. Possible values are 0, 1, 2 or 3. The default is 2.
For more information, see "PLSQL_OPTIMIZE_LEVEL" in Oracle TimesTen In-Memory Database Reference. |
PLSCOPE_SETTINGS = '{IDENTIFIERS:ALL |IDENTIFIERS:NONE}' |
Controls whether the PL/SQL compiler generates cross-reference information. Specify IDENTIFIERS:ALL to generate cross-reference information. The default is IDENTIFIERS:NONE .
For more information, see "PLSCOPE_SETTINGS" in Oracle TimesTen In-Memory Database Reference. |
PLSQL_CONN_MEM_LIMIT = n |
Specifies the maximum amount of process heap memory that PL/SQL can use for this connection, where n is an integer expressed in MB. The default is 100.
For more information, see "PLSQL_CONN_MEM_LIMIT" in Oracle TimesTen In-Memory Database Reference. |
PLSQL_CCFLAGS = ' name1:value1, name2:value2, ..., nameN:valueN ' |
Specifies inquiry directives to control conditional compilation of PL/SQL units, which enables you to customize the functionality of a PL/SQL program depending on conditions that are checked. For example, to activate debugging features:
PLSQL_CCFLAGS = 'DEBUG:TRUE' |
PLSQL_SESSION_CACHED_CURSORS= n |
Specifies the maximum number of session cursors to cache. The default is 50. The range of values is 1 to 65535.
The |
REPLICATION_TRACK = TrackNumber |
When managing track-based parallel replication, you can assign a connection to a replication track. All transactions issued by the connection are assigned to this track, unless the track is altered.
If the number specified is for a non-existent replication track You cannot change tracks in the middle of a transaction unless all preceding operations have been read operations. For more information, see "Specifying replication tracks within an automatic parallel replication environment" in Oracle TimesTen In-Memory Database Replication Guide. The equivalent connection attribute is ReplicationTrack . |
The ALTER SESSION
statement affects commands that are subsequently executed by the session. The new session parameters take effect immediately.
In cases of client failover, if an ALTER
SESSION
statement is issued in the failed connection, the setting is not seen or carried over to the new connection. You must re-issue the ALTER
SESSION
statement and re-specify the value for that parameter. For more information on client failover, in TimesTen Classic, see "Using automatic client failover" in the Oracle TimesTen In-Memory Database Operations Guide and, in TimesTen Scaleout, see "Client connection failover" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.
Operations involving character comparisons support linguistically sensitive collating sequences. Case-insensitive sorts may affect DISTINCT
value interpretation.
Implicit and explicit conversions between CHAR
and NCHAR
are supported.
You can use the SQL string functions with the supported character sets. For example, UPPER
and LOWER
functions support non-ASCII
CHAR
and VARCHAR2
characters as well as NCHAR
and NVARCHAR2
characters.
Choice of character set could have an impact on memory consumption for CHAR
and VARCHAR2
column data.
The character sets of all databases involved in a replication scheme must match.
To add an existing table to an active standby pair, set DDL_REPLICATION_LEVEL
to 2 or greater and DDL_REPLICATION_ACTION
to INCLUDE
. Alternatively, you can use the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE
statement if DDL_REPLICATION_ACTION
is set to EXCLUDE
. In this case, the table must be empty and present on all databases before executing the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE
statement as the table contents will be truncated when this statement is executed.
To add an existing sequence or view to an active standby pair, set DDL_REPLICATION_LEVEL
to 3. To include the sequence in the replication scheme, DDL_REPLICATION_ACTION
must be set to INCLUDE
. This does not apply to materialized views.
Objects are replicated only when the receiving database is of a TimesTen release that supports that level of replication, and is configured for an active standby pair replication scheme. For example, replication of sequences (requiring DDL_REPLICATION_LEVEL=3
) to a database release prior to 11.2.2.7.0 is not supported. The receiving database must be of at least release 11.2.1.8.0 for replication of objects supported by DDL_REPLICATION_LEVEL=2
.
Use the ALTER
SESSION
statement to change COMMIT_BUFFER_SIZE_MAX
to 500 MB. First, call ttConfiguration
to display the current connection setting. Use the ALTER
SESSION
statement to change the COMMIT_BUFFER_SIZE_MAX
setting to 500. Call ttConfiguration
to display the new setting.
Command> CALL ttConfiguration ('CommitBufferSizeMax');
< CommitBufferSizeMax, 0 >
1 row found.
Command> ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 500;
Session altered.
Command> CALL ttConfiguration ('CommitBufferSizeMax');
< CommitBufferSizeMax, 500 >
1 row found.
Use the ALTER SESSION
statement to change PLSQL_TIMEOUT
to 60 seconds. Use a second ALTER SESSION
statement to change PLSQL_OPTIMIZE_LEVEL
to 3. Then call ttConfiguration
to display the new values.
Command> ALTER SESSION SET PLSQL_TIMEOUT = 60;
Session altered.
Command> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;
Session altered.
Command> CALL TTCONFIGURATION ();
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
...
< PLSQL_OPTIMIZE_LEVEL, 3 >
< PLSQL_TIMEOUT, 60 >
...
47 rows found.
In this example, set PLSQL_TIMEOUT
to 20 seconds. Attempt to execute a program that loops indefinitely. In 20 seconds, execution is terminated and an error is returned.
Command> ALTER SESSION SET PLSQL_TIMEOUT = 20;
Command> DECLARE
           v_timeout NUMBER;
         BEGIN
           LOOP
             v_timeout := 0;
             EXIT WHEN v_timeout < 0;
           END LOOP;
         END;
         /
8509: PL/SQL execution terminated; PLSQL_TIMEOUT exceeded
Call ttConfiguration
to display the current PLSCOPE_SETTINGS
value. Use the ALTER SESSION
statement to change the PLSCOPE_SETTINGS
value to IDENTIFIERS:ALL
. Create a dummy procedure p
. Query the system view SYS.USER_PLSQL_OBJECT_SETTINGS
to confirm that the new setting is applied to procedure p
.
Command> CALL TTCONFIGURATION ();
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
...
< PLSCOPE_SETTINGS, IDENTIFIERS:NONE >
...
47 rows found.
Command> ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:ALL';
Session altered.
Command> CREATE OR REPLACE PROCEDURE p IS BEGIN NULL; END;
         /
Procedure created.
Command> SELECT PLSCOPE_SETTINGS FROM SYS.USER_PLSQL_OBJECT_SETTINGS WHERE NAME = 'p';
< IDENTIFIERS:ALL >
1 row found.
The following example uses the ALTER SESSION
statement to change the NLS_SORT
setting from BINARY
to BINARY_CI
to BINARY_AI
. The database and connection character sets are WE8ISO8859P1
.
Command> connect "dsn=cs;ConnectionCharacterSet=WE8ISO8859P1"; Connection successful: DSN=cs;UID=user;DataStore=/datastore/user/cs; DatabaseCharacterSet=WE8ISO8859P1; ConnectionCharacterSet=WE8ISO8859P1;PermSize=32; (Default setting AutoCommit=1) Command> -- Create the Table Command> CREATE TABLE collatingdemo (letter VARCHAR2 (10)); Command> -- Insert values Command> INSERT INTO collatingdemo VALUES ('a'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('A'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('Y'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('ä'); 1 row inserted. Command> -- SELECT Command> SELECT * FROM collatingdemo; < a > < A > < Y > < ä > 4 rows found. Command> --SELECT with ORDER BY Command> SELECT * FROM collatingdemo ORDER BY letter; < A > < Y > < a > < ä > 4 rows found. Command>-- set NLS_SORT to BINARY_CI and SELECT Command> ALTER SESSION SET NLS_SORT = BINARY_CI; Command> SELECT * FROM collatingdemo ORDER BY letter; < a > < A > < Y > < Ä > < ä > 4 rows found. Command> -- Set NLS_SORT to BINARY_AI and SELECT Command> ALTER SESSION SET NLS_SORT = BINARY_AI; Command> SELECT * FROM collatingdemo ORDER BY letter; < ä > < a > < A > < Y > 4 rows found.
The following example enables automatic parallel replication with disabled commit dependencies. It uses the ALTER SESSION
statement to change the replication track number to 5 for the current connection. To enable automatic parallel replication with disabled commit dependencies for replication schemes, set ReplicationApplyOrdering
to 2. Then, always set REPLICATION_TRACK
to a number less than or equal to ReplicationParallelism
. For example, the ReplicationParallelism
connection attribute could be set to 6, which is higher than the value of 5 set for REPLICATION_TRACK
.
Command> ALTER SESSION SET REPLICATION_TRACK = 5;
Session altered.
The following example enables replication of adding and dropping columns, tables, synonyms and indexes by setting the following on the active database in an active standby pair: DDL_REPLICATION_LEVEL
set to 2
and DDL_REPLICATION_ACTION
set to 'INCLUDE'
.
Command> ALTER SESSION SET DDL_REPLICATION_LEVEL=2;
Session altered.
Command> ALTER SESSION SET DDL_REPLICATION_ACTION='INCLUDE';
Session altered.
Note:
The equivalent connection attributes forDDL_REPLICATION_LEVEL
and DDL_REPLICATION_ACTION
are DDLReplicationLevel
and DDLReplicationAction
, respectively.The ALTER TABLE
statement changes an existing table definition.
The ALTER
TABLE
statement is supported in TimesTen Scaleout and in TimesTen Classic. However, there are differences in syntax and semantics. For simplicity, the supported syntax, parameters, description (semantics), and examples for TimesTen Scaleout and for TimesTen Classic are separated into the usage with TimesTen Scaleout and the usage with TimesTen Classic. While there is repetition in the usages, it is presented this way in order to allow you to progress from syntax to parameters to semantics to examples for each usage.
Review the required privilege section and then see:
No privilege is required for the table owner.
ALTER ANY TABLE
for another user's table.
For ALTER TABLE...ADD FOREIGN KEY
, the owner of the altered table must have the REFERENCES
privilege on the table referenced by the foreign key clause.
After reviewing this section, see:
ALTER TABLE: Usage with TimesTen Scaleout
This statement is supported with TimesTen Scaleout. Column-based compression and aging are not supported.
See:
SQL syntax for ALTER TABLE: TimesTen Scaleout
To change the distribution key in TimesTen Scaleout:
ALTER TABLE [Owner.]TableName DistributionClause
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [COMPRESS (CompressColumns [,...])]
To add multiple columns:
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [,... ] )
To add a NOT
NULL
column (note that the DEFAULT
clause is required):
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE]
To add multiple NOT
NULL
columns (note that the DEFAULT
clause is required):
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [,...])
ALTER TABLE [Owner.]TableName DROP {[COLUMN] ColumnName | (ColumnName [,... ] )}
To add a primary key constraint using a range index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ])
To add a primary key constraint using a hash index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ]) USE HASH INDEX PAGES = RowPages | CURRENT
To add a foreign key and optionally add ON DELETE CASCADE
:
ALTER TABLE [Owner.]TableName ADD [CONSTRAINT ForeignKeyName] FOREIGN KEY (ColumnName [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]
ALTER TABLE [Owner.]TableName DROP CONSTRAINT ForeignKeyName
Note:
You cannot useALTER TABLE
to drop a primary key constraint. To drop the constraint, drop and recreate the table.ALTER TABLE [Owner.]TableName SET PAGES = RowPages | CURRENT
To change the primary key to use a hash index:
ALTER TABLE [Owner.]TableName USE HASH INDEX PAGES = RowPages | CURRENT
To change the primary key to use a range index with the USE RANGE INDEX
clause:
ALTER TABLE [Owner.]TableName USE RANGE INDEX
To change the default value of a column:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT DefaultVal)
To add or drop a unique constraint on a column:
ALTER TABLE [Owner.]TableName {ADD | DROP} UNIQUE (ColumnName)
To remove the default value of a column that is nullable, by changing it to NULL
:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT NULL)
Parameters for ALTER TABLE: TimesTen Scaleout
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be altered. |
DistributionClause |
See "CREATE TABLE" for information on syntax. |
UNIQUE |
Specifies that in the column ColumnName each row must contain a unique value. |
MODIFY |
Specifies that an attribute of a given column is to be changed to a new value. |
DEFAULT [ DefaultVal |NULL] |
Specifies that the column has a default value, DefaultVal . If NULL , specifies that the default value of the column is to be dropped. If a column with a default value of SYSDATE is added, the value of the column for existing rows is the system date at the time the column was added. If the default value is one of the USER functions, the column value for existing rows is the user value of the session that executed the ALTER TABLE statement. Currently, you cannot assign a default value for the ROWID data type.
Altering the default value of a column has no impact on existing rows. Note: To add a NOT NULL column, you must include the DEFAULT clause. |
ColumnName |
Name of the column participating in the ALTER TABLE statement. A new column cannot have the same name as an existing column or another new column. If you add a NOT NULL column, you must include the DEFAULT clause. |
ColumnDataType |
Type of the column to be added. Some types require additional parameters. See Chapter 1, "Data Types" for the data types that can be specified. |
NOT NULL [ENABLE] |
If you add a column, you can specify NOT NULL . If you specify NOT NULL , then you must include the DEFAULT clause. Optionally, you can specify ENABLE after the NOT NULL clause. Because NOT NULL constraints are always enabled, you are not required to specify ENABLE . |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is > 128 bytes are stored out of line. Variable-length columns whose declared column length is <= 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords. |
ADD CONSTRAINT ConstraintName PRIMARY KEY ( ColumnName
|
Adds a primary key constraint to the table. Columns of the primary key must be defined as NOT NULL .
Specify Specify the If you specify The value for TimesTen recommends that you do not specify If your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column definition: TimesTen Scaleout". Note: Before you use |
CONSTRAINT |
Specifies that a foreign key is to be dropped. Optionally specifies that an added foreign key is named by the user. |
ForeignKeyName |
Name of the foreign key to be added or dropped. All foreign keys are assigned a default name by the system if the name was not specified by the user. Either the user-provided name or system name can be specified in the DROP FOREIGN KEY clause. |
FOREIGN KEY |
Specifies that a foreign key is to be added. |
REFERENCES |
Specifies that the foreign key references another table. |
RefTableName |
The name of the table that the foreign key references. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
USE HASH INDEX PAGES = RowPages | CURRENT |
Changes primary key to use a hash index. If the primary key already uses a hash index, then this clause is equivalent to the SET PAGES clause. |
USE RANGE INDEX |
Changes primary key to use a range index. If the primary key already uses a range index, TimesTen ignores this clause. |
SET PAGES = RowPages | CURRENT |
Resizes the hash index to reflect the expected number of pages in the table. If you specify CURRENT , the current number of rows in the table is used to calculate the page count value. If you specify RowPages , the number of pages is used. To determine the value for RowPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).
The value for TimesTen recommends that you do not specify If your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column definition: TimesTen Scaleout". |
Description for ALTER TABLE: TimesTen Scaleout
You can alter tables to change defaults or add and drop columns and constraints. However, you cannot change the distribution scheme unless the table is empty. In addition, you cannot drop a constraint that is named in the DISTRIBUTE
BY
REFERENCE
clause. See "CREATE TABLE" for information on the distribution schemes. See "Altering tables" in Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
The ALTER TABLE
statement cannot be used to alter a temporary table.
The ALTER TABLE ADD [COLUMN]
ColumnName
statement adds one or more new columns to an existing table. When you add one or more columns, the new columns are added to the end of all existing rows of the table in one new partition.
Columns referenced by materialized views cannot be dropped.
You cannot use the ALTER
TABLE
statement to add a column, drop a column, or add a constraint for cache group tables.
Only one partition is added to the table per statement regardless of the number of columns added.
You can ALTER
a table to add a NOT
NULL
column with a default value. The DEFAULT
clause is required. Restrictions include:
You cannot use the column as a primary key column. Specifically, you cannot specify the column in the statement ALTER TABLE ... ADD CONSTRAINT ConstraintName PRIMARY KEY ( ColumnName [,...]) .
NULL
is the initial value for all added columns, unless a default value is specified for the new column.
The total number of columns in the table cannot exceed 1000. In addition, the total number of partitions in a table cannot exceed 1000, one of which is used by TimesTen.
Use the ADD CONSTRAINT ... PRIMARY KEY
clause to add a primary key constraint to a regular table or to a detailed or materialized view table. Do not use this clause on a table that already has a primary key.
If you use the ADD CONSTRAINT... PRIMARY KEY
clause to add a primary key constraint, and you do not specify the USE HASH INDEX
clause, then a range index is used for the primary key constraint.
Do not specify the ADD CONSTRAINT ... PRIMARY KEY
clause on a global temporary table.
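For example, a minimal sketch (the table, constraint, and column names are hypothetical) that adds a primary key constraint backed by a hash index sized for roughly 256,000 rows:
ALTER TABLE customers ADD CONSTRAINT customers_pk PRIMARY KEY (cust_id) USE HASH INDEX PAGES = 1000;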
As the result of an ALTER TABLE ADD
statement, an additional read occurs for each new partition during queries. Therefore, altered tables may have slightly degraded performance. The performance can only be restored by dropping and recreating the table, or by using the ttMigrate create -c
-relaxedUpgrade
command, and restoring the table using the ttRestore -r
-relaxedUpgrade
command. Dropping the added column does not recover the lost performance or decrease the number of partitions.
When you use the ALTER TABLE DROP
statement to remove one or more columns from an existing table, dropped columns are removed from all current rows of the table. Subsequent SQL statements must not attempt to make any use of the dropped columns. You cannot drop columns that are in the table's primary key. You cannot drop columns that are in any of the table's foreign keys until you have dropped all foreign keys. You cannot drop columns that are indexed until all indexes on the column have been dropped. ALTER TABLE
cannot be used to drop all of the columns of a table. Use DROP TABLE
instead.
When a column is dropped from a table, all commands referencing that table need to be recompiled. An error may result at recompilation time if a dropped column was referenced. The application must re-prepare those commands, and rebuild any parameters and result columns. When a column is added to a table, the commands that contain a SELECT *
statement are invalidated. Only these commands must be re-prepared. All other commands continue to work as expected.
When you drop a column, the column space is not freed.
When you add a UNIQUE
constraint, there is overhead incurred (in terms of additional space and additional time). This is because an index is created to maintain the UNIQUE
constraint. You cannot use the DROP INDEX
statement to drop an index used to maintain the UNIQUE
constraint.
A UNIQUE
constraint and its associated index cannot be dropped if it is being used as a unique index on a replicated table.
Use ALTER TABLE...USE RANGE INDEX
if your application performs range queries over a table's primary key.
Use ALTER TABLE...USE HASH INDEX
if your application performs exact match lookups on a table's primary key.
An error is generated if a table has no primary key and either the USE HASH INDEX
clause or the USE RANGE INDEX
clause is specified.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
, since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths.
Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, a second rule is also enforced:
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
The ALTER TABLE ADD/DROP CONSTRAINT
statement has the following restrictions:
When a foreign key is dropped, TimesTen also drops the index associated with the foreign key. Attempting to drop an index associated with a foreign key using the regular DROP INDEX
statement results in an error.
Foreign keys cannot be added or dropped on views or temporary tables.
You cannot use ALTER TABLE
to drop a primary key constraint. You would have to drop and recreate the table in order to drop the constraint.
Examples for ALTER TABLE: TimesTen Scaleout
Table 6-6, "ALTER TABLE rules" shows the rules associated with altering tables. Supporting examples follow.
ALTER statement | Comment |
---|---|
ALTER TABLE t1 ADD CONSTRAINT c1 PRIMARY KEY (p); | The primary key constraint is added to the table. The distribution key is not changed. |
CREATE TABLE t1 (c1 NUMBER, c2 VARCHAR2 (10)); ALTER TABLE t1 DISTRIBUTE BY HASH (c1); | The operation succeeds if the table is empty. If the table is not empty, the operation fails because the distribution key cannot be changed on tables that are not empty. |
ALTER TABLE t1 ADD CONSTRAINT c1 FOREIGN KEY (f1) REFERENCES t2 (c2); | The operation succeeds. The distribution of the table is not changed. |
CREATE TABLE t1...CONSTRAINT fk1... DISTRIBUTE BY REFERENCE(fk1); ALTER TABLE t1 DROP CONSTRAINT(fk1); | The operation fails. The foreign key is used to distribute the table. |
These examples support the information in the "ALTER TABLE rules" table:
Example 6-8, "Use ALTER TABLE to add a primary key constraint"
Example 6-9, "Add primary key constraint on table distributed on unique column"
Example 6-10, "Use ALTER TABLE to change the distribution key"
Example 6-11, "Add a foreign key constraint that is not part of the distribution key"
Example 6-12, "Attempt to drop a foreign key constraint used as a distribution key"
Example 6-8 Use ALTER TABLE to add a primary key constraint
This example creates the mytable
table without a primary key or distribution clause. The table is distributed by hash on a hidden column. Then the ALTER
TABLE
statement is used to add a primary key constraint. The operation succeeds but the distribution key is not changed.
Command> CREATE TABLE mytable (col1 NUMBER NOT NULL, col2 VARCHAR2 (32)); Command> describe mytable; Table SAMPLEUSER.MYTABLE: Columns: COL1 NUMBER NOT NULL COL2 VARCHAR2 (32) INLINE DISTRIBUTE BY HASH 1 table found. (primary key columns are indicated with *)
Now alter the table to add the primary key. The operation succeeds. The distribution scheme and distribution key do not change.
Command> ALTER TABLE mytable ADD CONSTRAINT c1 PRIMARY KEY (col1); Command> describe mytable; Table SAMPLEUSER.MYTABLE: Columns: *COL1 NUMBER NOT NULL COL2 VARCHAR2 (32) INLINE DISTRIBUTE BY HASH 1 table found. (primary key columns are indicated with *)
Example 6-9 Add primary key constraint on table distributed on unique column
This example creates the mytab
table and distributes the data by hash on the id2
unique column. The example then alters the mytab
table adding the primary key constraint on the id
column. A ttIsql
describe
command shows the table remains distributed by hash on the id2
column.
Command> CREATE TABLE mytab (id TT_INTEGER NOT NULL, id2 TT_INTEGER UNIQUE, id3 TT_INTEGER) distribute by hash (id2); Command> ALTER TABLE mytab ADD CONSTRAINT c1 PRIMARY KEY (id); Command> describe mytab; Table SAMPLEUSER.MYTAB: Columns: *ID TT_INTEGER NOT NULL ID2 TT_INTEGER UNIQUE ID3 TT_INTEGER DISTRIBUTE BY HASH (ID2) 1 table found. (primary key columns are indicated with *)
Example 6-10 Use ALTER TABLE to change the distribution key
This example shows that you can use the ALTER
TABLE
statement to change the distribution key, but only if the table is empty.
Command> CREATE TABLE mytable2 (col1 NUMBER NOT NULL, col2 VARCHAR2 (32)) DISTRIBUTE BY HASH (col1,col2); Command> describe mytable2; Table SAMPLEUSER.MYTABLE2: Columns: COL1 NUMBER NOT NULL COL2 VARCHAR2 (32) INLINE DISTRIBUTE BY HASH (COL1, COL2) 1 table found. (primary key columns are indicated with *)
Use the ALTER
TABLE
statement to change the distribution key to col1
. The operation succeeds because the table is empty.
Command> ALTER TABLE mytable2 DISTRIBUTE BY HASH (col1); Command> describe mytable2; Table SAMPLEUSER.MYTABLE2: Columns: COL1 NUMBER NOT NULL COL2 VARCHAR2 (32) INLINE DISTRIBUTE BY HASH (COL1) 1 table found. (primary key columns are indicated with *)
Insert a row of data and attempt to change the distribution key back to col1
, col2
. The operation fails because the table is not empty.
Command> INSERT INTO mytable2 VALUES (10, 'test'); 1 row inserted. Command> commit; Command> ALTER TABLE mytable2 DISTRIBUTE BY HASH (col1,col2); 1069: Table not empty. Alter table distribution is only permitted on empty tables. The command failed.
Example 6-11 Add a foreign key constraint that is not part of the distribution key
This example first describes the accounts
and accounts2
tables. The example then alters the accounts2
table, adding a foreign key constraint. Since this constraint is not part of the accounts2
table distribution, the operation succeeds.
Command> describe accounts; Table SAMPLEUSER.ACCOUNTS: Columns: *ACCOUNT_ID NUMBER (10) NOT NULL PHONE VARCHAR2 (15) INLINE NOT NULL ACCOUNT_TYPE CHAR (1) NOT NULL STATUS NUMBER (2) NOT NULL CURRENT_BALANCE NUMBER (10,2) NOT NULL PREV_BALANCE NUMBER (10,2) NOT NULL DATE_CREATED DATE NOT NULL CUST_ID NUMBER (10) NOT NULL DISTRIBUTE BY REFERENCE (FK_CUSTOMER) 1 table found. (primary key columns are indicated with *) Command> describe accounts2; Table SAMPLEUSER.ACCOUNTS2: Columns: *ACCOUNTS2_ID NUMBER (10) NOT NULL ACCOUNT_ORIG_ID NUMBER (10) NOT NULL STATUS NUMBER (2) NOT NULL DISTRIBUTE BY HASH (ACCOUNTS2_ID) 1 table found. (primary key columns are indicated with *) Command> ALTER TABLE accounts2 ADD CONSTRAINT accounts2_fk FOREIGN KEY (account_orig_id) REFERENCES accounts (account_id);
Use the ttIsql
indexes
command to show the accounts_fk
constraint is created successfully.
Command> indexes accounts2; Indexes on table SAMPLEUSER.ACCOUNTS2: ACCOUNTS2: unique range index on columns: ACCOUNTS2_ID ACCOUNTS2_FK: non-unique range index on columns: ACCOUNT_ORIG_ID (foreign key index references table SAMPLEUSER.ACCOUNTS(ACCOUNT_ID)) 2 indexes found. 2 indexes found on 1 table.
Example 6-12 Attempt to drop a foreign key constraint used as a distribution key
This example attempts to drop the fk_accounts
constraint. Since the constraint is used as the distribution key, the operation fails.
Command> describe transactions; Table SAMPLEUSER.TRANSACTIONS: Columns: *TRANSACTION_ID NUMBER (10) NOT NULL *ACCOUNT_ID NUMBER (10) NOT NULL *TRANSACTION_TS TIMESTAMP (6) NOT NULL DESCRIPTION VARCHAR2 (60) INLINE OPTYPE CHAR (1) NOT NULL AMOUNT NUMBER (6,2) NOT NULL DISTRIBUTE BY REFERENCE (FK_ACCOUNTS) 1 table found. (primary key columns are indicated with *) Command> ALTER TABLE transactions DROP CONSTRAINT fk_accounts; 1072: Dropping a table's reference by distribution foreign key is not allowed. The command failed.
SQL syntax for ALTER TABLE: TimesTen Classic
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [COMPRESS (CompressColumns [,...])]
To add multiple columns:
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [,... ] ) [COMPRESS (CompressColumns [,...])]
To add a NOT
NULL
column (note that the DEFAULT
clause is required):
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [COMPRESS (CompressColumns [,...])]
To add multiple NOT
NULL
columns (note that the DEFAULT
clause is required):
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [,...]) [COMPRESS (CompressColumns [,...])]
The CompressColumns
syntax is as follows:
{ColumnDefinition | (ColumnDefinition [,...])} BY DICTIONARY [MAXVALUES = CompressMax]
To drop columns:
ALTER TABLE [Owner.]TableName DROP {[COLUMN] ColumnName | (ColumnName [,... ] )}
Note:
If removing columns in a compressed column group, all columns in the compressed column group must be specified.
To add a primary key constraint using a range index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ])
To add a primary key constraint using a hash index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ]) USE HASH INDEX PAGES = RowPages | CURRENT
To add a foreign key and optionally add ON DELETE CASCADE
:
ALTER TABLE [Owner.]TableName ADD [CONSTRAINT ForeignKeyName] FOREIGN KEY (ColumnName [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]
To drop a foreign key:
ALTER TABLE [Owner.]TableName DROP CONSTRAINT ForeignKeyName
Note:
You cannot use ALTER TABLE to drop a primary key constraint. To drop the constraint, drop and recreate the table.
To resize the hash index to reflect the expected number of pages in the table:
ALTER TABLE [Owner.]TableName SET PAGES = RowPages | CURRENT
To change the primary key to use a hash index:
ALTER TABLE [Owner.]TableName USE HASH INDEX PAGES = RowPages | CURRENT
To change the primary key to use a range index with the USE RANGE INDEX
clause:
ALTER TABLE [Owner.]TableName USE RANGE INDEX
To change the default value of a column:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT DefaultVal)
To add or drop a unique constraint on a column:
ALTER TABLE [Owner.]TableName {ADD | DROP} UNIQUE (ColumnName)
To remove the default value of a column that is nullable, by changing it to NULL
:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT NULL)
To add least recently used (LRU) aging:
ALTER TABLE [Owner.]TableName ADD AGING LRU [ON | OFF]
To add time-based aging:
ALTER TABLE [Owner.]TableName ADD AGING USE ColumnName LIFETIME num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]} [CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S] }] [ON | OFF]
To change the aging state:
ALTER TABLE [Owner.]TableName SET AGING {ON | OFF}
To drop aging:
ALTER TABLE [Owner.]TableName DROP AGING
To change the lifetime for time-based aging:
ALTER TABLE [Owner.]TableName SET AGING LIFETIME num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
To change the cycle for time-based aging:
ALTER TABLE [Owner.]TableName SET AGING CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
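For illustration, assuming a table named orders that already has a time-based aging policy defined (the table name is a placeholder, not part of the reference syntax), the following statements change the aging cycle and then the lifetime:
ALTER TABLE orders SET AGING CYCLE 15 MINUTES;
ALTER TABLE orders SET AGING LIFETIME 5 DAYS;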
Parameters for ALTER TABLE: TimesTen Classic
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be altered. |
UNIQUE |
Specifies that in the column ColumnName each row must contain a unique value. |
MODIFY |
Specifies that an attribute of a given column is to be changed to a new value. |
DEFAULT [ DefaultVal |NULL] |
Specifies that the column has a default value, DefaultVal . If NULL , specifies that the default value of the column is to be dropped. If a column with a default value of SYSDATE is added, the value of the column for the existing rows is the system date at the time the column was added. If the default value is one of the USER functions, the column value is the user value of the session that executed the ALTER TABLE statement. Currently, you cannot assign a default value for the ROWID data type.
Altering the default value of a column has no impact on existing rows. Note: To add a |
ColumnName |
Name of the column participating in the ALTER TABLE statement. A new column cannot have the same name as an existing column or another new column. If you add a NOT NULL column, you must include the DEFAULT clause. |
ColumnDataType |
Type of the column to be added. Some types require additional parameters. See Chapter 1, "Data Types" for the data types that can be specified. |
NOT NULL [ENABLE] |
If you add a column, you can specify NOT NULL . If you specify NOT NULL , then you must include the DEFAULT clause. Optionally, you can specify ENABLE after the NOT NULL clause. Because NOT NULL constraints are always enabled, you are not required to specify ENABLE . |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is > 128 bytes are stored out of line. Variable-length columns whose declared column length is <= 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords. |
COMPRESS ( CompressColumns [,...]) |
Defines a compressed column group for a table that is enabled for compression. This can include one or more columns in the table.
If you define multiple columns for a compression group, you must specify the columns as Each compressed column group is limited to a maximum of 16 columns. For more details on compression columns, see "Column-based compression of tables (TimesTen Classic)". |
BY DICTIONARY |
Defines a compression dictionary for each compressed column group. |
MAXVALUES = CompressMax |
CompressMax is the total number of distinct values in the table and sets the size for the compressed column group pointer column to 1, 2, or 4 bytes and sets the size for the maximum number of entries in the dictionary table.
For the dictionary table,
The maximum size defaults to size of 232-1 if the For more details on maximum sizing for compression dictionaries, see "Column-based compression of tables (TimesTen Classic)". |
ADD CONSTRAINT ConstraintName PRIMARY KEY ( ColumnName
|
Adds a primary key constraint to the table. Columns of the primary key must be defined as NOT NULL .
Specify Specify the If you specify The value for TimesTen recommends that you do not specify If your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column definition: TimesTen Classic". Note: Before you use |
CONSTRAINT |
Specifies that a foreign key is to be dropped. Optionally specifies that an added foreign key is named by the user. |
ForeignKeyName |
Name of the foreign key to be added or dropped. All foreign keys are assigned a default name by the system if the name was not specified by the user. Either the user-provided name or system name can be specified in the DROP FOREIGN KEY clause. |
FOREIGN KEY |
Specifies that a foreign key is to be added. |
REFERENCES |
Specifies that the foreign key references another table. |
RefTableName |
The name of the table that the foreign key references. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
USE HASH INDEX PAGES = RowPages | CURRENT |
Changes primary key to use a hash index. If the primary key already uses a hash index, then this clause is equivalent to the SET PAGES clause. |
USE RANGE INDEX |
Changes primary key to use a range index. If the primary key already uses a range index, TimesTen ignores this clause. |
SET PAGES = RowPages | CURRENT |
Resizes the hash index to reflect the expected number of pages in the table. If you specify CURRENT , the current number of rows in the table is used to calculate the page count value. If you specify RowPages , the number of pages is used. To determine the value for RowPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).
TimesTen recommends that you do not underestimate the number of pages: if your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column definition: TimesTen Classic". |
ADD AGING LRU [ON | OFF] |
Adds least recently used (LRU) aging to an existing table that has no aging policy defined.
The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state ( Set the aging state to either LRU attributes are defined by calling the For more information about LRU aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
ADD AGING USE ColumnName ... [ON| OFF] |
Adds time-based aging to an existing table that has no aging policy defined.
The time-based aging policy defines the type of aging (time-based), the aging state ( Set the aging state to either Time-based aging attributes are defined at the SQL level and are specified by the Specify The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be You can define your aging column with a data type of For more information about time-based aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]} |
Specify the LIFETIME clause after the ADD AGING USE ColumnName clause if you are adding the time-based aging policy to an existing table. Specify the LIFETIME clause after the SET AGING clause to change the LIFETIME setting.
The Specify The concept of time resolution is supported. If |
CYCLE Num2 {SECOND[S]| MINUTE[S]| HOUR[S]|DAY[S]} |
Specify the optional CYCLE clause after the LIFETIME clause if you are adding the time-based aging policy to an existing table.
The Specify If you do not specify the If the aging state is Specify the |
SET AGING {ON|OFF} |
Changes the aging state. The aging policy must be previously defined. ON enables automatic aging. OFF disables automatic aging. To control aging with an external scheduler, then disable aging and invoke the ttAgingScheduleNow built-in procedure. |
DROP AGING |
Drops the aging policy from the table. After you define an aging policy, you cannot alter it. Drop aging, then redefine. |
SET AGING LIFETIME Num1 {SECOND[S]| MINUTE[S]|HOUR[S] |DAY[S]} |
Use this clause to change the lifetime for time-based aging.
If you defined your aging column with data type |
SET AGING CYCLE Num2 {SECOND[S]| MINUTE[S]| HOUR[S]|DAY[S]} |
Use this clause to change the cycle for time-based aging.
|
Description for ALTER TABLE: TimesTen Classic
The ALTER TABLE
statement cannot be used to alter a temporary table.
The ALTER TABLE ADD [COLUMN]
ColumnName
statement adds one or more new columns to an existing table. When you add one or more columns, the new columns are added to the end of all existing rows of the table in one new partition.
The ALTER TABLE
ADD
or DROP COLUMN
statement can be used to add or drop columns from replicated tables.
Do not use ALTER
TABLE
to alter a replicated table that is part of a TWOSAFE BY REQUEST
transaction.
Columns referenced by materialized views cannot be dropped.
You cannot use the ALTER
TABLE
statement to add a column, drop a column, or add a constraint for cache group tables.
Only one partition is added to the table per statement regardless of the number of columns added.
You can ALTER
a table to add a NOT
NULL
column with a default value. The DEFAULT
clause is required. Restrictions include:
You cannot use the column as a primary key column. Specifically, you cannot specify the column in the statement: ALTER TABLE ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,...]).
You cannot use the column for time-based aging. Specifically, you cannot specify the column in the statement ALTER TABLE ADD AGING USE ColumnName.
Note:
To add a NOT NULL column to a table that is part of a replication scheme, DDL_REPLICATION_LEVEL must be 3 or greater.
NULL
is the initial value for all added columns, unless a default value is specified for the new column.
The total number of columns in the table cannot exceed 1000. In addition, the total number of partitions in a table cannot exceed 1000, one of which is used by TimesTen.
Use the ADD CONSTRAINT ... PRIMARY KEY
clause to add a primary key constraint to a regular table or to a detail or materialized view table. Do not use this clause on a table that already has a primary key.
If you use the ADD CONSTRAINT... PRIMARY KEY
clause to add a primary key constraint, and you do not specify the USE HASH INDEX
clause, then a range index is used for the primary key constraint.
If a table is replicated and the replication agent is active, you cannot use the ADD CONSTRAINT ... PRIMARY KEY
clause. Stop the replication agent first.
Do not specify the ADD CONSTRAINT ... PRIMARY KEY
clause on a global temporary table.
Do not specify the ADD CONSTRAINT ... PRIMARY KEY
clause on a cache group table because cache group tables defined with a primary key must be defined in the CREATE CACHE GROUP
statement.
As the result of an ALTER TABLE ADD
statement, an additional read occurs for each new partition during queries. Therefore, altered tables may have slightly degraded performance. The performance can only be restored by dropping and recreating the table, or by using the ttMigrate create -c
-relaxedUpgrade
command, and restoring the table using the ttRestore -r
-relaxedUpgrade
command. Dropping the added column does not recover the lost performance or decrease the number of partitions.
When you use the ALTER TABLE DROP
statement to remove one or more columns from an existing table, dropped columns are removed from all current rows of the table. Subsequent SQL statements must not attempt to make any use of the dropped columns. You cannot drop columns that are in the table's primary key. You cannot drop columns that are in any of the table's foreign keys until you have dropped all foreign keys. You cannot drop columns that are indexed until all indexes on the column have been dropped. ALTER TABLE
cannot be used to drop all of the columns of a table. Use DROP TABLE
instead.
When a column is dropped from a table, all commands referencing that table need to be recompiled. An error may result at recompilation time if a dropped column was referenced. The application must re-prepare those commands, and rebuild any parameters and result columns. When a column is added to a table, the commands that contain a SELECT *
statement are invalidated. Only these commands must be re-prepared. All other commands continue to work as expected.
When you drop a column, the column space is not freed.
When you add a UNIQUE
constraint, there is overhead incurred (in terms of additional space and additional time). This is because an index is created to maintain the UNIQUE
constraint. You cannot use the DROP INDEX
statement to drop an index used to maintain the UNIQUE
constraint.
A UNIQUE
constraint and its associated index cannot be dropped if it is being used as a unique index on a replicated table.
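For example, using the books table that appears in the examples later in this section, the following statements add a unique constraint on the title column (which creates the index that maintains the constraint) and then drop it; the associated index is dropped along with the constraint:
ALTER TABLE books ADD UNIQUE (title);
ALTER TABLE books DROP UNIQUE (title);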
Use ALTER TABLE...USE RANGE INDEX
if your application performs range queries over a table's primary key.
Use ALTER TABLE...USE HASH INDEX
if your application performs exact match lookups on a table's primary key.
An error is generated if a table has no primary key and either the USE HASH INDEX
clause or the USE RANGE INDEX
clause is specified.
Make sure to stop the replication agent before adding or dropping a foreign key on a replicated table.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
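As a hedged sketch (the orders and customers table, column, and constraint names are placeholders), dropping and redefining a foreign key to enable the ON DELETE CASCADE triggered action looks like this:
ALTER TABLE orders DROP CONSTRAINT fk_cust;
ALTER TABLE orders ADD CONSTRAINT fk_cust FOREIGN KEY (cust_id) REFERENCES customers (cust_id) ON DELETE CASCADE;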
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
, since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths.
Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, a second rule is also enforced:
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
For ON DELETE CASCADE
with replication, the following restrictions apply:
The foreign keys specified with ON DELETE CASCADE
must match between the Master and subscriber for replicated tables. Checking is done at runtime. If there is an error, the receiver thread stops working.
All tables in the delete cascade tree have to be replicated if any table in the tree is replicated. This restriction is checked when the replication scheme is created or when a foreign key with ON DELETE CASCADE
is added to one of the replication tables. If an error is found, the operation is aborted. You may be required to drop the replication scheme first before trying to change the foreign key constraint.
The ALTER TABLE ADD/DROP CONSTRAINT
statement has the following restrictions:
When a foreign key is dropped, TimesTen also drops the index associated with the foreign key. Attempting to drop an index associated with a foreign key using the regular DROP INDEX
statement results in an error.
Foreign keys cannot be added or dropped on tables in a cache group.
Foreign keys cannot be added or dropped on views or temporary tables.
You cannot use ALTER TABLE
to drop a primary key constraint. You would have to drop and recreate the table in order to drop the constraint.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT ... SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
Call the ttAgingScheduleNow
procedure to schedule the aging process right away regardless if the aging state is ON
or OFF
.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
You cannot drop the column that is used for time-based aging.
Tables that are related by foreign keys must have the same aging policy.
For LRU aging, if a child row is not a candidate for aging, neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
Restrictions for column-based compression of tables:
You can add compressed column groups with the ALTER TABLE
statement only if the table was enabled for compression at table creation. You can add uncompressed columns to any table, including tables enabled for compression. Refer to "Column-based compression of tables (TimesTen Classic)" for more details on adding compressed column groups to a table.
You cannot modify columns of a compressed column group.
You can drop all columns within a compressed column group with the ALTER TABLE
command; when removing columns in a compressed column group, all columns in the compressed column group must be specified for removal.
You cannot use ALTER TABLE
to modify an existing uncompressed column to make it compressed. For example:
Command> create table mytab (a varchar2 (30), b int, c int) compress ((a,b) by dictionary); Command> alter table mytab add (d int) compress (c by dictionary); 2246: Cannot change compression clause for already defined column C The command failed.
Understanding partitions when using ALTER TABLE in TimesTen
When you create a table, an initial partition is created. If you ALTER
the table, and add additional columns, secondary partitions are created. There is one secondary partition created for each ALTER
TABLE
statement. For a column in secondary partitions, you cannot create a primary key constraint on the column or use the column for time-based aging.
You can use ttMigrate
-r
-relaxedUpgrade
to condense multiple partitions. This means the initial partition plus one or more secondary partitions are condensed into a single partition called the initial partition. Once you condense the partitions, you can then ALTER
the table and add a primary key constraint on the column or use the column for time-based aging. This is because the columns are no longer in secondary partitions but are now in the initial partition.
If your database is involved in replication and you want to condense multiple partitions, you must use the StoreAttribute
TABLE
DEFINITION
CHECKING
RELAXED
(of the CREATE
REPLICATION
statement). Run ttMigrate
-r
-relaxedUpgrade
on both the master and subscriber or on either the master or subscriber by using -duplicate
.
Use ttSchema
to view partition numbers for columns. ttSchema
displays secondary partition number 1 as partition 1, secondary partition number 2 as partition 2 and so on.
As an example, create a table MyTab
with 2 columns. Then ALTER
the table adding 2 columns (Col3
and Col4
) with the NOT
NULL
DEFAULT
clause.
Command> CREATE TABLE MyTab (Col1 NUMBER, Col2 VARCHAR2 (30)); Command> ALTER TABLE MyTab ADD (Col3 NUMBER NOT NULL DEFAULT 10, Col4 TIMESTAMP NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');
Use ttSchema
to verify Col3
and Col4
are in secondary partition 1.
ttschema -DSN sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00'); -- column COL3 partition 1 -- column COL4 partition 1
Attempt to add a primary key constraint on Col3
and time-based aging on Col4
. You see errors because you can neither add a primary key constraint nor add time-based aging to a column that is not in the initial partition.
Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3); 2419: All columns in a primary key constraint must be in the initial partition; column COL3 was added by ALTER TABLE The command failed. Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS; 3023: Aging column must be in the initial partition; column COL4 was added by ALTER TABLE The command failed.
Use ttMigrate
with the -relaxedUpgrade
option to condense the partitions. Then use ttSchema
to verify the partitions are condensed and there are no columns in secondary partition 1.
ttMigrate -c dsn=sampledb_1122 test.migrate Saving user PUBLIC User successfully saved. Saving table TESTUSER.MYTAB Saving rows... 0/0 rows saved. Table successfully saved. ttDestroy sampledb_1122 ttMigrate -r -relaxedUpgrade dsn=sampledb_1122 test.migrate Restoring table TESTUSER.MYTAB Restoring rows... 0/0 rows restored. Table successfully restored. ttSchema DSN=sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');
Now add a primary key constraint on Col3
and time-based aging on Col4
. The results are successful because Col3
and Col4
are in the initial partition as a result of ttMigrate
. Use ttSchema
to verify results.
Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3); Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS; ttschema sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00') AGING USE COL4 LIFETIME 3 days CYCLE 5 minutes ON; alter table TESTUSER.MYTAB add constraint PRIKEY primary key (COL3);
Examples for ALTER TABLE: TimesTen Classic
Add returnrate
column to parts
table.
ALTER TABLE parts ADD COLUMN returnrate DOUBLE;
Add numassign
and prevdept
columns to contractor
table.
ALTER TABLE contractor ADD ( numassign INTEGER, prevdept CHAR(30) );
Remove addr1
and addr2
columns from employee
table.
ALTER TABLE employee DROP ( addr1, addr2 );
Drop the UNIQUE constraint on the title column of the books table.
ALTER TABLE books DROP UNIQUE (title);
Add the x1
column to the t1
table with a default value of 5:
ALTER TABLE t1 ADD (x1 INT DEFAULT 5);
Change the default value of column x1
to 2:
ALTER TABLE t1 MODIFY (x1 DEFAULT 2);
Alter table primarykeytest
to add the primary key constraint c1
. Use the ttIsql
INDEXES
command to show that the primary key constraint c1
is created and a range index is used:
Command> CREATE TABLE primarykeytest (col1 TT_INTEGER NOT NULL); Command> ALTER TABLE primarykeytest ADD CONSTRAINT c1 PRIMARY KEY (col1); Command> INDEXES primarykeytest; Indexes on table SAMPLEUSER.PRIMARYKEYTEST: C1: unique range index on columns: COL1 1 index found. 1 index found on 1 table.
Alter table prikeyhash
to add the primary key constraint c2
using a hash index. Use the ttIsql
INDEXES
command to show that the primary key constraint c2
is created and a hash index is used:
Command> CREATE TABLE prikeyhash (col1 NUMBER (3,2) NOT NULL); Command> ALTER TABLE prikeyhash ADD CONSTRAINT c2 PRIMARY KEY (col1) USE HASH INDEX PAGES = 20; Command> INDEXES prikeyhash; Indexes on table SAMPLEUSER.PRIKEYHASH: C2: unique hash index on columns: COL1 1 index found. 1 table found.
Attempt to add a primary key constraint on a table already defined with a primary key. You see an error:
Command> CREATE TABLE oneprikey (col1 VARCHAR2 (30) NOT NULL, col2 TT_BIGINT NOT NULL, col3 CHAR (15) NOT NULL, PRIMARY KEY (col1,col2)); Command> ALTER TABLE oneprikey ADD CONSTRAINT c2 PRIMARY KEY (col1,col2); 2235: Table can have only one primary key The command failed.
Attempt to add a primary key constraint on a column that is not defined as NOT NULL
. You see an error:
Command> CREATE TABLE prikeynull (col1 CHAR (30)); Command> ALTER TABLE prikeynull ADD CONSTRAINT c3 PRIMARY KEY (col1); 2236: Nullable column cannot be part of a primary key The command failed.
This example illustrates the use of range and hash indexes. It creates the pkey
table with col1
as the primary key. A range index is created by default. The table is then altered to change the index on col1
to a hash index. The table is altered again to change the index back to a range index.
Command> CREATE TABLE pkey (col1 TT_INTEGER PRIMARY KEY, col2 VARCHAR2 (20)); Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique range index on columns: COL1 1 index found. 1 index found on 1 table.
Alter the pkey
table to use a hash index:
Command> ALTER TABLE pkey USE HASH INDEX PAGES = CURRENT; Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique hash index on columns: COL1 1 index found. 1 table found.
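While the primary key uses a hash index, the SET PAGES clause can resize that index. In this sketch the page count is only an illustrative value:
Command> ALTER TABLE pkey SET PAGES = 100;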
Alter the pkey
table to use a range index with the USE RANGE INDEX
clause:
Command> ALTER TABLE pkey USE RANGE INDEX; Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique range index on columns: COL1 1 index found. 1 table found.
This example generates an error when attempting to alter a table to define either a range or hash index on a column without a primary key.
Command> CREATE TABLE illegalindex (Ccl1 CHAR (20)); Command> ALTER TABLE illegalindex USE RANGE INDEX; 2810: The table has no primary key so cannot change its index type The command failed. Command> ALTER TABLE illegalindex USE HASH INDEX PAGES = CURRENT; 2810: The table has no primary key so cannot change its index type The command failed.
These examples show how time resolution works with aging. In this example, lifetime is three days.
If (SYSDATE - ColumnValue) <= 3
, do not age out the row.
If (SYSDATE - ColumnValue) > 3
, then the row is a candidate for aging.
If (SYSDATE - ColumnValue) = 3 days, 22 hours, then the row is not aged out because lifetime was specified in days. The row would be aged out if lifetime had been specified as 72 hours.
This example alters a table by adding LRU aging. The table has no previous aging policy. The aging state is ON
by default.
ALTER TABLE agingdemo3 ADD AGING LRU; Command> DESCRIBE agingdemo3; Table USER.AGINGDEMO3: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE Aging lru on 1 table found. (primary key columns are indicated with *)
This example alters a table by adding time-based aging. The table has no previous aging policy. The agingcolumn
column is used for aging. LIFETIME
is 2 days. CYCLE
is 30 minutes.
ALTER TABLE agingdemo4 ADD AGING USE agingcolumn LIFETIME 2 DAYS CYCLE 30 MINUTES; Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on
This example illustrates that after you create an aging policy, you cannot change it. You must drop aging and redefine.
CREATE TABLE agingdemo5 (agingid NUMBER NOT NULL PRIMARY KEY ,name VARCHAR2 (20) ,agingcolumn TIMESTAMP NOT NULL ) AGING USE agingcolumn LIFETIME 3 DAYS OFF; ALTER TABLE agingdemo5 ADD AGING LRU; 2980: Cannot add aging policy to a table with an existing aging policy. Have to drop the old aging first The command failed.
Drop aging on the table and redefine with LRU aging.
ALTER TABLE agingdemo5 DROP AGING; ALTER TABLE agingdemo5 ADD AGING LRU; Command> DESCRIBE agingdemo5; Table USER.AGINGDEMO5: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging lru on 1 table found. (primary key columns are indicated with *)
This example alters a table by setting the aging state to OFF
. The table has been defined with a time-based aging policy. If you set the aging state to OFF
, aging is not done automatically. This is useful to use an external scheduler to control the aging process. Set aging state to OFF
and then call the ttAgingScheduleNow
procedure to start the aging process.
Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on ALTER TABLE AgingDemo4 SET AGING OFF;
Note that when you describe agingdemo4
, the aging policy is defined and the aging state is set to OFF
.
Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes off 1 table found. (primary key columns are indicated with *)
Call ttAgingScheduleNow
to invoke aging with an external scheduler:
Command> CALL ttAgingScheduleNow ('agingdemo4');
Attempt to alter a table adding the aging column and then use that column for time-based aging. An error is generated.
Command> DESCRIBE x; Table USER1.X: Columns: *ID TT_INTEGER NOT NULL 1 table found. (primary key columns are indicated with *) Command> ALTER TABLE x ADD COLUMN t TIMESTAMP; Command> ALTER TABLE x ADD AGING USE t LIFETIME 2 DAYS; 2993: Aging column cannot be nullable The command failed.
Attempt to alter the LIFETIME
clause for a table defined with time-based aging. The aging column is defined with data type TT_DATE
. An error is generated because the LIFETIME
unit is not expressed in DAYS
.
Command> CREATE TABLE aging1 (col1 TT_DATE NOT NULL) AGING USE col1 LIFETIME 2 DAYS; Command> ALTER TABLE aging1 SET AGING LIFETIME 2 HOURS; 2977: Only DAY lifetime unit is allowed with a TT_DATE column The command failed.
Alter the employees
table to add a new compressed column of state
, which contains the full name of the state. Note that the employees
table already has a compressed column group consisting of job_id
and manager_id
.
Command> ALTER TABLE employees ADD COLUMN state VARCHAR2(20) COMPRESS (state BY DICTIONARY); Command> DESCRIBE employees; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) STATE VARCHAR2 (20) INLINE COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY, STATE BY DICTIONARY ) 1 table found. (primary key columns are indicated with *)
The following example drops the compressed column state
from the employees
table:
Command> ALTER TABLE employees DROP state; Command> DESCRIBE employees; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY ) 1 table found. (primary key columns are indicated with *)
The ALTER USER
statement enables you to change a user's password. It also enables you to change the profile for the user, to lock or unlock the user's account, and to expire the user's password. A user with the ADMIN
privilege can perform these operations.
This statement also enables you to change a user from internal to external or from external to internal.
No privilege is required to change the user's own password.
ADMIN
privilege is required for all other operations.
This is the syntax for ALTER
USER
...IDENTIFIED
BY
. Ensure that you specify at least one of these clauses: IDENTIFIED
BY
, PROFILE
, ACCOUNT
, or PASSWORD
EXPIRE
.
ALTER USER user [IDENTIFIED BY {password | "password"}] [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}] [PASSWORD EXPIRE]
This is the syntax for ALTER
USER
...IDENTIFIED
EXTERNALLY
. Ensure that you specify at least one of these clauses: IDENTIFIED
EXTERNALLY
, PROFILE
, or ACCOUNT
.
ALTER USER user [IDENTIFIED EXTERNALLY] [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}]
Parameter | Description |
---|---|
user |
Name of the user to alter. |
IDENTIFIED BY password|" password " |
Specifies an internal user and the password for the internal user. |
IDENTIFIED EXTERNALLY |
Specifies that the user is an external user. |
PROFILE profile |
Use the PROFILE clause to specify the name of the profile (designated by profile ) that you want to assign to the user. The profile sets the limits for the password parameters for the user. See "CREATE PROFILE" for information on these password parameters. You can specify a PROFILE clause for external users, but the password parameters have no effect for these users. |
ACCOUNT [LOCK |UNLOCK ] |
Specify ACCOUNT LOCK to lock the user's account and disable connections to the database. Specify ACCOUNT UNLOCK to unlock the user's account and enable connections to the database. The default is ACCOUNT UNLOCK . |
PASSWORD EXPIRE |
Specify PASSWORD EXPIRE if you want the user's password to expire. This setting forces a user with ADMIN privileges to change the password before the user can connect to the database. This clause is not valid for an externally identified user (as denoted by the IDENTIFIED EXTERNALLY clause). |
Database users can be internal or external.
Internal users are defined for a TimesTen database.
External users are defined by the operating system. External users cannot be assigned a TimesTen password.
Passwords are case-sensitive.
Use the PROFILE
clause to change the profile for a user. See "CREATE PROFILE" for details.
Use the ACCOUNT
LOCK
or ACCOUNT
UNLOCK
to change the lock settings for the user account.
Use the PASSWORD
EXPIRE
clause to expire the user's password and force a password change before the user can connect to the database.
You can alter a user over a client/server connection if the connection is encrypted with TLS. See "Transport Layer Security for TimesTen Client/Server" in the Oracle TimesTen In-Memory Database Security Guide for details.
When replication is configured, this statement is replicated.
Example 1: Change the user's profile
This example creates the user1
user and assigns the user1
user the profile1
profile. The example then uses the ALTER
USER
statement to change the user1
user's profile to profile2
.
Command> CREATE USER user1 IDENTIFIED BY user1 PROFILE profile1; User created. Command> ALTER USER user1 PROFILE profile2; User altered.
Query the dba_users
system view to verify the user1
profile has been changed to profile2
.
Command> SELECT profile FROM dba_users WHERE username = 'USER1'; < PROFILE2 > 1 row found.
Example 2: Lock and unlock a user's account
This example creates the user2
user. It then uses the ALTER
USER
statement to lock and then unlock the user2
user's account.
Command> CREATE USER user2 IDENTIFIED BY user2 PROFILE profile1; User created. Command> ALTER USER user2 ACCOUNT LOCK; User altered.
Grant the CONNECT
privilege to user2
.
Command> GRANT CONNECT TO user2;
Attempt to connect to the database as user2
. The user2
account is locked so the connection fails.
Command> connect adding "UID=user2;PWD=user2" as user2; 15179: the account is locked The command failed.
As the instance administrator, reconnect to the database and use the ALTER
USER
statement to unlock the user2
account.
none: Command> use database1 database1: Command> ALTER USER user2 ACCOUNT UNLOCK; User altered.
Attempt to connect to the database as the user2
user. The connection succeeds.
database1: Command> connect adding "UID=user2;PWD=user2" as user2; Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Example 3: Expire a user's password
This example uses the ALTER
USER
statement to change the user2
user's account to expire the password. A user with ADMIN
privilege must change the user2
password before user2
can connect to the database.
Command> ALTER USER user2 PASSWORD EXPIRE; User altered.
Attempt to connect to the database as user2
. The user2
password must be changed before the user2
user can connect to the database.
Command> connect adding "UID=user2;PWD=user2" as user2; 15180: the password has expired The command failed.
As the instance administrator, reconnect to the database and use the ALTER
USER
statement to change the user2
password.
none: Command> use database1 database1: Command> ALTER USER user2 IDENTIFIED BY newuser2password; User altered.
Attempt to connect to the database as the user2
user. The connection succeeds.
database1: Command> connect adding "UID=user2;PWD=newuser2password" as user2; Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Example 4: Change a user from external to internal and internal to external
This example uses the ALTER
USER
statement to change the user2
internal user to an external user and then back to an internal user.
Command> ALTER USER user2 IDENTIFIED EXTERNALLY; User altered.
Use the ALTER
USER
statement to change the user2
external user back to an internal user.
Command> ALTER USER user2 IDENTIFIED BY user2_password_change; User altered.
Use the CALL
statement to execute a TimesTen built-in procedure or to execute a PL/SQL procedure or function that is standalone or part of a package from within SQL.
The privileges required for executing each TimesTen built-in procedure are listed in the description of each procedure in the "Built-In Procedures" section in the Oracle TimesTen In-Memory Database Reference.
No privileges are required for an owner calling its own PL/SQL procedure or function that is standalone or part of a package using the CALL
statement. For all other users, the EXECUTE
privilege on the procedure or function or on the package in which it is defined is required.
To call a TimesTen built-in procedure:
CALL TimesTenBuiltIn [( arguments )]
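For example, the following call runs a built-in procedure from ttIsql (ttVersion takes no arguments and reports the TimesTen release):
Command> CALL ttVersion;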
When calling PL/SQL procedures or functions that are standalone or part of a package, you can either call these by name or as the result of an expression.
To call a PL/SQL procedure:
CALL [Owner.][Package.]ProcedureName [( arguments )]
To call a PL/SQL function and return its value into a parameter:
CALL [Owner.][Package.]FunctionName [( arguments )] INTO :return_param
Note:
A user's own PL/SQL procedure or function takes precedence over a TimesTen built-in procedure with the same name.Parameter | Description |
---|---|
TimesTenBuiltIn |
Name of the TimesTen built-in procedure. For a full list of TimesTen built-in procedures, see "Built-In Procedures" in the Oracle TimesTen In-Memory Database Reference. |
[ Owner .] ProcedureName |
Name of the PL/SQL procedure. You can optionally specify the owner of the procedure. |
[ Owner .] FunctionName |
Name of the PL/SQL function. You can optionally specify the owner of the function. |
arguments |
Specify 0 or more arguments for the PL/SQL procedure or function. |
INTO |
If the routine is a function, the INTO clause is required. |
return_param |
Specify the host variable that stores the return value of the function. |
Detailed information on how to execute PL/SQL procedures or functions with the CALL
statement in TimesTen is provided in "Executing procedures and functions" in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide, "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database C Developer's Guide, or "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database Java Developer's Guide.
The following is the definition of the mytest
function:
create or replace function mytest return number is begin return 1; end; /
Perform the following to execute the mytest
function in a CALL
statement:
Command> variable n number; Command> call mytest() into :n; Command> print n; N : 1
The following example creates a function that returns the salary of the employee whose employee ID is specified as input, then calls the function and displays the result that was returned.
Command> CREATE OR REPLACE FUNCTION get_sal (p_id employees.employee_id%TYPE) RETURN NUMBER IS v_sal employees.salary%TYPE := 0; BEGIN SELECT salary INTO v_sal FROM employees WHERE employee_id = p_id; RETURN v_sal; END get_sal; / Function created. Command> variable n number; Command> call get_sal(100) into :n; Command> print n; N : 24000
The COMMIT
statement ends the current transaction and makes permanent all changes performed in the transaction.
The COMMIT
statement enables the following optional keyword:
Parameter | Description |
---|---|
[WORK] |
Optional clause supported for compliance with the SQL standard. COMMIT and COMMIT WORK are equivalent. |
Until you commit a transaction:
You can see any changes you have made during the transaction but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
You can roll back (undo) changes made during the transaction with the ROLLBACK
statement.
This statement releases transaction locks.
For passthrough, the Oracle Database transaction will also be committed.
A commit closes all open cursors.
Insert a row into regions
table of the HR
schema and commit transaction. First set autocommit to 0:
Command> SET AUTOCOMMIT 0; Command> INSERT INTO regions VALUES (5,'Australia'); 1 row inserted. Command> COMMIT; Command> SELECT * FROM regions; < 1, Europe > < 2, Americas > < 3, Asia > < 4, Middle East and Africa > < 5, Australia > 5 rows found.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
This statement creates an active standby pair. It includes an active master database, a standby master database, and may also include one or more read-only subscribers. The active master database replicates updates to the standby master database, which propagates the updates to the subscribers.
CREATE ACTIVE STANDBY PAIR FullStoreName, FullStoreName [ReturnServiceAttribute] [SUBSCRIBER FullStoreName [,...]] [STORE FullStoreName [StoreAttribute [...]]] [NetworkOperation [...] ] [{ INCLUDE | EXCLUDE }{TABLE [[Owner.]TableName [,...]]| CACHE GROUP [[Owner.]CacheGroupName [,...]]| SEQUENCE [[Owner.]SequenceName [,...]]} [,...]]
Syntax for ReturnServiceAttribute
:
{ RETURN RECEIPT [BY REQUEST] | RETURN TWOSAFE [BY REQUEST] | NO RETURN }
Syntax for StoreAttribute
:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED DURABLE COMMIT {ON | OFF} RESUME RETURN Milliseconds LOCAL COMMIT ACTION {NO ACTION | COMMIT} RETURN WAIT TIME Seconds COMPRESS TRAFFIC {ON | OFF} PORT PortNumber TIMEOUT Seconds FAILTHRESHOLD Value TABLE DEFINITION CHECKING {RELAXED|EXACT}
Syntax for NetworkOperation
:
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]
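As a minimal sketch of this syntax (the store and subscriber names master1, master2, and sub1 are placeholders rather than part of the reference material):
CREATE ACTIVE STANDBY PAIR master1, master2 RETURN RECEIPT SUBSCRIBER sub1 STORE master1 PORT 21000 TIMEOUT 30;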
Parameter | Description |
---|---|
FullStoreName |
The database, specified as one of the following:
For example, if the database path is This is the database file name specified in the
|
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to an active master database are blocked until the transaction is received by the standby master database.
Specifying |
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to an active master database are blocked until the transaction is committed on the standby master database.
Specifying For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures .
Specifying This failure policy can be specified for either the |
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
See "Establishing return service failure/recovery policies" in Oracle TimesTen In-Memory Database Replication Guide. |
RESUME RETURN Milliseconds |
If DISABLE RETURN has disabled return service blocking, this attribute sets the policy for when to re-enable the return service. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. A value of 0 (zero) means that there is no waiting. The default value is 10 seconds.
The application can override this timeout setting for a specific transaction by using the ttRepSyncSet built-in procedure. |
SUBSCRIBER FullStoreName [,...]] |
A database that receives updates from a master database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
STORE FullStoreName [ StoreAttribute [...]] |
Defines the attributes for the specified database. Attributes include PORT , TIMEOUT and FAILTHRESHOLD . FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
This is a StoreAttribute clause.
Specifies the type of table definition checking that occurs on the subscriber.
The default is RELAXED. |
{INCLUDE | EXCLUDE}
|
An active standby pair replicates an entire database by default.
Use the INCLUDE clause to replicate only the listed objects, or the EXCLUDE clause to replicate the entire database except the listed objects. |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
DURABLE COMMIT {ON | OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. It also enables durable commits when the ttRepStateSave built-in procedure has marked the standby database as failed. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state. The value 0 means "No Limit." This is the default.
See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
LOCAL COMMIT ACTION {NO ACTION | COMMIT} |
Specifies the default action to be taken for a return twosafe transaction in the event of a timeout.
Note: This attribute is valid only when the RETURN TWOSAFE return service is enabled.
This setting can be overridden for specific transactions by calling the ttRepSyncSet built-in procedure. |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
PORT PortNumber |
The TCP/IP port number on which the replication agent for the database listens for connections. If not specified, the replication agent automatically allocates a port number.
In an active standby pair, the standby master database listens for updates from the active master database. Read-only subscribers listen for updates from the standby master database. |
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Denotes the NetworkOperation clause. If specified, enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ROUTE clause, you can define the following:
When using active standby pairs, specify a ROUTE clause for each direction of replication, because either database can act as the master. |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify the address in dot notation or canonical format for IPv4, or in colon notation for IPv6.

This clause can be specified more than once. |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required in the NetworkOperation clause. |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from remote replication agents. The default is 120 seconds.
In an active standby pair, the active master database sends messages to the standby master database. The standby master database sends messages to the read-only subscribers. Note: For large transactions that may cause a delayed response from the remote replication agent, the agent scales the timeout based on the size of the transaction. This scaling is disabled if you set |
After you create an active standby pair, make one of your databases the active database. To accomplish this, call ttRepStateSet
('ACTIVE')
. Then use ttRepAdmin
to duplicate the active database to the second database. When the operation is successful, the second database becomes the standby database. For more information, see "Setting up an active standby pair with no cache groups" in Oracle TimesTen In-Memory Database Replication Guide.
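As a minimal sketch of this step (assuming the DSN of the intended active database is rep1), connect to rep1 with ttIsql and call the built-in procedure:

Command> CALL ttRepStateSet('ACTIVE');

Then run the ttRepAdmin utility with its duplicate option against the second database; the exact options depend on your host names and credentials, as described in the guide cited above.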
The SUBSCRIBER
clause lists one or more read-only subscriber databases. You can designate up to 127 subscriber databases.
Replication between the active master database and the standby master database can be RETURN TWOSAFE
, RETURN RECEIPT
, or asynchronous. RETURN TWOSAFE
ensures no transaction loss.
Use the INCLUDE
and EXCLUDE
clauses to exclude the listed tables, sequences and cache groups from replication, or to include only the listed tables, sequences and cache groups, excluding all others.
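For instance, a hedged sketch that replicates the entire database except two tables (the owner and table names are hypothetical):

CREATE ACTIVE STANDBY PAIR rep1, rep2
  EXCLUDE TABLE ttuser.staging, ttuser.scratch;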
If the active standby pair has the RETURN TWOSAFE
attribute and replicates a cache group, a transaction may fail if:
The transaction that is being replicated contains an ALTER TABLE
statement or an ALTER CACHE GROUP
statement.
The transaction contains an INSERT
, UPDATE
or DELETE
statement on a replicated table, replicated cache group or an asynchronous writethrough cache group.
You can use an active standby pair to replicate read-only cache groups and asynchronous writethrough (AWT) cache groups. You cannot use an active standby pair to replicate synchronous writethrough (SWT) cache groups or user managed cache groups.
You cannot use the EXCLUDE
clause for AWT cache groups.
You cannot execute the CREATE ACTIVE STANDBY PAIR
statement when Oracle Clusterware is used with TimesTen.
This example creates an active standby pair whose master databases are rep1
and rep2
. There is one subscriber, rep3
. The type of replication is RETURN RECEIPT
. The statement also sets PORT
and TIMEOUT
attributes for the master databases.
CREATE ACTIVE STANDBY PAIR rep1, rep2 RETURN RECEIPT SUBSCRIBER rep3 STORE rep1 PORT 21000 TIMEOUT 30 STORE rep2 PORT 22000 TIMEOUT 30;
Specify NetworkOperation
clause to control network interface:
CREATE ACTIVE STANDBY PAIR rep1,rep2
  ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2"
    MASTERIP "1.1.1.1" PRIORITY 1
    SUBSCRIBERIP "2.2.2.2" PRIORITY 1
  ROUTE MASTER rep2 ON "machine2" SUBSCRIBER rep1 ON "machine1"
    MASTERIP "2.2.2.2" PRIORITY 1
    SUBSCRIBERIP "1.1.1.1" PRIORITY 1;
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The CREATE CACHE GROUP
statement:
Creates the table defined by the cache group.
Loads all new information associated with the cache group in the appropriate system tables.
A cache group is a set of tables related through foreign keys that cache data from tables in an Oracle database. There is one root table that does not reference any of the other tables. All other cache tables in the cache group reference exactly one other table in the cache group. In other words, the foreign key relationships form a tree.
A cache table is a set of rows satisfying the conditions:
The rows constitute a subset of the rows of a vertical partition of an Oracle database table.
The rows are stored in a TimesTen table with the same name as the Oracle database table.
If a database has more than one cache group, the cache groups must correspond to different Oracle database (and TimesTen) tables.
Cache group instance refers to a row in the root table and all the child table rows related directly or indirectly to the root table rows.
User managed and system managed cache groups
A cache group can be either system managed or user managed.
A system managed cache group is fully managed by TimesTen and has fixed properties. System managed cache group types include:
Read-only cache groups are updated in the Oracle database, and the updates are propagated from the Oracle database to the cache.
Asynchronous writethrough (AWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions continue executing on the cache without waiting for a commit on the Oracle database.
Synchronous writethrough (SWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions are committed on the cache after notification that a commit has occurred on the Oracle database.
Because TimesTen manages system managed cache groups, including loading and unloading the cache group, certain statements and clauses cannot be used in the definition of these cache groups, including:
WHERE
clauses in AWT and SWT cache group definitions
READONLY
, PROPAGATE
and NOT PROPAGATE
in cache table definitions
AUTOREFRESH
in AWT and SWT cache group definitions
The FLUSH CACHE GROUP
and REFRESH CACHE GROUP
operations are not allowed for AWT and SWT cache groups.
You must stop the replication agent before creating an AWT cache group.
A user managed cache group must be managed by the application or user. PROPAGATE
in a user managed cache group is synchronous. The table-level READONLY
keyword can only be used for user managed cache groups.
In addition, both TimesTen and Oracle Database must be able to parse all WHERE
clauses.
Explicitly loaded cache groups and dynamic cache groups
Cache groups can be explicitly or dynamically loaded.
In cache groups that are explicitly loaded, new cache instances are loaded manually into the TimesTen cache tables from the Oracle database tables using a LOAD CACHE GROUP
or REFRESH CACHE GROUP
statement or automatically using an autorefresh operation.
In a dynamic cache group, new cache instances can be loaded manually into the TimesTen cache tables by using a LOAD CACHE GROUP
statement or on demand using a dynamic load operation. In a dynamic load operation, data is automatically loaded into the TimesTen cache tables from the cached Oracle database tables when a SELECT
, UPDATE
, DELETE
or INSERT
statement is issued on one of the cache tables, where the data is not present in the cache table but does exist in the cached Oracle database table. A manual refresh or automatic refresh operation on a dynamic cache group can result in the updating or deleting of existing cache instances, but not in the loading of new cache instances.
Any cache group type (read-only, asynchronous writethrough, synchronous writethrough, user managed) can be defined as an explicitly loaded cache group.
Any cache group type can be defined as a dynamic cache group except a user managed cache group that has both the AUTOREFRESH
cache group attribute and the PROPAGATE
cache table attribute.
Data in a dynamic cache group is aged out because LRU aging is defined by default. Use the ttAgingLRUConfig
built-in procedure to override the space usage thresholds for LRU aging. You can also define time-based aging on a dynamic cache group to override LRU aging.
For more information on explicitly loaded and dynamic cache groups, see "Loading data into a cache group: Explicitly loaded and dynamic cache groups" in Oracle TimesTen Application-Tier Database Cache User's Guide. For more information about the dynamic load operation, see "Dynamically loading a cache instance" in that document.
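As a minimal sketch of a dynamic cache group, assuming a cached Oracle Database table named orders owned by the current user with a numeric primary key:

CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP dyn_orders
FROM orders
  (order_id NUMBER NOT NULL,
   cust_id  NUMBER NOT NULL,
   total    NUMBER,
   PRIMARY KEY (order_id));

With this definition, a SELECT that references an order_id not yet present in the cache table triggers a dynamic load from the cached Oracle database table.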
The user executing the statement must have the CREATE CACHE GROUP or CREATE ANY CACHE GROUP privilege, and CREATE TABLE (if all tables in the cache group are owned by the current user) or CREATE ANY TABLE (if at least one of the tables in the cache group is not owned by the current user).
There are CREATE CACHE GROUP
statements for each type of cache group:
For read-only cache groups, the syntax is:
CREATE [DYNAMIC] READONLY CACHE GROUP [Owner.]GroupName
  [AUTOREFRESH
    [MODE {INCREMENTAL | FULL}]
    [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S]}]
    [STATE {ON|OFF|PAUSED}]
  ]
FROM {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [,FOREIGN KEY(ColumnName [,...])
      REFERENCES RefTableName (ColumnName [,...])
      [ON DELETE CASCADE]
    [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
    [AGING {LRU|
      USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
      [CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}]
      } [ON|OFF]
    ]
    [WHERE ExternalSearchCondition]
  } [,...]
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP
For asynchronous writethrough cache groups, the syntax is:
CREATE [DYNAMIC] [ASYNCHRONOUS] WRITETHROUGH CACHE GROUP [Owner.]GroupName
FROM {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [FOREIGN KEY(ColumnName [,...])
      REFERENCES RefTableName (ColumnName [,...])]
    [ON DELETE CASCADE]
    [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
    [AGING {LRU|
      USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
      [CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}]
      } [ON|OFF]
    ]
  } [,...]
CREATE SYNCHRONOUS WRITETHROUGH CACHE GROUP
For synchronous writethrough cache groups, the syntax is:
CREATE [DYNAMIC] SYNCHRONOUS WRITETHROUGH CACHE GROUP [Owner.]GroupName
FROM {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [FOREIGN KEY(ColumnName [,...])
      REFERENCES RefTableName (ColumnName [,...])]
    [ON DELETE CASCADE]
    [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
    [AGING {LRU|
      USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
      [CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}]
      } [ON|OFF]
    ]
  } [,...]
CREATE USERMANAGED CACHE GROUP
For user managed cache groups, the syntax is:
CREATE [DYNAMIC] [USERMANAGED] CACHE GROUP [Owner.]GroupName
  [AUTOREFRESH
    [MODE {INCREMENTAL | FULL}]
    [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S]}]
    [STATE {ON|OFF|PAUSED}]
  ]
FROM {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [FOREIGN KEY(ColumnName[,...])
      REFERENCES RefTableName (ColumnName [,...])]
    [ON DELETE CASCADE]
    [, {READONLY | PROPAGATE | NOT PROPAGATE}]
    [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
    [AGING {LRU|
      USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
      [CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}]
      } [ON|OFF]
    ]
    [WHERE ExternalSearchCondition]
  } [,...]
Following are the parameters for the cache group definition before the FROM
keyword:
Parameter | Description |
---|---|
[ Owner .] GroupName |
Owner and name assigned to the new cache group. |
[DYNAMIC] |
If specified, a dynamic cache group is created. |
AUTOREFRESH |
The AUTOREFRESH parameter automatically propagates changes from the Oracle database to the cache group. For details, see "AUTOREFRESH in cache groups". |
MODE [INCREMENTAL | FULL] |
Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle database since the last propagation. If the FULL clause is specified, TimesTen updates all rows in the cache with each autorefresh. The default autorefresh mode is INCREMENTAL . |
INTERVAL IntervalValue |
Indicates the interval at which autorefresh should occur in units of minutes, seconds or milliseconds. IntervalValue is an integer value that specifies how often autorefresh should be scheduled, in minutes, seconds, or milliseconds. The default IntervalValue value is 5 minutes. An autorefresh interval set to 0 milliseconds enables continuous autorefresh, where the next autorefresh cycle is scheduled immediately after the last autorefresh cycle has ended. See "AUTOREFRESH cache group attribute" in the Oracle TimesTen Application-Tier Database Cache User's Guide for more information.
If the specified interval is not long enough for an autorefresh to complete, a runtime warning is generated and the next autorefresh waits until the current one finishes. An informational message is generated in the support log if the wait queue reaches 10. |
STATE [ON | OFF | PAUSED] |
Specifies whether autorefresh should be ON or OFF or PAUSED when the cache group is created. You can alter this setting later by using the ALTER CACHE GROUP statement. By default, the AUTOREFRESH state is PAUSED . |
FROM |
Designates one or more table definitions for the cache group. |
Everything after the FROM
keyword comprises the definitions of the Oracle database tables cached in the cache group. The syntax for each table definition is similar to that of a CREATE TABLE
statement. However, primary key constraints are required for the cache group table.
Table definitions have the following parameters.
Parameter | Description |
---|---|
[ Owner .] TableName |
Owner and name to be assigned to the new table. If you do not specify the owner name, your login becomes the owner name for the new table. |
ColumnDefinition |
Name of an individual column in a table, its data type and whether it is nullable. Each table must have at least one column. |
PRIMARY KEY ( ColumnName [,...]) |
Specifies that the table has a primary key. Primary key constraints are required for a cache group. ColumnName is the name of the column that forms the primary key for the table to be created. Up to 16 columns can be specified for the primary key. Cannot be specified with UNIQUE in one specification. |
FOREIGN KEY ( ColumnName [,...]) |
Specifies that the table has a foreign key. ColumnName is the name of the column that forms the foreign key for the table to be created. |
REFERENCES RefTableName ( ColumnName [,...]) |
Specifies the table which the foreign key is associated with. RefTableName is the name of the referenced table and ColumnName is the name of the column referenced in the table. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
READONLY |
Specifies that changes cannot be made on the cached table. |
PROPAGATE| NOT PROPAGATE |
Specifies whether changes to the cached table are automatically propagated to the corresponding Oracle database table at commit time. |
UNIQUE HASH ON ( HashColumnName ) |
Specifies that a hash index is created on this table. HashColumnName identifies the column that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key. |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for PrimaryPages determines the number of hash buckets allocated for the index. If your estimate for PrimaryPages is too small, hash collisions increase and performance may be degraded. For more information on hash indexes, see "CREATE TABLE". |
WHERE ExternalSearchCondition |
The WHERE clause evaluated by the Oracle database for the cache group table. This WHERE clause is added to every LOAD and REFRESH operation on the cache group. It may not directly reference other tables. It is parsed by both TimesTen and Oracle Database. See "Using a WHERE clause" in Oracle TimesTen Application-Tier Database Cache User's Guide. |
AGING LRU [ON | OFF] |
If specified, defines the LRU aging policy on the root table. The LRU aging policy applies to all tables in the cache group. The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF ) and the LRU aging attributes.
Set the aging state to either ON or OFF. The default is ON. In dynamic cache groups, LRU aging is defined by default. LRU aging cannot be specified on a cache group with the autorefresh attribute, unless the cache group is dynamic. LRU attributes are defined by calling the ttAgingLRUConfig built-in procedure. For more information about LRU aging, see "Implementing aging in a cache group" in Oracle TimesTen Application-Tier Database Cache User's Guide. |
AGING USE ColumnName ...[ON|OFF] |
If specified, defines the time-based aging policy on the root table. The time-based aging policy applies to all tables in the cache group. The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF ) and the time-based aging attributes.
Set the aging state to either ON or OFF. The default is ON. Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and CYCLE clauses. Specify ColumnName as the name of the column used for time-based aging. The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL). For more information about time-based aging, see "Implementing aging in a cache group" in Oracle TimesTen Application-Tier Database Cache User's Guide. |
LIFETIME Num1 {SECOND[S]|MINUTE[S]|HOUR[S]|DAY[S]} |
LIFETIME is a time-based aging attribute and is a required clause.
Specify the LIFETIME clause after the AGING USE ColumnName clause. The LIFETIME value specifies the minimum amount of time that data is kept in the table before it becomes a candidate for aging. Specify Num1 as a positive integer number of time units. The concept of time resolution is supported. If DAY[S] is specified as the unit, for example, all rows whose aging column values fall within the same day are aged out at the same time. |
[CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S]| DAY[S]}] |
CYCLE is a time-based aging attribute and is optional. Specify the CYCLE clause after the LIFETIME clause.
The CYCLE value determines how often the system examines rows to see whether they have exceeded the specified LIFETIME and should be aged out. Specify Num2 as a positive integer number of time units. If you do not specify the CYCLE clause, the default is 5 minutes. If the aging state is OFF, rows are not aged automatically; the aging policy is retained and takes effect when the state is set to ON. |
Two cache groups cannot have the same owner name and group name. If you do not specify the owner name, your login becomes the owner name for the new cache group.
Neither a cache table name nor a cache group name can contain #.
Dynamic parameters are not allowed in the WHERE
clause.
Oracle Database temporary tables cannot be cached.
Each table must correspond to a table in the Oracle database.
In the Oracle database, you can define a parent/child relationship and then insert a null value into the foreign key column of the child table. This means this row in the child table references a null parent. You can then create a cache group and cache the parent/child relationship of the Oracle database tables. However, if you load data from the Oracle database tables into the cache group, the row that contains the null value of the foreign key column is not loaded. TimesTen recommends that you do not create cache groups if the tables you cache define a parent/child relationship in which the foreign key represents a null parent.
You cannot use lowercase delimited identifiers to name your cache tables. Table names in TimesTen are case-insensitive and are stored as uppercase. The name of the cache table must be the same as the Oracle database table name. Uppercase table names on TimesTen will not match mixed case table names on the Oracle database. As a workaround, create a synonym for your table in the Oracle database and use that synonym as the table name for the cache group. This workaround is not available for read-only cache groups or cache groups with the AUTOREFRESH
parameter set.
Each column in the cache table must match each column in the Oracle database table, both in name and in data type. See "Mappings between Oracle Database and TimesTen data types" in Oracle TimesTen Application-Tier Database Cache User's Guide. In addition, each column name must be fully qualified with an owner and table name when referenced in a WHERE
clause.
The WHERE
clause can only directly refer to the cache group table. Tables that are not in the cache group can only be referenced with a subquery.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the CREATE CACHE GROUP
, LOAD CACHE GROUP
, UNLOAD CACHE GROUP
, REFRESH CACHE GROUP
or FLUSH CACHE GROUP
statements. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
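For illustration, a hedged sketch of a read-only cache group that caches only a subset of rows; the owner, table, and column names are hypothetical, and the column in the WHERE clause is fully qualified as described above:

CREATE READONLY CACHE GROUP recent_customers
  AUTOREFRESH INTERVAL 15 MINUTES
FROM sampleuser.customer
  (custid INT NOT NULL,
   name   CHAR(100),
   zip    INT,
   PRIMARY KEY (custid))
  WHERE sampleuser.customer.custid < 1000;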
By default, a range index is created to enforce the primary key for a cache group table. Use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries over a cache group table's primary key, then choose a range index for that cache group table by omitting the UNIQUE HASH
clause.
If, however, your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause. See "CREATE TABLE" for more information on the UNIQUE HASH
clause.
Use ALTER TABLE
to change the representation of the primary key index for a table.
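As a sketch of such a change, assuming a table named parts whose primary key currently uses the default range index, and assuming the USE HASH INDEX and USE RANGE INDEX clauses of ALTER TABLE:

ALTER TABLE parts USE HASH INDEX PAGES = 1000;
ALTER TABLE parts USE RANGE INDEX;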
For cache group tables with the PROPAGATE
attribute and for tables of SWT and AWT cache groups, foreign keys specified with ON DELETE CASCADE
must be a proper subset of foreign keys with ON DELETE CASCADE
in the Oracle database tables.
You cannot execute the CREATE CACHE GROUP
statement under the serializable isolation level. An error is returned if you attempt to do so.
The AUTOREFRESH
parameter automatically propagates changes from the Oracle database to TimesTen cache groups. For explicitly loaded cache groups, deletes, updates and inserts are automatically propagated from the Oracle database to the cache group. For dynamic cache groups, only deletes and updates are propagated. Inserts to the specified Oracle database tables are not propagated to dynamic cache groups. They are dynamically loaded into TimesTen Cache when referenced by the application. They can also be explicitly loaded by the application.
To use autorefresh with a cache group, you must specify AUTOREFRESH
when you create the cache group. You can change the MODE
, STATE
and INTERVAL
AUTOREFRESH
settings after a cache group has been created by using the ALTER CACHE GROUP
command. Once a cache group has been specified as either AUTOREFRESH
or PROPAGATE
, you cannot change these attributes.
TimesTen supports FULL
or INCREMENTAL AUTOREFRESH
. In FULL
mode, the entire cache is periodically unloaded and then reloaded. In INCREMENTAL
mode, TimesTen installs triggers in the Oracle database to track changes and periodically updates only the rows that have changed in the specified Oracle database tables. The first incremental refresh is always a full refresh, unless the autorefresh state is PAUSED
. The default mode is INCREMENTAL
.
FULL AUTOREFRESH
is more efficient when most of the Oracle database table rows have been changed. INCREMENTAL AUTOREFRESH
is more efficient when there are fewer changes.
TimesTen schedules an autorefresh operation when the transaction that contains a statement with AUTOREFRESH
specified is committed. The statement types that cause autorefresh to be scheduled are:
A CREATE CACHE GROUP
statement in which AUTOREFRESH
is specified, and the AUTOREFRESH
state is specified as ON
.
An ALTER CACHE GROUP
statement in which the AUTOREFRESH
state has been changed to ON
.
A LOAD CACHE GROUP
statement on an empty cache group whose autorefresh state is PAUSED
.
The specified interval determines how often autorefresh occurs.
The current STATE
of AUTOREFRESH
can be ON
, OFF
or PAUSED
. By default, the autorefresh state is PAUSED
.
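For illustration, a hedged sketch that changes the autorefresh state and interval of an existing cache group (using the customerorders cache group created in the examples later in this section); see "ALTER CACHE GROUP" for the full syntax:

ALTER CACHE GROUP customerorders SET AUTOREFRESH STATE ON;
ALTER CACHE GROUP customerorders SET AUTOREFRESH INTERVAL 10 MINUTES;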
The NOT PROPAGATE
attribute cannot be used with the AUTOREFRESH
attribute.
You can implement sliding windows with time-based aging. See "Configuring a sliding window" in Oracle TimesTen Application-Tier Database Cache User's Guide.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
LRU and time-based aging can be combined in one system. If you use only LRU aging, the aging thread wakes up based on the cycle specified for the whole database. If you use only time-based aging, the aging thread wakes up based on an optimal frequency. This frequency is determined by the values specified in the CYCLE
clause for all tables. If you use both LRU and time-based aging, then the thread wakes up based on a combined consideration of both types.
Call the ttAgingScheduleNow
built-in procedure to schedule the aging process immediately, regardless of whether the aging state is ON
or OFF
.
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT...SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
For LRU aging, if a child row is not a candidate for aging, then neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
Specify either the LRU aging or time-based aging policy on the root table. The policy applies to all tables in the cache group.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
Restrictions on defining aging for a cache group:
LRU aging is not supported on a cache group defined with the autorefresh attribute, unless it is a dynamic cache group.
The aging policy cannot be added, altered, or dropped for read-only cache groups or cache groups with the AUTOREFRESH
attribute while the cache agent is active. Stop the cache agent first.
You cannot drop the column that is used for time-based aging.
Create a read-only cache group:
CREATE READONLY CACHE GROUP customerorders AUTOREFRESH INTERVAL 10 MINUTES FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, region CHAR(10), PRIMARY KEY(custid)), ordertab (orderid INT NOT NULL, custid INT NOT NULL, PRIMARY KEY (orderid), FOREIGN KEY (custid) REFERENCES customer(custid));
Create an asynchronous writethrough cache group:
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP customers FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid));
Create a synchronous writethrough cache group:
CREATE SYNCHRONOUS WRITETHROUGH CACHE GROUP customers FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid));
Create a user managed cache group:
CREATE USERMANAGED CACHE GROUP updateanywherecustomers AUTOREFRESH MODE INCREMENTAL INTERVAL 30 SECONDS STATE ON FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid), PROPAGATE);
Create a cache group with time-based aging. Specify agetimestamp
as the column for aging. Specify LIFETIME
2 hours, CYCLE
30 minutes. Aging state is not specified, so the default setting (ON
) is used.
CREATE READONLY CACHE GROUP agingcachegroup
  AUTOREFRESH MODE INCREMENTAL INTERVAL 5 MINUTES STATE PAUSED
  FROM customer
    (customerid NUMBER NOT NULL,
     agetimestamp TIMESTAMP NOT NULL,
     PRIMARY KEY (customerid))
  AGING USE agetimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES;

Command> DESCRIBE customer;
Table USER.CUSTOMER:
  Columns:
   *CUSTOMERID                      NUMBER NOT NULL
    AGETIMESTAMP                    TIMESTAMP (6) NOT NULL
  AGING USE AgeTimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES ON
1 table found.
(primary key columns are indicated with *)
Use a synonym for a mixed case delimited identifier table name in the Oracle database so the mixed case table name can be cached in TimesTen. First attempt to cache the mixed case Oracle database table name. You see the error "Could not find '
NameofTable
' in Oracle"
:
Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE TABLE "MixedCase" (col1 NUMBER PRIMARY KEY NOT NULL);
Command> INSERT INTO "MixedCase" VALUES (1);
1 row inserted.
Command> COMMIT;
Command> CREATE CACHE GROUP MixedCase1 from "MixedCase"
         (col1 NUMBER PRIMARY KEY NOT NULL);
5140: Could not find SAMPLEUSER.MIXEDCASE in Oracle. May not have privileges.
The command failed.
Now, using the PassThrough
attribute, create the synonym "MIXEDCASE"
in the Oracle database and use that synonym as the table name.
Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE SYNONYM "MIXEDCASE" FOR "MixedCase";
Command> COMMIT;
Command> CREATE CACHE GROUP MixedCase2 FROM "MIXEDCASE"
         (col1 NUMBER PRIMARY KEY NOT NULL);
Warning 5147: Cache group contains synonyms
Command> COMMIT;
Attempt to use a synonym name with a read-only cache group or a cache group with the AUTOREFRESH
attribute. You see an error:
Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE SYNONYM "MIXEDCASE_AUTO" FOR "MixedCase";
Command> COMMIT;
Command> CREATE READONLY CACHE GROUP MixedCase3
         AUTOREFRESH MODE INCREMENTAL INTERVAL 10 MINUTES
         FROM "MIXEDCASE_AUTO" (Col1 NUMBER PRIMARY KEY NOT NULL);
5142: Autorefresh is not allowed on cache groups with Oracle synonyms
The command failed.
The CREATE FUNCTION
statement creates a standalone stored function.
CREATE [OR REPLACE] FUNCTION [Owner.]FunctionName
  [(arguments [IN|OUT|IN OUT] [NOCOPY] DataType [DEFAULT expr] [,...])]
  RETURN DataType
  [InvokerRightsClause] [AccessibleByClause] [DETERMINISTIC]
  {IS|AS} PlsqlFunctionBody

InvokerRightsClause::= AUTHID {CURRENT_USER|DEFINER}

AccessibleByClause::= ACCESSIBLE BY (accessor[,...])

accessor::= [UnitKind][Owner.]UnitName
You can specify InvokerRightsClause
, AccessibleByClause
, or DETERMINISTIC
in any order.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to recreate the function if it already exists. Use this clause to change the definition of an existing function without dropping and recreating it. When you recreate a function, TimesTen recompiles it. |
FunctionName |
Name of function. |
arguments |
Name of argument or parameter. You can specify 0 or more parameters for the function. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type. |
IN|OUT|IN OUT |
Parameter modes. IN (the default) passes a value into the function; OUT returns a value to the caller; IN OUT passes an initial value in and returns an updated value to the caller.
|
NOCOPY |
Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. You can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY . |
DEFAULT expr |
Use this clause to specify a default value for the parameter. You can specify := in place of the keyword DEFAULT . |
RETURN DataType |
Required clause. A function must return a value. You must specify the data type of the return value of the function.
Do not specify a length, precision, or scale for the data type. The data type is a PL/SQL data type. |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:
AUTHID DEFINER causes the statements to execute with the privileges of, and resolve names as, the owner of the function. AUTHID CURRENT_USER causes the statements to execute with the privileges of, and resolve names as, the user invoking the function. For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide. |
AccessibleByClause |
Use this clause to specify one or more accessors (PL/SQL units) that can invoke the function directly. The list of accessors that can access the function is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the function to only those objects on the white list.
Syntax: ACCESSIBLE BY (accessor [,...]) |
accessor |
Used in AccessibleByClause . An accessor is a PL/SQL unit that can invoke the function.
An accessor can appear more than once in the ACCESSIBLE BY clause. Syntax: [UnitKind] [Owner.]UnitName |
UnitKind |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the function. UnitKind is optional; if specified, it must be a valid kind of PL/SQL unit, such as PROCEDURE (as shown in the example that follows).
|
[ Owner .] UnitName |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the function. If you specify UnitKind , then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind , then UnitName must be the name of a procedure. UnitName is required.
You can optionally specify the owner of the unit. |
DETERMINISTIC |
Specify DETERMINISTIC to indicate that the function returns the same result value whenever it is called with the same values for its parameters. |
IS|AS |
Specify either IS or AS to declare the body of the function. |
PlsqlFunctionBody |
Specifies the function body. |
AccessibleByClause
:
The compiler checks the validity of the syntax of the ACCESSIBLE
BY
clause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.
When you invoke the function, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed, and the function has no ACCESSIBLE
BY
clause, the invocation succeeds. If the function has an ACCESSIBLE
BY
clause, the invocation succeeds only if the invoker is an accessor.
When you create or replace a function, the privileges granted on the function remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.
In a replication environment, the CREATE FUNCTION
statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
TimesTen does not support:
parallel_enable_clause
You can specify this clause, but it has no effect.
call_spec
clause
AS EXTERNAL
clause
This example creates the ProtectedFunction
function. The ACCESSIBLE
BY
clause is used to restrict the invocation of the function to the CallingProc1
and CallingProc2
procedures. Note that for CallingProc1
, the type of PL/SQL unit is not specified and for CallingProc2
, the type of PL/SQL unit is specified (PROCEDURE
).
Command> CREATE OR REPLACE FUNCTION ProtectedFunction (a IN NUMBER)
           RETURN NUMBER
           ACCESSIBLE BY (CallingProc1, PROCEDURE CallingProc2)
         AS
         BEGIN
           RETURN a * 1;
         END;
         /
Function created.
Create the CallingProc1
and CallingProc2
procedures.
Command> CREATE OR REPLACE PROCEDURE CallingProc1 AS
           a NUMBER := 1;
         BEGIN
           a := ProtectedFunction(a);
           DBMS_OUTPUT.PUT_LINE ('Calling Procedure: '|| a);
         END;
         /
Procedure created.
Command> CREATE OR REPLACE PROCEDURE CallingProc2 AS
           a NUMBER := 2;
         BEGIN
           a := ProtectedFunction(a);
           DBMS_OUTPUT.PUT_LINE ('Calling Procedure: '|| a);
         END;
         /
Procedure created.
Call the procedures. CallingProc1
and CallingProc2
are in the white list, resulting in successful execution.
Command> SET SERVEROUTPUT ON
Command> exec CallingProc1;
Calling Procedure: 1
PL/SQL procedure successfully completed.
Command> exec CallingProc2;
Calling Procedure: 2
PL/SQL procedure successfully completed.
Illustrating the syntax for creating a PL/SQL function
Create function get_sal
with one input parameter. Return salary
as type NUMBER
.
Command> CREATE OR REPLACE FUNCTION get_sal
           (p_id employees.employee_id%TYPE)
           RETURN NUMBER
         IS
           v_sal employees.salary%TYPE := 0;
         BEGIN
           SELECT salary INTO v_sal FROM employees
             WHERE employee_id = p_id;
           RETURN v_sal;
         END get_sal;
         /
Function created.
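The optional clauses described above can be combined. The following hedged sketch creates a pure function that runs with invoker's rights, is marked DETERMINISTIC, and supplies a default for its second parameter; the function and parameter names are hypothetical:

Command> CREATE OR REPLACE FUNCTION add_bonus
           (p_sal NUMBER, p_bonus NUMBER DEFAULT 100)
           RETURN NUMBER
           AUTHID CURRENT_USER
           DETERMINISTIC
         IS
         BEGIN
           RETURN p_sal + p_bonus;
         END add_bonus;
         /
Function created.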
The CREATE INDEX
statement creates an index on one or more columns of a table or materialized view.
No privilege is required for the owner.
If not the owner, the system privilege, CREATE
ANY
INDEX
, or the object privilege, INDEX
, is required.
To create a range index:
CREATE [UNIQUE] INDEX [Owner.]IndexName ON [Owner.]TableName ({ColumnName [ASC | DESC]} [,... ])
To create a hash index:
CREATE [UNIQUE] HASH INDEX [Owner.]IndexName ON [Owner.]TableName ({ColumnName [ASC | DESC]} [,... ] ) [ PAGES = RowPages | CURRENT ]
TimesTen creates a nonunique range index by default. Specify CREATE
UNIQUE INDEX
to create a unique range index.
To create a nonunique hash index, specify CREATE
HASH
INDEX
. To create a unique hash index, specify CREATE
UNIQUE
HASH
INDEX
.
You cannot create an index on LOB columns.
The CREATE INDEX
statement enters the definition of the index in the system catalog and initializes the necessary data structures. Any rows in the table are then added to the index.
If UNIQUE
is specified, all existing rows must have unique values in the indexed column(s).
The new index is maintained automatically until the index is deleted by a DROP INDEX
statement or until the table associated with it is dropped.
Any prepared statements that reference the table with the new index are automatically prepared again the next time they are executed. Then the statements can take advantage, if possible, of the new index.
NULL
compares higher than all other values for sorting.
An index on a temporary table cannot be created by a connection if any other connection has a non-empty instance of the table.
If you are using linguistic comparisons, you can create a linguistic index. A linguistic index uses sort key values and storage is required for these values. Only one unique value for NLS_SORT
is allowed for an index. For more information on linguistic indexes and linguistic comparisons, see "Using linguistic indexes" in Oracle TimesTen In-Memory Database Operations Guide.
If you create indexes that are redundant, TimesTen generates warnings or errors. Call ttRedundantIndexCheck
to see the list of redundant indexes for your tables.
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
is 2 or greater when you execute CREATE INDEX
on the active database, the index is replicated to all databases in the replication scheme. The table on which the index is created must be empty. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
Indexes can be created over any columns in the table. This includes compressed columns, even columns that exist in separate compression column groups.
To change the size or type of a hash index, drop the hash index and create a new index.
A hash index is created with a fixed size that remains constant for the life of the table. To resize the hash index, drop and recreate the index. A smaller hash index results in more hash collisions. A larger hash index reduces collisions but can waste memory. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.
To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages
parameter of the
PAGES
clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).
The maximum number of columns that can be specified for an index is 16.
Using indexes in query processing
Proper indexes can improve query performance. Some queries benefit from the use of indexes and others do not. The choice of indexes for your queries is also important.
A range index is ideal for processing range searches and exact matches, especially if most of the values in the index columns are unique. For example, if a range index is defined on columns (C1,C2)
, the index can be used to process the following types of predicates. ConstantOrParam
refers to a constant value or dynamic parameter and range
refers to the operators >, <, >=, or <=:

C1 = ConstantOrParam AND C2 = ConstantOrParam

C1 = ConstantOrParam AND C2 range ConstantOrParam

C1 = ConstantOrParam

C1 range ConstantOrParam
A range index efficiently processes equality and range predicates and efficiently processes sort and group operations. Use range indexes on index columns with many unique values. The order of columns you specify in a range index is relevant. The order of expressions in the predicate of a query that uses the range index is not relevant. When your query is processed, only one range index is used for each scan of your table even if you have defined multiple range indexes on your table.
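To make the predicate shapes listed above concrete, the following hedged sketch (hypothetical table and column names) creates a two-column range index and shows two queries that can use it:

CREATE TABLE orders (cust_id NUMBER NOT NULL, qty NUMBER, total NUMBER);
CREATE INDEX orders_ix ON orders (cust_id, qty);
-- C1 = ConstantOrParam AND C2 range ConstantOrParam
SELECT * FROM orders WHERE cust_id = 42 AND qty >= 10;
-- C1 = ConstantOrParam
SELECT * FROM orders WHERE cust_id = 42;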
A hash index efficiently processes equality predicates. You must size your hash index correctly for optimal performance. Use the PAGES
parameter to size your hash index. If you specify a PAGES
value that is too small, a large number of hash collisions may result, leading to performance degradation for statements that access the hash index. The order of columns specified in the hash index is not relevant and the order of expressions in the predicate of the query that uses the hash index is not relevant. If either a hash index or a range index can be used to process a particular equality predicate, the hash index is chosen because a lookup in a hash index is faster than a scan of a range index.
You can influence the indexes used by the optimizer by setting statement level or transaction level optimizer hints. For more information on statement level optimizer hints, see "Statement level optimizer hints". For more information on transaction level optimizer hints, see "ttOptSetFlag", "ttOptSetOrder", or "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference. You can also use the TimesTen Index Advisor to provide recommendations for indexes, given a specific set of queries or a specific workload. For more information on the index advisor, see "Using the Index Advisor to recommend indexes" in the Oracle TimesTen In-Memory Database Operations Guide.
Create a table and then create a unique hash index on col2
. Do not specify the PAGES
clause. If PAGES
is not specified, the current table page count is used for the size of the hash table. Use INDEXES
to verify the index was created. Insert a row in the table, set SHOWPLAN
to 1 and then verify the optimizer uses the hash index.
Command> CREATE TABLE tab (col1 NUMBER PRIMARY KEY NOT NULL, col2 VARCHAR2 (30));
Command> CREATE UNIQUE HASH INDEX hash1 ON tab (col2);
Command> INDEXES;
Indexes on table TESTUSER.TAB:
  HASH1: unique hash index on columns:
    COL2
  TAB: unique range index on columns:
    COL1
  2 indexes found.
2 indexes found on 1 table.
Command> INSERT INTO tab VALUES (10, 'ABC');
Command> SHOWPLAN 1;
Command> SELECT * FROM tab where col2 = 'ABC';

Query Optimizer Plan:
  STEP:              1
  LEVEL:             1
  OPERATION:         RowLkHashScan
  TBLNAME:           TAB
  IXNAME:            HASH1
  INDEXED CONDITION: TAB.COL2 = 'ABC'
  NOT INDEXED:       <NULL>
< 10, ABC >
1 row found.
Create a table and create a nonunique hash index on col1
. Use PAGES = CURRENT
to use the current table page count to size the hash index. Use INDEXES
to verify the nonunique hash index is created.
Command> CREATE TABLE tab2 (col1 NUMBER); Command> CREATE HASH INDEX hash_index ON tab2 (col1) PAGES = CURRENT; Command> INDEXES; Indexes on table TESTUSER.TAB2: HASH_INDEX: non-unique hash index on columns: COL1 1 index found. 1 index found on 1 table.
Create table and create unique hash index on col3
. Use PAGES = 100
to specify a page count of 100 for the size of the hash table. Use INDEXES
to verify the unique hash index is created.
Command> CREATE TABLE tab3 (col1 NUMBER, col2 NUMBER, col3 TT_INTEGER); Command> CREATE UNIQUE HASH INDEX unique_hash1 on tab3 (col3) PAGES = 100; Command> INDEXES; Indexes on table TESTUSER.TAB3: UNIQUE_HASH1: unique hash index on columns: COL3 1 index found. 1 index found on 1 table.
The regions
table in the HR
schema has a unique index on region_id
. Issue the ttIsql
INDEXES
command on table regions
. You see the unique range index regions
.
Command> INDEXES REGIONS; Indexes on table SAMPLEUSER.REGIONS: REGIONS: unique range index on columns: REGION_ID (referenced by foreign key index COUNTR_REG_FK on table SAMPLEUSER.COUNTRIES) 1 index found. 1 index found on 1 table.
Attempt to create a unique index i
on table regions
indexing on column region_id
. You see a warning message.
Command> CREATE UNIQUE INDEX i ON regions (region_id); Warning 2232: New index I is identical to existing index REGIONS; consider dropping index I
Call ttRedundantIndexCheck
to see warning message for this index:
Command> CALL ttRedundantIndexCheck ('regions'); < Index SAMPLEUSER.REGIONS.I is identical to index SAMPLEUSER.REGIONS.REGIONS; consider dropping index SAMPLEUSER.REGIONS.I > 1 row found.
Create table redundancy
and define columns col1
and col2
. Create two user indexes on col1
and col2
. You see an error message when you attempt to create the second index r2
. Index r1
is created. Index r2
is not created.
Command> CREATE TABLE redundancy (col1 CHAR (30), col2 VARCHAR2 (30)); Command> CREATE INDEX r1 ON redundancy (col1, col2); Command> CREATE INDEX r2 ON redundancy (col1, col2); 2231: New index R2 would be identical to existing index R1 The command failed.
Issue the ttIsql
command INDEXES
on table redundancy
to show that only index r1
is created:
Command> INDEXES redundancy; Indexes on table SAMPLEUSER.REDUNDANCY: R1: non-unique range index on columns: COL1 COL2 1 index found. 1 index found on 1 table.
This unique index ensures that all part numbers are unique.
CREATE UNIQUE INDEX purchasing.partnumindex ON purchasing.parts (partnumber);
Create a linguistic index named german_index
on table employees1
. To have more than one linguistic sort, create a second linguistic index.
Command> CREATE TABLE employees1 (id CHARACTER (21), id2 character (21));
Command> CREATE INDEX german_index ON employees1 (NLSSORT(id, 'NLS_SORT=GERMAN'));
Command> CREATE INDEX german_index2 ON employees1 (NLSSORT(id2, 'nls_sort=german_ci'));
Command> indexes employees1;
Indexes on table SAMPLEUSER.EMPLOYEES1:
  GERMAN_INDEX: non-unique range index on columns:
    NLSSORT(ID,'NLS_SORT=GERMAN')
  GERMAN_INDEX2: non-unique range index on columns:
    NLSSORT(ID2,'nls_sort=german_ci')
  2 indexes found.
1 table found.
The CREATE MATERIALIZED VIEW
statement creates a view of the table specified in the SelectQuery
clause. The original tables used to create a view are referred to as detail tables. The view is refreshed synchronously with regard to changes in the detail tables.
User executing the statement must have CREATE MATERIALIZED VIEW
(if owner) or CREATE ANY MATERIALIZED VIEW
(if not owner).
Owner of the materialized view must have SELECT
on the detail tables.
Owner of the materialized view must have CREATE TABLE
.
This statement is supported with TimesTen Scaleout. You must specify the DISTRIBUTE
BY
HASH
clause and you must define a distribution key. The DISTRIBUTE
BY
REFERENCE
and DUPLICATE
clauses are not supported. See "Understanding materialized views" and "Materialized views as a secondary form of distribution" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
CREATE MATERIALIZED VIEW [Owner.]ViewName DISTRIBUTE BY HASH (ColumnName [,...]) AS SelectQuery [PRIMARY KEY (ColumnName [,...])] [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages]
CREATE MATERIALIZED VIEW [Owner.]ViewName AS SelectQuery [PRIMARY KEY (ColumnName [,...])] [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages]
Parameter | Description |
---|---|
[ Owner .] ViewName |
Name assigned to the new view. |
DISTRIBUTE BY HASH ( ColumnName [,...]) |
TimesTen Scaleout only. Must specify the DISTRIBUTE BY HASH clause and must specify one or more columns for the distribution key (even if you have specified a primary key).
The detail table must be distributed by hash.
This clause must appear before the AS SelectQuery clause. |
SelectQuery |
Selects the columns from the detail tables to be used in the view. |
ColumnName |
Name of the column(s) that forms the primary key for the view to be created. Up to 16 columns can be specified for the primary key. Each result column name of a viewed table must be unique. The column name definition cannot contain the table or owner component. |
UNIQUE HASH ON |
Hash index for the table. Only unique hash indexes are created. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined. |
HashColumnName |
Column defined in the view that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key. |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for PrimaryPages determines the number of hash buckets allocated for the index. If your estimate for PrimaryPages is too small, hash collisions increase and performance may be degraded. For more information on hash indexes, see "CREATE TABLE". |
Description and restrictions for CREATE MATERIALIZED VIEW: TimesTen Scaleout
Materialized views enable you to create a secondary form of distribution for a table and can be useful in these situations:
If you have a table with a primary key and a unique column and you distribute the table by hash based on the primary key column, TimesTen Scaleout would need to connect to every element of the database to verify the uniqueness of the values inserted or updated in the unique column.
Consider:
Creating a materialized view on the table that is distributed by hash based on the unique column
Creating an index on the unique column of the materialized view
If you have a table with two independent groups of columns that are commonly joined in queries, consider distributing the table by hash based on one of the groups of columns. Then create a materialized view of the table that is distributed by hash based on the second group of columns.
See "Materialized views as a secondary form of distribution" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
Also:
The SQL optimizer may re-write a query against a base table to use an available materialized view if the use of the materialized view is expected to improve the execution time of the query.
You must specify the DISTRIBUTE
BY
HASH
clause and you must specify it with a distribution key (even if you have specified a primary key and intend to use the primary key as the distribution key).
You must specify the DISTRIBUTE
BY
HASH
clause before the AS
SelectQuery
clause.
Restrictions include:
You can only specify the DISTRIBUTE
BY
HASH
clause. The DISTRIBUTE
BY
REFERENCE
and DUPLICATE
clauses are not supported.
The SelectQuery
must be restricted to single table SELECT
statements.
You cannot specify the GROUP
BY
or the WHERE
clause in the SelectQuery
.
You cannot use SQL functions in the SelectQuery
.
You cannot use an expression in the SelectQuery
.
The detail table of the materialized view cannot have a foreign key with a cascade delete clause.
The distribution key columns must be in the project list of the SelectQuery
.
There are no DDL rewrites. For example, if you create a unique index on the detail table, a corresponding index on the materialized view (which is distributed on the unique column) is not created.
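A minimal sketch under these rules, assuming a base table accounts that is distributed by hash on its primary key account_id and that also has a unique email column; the materialized view redistributes the same rows by email so that uniqueness checks and lookups on email stay local to one element:

CREATE MATERIALIZED VIEW acct_by_email
  DISTRIBUTE BY HASH (email)
  AS SELECT account_id, email FROM accounts;
CREATE UNIQUE INDEX acct_by_email_ix ON acct_by_email (email);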
Description: TimesTen Scaleout and TimesTen Classic
This section describes restrictions, requirements, and other considerations for materialized views, covering the following topics:
Restrictions and requirements for materialized views
The restrictions and requirements on the defining query include:
Each expression in the select list must have a unique name.
Do not use non-materialized views to define a materialized view.
Do not define CLOB
, BLOB
, or NCLOB
data types for columns in the select list of the materialized view query.
The detail tables cannot belong to a cache group and the detail tables cannot have compression.
Do not use SELECT
FOR
UPDATE
.
Do not reference system tables or views.
Do not use nested definitions for a materialized view.
Do not use dynamic parameters.
Do not use ROWNUM
.
Do not use analytic functions.
Do not use GROUPING
SETS
, ROLLUP
, or CUBE
.
Do not use the SYSDATE
function.
Do not use the functions SYSTEM_USER
, USER
, CURRENT_USER
, or SESSION_USER
.
Do not use NEXTVAL
or CURRVAL
.
Outer joins are allowed but the select list must project at least one non-nullable column from each of the inner tables specified in the outer join.
Do not use the WITH
subquery
clause.
The restrictions (not on the defining query) include:
Do not have a hash-based primary key that contains any aggregate columns of the materialized view.
A materialized view cannot be replicated directly using TimesTen replication. You can replicate the detail tables. You must define the same materialized view on both sides of replication. TimesTen automatically updates the corresponding materialized views.
You cannot define a foreign key if the referencing or referenced table is a materialized view.
Additional restrictions and requirements on the defining query include:
The view definition must include all columns in the group by list in the select list.
An aggregate view must include a COUNT(*) or COUNT(non-nullable column) in the select list.
Do not use derived tables or JOIN tables.
Do not use SELECT DISTINCT or an aggregate distinct function.
Do not use the set operators UNION, MINUS, or INTERSECT.
Do not use SUM of nullable expressions.
Use only simple columns as group by columns.
Group by columns cannot belong to self join tables.
Do not use these clauses: HAVING, ORDER BY, DISTINCT, FIRST, JOIN.
Do not use the TT_HASH function.
You can use SUM and COUNT, but do not use expressions involving SUM and COUNT. Do not use AVG, which is treated as SUM/COUNT.
Do not specify MIN or MAX functions in the select list.
For joins:
Join predicates cannot have an OR.
Do not specify Cartesian product joins (joins with no join predicate).
For outer joins, outer join each inner table with at most one table.
Additional considerations for materialized views
Additional considerations include:
A materialized view is read-only and cannot be updated directly. A materialized view is updated only when changes are made to the associated detail tables. Therefore, a materialized view cannot be the target of a DELETE, UPDATE, or INSERT statement.
By default, a range index is created to enforce the primary key for a materialized view. Alternatively, use the UNIQUE HASH clause to specify a hash index for the primary key.
If your application performs range queries over a materialized view's primary key, then choose a range index for that view by omitting the UNIQUE HASH clause.
If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH clause. See "CREATE TABLE" for more information about the UNIQUE HASH clause.
You can use ALTER TABLE to change the representation of the primary key index or to resize a hash index of a materialized view.
You cannot add or drop columns in the materialized view with the ALTER TABLE statement. To change the structure of the materialized view, drop and recreate the view.
You can create indexes on the materialized view with the CREATE INDEX SQL statement.
The owner of a materialized view must have the SELECT privilege on its detail tables. The SELECT privilege is implied by the SELECT ANY TABLE and ADMIN system privileges. When the SELECT privilege or a higher-level system privilege on the detail tables is revoked from the owner of the materialized view, the materialized view becomes invalid.
Selecting from an invalid materialized view fails with an error. Updates to the detail tables of an invalid materialized view do not update the materialized view.
You can identify invalid materialized views by using the ttIsql describe command or by inspecting the STATUS column of the SYS.DBA_OBJECTS, SYS.ALL_OBJECTS, or SYS.USER_OBJECTS system tables. See Oracle TimesTen In-Memory Database System Tables and Views Reference.
If the revoked privilege is restored, you can make an invalid materialized view valid again by dropping and recreating the materialized view.
For more information, see "Object privileges for materialized views" in Oracle TimesTen In-Memory Database Security Guide.
Examples for CREATE MATERIALIZED VIEW: TimesTen Scaleout
For detailed examples, see "Understanding materialized views" and "Materialized views as a secondary form of distribution" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.
Syntax example:
Command> CREATE MATERIALIZED VIEW mv DISTRIBUTE BY HASH (phone) AS SELECT phone FROM accounts; 1010 rows materialized.
Create a materialized view of columns from the customer and bookorder tables.
CREATE MATERIALIZED VIEW custorder AS SELECT custno, custname, ordno, book FROM customer, bookorder WHERE customer.custno=bookorder.custno;
Create a materialized view of columns x1 and y1 from the t1 table.
CREATE MATERIALIZED VIEW v1 AS SELECT x1, y1 FROM t1 PRIMARY KEY (x1) UNIQUE HASH ON (x1) PAGES=100;
Create a materialized view from an outer join of columns x1 and y1 from the t1 and t2 tables.
CREATE MATERIALIZED VIEW v2 AS SELECT x1, y1 FROM t1, t2 WHERE x1=x2(+);
The following example creates a materialized view empmatview2 based on selected columns employee_id and email from table employees. After the materialized view is created, create an index on the mvemp_id column of the empmatview2 materialized view.
CREATE MATERIALIZED VIEW empmatview2 AS SELECT employee_id mvemp_id, email mvemail FROM employees; 107 rows materialized. CREATE INDEX empmvindex ON empmatview2 (mvemp_id);
The CREATE PACKAGE statement creates the specification for a standalone package, which is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. The package specification declares these objects. The package body defines these objects.
CREATE [OR REPLACE] PACKAGE [Owner.]PackageName [InvokerRightsClause] [AccessibleByClause] {IS|AS} PlsqlPackageSpec InvokerRightsClause::= AUTHID {CURRENT_USER | DEFINER} AccessibleByClause::= ACCESSIBLE BY (accessor[,...]) accessor::= [UnitKind][Owner.]UnitName
You can specify the InvokerRightsClause or AccessibleByClause in any order.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to recreate the package specification if it already exists. Use this clause to change the specification of an existing package without dropping and recreating the package. When you change a package specification, TimesTen recompiles it. |
PackageName |
Name of the package. |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:
For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide. |
AccessibleByClause |
Use this clause to specify one or more accessors (PL/SQL units) that can invoke the package directly. The list of accessors that can access the package is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the package to only those objects on the white list.
Syntax: |
accessor |
Used in AccessibleByClause . An accessor is a PL/SQL unit that can invoke the package.
An accessor can appear more than once in the Syntax: |
UnitKind |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the package.UnitKind is optional, but if specified, valid options are:
|
[ Owner .] UnitName |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the package. If you specify UnitKind , then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind , then UnitName must be the name of a procedure. UnitName is required.
You can optionally specify |
IS|AS |
Specify either IS or AS to introduce the package specification. |
PlsqlPackageSpec |
Specifies the package specification. Can include type definitions, cursor declarations, variable declarations, constant declarations, exception declarations and PL/SQL subprogram declarations. |
AccessibleByClause:
The AccessibleByClause is valid at the top-level package definition. You cannot specify the AccessibleByClause in the individual procedures or functions within the package. In addition, you cannot specify the AccessibleByClause in the CREATE PACKAGE BODY statement.
You can use this clause to restrict access to helper packages. For example, assume your PL/SQL package defines an API for a given functionality, and that functionality is implemented using a set of helper procedures and functions. You want applications to be able to call only the API procedures or functions defined in your package, and not the helper procedures and functions directly. You can use the ACCESSIBLE BY clause to achieve this. See the examples in "Using the AccessibleByClause" for details.
The compiler checks the validity of the syntax of the ACCESSIBLE BY clause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.
When you invoke the package, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed and the package has no ACCESSIBLE BY clause, the invocation succeeds. If the package has an ACCESSIBLE BY clause, the invocation succeeds only if the invoker is an accessor.
When you create or replace a package, the privileges granted on the package remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.
In a replicated environment, the CREATE PACKAGE statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
Example 1: Correct use of the AccessibleByClause
This example illustrates the correct usage of the AccessibleByClause. The clause is specified at the top level of the CREATE PACKAGE statement. Note that the CallingProc procedure does not need to exist.
Command> CREATE OR REPLACE PACKAGE ProtectedPkg ACCESSIBLE BY (PROCEDURE CallingProc) AS PROCEDURE ProtectedProc; END; / Package created.
Example 2: Incorrect use of the AccessibleByClause
Examples 2a and 2b show incorrect use of the AccessibleByClause. Example 2a attempts to use the AccessibleByClause in a packaged procedure, resulting in a compilation error. Example 2b attempts to use the AccessibleByClause in the CREATE PACKAGE BODY statement, also resulting in a compilation error.
Example 2a
Command> CREATE OR REPLACE PACKAGE ProtectedPkg1 AS PROCEDURE ProtectedProc1 ACCESSIBLE BY (PROCEDURE CallingProc) END; / Warning: Package created with compilation errors. Command> SHOW ERRORS Errors for PACKAGE PROTECTEDPKG1: LINE/COL ERROR -------- ----------------------------------------------------------------- 0/0 PLS-00157: Only schema-level programs allow ACCESSIBLE BY
Example 2b
Command> CREATE OR REPLACE PACKAGE ProtectedPkg3 ACCESSIBLE BY (PROCEDURE CallingProc3) AS PROCEDURE ProtectedProc3; END; / Package created. Command> CREATE OR REPLACE PACKAGE BODY ProtectedPkg3 ACCESSIBLE BY (PROCEDURE CallingProc3) AS PROCEDURE ProtectedProc3 AS BEGIN NULL; END; ; / Warning: Package body created with compilation errors. Command> SHOW ERRORS Errors for PACKAGE BODY PROTECTEDPKG3: LINE/COL ERROR -------- ----------------------------------------------------------------- 2/1 PLS-00103: Encountered the symbol "ACCESSIBLE" when expecting one of the following: is as compress compiled wrapped
Example 3: Ensuring only the API can access the helper package
This example walks through a series of steps to illustrate the use of the AccessibleByClause. The example creates the SampleAPI package and the SampleHelper package. The ACCESSIBLE BY clause is specified on the SampleHelper package to ensure that only the SampleAPI package can access the SampleHelper package.
Steps:
Create the SampleHelper package. Specify the ACCESSIBLE BY clause, giving the SampleAPI package access to the SampleHelper package. The SampleAPI package is in the white list.
Command> CREATE OR REPLACE PACKAGE SampleHelper ACCESSIBLE BY (SampleAPI) AS PROCEDURE SampleH1; PROCEDURE SampleH2; END; / Package created.
Create the SampleHelper package body.
Command> CREATE OR REPLACE PACKAGE BODY SampleHelper AS PROCEDURE SampleH1 AS BEGIN DBMS_OUTPUT.PUT_LINE('Sample helper procedure SampleH1'); END; PROCEDURE SampleH2 AS BEGIN DBMS_OUTPUT.PUT_LINE('Sample helper procedure SampleH2'); END; END; / Package body created.
Create the SampleAPI package.
Command> CREATE OR REPLACE PACKAGE SampleAPI AS PROCEDURE p1; PROCEDURE p2; END; / Package created.
Create the SampleAPI package body. The p1 procedure references the SampleHelper.SampleH1 procedure. The p2 procedure references the SampleHelper.SampleH2 procedure.
Command> CREATE OR REPLACE PACKAGE BODY SampleAPI AS PROCEDURE p1 AS BEGIN DBMS_OUTPUT.PUT_LINE('SampleAPI procedure p1'); SampleHelper.SampleH1; END; PROCEDURE p2 AS BEGIN DBMS_OUTPUT.PUT_LINE('SampleAPI procedure p2'); SampleHelper.SampleH2; END; END; / Package body created.
Call the SampleAPI.p1 and SampleAPI.p2 procedures. The SampleAPI package is in the white list of the SampleHelper package, resulting in successful execution.
Command> SET SERVEROUTPUT ON Command> BEGIN SampleAPI.p1; SampleAPI.p2; END; / SampleAPI procedure p1 Sample helper procedure SampleH1 SampleAPI procedure p2 Sample helper procedure SampleH2 PL/SQL procedure successfully completed.
Call the SampleHelper.SampleH1 procedure directly. An error is returned due to insufficient access privileges.
Command> BEGIN SampleHelper.SampleH1; END; / 8503: ORA-06550: line 2, column 3: PLS-00904: insufficient privilege to access object SAMPLEHELPER 8503: ORA-06550: line 2, column 3: PL/SQL: Statement ignored The command failed.
The CREATE PACKAGE BODY statement creates the body of a standalone package. A package is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. A package specification declares these objects. A package body defines these objects.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to recreate the package body if it already exists. Use this clause to change the body of an existing package without dropping and recreating it. When you change a package body, TimesTen recompiles it. |
PackageBody |
Name of the package body. |
IS|AS |
Specify either IS or AS to introduce the package body. |
plsql_package_body |
Specifies the package body which consists of PL/SQL subprograms. |
In a replicated environment, the CREATE PACKAGE BODY statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
When you create or replace a package body, the privileges granted on the package body remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.
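As a minimal sketch (the emp_admin package, its procedure, and the employees table update are illustrative, not taken from this guide), a package specification declares a procedure and the corresponding package body defines it:

CREATE OR REPLACE PACKAGE emp_admin AS
  -- Specification: declares the public procedure.
  PROCEDURE raise_salary (p_id IN NUMBER, p_amount IN NUMBER);
END emp_admin;
/

CREATE OR REPLACE PACKAGE BODY emp_admin AS
  -- Body: defines the procedure declared in the specification.
  PROCEDURE raise_salary (p_id IN NUMBER, p_amount IN NUMBER) IS
  BEGIN
    UPDATE employees SET salary = salary + p_amount
     WHERE employee_id = p_id;
  END raise_salary;
END emp_admin;
/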
The CREATE PROCEDURE statement creates a standalone stored procedure.
CREATE [OR REPLACE] PROCEDURE [Owner.]ProcedureName [(arguments [IN|OUT|IN OUT][NOCOPY] DataType [DEFAULT expr][,...])] [InvokerRightsClause][AccessibleByClause] [DETERMINISTIC] {IS|AS} plsql_procedure_body InvokerRightsClause::= AUTHID {CURRENT_USER|DEFINER} AccessibleByClause::= ACCESSIBLE BY(accessor[,...]) accessor::= [UnitKind][Owner.]UnitName
You can specify the InvokerRightsClause, AccessibleByClause, or DETERMINISTIC in any order.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to recreate the procedure if it already exists. Use this clause to change the definition of an existing procedure without dropping and recreating it. When you recreate a procedure, TimesTen recompiles it. |
ProcedureName |
Name of procedure. |
arguments |
Name of argument/parameter. You can specify 0 or more parameters for the procedure. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type. |
[IN|OUT|IN OUT] |
Parameter modes.
|
NOCOPY |
Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. Can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY . |
DEFAULT expr |
Use this clause to specify a DEFAULT value for the parameter. You can specify := in place of the keyword DEFAULT . |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:
For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide. |
AccessibleByClause |
Use this clause to specify one or more accessors (PL/SQL units) that can invoke the procedure directly. The list of accessors that can access the procedure is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the procedure to only those objects on the white list.
The Syntax: |
accessor |
Used in the AccessibleByClause . An accessor is a PL/SQL unit that can invoke the procedure.
An accessor can appear more than once in the Syntax: |
UnitKind |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the procedure.UnitKind is optional, but if specified, valid options are:
|
[ Owner .] UnitName |
Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the procedure. If you specify UnitKind , then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind , then UnitName must be the name of a procedure. UnitName is required.
You can optionally specify |
DETERMINISTIC |
Specify DETERMINISTIC to indicate that the procedure returns the same result value whenever it is called with the same values for its parameters. |
IS|AS |
Specify either IS or AS to declare the body of the procedure. |
plsql_procedure_body |
Specifies the procedure body. |
AccessibleByClause:
The compiler checks the validity of the syntax of the AccessibleByClause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.
When you invoke the procedure, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed and the procedure has no AccessibleByClause, the invocation succeeds. If the procedure has an AccessibleByClause, the invocation succeeds only if the invoker is an accessor.
When you create or replace a procedure, the privileges granted on the procedure remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.
The namespace for PL/SQL procedures is distinct from the TimesTen built-in procedures. You can create a PL/SQL procedure with the same name as a TimesTen built-in procedure.
TimesTen does not support the call_spec clause or the AS EXTERNAL clause.
In a replicated environment, the CREATE PROCEDURE statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
Example 1:
This example creates the ProtectedProc procedure and uses the ACCESSIBLE BY clause to restrict access to the CallingProc procedure. The CallingProc procedure does not yet exist. The example then creates the CallingProc procedure, which calls the ProtectedProc procedure. The CallingProc procedure is successfully created, as it is specified in the ACCESSIBLE BY clause. The example then attempts to call the ProtectedProc procedure directly, resulting in an error. It concludes with an attempt to create the AnotherCallingProc procedure, which references the ProtectedProc procedure; because the AnotherCallingProc procedure is not in the white list, a compilation error results.
Steps to illustrate the example:
Create the ProtectedProc procedure, specifying the ACCESSIBLE BY clause. The CallingProc procedure is in the white list. It does not yet exist.
Command> CREATE OR REPLACE PROCEDURE ProtectedProc ACCESSIBLE BY (CallingProc) AS BEGIN DBMS_OUTPUT.PUT_LINE ('ProtectedProc'); END; / Procedure created.
Create the CallingProc procedure, referencing the ProtectedProc procedure.
Command> CREATE OR REPLACE PROCEDURE CallingProc AS BEGIN DBMS_OUTPUT.PUT_LINE ('CallingProc'); ProtectedProc; END; / Procedure created.
Call the CallingProc procedure. The procedure is successfully executed.
Command> SET SERVEROUTPUT ON Command> exec CallingProc; CallingProc ProtectedProc PL/SQL procedure successfully completed.
Attempt to call the ProtectedProc procedure directly. An error is thrown due to insufficient access privileges.
Command> exec ProtectedProc; 8503: ORA-06550: line 1, column 7: PLS-00904: insufficient privilege to access object PROTECTEDPROC 8503: ORA-06550: line 1, column 7: PL/SQL: Statement ignored The command failed.
Create the AnotherCallingProc procedure, which references the ProtectedProc procedure. The AnotherCallingProc procedure is not in the white list (not listed in the ACCESSIBLE BY clause of ProtectedProc), resulting in a compilation error.
Command> CREATE OR REPLACE PROCEDURE AnotherCallingProc AS BEGIN DBMS_OUTPUT.PUT_LINE ('AnotherCallingProc'); ProtectedProc; END; / Warning: Procedure created with compilation errors. Command> SHOW ERRORS Errors for PROCEDURE ANOTHERCALLINGPROC: LINE/COL ERROR -------- ----------------------------------------------------------------- 5/1 PL/SQL: Statement ignored 5/1 PLS-00904: insufficient privilege to access object PROTECTEDPROC
Example 2:
This example illustrates the uses of the accessor clause through a sequence of steps.
Create the SampleUser1 and SampleUser2 users and grant ADMIN privileges to both users.
Command> CREATE USER SampleUser1 IDENTIFIED BY SampleUser1; User created. Command> CREATE USER SampleUser2 IDENTIFIED BY SampleUser2; User created. Command> GRANT ADMIN TO SampleUser1, SampleUser2;
Create the SampleUser1.ProtectedProc procedure, specifying the ACCESSIBLE BY clause. The CallingProc procedure is specified in the white list without an owner. The owner of the CallingProc procedure is assumed to be in the same schema as the owner of the procedure with the ACCESSIBLE BY clause. Thus, CallingProc is assumed to be in the SampleUser1 schema.
Command> CREATE OR REPLACE PROCEDURE SampleUser1.ProtectedProc ACCESSIBLE BY (CallingProc) AS BEGIN DBMS_OUTPUT.PUT_LINE ('SampleUser1 ProtectedProc'); END; / Procedure created.
Connect as SampleUser1. Create the CallingProc procedure, referencing the SampleUser1.ProtectedProc procedure.
Command> Connect adding "uid=SampleUser1;pwd=SampleUser1PW" as SampleUser1; Connection successful: DSN=database1;UID=SampleUser1;DataStore=/scratch/sampleuser1/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8; PermSize=128; (Default setting AutoCommit=1) sampleuser1: Command> CREATE OR REPLACE PROCEDURE CallingProc AS BEGIN DBMS_OUTPUT.PUT_LINE ('SampleUser1 CallingProc'); ProtectedProc; END; / Procedure created.
From the SampleUser1 connection, call the CallingProc procedure. The call succeeds.
sampleuser1: Command> SET SERVEROUTPUT ON sampleuser1: Command> exec CallingProc; SampleUser1 CallingProc SampleUser1 ProtectedProc PL/SQL procedure successfully completed.
Connect as SampleUser2. Create the CallingProc procedure, referencing the SampleUser1.ProtectedProc procedure. A compilation error results.
SampleUser1: Command> connect adding "uid=Sampleuser2;pwd=SampleUser2PW" as SampleUser2; Connection successful: DSN=database1;UID=Sampleuser2;DataStore=/scratch/sampleuser2/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8; PermSize=128; (Default setting AutoCommit=1) sampleuser2: Command> CREATE OR REPLACE PROCEDURE CallingProc AS BEGIN DBMS_OUTPUT.PUT_LINE ('SampleUser2 CallingProc'); SampleUser1.ProtectedProc; END; / Warning: Procedure created with compilation errors. sampleuser2: Command> SHOW ERRORS Errors for PROCEDURE CALLINGPROC: LINE/COL ERROR -------- ----------------------------------------------------------------- 5/1 PL/SQL: Statement ignored 5/1 PLS-00904: insufficient privilege to access object PROTECTEDPROC
Switch to the SampleUser1 connection. Recreate the ProtectedProc procedure, adding SampleUser2.CallingProc to the white list.
sampleuser2: Command> use SampleUser1 sampleuser1: Command> CREATE OR REPLACE PROCEDURE ProtectedProc ACCESSIBLE BY (CallingProc, SampleUser2.CallingProc) AS BEGIN DBMS_OUTPUT.PUT_LINE ('SampleUser1 ProtectedProc'); END; / Procedure created.
From the SampleUser2 connection, call the CallingProc procedure. The SampleUser2.CallingProc procedure is in the white list of the SampleUser1.ProtectedProc procedure, resulting in successful execution.
sampleuser1: Command> use SampleUser2; sampleuser2: Command> SET SERVEROUTPUT ON sampleuser2: Command> exec CallingProc SampleUser2 CallingProc SampleUser1 ProtectedProc PL/SQL procedure successfully completed.
Using the CREATE PROCEDURE statement to retrieve information
Create a procedure query_emp to retrieve information about an employee. Pass the employee_id 171 to the procedure and retrieve the last_name and salary into two OUT parameters.
Command> CREATE OR REPLACE PROCEDURE query_emp (p_id IN employees.employee_id%TYPE, p_name OUT employees.last_name%TYPE, p_salary OUT employees.salary%TYPE) IS BEGIN SELECT last_name, salary INTO p_name, p_salary FROM employees WHERE employee_id = p_id; END query_emp; / Procedure created.
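The following is a minimal sketch (not part of the original example) of one way to invoke query_emp from ttIsql using bind variables for the OUT parameters; the printed results depend on your data.

Command> VARIABLE b_name VARCHAR2(25);
Command> VARIABLE b_sal NUMBER;
Command> BEGIN
           query_emp(171, :b_name, :b_sal);
         END;
         /
Command> PRINT b_name;
Command> PRINT b_sal;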
The CREATE PROFILE statement creates a profile, which is a set of limits on database resources. If you assign a profile to a user, that user cannot exceed the limits specified in the profile.
CREATE PROFILE profile LIMIT password_parameters password_parameters::= [FAILED_LOGIN_ATTEMPTS password_parameter_options] [PASSWORD_LIFE_TIME password_parameter_options] [PASSWORD_REUSE_TIME password_parameter_options] [PASSWORD_REUSE_MAX password_parameter_options] [PASSWORD_LOCK_TIME password_parameter_options] [PASSWORD_GRACE_TIME password_parameter_options] [PASSWORD_COMPLEXITY_CHECKER password_checker_options] password_parameter_options::= UNLIMITED|DEFAULT|constant password_checker_options::= NULL|DEFAULT
Parameter | Description |
---|---|
profile |
Name of the profile. |
LIMIT password_parameters |
The LIMIT clause sets the limits for the password parameters. The LIMIT keyword is required.
The password parameters consist of the name of the password parameter and the value (or limit) for the password parameter. All the parameters (with the exception of PASSWORD_COMPLEXITY_CHECKER, which accepts NULL or DEFAULT) accept a constant, UNLIMITED, or DEFAULT as the limit. If you do not specify a password parameter after the LIMIT keyword, the limit for that password parameter is set to DEFAULT in this profile. |
FAILED_LOGIN_ATTEMPTS |
Specifies the number of consecutive failed attempts to connect to the database by a user before that user's account is locked. |
PASSWORD_LIFE_TIME |
Specifies the number of days that a user can use the same password for authentication. If you also set a value for PASSWORD_GRACE_TIME , then the password expires if it is not changed within the grace period. In such a situation, future connections to the database are rejected. |
PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX |
These two parameters must be used together.
You must specify a value for both parameters for them to have any effect. Specifically:
|
PASSWORD_LOCK_TIME |
Specifies the number of days the user account is locked after the specified number of consecutive failed connection attempts. |
PASSWORD_GRACE_TIME |
Specifies the number of days after the grace period begins during which TimesTen issues a warning, but allows the connection to the database. If the password is not changed during the grace period, the password expires. This parameter is associated with the PASSWORD_LIFE_TIME parameter. |
PASSWORD_COMPLEXITY_CHECKER {NULL |DEFAULT } |
Indicates the complexity verification that is done on passwords. Valid values are NULL or DEFAULT. A value of NULL means there is no complexity verification done on the passwords. |
UNLIMITED |
Indicates that there is no limit for the password parameter. If you specify UNLIMITED , it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS UNLIMITED . |
DEFAULT |
Indicates that you want to omit a limit for the password parameter in this profile. A user that is assigned this profile is subject to the limit defined in the DEFAULT profile for this password parameter.
If you specify |
constant |
Indicates the value of the password parameter if you do not specify UNLIMITED or DEFAULT . If specified, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS 3 . |
Use the CREATE PROFILE statement to create a profile for the password parameters, which is a set of limits on database resources. If you assign the profile to a user, the user cannot exceed the limits specified in the profile. If you do not assign a profile to a user, TimesTen assigns the DEFAULT profile. See "Password management" in the Oracle TimesTen In-Memory Database Security Guide for more information on password management and profiles.
To specify the password parameter limits for a user, do the following:
Use the CREATE PROFILE statement to create a profile that defines the password parameter limits.
Use the CREATE USER or ALTER USER statement to assign the profile to the user.
There is a DEFAULT profile that defines a limit for each of the password parameters. This profile initially defines UNLIMITED for these parameters (which indicates that no limit has been set for the parameter). The exceptions are:
FAILED_LOGIN_ATTEMPTS: Set to 10.
PASSWORD_LOCK_TIME: Set to 0.0034722222222222 days (equal to 5 minutes, or 5/1440 days).
PASSWORD_COMPLEXITY_CHECKER: Set to NULL.
You can change these limits by using the ALTER PROFILE statement and specifying "DEFAULT" for the profile name. (Note that DEFAULT must be enclosed in double quotation marks.) See "ALTER PROFILE" for information.
If a user is not assigned a profile, the user is subject to the limits defined in the DEFAULT profile. If a user is assigned a profile and that profile omits a limit on a password parameter or specifies DEFAULT for the password parameter, then the user is subject to the limit on that password parameter as defined by the DEFAULT profile.
The instance administrator is assigned a system profile. You cannot alter or drop the profile of an instance administrator.
Example 1: Create a profile and set limits on the password parameters
This example creates the profile1 profile and sets various limits on the password parameters. It then queries the dba_profiles system view to verify the limits.
Command> CREATE PROFILE profile1 LIMIT FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LIFE_TIME 60 PASSWORD_REUSE_TIME 60 PASSWORD_REUSE_MAX 5 PASSWORD_LOCK_TIME 1 PASSWORD_GRACE_TIME 10; Profile created.
Query the dba_profiles system view to verify the limits. Note that because the PASSWORD_COMPLEXITY_CHECKER password parameter was not specified in the CREATE PROFILE statement, its value is DEFAULT (the value comes from the DEFAULT profile).
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND resource_type='PASSWORD'; < PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 5 > < PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 60 > < PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, 60 > < PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, 5 > < PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT > < PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 1 > < PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 > 7 rows found.
Example 2: Create a profile and specify FAILED_LOGIN_ATTEMPTS
This example creates the profile2 profile and specifies a value of 1 for FAILED_LOGIN_ATTEMPTS. The example then creates the user2 user and assigns user2 the profile2 profile. The user2 user attempts to connect to the database, but specifies an invalid password. The connection fails. After five minutes, the user2 user attempts to reconnect to the database. The connection succeeds due to the PASSWORD_LOCK_TIME value of 0.0034722222222222 days (equal to 5 minutes) specified in the DEFAULT profile.
Command> CREATE PROFILE profile2 LIMIT FAILED_LOGIN_ATTEMPTS 1; Profile created. Command> CREATE USER user2 IDENTIFIED BY user2 PROFILE profile2; User created.
Grant the ADMIN privilege to user2.
Command> GRANT ADMIN TO user2;
Attempt to connect to the database. The connection fails due to an invalid password specified in the connection string.
Command> connect adding "UID=user2;PWD=user3" as user2; 7001: User authentication failed The command failed.
Attempt to connect again specifying the correct password in the connection string. The connection fails due to:
One previous failed connection attempt
An attempt to connect to the database before the five-minute password lock time has elapsed.
none: Command> use database1 database1: Command> connect adding "UID=user2;PWD=user2" as user2; 15179: the account is locked The command failed.
After five minutes, attempt to connect to the database again. The connection succeeds.
none: Command> use database1 database1: Command> connect adding "UID=user2;PWD=user2" as user2; Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Example 3: Determine the password parameter values in the DEFAULT profile
This example queries the dba_profiles system view to determine the password parameter values for the DEFAULT profile.
Command> SELECT * FROM dba_profiles WHERE profile = 'DEFAULT' AND resource_type='PASSWORD'; < DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 > < DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL > < DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 > < DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED > 7 rows found.
Example 4: Specify PASSWORD_LIFE_TIME and PASSWORD_GRACE_TIME
This example creates the profile4 profile and specifies a value of 0.0034722222222222 (equal to 5 minutes) for the PASSWORD_LIFE_TIME password parameter and a value of 0.01041667 (equal to 15 minutes) for the PASSWORD_GRACE_TIME password parameter. It then creates the user4 user and assigns the profile4 profile to user4. The example continues with attempts to connect to the database as user4.
Command> CREATE PROFILE profile4 LIMIT PASSWORD_LIFE_TIME 0.0034722222222222 PASSWORD_GRACE_TIME 0.01041667; Profile created.
Query the dba_profiles system view to verify the values for the password parameters.
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE4' AND resource_type='PASSWORD'; < PROFILE4, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT > < PROFILE4, PASSWORD_LIFE_TIME, PASSWORD, .0034 > < PROFILE4, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT > < PROFILE4, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT > < PROFILE4, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT > < PROFILE4, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT > < PROFILE4, PASSWORD_GRACE_TIME, PASSWORD, .0104 > 7 rows found.
Create the user4 user and assign user4 the profile4 profile. Grant the CONNECT privilege to user4.
Command> CREATE USER user4 IDENTIFIED BY user4 PROFILE profile4; User created. Command> GRANT CONNECT TO user4;
Connect to the database as user4. The connection succeeds.
Command> connect adding "UID=user4;PWD=user4" as user4; Connection successful: DSN=access1;UID=user4;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Disconnect from the database. After 5 minutes, reconnect to the database as user4. The connection succeeds but a warning is issued. The password lifetime is 5 minutes and the password grace time is 15 minutes.
user4: Command> disconnect user4; Disconnecting from user4... none: Command> use database1 database1: Command> connect adding "UID=user4;PWD=user4" as user4; Warning 15182: Password will expire within 0.010417 days Connection successful: DSN=access1;UID=user4;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Disconnect from the database. After 15 minutes, reconnect to the database as user4. The connection fails as the password grace time of 15 minutes has ended.
user4: Command> disconnect user4; Disconnecting from user4... none: Command> use database1 database1: Command> connect adding "UID=user4;PWD=user4" as user4; 15180: the password has expired The command failed.
Example 5: Create a profile specifying only the LIMIT keyword
This example creates the profile5 profile and specifies just the LIMIT keyword. The example then queries the dba_profiles system view to show that the password parameter limits for the profile5 profile are all set to a value of DEFAULT.
Command> CREATE PROFILE profile5 LIMIT; Profile created. Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE5' AND resource_type='PASSWORD'; < PROFILE5, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_LIFE_TIME, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT > < PROFILE5, PASSWORD_GRACE_TIME, PASSWORD, DEFAULT > 7 rows found.
Example 6: Specify UNLIMITED for PASSWORD_REUSE_TIME
This example creates the profile6 profile and specifies a PASSWORD_REUSE_TIME of UNLIMITED. As a result, the password cannot be reused.
Command> CREATE PROFILE profile6 LIMIT PASSWORD_REUSE_MAX 2 PASSWORD_REUSE_TIME UNLIMITED; Profile created.
Create the user6 user and assign user6 the profile6 profile. Change the user6 password two times. Attempt to reuse the original user6 password. The attempt fails due to the PASSWORD_REUSE_TIME value of UNLIMITED.
Command> CREATE USER user6 IDENTIFIED BY user6 PROFILE profile6; User created. Command> ALTER USER user6 IDENTIFIED BY user6_test1; User altered. Command> ALTER USER user6 IDENTIFIED BY user6_test2; User altered. Command> ALTER USER user6 IDENTIFIED BY user6; 15183: Password cannot be reused The command failed.
Example 7: Specify DEFAULT for PASSWORD_REUSE_TIME
This example creates the profile7 profile, specifying the value of DEFAULT for the PASSWORD_REUSE_TIME password parameter and the value of 3 for the PASSWORD_REUSE_MAX password parameter. TimesTen uses the value in the DEFAULT profile for the PASSWORD_REUSE_TIME password parameter.
Command> CREATE PROFILE profile7 LIMIT PASSWORD_REUSE_TIME DEFAULT PASSWORD_REUSE_MAX 3; Profile created.
Query the dba_profiles system view to verify the password parameter values for the profile7 profile. Note the value of DEFAULT for PASSWORD_REUSE_TIME and the value of 3 for PASSWORD_REUSE_MAX.
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE7' AND resource_type = 'PASSWORD'; < PROFILE7, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT > < PROFILE7, PASSWORD_LIFE_TIME, PASSWORD, DEFAULT > < PROFILE7, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT > < PROFILE7, PASSWORD_REUSE_MAX, PASSWORD, 3 > < PROFILE7, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT > < PROFILE7, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT > < PROFILE7, PASSWORD_GRACE_TIME, PASSWORD, DEFAULT > 7 rows found.
Query the dba_profiles system view to verify the password parameter values for the DEFAULT profile. Note the value of UNLIMITED for PASSWORD_REUSE_TIME.
Command> SELECT * FROM dba_profiles WHERE profile = 'DEFAULT' AND resource_type = 'PASSWORD'; < DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 > < DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED > < DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL > < DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 > < DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED > 7 rows found.
Create the user7 user and assign the profile7 profile to user7. Change the user7 password three times. The original user7 password cannot be reused due to the value of UNLIMITED for the PASSWORD_REUSE_TIME parameter (inherited from the DEFAULT profile).
Command> CREATE USER user7 IDENTIFIED BY user7 PROFILE profile7; User created. Command> ALTER USER user7 IDENTIFIED BY user7_test1; User altered. Command> ALTER USER user7 IDENTIFIED BY user7_test2; User altered. Command> ALTER USER user7 IDENTIFIED BY user_test3; User altered. Command> ALTER USER user7 IDENTIFIED BY user7; 15183: Password cannot be reused The command failed.
Example 8: Specify PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX
This example creates the profile8 profile, specifying a value of 0.00208333 (equal to approximately 2 minutes) for the PASSWORD_REUSE_TIME password parameter and a value of 2 for the PASSWORD_REUSE_MAX password parameter. The example then creates the user8 user and assigns user8 the profile8 profile. The user8 password is changed two times within two minutes. Then, still within the two minutes, an attempt is made to reuse the original user8 password (user8_pwd). The ALTER USER operation fails: even though the password was changed 2 times, the original password can only be reused after 0.00208333 days (equal to approximately two minutes). After two minutes, the original user8 password (user8_pwd) is reused again. The ALTER USER operation succeeds because the password was changed two times and more than two minutes had passed.
Command> CREATE PROFILE profile8 LIMIT PASSWORD_REUSE_TIME 0.00208333 PASSWORD_REUSE_MAX 2; Profile created.
Create the user8 user and assign user8 the profile8 profile.
Command> CREATE USER user8 IDENTIFIED BY user8_pwd PROFILE profile8; User created.
Immediately alter the user, changing the password two times.
Command> ALTER USER user8 IDENTIFIED BY user8_test1; User altered. Command> ALTER USER user8 IDENTIFIED BY user8_test2; User altered.
Within two minutes, attempt to reuse the original user8_pwd password. The ALTER USER operation fails because the original password can only be reused after two minutes.
Command> ALTER USER user8 IDENTIFIED BY user8_pwd;
15183: Password cannot be reused
The command failed.
After two minutes, attempt to reuse the original user8_pwd password. The ALTER USER operation succeeds. The original password can be reused because the password was changed two times and two minutes had elapsed.
Command> ALTER USER user8 IDENTIFIED BY user8_pwd;
User altered.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The CREATE REPLICATION statement:
Defines a classic replication scheme on a participating database.
Installs the specified configuration in the executing database's replication system tables.
Typically consists of one or more replication element specifications and zero or more STORE specifications.
TimesTen SQL configuration for replication also provides a programmable way to configure a classic replication scheme. The configuration can be embedded in C, C++ or Java code. Replication can be configured locally or from remote systems using client/server.
In addition, you need to use the ttRepAdmin utility to maintain operations not covered by the supported SQL statements. Use ttRepAdmin to change replication state, duplicate databases, list the replication configuration, and view replication status.
A replication element is an entity that TimesTen synchronizes between databases. A replication element can be a whole table or a database. A database can include most types of tables and sequences. It can include only specified tables and sequences, or include all tables except specified tables and sequences. It cannot include temporary tables or views, whether materialized or nonmaterialized.
A replication scheme is a set of replication elements, as well as the databases that maintain copies of these elements.
For more detailed information on SQL configuration for classic replication, see "Defining a classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
CREATE REPLICATION [Owner.]ReplicationSchemeName { ELEMENT ElementName { DATASTORE | { TABLE [Owner.]TableName [CheckConflicts]} | SEQUENCE [Owner.]SequenceName} { MASTER | PROPAGATOR } FullStoreName [TRANSMIT { NONDURABLE | DURABLE }] { SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] } [,...] } [...] [{INCLUDE | EXCLUDE} {TABLE [[Owner.]TableName[,...]] | SEQUENCE [[Owner.]SequenceName[,...]} [,...]] [ STORE FullStoreName [StoreAttribute [... ]]] [...] [ NetworkOperation[...]]
Syntax for CheckConflicts is described in "CHECK CONFLICTS".
Syntax for ReturnServiceAttribute:
{ RETURN RECEIPT [BY REQUEST] | RETURN TWOSAFE [BY REQUEST] | NO RETURN }
Syntax for StoreAttribute:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED DURABLE COMMIT {ON | OFF} RESUME RETURN Milliseconds LOCAL COMMIT ACTION {NO ACTION | COMMIT} RETURN WAIT TIME Seconds COMPRESS TRAFFIC {ON | OFF} PORT PortNumber TIMEOUT Seconds FAILTHRESHOLD Value CONFLICT REPORTING SUSPEND AT Value CONFLICT REPORTING RESUME AT Value TABLE DEFINITION CHECKING {RELAXED|EXACT}
Syntax for NetworkOperation:
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the new classic replication scheme. Classic replication schemes should have names that are unique from all other database objects. |
CheckConflicts |
Check for replication conflicts when simultaneously writing to bidirectionally replicated databases. See "CHECK CONFLICTS". |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
CONFLICT REPORTING SUSPEND AT Value |
Suspends conflict resolution reporting.
This clause is valid for table level replication. |
CONFLICT REPORTING RESUME AT Value |
Resumes conflict resolution reporting.
This clause is valid for table level replication. |
DATASTORE |
Define entire database as element. This type of element can only be defined for a master database that is not configured with an element of type TABLE in the same or a different replication scheme. |
{INCLUDE|EXCLUDE}
|
INCLUDE includes in the DATASTORE element only the tables or sequences listed. Use one INCLUDE clause for each object type (table or sequence).
|
DISABLE RETURN {SUBSCRIBER|ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures . Selecting SUBSCRIBER applies this policy only to the subscriber that fails to acknowledge replicated updates within the set timeout period. ALL applies this policy to all subscribers should any of the subscribers fail to respond. This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.
If |
DURABLE COMMIT {ON|OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. |
ELEMENT ElementName |
The entity that TimesTen synchronizes between databases. TimesTen supports the entire database (DATASTORE ) and whole tables (TABLE ) as replication elements.
See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state. The value 0 means "No Limit." This is the default.
See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide. |
FullStoreName |
The database, specified as one of the following:
For example, if the database path is This is the database file name specified in the
|
LOCAL COMMIT ACTION {NO ACTION | COMMIT} |
Specifies the default action to be taken for a return twosafe transaction in the event of a timeout.
Note: This attribute is only valid when the
This setting can be overridden for specific transactions by calling the |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
PORT PortNumber |
The TCP/IP port number on which the replication agent for the database listens for connections. If not specified, the replication agent automatically allocates a port number. |
PROPAGATOR FullStoreName |
The database that receives replicated updates and passes them on to other databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
RESUME RETURN Milliseconds |
If return service blocking has been disabled by DISABLE RETURN , this attribute sets the policy on when to re-enable return service blocking. Return service blocking is re-enabled as soon as the failed subscriber acknowledges the replicated update in a period of time that is less than the specified Milliseconds .
If |
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to a master database are blocked until the transaction is received by all subscribers.
|
RETURN SERVICES {ON|OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
|
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to a master database are blocked until the transaction is committed on all subscribers.
Note: This service can only be used in a bidirectional replication scheme where the elements are defined as Specifying |
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. The default value is 10 seconds. A value of 0 (zero) means that there is no timeout. Your application can override this timeout setting by calling the returnWait parameter in the ttRepSyncSet procedure. |
SEQUENCE [ Owner .] SequenceName |
Define the sequence specified by [ Owner .] SequenceName as element. See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
STORE FullStoreName |
Defines the attributes for a given database. Attributes include PORT , TIMEOUT and FAILTHRESHOLD . The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
SUBSCRIBER FullStoreName |
A database that receives updates from the MASTER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
TABLE [ Owner .] TableName |
Define the table specified by [ Owner .] TableName as element. See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from remote replication agents. The default is 120 seconds.
Note: For large transactions that may cause a delayed response from the remote replication agent, the agent scales the timeout based on the size of the transaction. This scaling is disabled if you set |
TRANSMIT {DURABLE | NONDURABLE} |
Specifies whether to flush the master log to the file system before sending a batch of committed transactions to the subscribers.
Note: Note: See "Setting transmit durability on DATASTORE element" and "Replicating the entire master database with TRANSMIT NONDURABLE" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
Specifies type of table definition checking that occurs on the subscriber:
The default is Note: If you use |
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Denotes the NetworkOperation clause. If specified, enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores.
Can be specified more than once. For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required syntax of |
The syntax for CHECK CONFLICTS
is:
{NO CHECK | CHECK CONFLICTS BY ROW TIMESTAMP COLUMN ColumnName [ UPDATE BY { SYSTEM | USER } ] [ ON EXCEPTION { ROLLBACK [ WORK ] | NO ACTION } ] [ {REPORT TO 'FileName' [ FORMAT { XML | STANDARD } ] | NO REPORT } ] }
Note: A CHECK CONFLICTS clause can only be used for elements of type TABLE.
The CHECK CONFLICTS clause of the CREATE REPLICATION or ALTER REPLICATION statement has the following parameters:
Parameter | Description |
---|---|
CHECK CONFLICTS BY ROW TIMESTAMP |
Indicates that all update and uniqueness conflicts are to be detected. Conflicts are resolved in the manner specified by the ON EXCEPTION parameter.
It also detects delete conflicts with |
COLUMN ColumnName |
Indicates the column in the replicated table to be used for timestamp comparison. The table is specified in the ELEMENT description by TableName .
|
NO CHECK |
Specify to suppress conflict resolution for a given element. |
UPDATE BY {SYSTEM | USER} |
Specifies whether the timestamp values are maintained by TimesTen (SYSTEM ) or the application (USER ). The replicated table in the master and subscriber databases must use the same UPDATE BY specification. See "Enabling system timestamp column maintenance" and "Enabling user timestamp column maintenance" in Oracle TimesTen In-Memory Database Replication Guide for more information. The default is UPDATE BY SYSTEM . |
ON EXCEPTION {ROLLBACK [WORK] | NO ACTION} |
Specifies how to resolve a detected conflict. ROW TIMESTAMP conflict detection has the resolution options:
The default is |
REPORT TO ' FileName ' |
Specifies the file to log updates that fail the timestamp comparison. FileName is a SQL character string that cannot exceed 1,000 characters. (SQL character string literals are single-quoted strings that may contain any sequence of characters, including spaces.) The same file can be used to log failed updates for multiple tables. |
[FORMAT {XML|STANDARD}] |
Optionally specifies the conflict report format for an element. The default format is STANDARD . |
NO REPORT |
Specify to suppress logging of failed timestamp comparisons. |
The names of all databases on the same host must be unique for each classic replication scheme for each TimesTen instance.
Replication elements can only be updated (by normal application transactions) through the MASTER database. PROPAGATOR and SUBSCRIBER databases are read-only.
If you define a classic replication scheme that permits multiple databases to update the same table, see "Resolving Replication Conflicts" in Oracle TimesTen In-Memory Database Replication Guide for recommendations on how to avoid conflicts when updating rows.
SELF is intended for classic replication schemes where all participating databases are local. Do not use SELF for a distributed classic replication scheme in a production environment; instead, spell out the host name for each database so that the same script can be used at each participating database.
Each attribute for a given STORE
may be specified only once, or not at all.
Specifying the PORT
of a database for one classic replication scheme specifies it for all classic replication schemes. All other connection attributes are specific to the classic replication scheme specified in the command.
For replication schemes, DataStoreName
is always the prefix of the TimesTen database checkpoint file names. These are the files with the .ds0
and .ds1
suffixes that are saved on the file system by checkpoint operations.
If a row with a default NOT INLINE VARCHAR
value is replicated, the receiver creates a copy of this value for each row, instead of pointing to the default value, if and only if the default value on the receiving node differs from the default value on the sending node.
To use timestamp comparison on replicated tables, you must specify a nullable column of type BINARY(8)
to hold the timestamp value. Define the timestamp column when you create the table. You cannot add the timestamp column with the ALTER TABLE
statement. In addition, the timestamp column cannot be part of a primary key or index.
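For example, a table intended for timestamp-based conflict checking might be created as follows. This is only a sketch; the table and column names are hypothetical.

CREATE TABLE repl.orders (
  order_id NUMBER(10,0) NOT NULL PRIMARY KEY,
  amount   NUMBER(10,2),
  tstamp   BINARY(8));

The tstamp column is nullable and is not part of the primary key or any index, so it can be named in a CHECK CONFLICTS BY ROW TIMESTAMP COLUMN clause.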
If you specify the XML report format, two XML documents are generated:
FileName
.xml
: This file contains the DTD for the report and the root node for the report. It includes the document definition and the include directive.
FileName
.include
: This file is included in FileName
.xml
and contains all the actual conflicts.
The FileName
.include
file can be truncated. Do not truncate the FileName
.xml
file.
For a complete description of the XML format, including examples of each conflict, see "Reporting conflicts to an XML file" in Oracle TimesTen In-Memory Database Replication Guide.
If you specify a report format for an element and then drop the element, the corresponding report files are not deleted.
Use the CONFLICT REPORTING SUSPEND AT
clause to specify a high water mark threshold at which the reporting of conflict resolution is suspended.
Use the CONFLICT REPORTING RESUME AT
clause to specify a low water mark threshold where the reporting of conflict resolution is resumed. When the rate of conflict falls below the low water mark threshold, conflict resolution reporting is resumed.
Whether conflict reporting is suspended by a replication agent does not persist when the local replication agent or the peer replication agent is stopped and restarted.
Do not use the CREATE REPLICATION
statement to replicate cache groups. Only active standby pairs can replicate cache groups. See the CREATE ACTIVE STANDBY PAIR
statement.
Replicate the contents of repl.tab
from masterds
to two subscribers, subscriber1ds
and subscriber2ds
.
CREATE REPLICATION repl.twosubscribers ELEMENT e TABLE repl.tab MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2", subscriber2ds ON "server3";
Replicate the entire masterds
database to the subscriber, subscriber1ds
. The FAILTHRESHOLD
specifies that a maximum of 10 log files can accumulate on masterds
before it decides that subscriber1ds
has failed.
CREATE REPLICATION repl.wholestore ELEMENT e DATASTORE MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2" STORE masterds FAILTHRESHOLD 10;
Bidirectionally replicate the entire westds
and eastds
databases and enable the RETURN TWOSAFE
service.
CREATE REPLICATION repl.biwholestore ELEMENT e1 DATASTORE MASTER westds ON "westcoast" SUBSCRIBER eastds ON "eastcoast" RETURN TWOSAFE ELEMENT e2 DATASTORE MASTER eastds ON "eastcoast" SUBSCRIBER westds ON "westcoast" RETURN TWOSAFE;
Enable the return receipt service for select transaction updates to the subscriber1ds
subscriber.
CREATE REPLICATION repl.twosubscribers ELEMENT e TABLE repl.tab MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2" RETURN RECEIPT BY REQUEST SUBSCRIBER subscriber2ds ON "server3";
Replicate the contents of the customerswest
table from the west
database and the customerseast
table from the east
database to the roundup
database. Enable the return receipt service for all transactions.
CREATE REPLICATION r ELEMENT west TABLE customerswest MASTER west ON "serverwest" SUBSCRIBER roundup ON "serverroundup" RETURN RECEIPT ELEMENT east TABLE customerseast MASTER east ON "servereast" SUBSCRIBER roundup ON "serverroundup" RETURN RECEIPT;
Replicate the contents of the repl.tab
table from the centralds
database to the propds
database, which propagates the changes to the backup1ds
and backup2ds
databases.
CREATE REPLICATION repl.propagator ELEMENT a TABLE repl.tab MASTER centralds ON "finance" SUBSCRIBER propds ON "nethandler" ELEMENT b TABLE repl.tab PROPAGATOR propds ON "nethandler" SUBSCRIBER backup1ds ON "backupsystem1", backup2ds ON "backupsystem2";
Bidirectionally replicate the contents of the repl.accounts
table between the eastds
and westds
databases. Each database is both a master and a subscriber for the repl.accounts
table.
Because the repl.accounts
table can be updated on either the eastds
or westds
database, it includes a timestamp column (tstamp
). The CHECK CONFLICTS
clause establishes automatic timestamp comparison to detect any update conflicts between the two databases. In the event of a comparison failure, the entire transaction that includes an update with the older timestamp is rolled back (discarded).
CREATE REPLICATION repl.r1 ELEMENT elem_accounts_1 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp UPDATE BY SYSTEM ON EXCEPTION ROLLBACK MASTER westds ON "westcoast" SUBSCRIBER eastds ON "eastcoast" ELEMENT elem_accounts_2 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp UPDATE BY SYSTEM ON EXCEPTION ROLLBACK MASTER eastds ON "eastcoast" SUBSCRIBER westds ON "westcoast";
Replicate the contents of the repl.accounts
table from the activeds
database to the backupds
database, using the return twosafe service, and using TCP/IP port 40000 on activeds
and TCP/IP port 40001 on backupds
. The transactions on activeds
need to be committed whenever possible, so configure replication so that the transaction is committed even after a replication timeout using LOCAL COMMIT
ACTION
, and so that the return twosafe service is disabled when replication is stopped. To avoid significant delays in the application if the connection to the backupds
database is interrupted, configure the return service to be disabled after five transactions have timed out, but also configure the return service to be re-enabled when the backupds
database's replication agent responds in under 100 milliseconds. Finally, the bandwidth between databases is limited, so configure replication to compress the data when it is replicated from the activeds
database.
CREATE REPLICATION repl.r ELEMENT elem_accounts_1 TABLE repl.accounts MASTER activeds ON "active" SUBSCRIBER backupds ON "backup" RETURN TWOSAFE ELEMENT elem_accounts_2 TABLE repl.accounts MASTER activeds ON "active" SUBSCRIBER backupds ON "backup" RETURN TWOSAFE STORE activeds ON "active" PORT 40000 LOCAL COMMIT ACTION COMMIT RETURN SERVICES OFF WHEN REPLICATION STOPPED DISABLE RETURN SUBSCRIBER 5 RESUME RETURN 100 COMPRESS TRAFFIC ON STORE backupds ON "backup" PORT 40001;
This example illustrates the conflict reporting suspend and conflict reporting resume clauses for table-level replication. Use these clauses with table-level replication, not database-level replication. Issue the repschemes
command to show that the replication scheme is created.
Command> CREATE TABLE repl.accounts (tstamp BINARY (8) NOT NULL PRIMARY KEY, tstamp1 BINARY (8)); Command> CREATE REPLICATION repl.r2 ELEMENT elem_accounts_1 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp1 UPDATE BY SYSTEM ON EXCEPTION ROLLBACK WORK MASTER westds ON "west1" SUBSCRIBER eastds ON "east1" ELEMENT elem_accounts_2 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp1 UPDATE BY SYSTEM ON EXCEPTION ROLLBACK WORK MASTER eastds ON "east1" SUBSCRIBER westds ON "west1" STORE westds CONFLICT REPORTING SUSPEND AT 20 CONFLICT REPORTING RESUME AT 10; Command> REPSCHEMES; Replication Scheme REPL.R2: Element: ELEM_ACCOUNTS_1 Type: Table REPL.ACCOUNTS Conflict Check Column: TSTAMP1 Conflict Exception Action: Rollback Work Conflict Timestamp Update: System Conflict Report File: (none) Master Store: WESTDS on WEST1 Transmit Durable Subscriber Store: EASTDS on EAST1 Element: ELEM_ACCOUNTS_2 Type: Table REPL.ACCOUNTS Conflict Check Column: TSTAMP1 Conflict Exception Action: Rollback Work Conflict Timestamp Update: System Conflict Report File: (none) Master Store: EASTDS on EAST1 Transmit Durable Subscriber Store: WESTDS on WEST1 Store: EASTDS on EAST1 Port: (auto) Log Fail Threshold: (none) Retry Timeout: 120 seconds Compress Traffic: Disabled Store: WESTDS on WEST1 Port: (auto) Log Fail Threshold: (none) Retry Timeout: 120 seconds Compress Traffic: Disabled Conflict Reporting Suspend: 20 Conflict Reporting Resume: 10 1 replication scheme found.
Example of NetworkOperation
clause with 2 MASTERIP
and SUBSCRIBERIP
clauses:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2 RETURN RECEIPT MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4" PRIORITY 2;
Example of NetworkOperation
clause. Use the default sending interface but a specific receiving network:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2 ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" SUBSCRIBERIP "rep2nic2" PRIORITY 1;
Example of using the NetworkOperation
clause with multiple subscribers:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2,rep3 ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 ROUTE MASTER Rep1 ON "machine1" SUBSCRIBER Rep3 ON "machine2" MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4";
The CREATE SEQUENCE
statement creates a new sequence number generator that can subsequently be used by multiple users to generate unique integers. Use the CREATE SEQUENCE
statement to define the initial value of the sequence, the increment value, and the maximum or minimum value, and to determine whether the sequence continues to generate numbers after the minimum or maximum is reached.
This statement is supported with TimesTen Scaleout. The BATCH
clause is supported in TimesTen Scaleout only.
CREATE SEQUENCE [Owner.]SequenceName [INCREMENT BY IncrementValue] [MINVALUE MinimumValue] [MAXVALUE MaximumValue] [CYCLE] [CACHE CacheValue] [START WITH StartValue] [BATCH BatchValue]
All parameters in the CREATE SEQUENCE
statement must be integer values.
If you do not specify values for these parameters, TimesTen defaults to an ascending sequence that starts with 1, increments by 1, has the default maximum value and does not cycle.
Do not create a sequence with the same name as a view or materialized view.
Sequences with the CYCLE
attribute cannot be replicated (TimesTen Classic).
In TimesTen Classic, in which there is a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
is 3 or greater when you execute CREATE SEQUENCE
on the active database, the sequence is replicated to all databases in the replication scheme. To include the sequence in the replication scheme, set DDL_REPLICATION_ACTION
to INCLUDE
. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
The CREATE
SEQUENCE
statement creates a global object. Once you create the sequence, the sequence values are retrieved from any element of the database.
Sequence values are unique, but they are returned in monotonic order only within a single element. Over time, across elements, sequence values might not be returned in monotonic order.
The batch value is the range of unique sequence values stored in an element. Each element has its own batch and gets a new batch when its local batch is consumed. One element owns the sequence and is responsible for allocating batches of sequence values to the other elements.
For the BATCH
clause:
Use this clause to specify the range of sequence values that are stored on each element of the grid.
The default is 10 million.
BatchValue
must be greater than or equal to CacheValue
.
The maximum value for BatchValue
is dependent on the maximum value of the signed integer for the platform.
Each element in a replica set has its own batch.
An element's batch sequence values are recoverable. Cache values are not recoverable.
See "Using sequences" in Oracle TimesTen In-Memory Database Scaleout User's Guide for detailed information and examples.
Using CURRVAL and NEXTVAL in TimesTen Scaleout
To refer to the SEQUENCE
values in a SQL statement, use CURRVAL
and NEXTVAL
.
CURRVAL
returns the value of the last call to NEXTVAL
if there is one in the current session, otherwise it returns an error.
NEXTVAL
increments the current sequence value by the specified increment and returns the value for each row accessed.
If you execute a single SQL statement with multiple NEXTVAL
references, TimesTen only increments the sequence once, returning the same value for all occurrences of NEXTVAL
. If a SQL statement contains both NEXTVAL
and CURRVAL
, NEXTVAL
is executed first. CURRVAL
and NEXTVAL
have the same value in that SQL statement.
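For example, a single SELECT that references NEXTVAL twice returns the same value in both columns. This is only a sketch; the sequence name is hypothetical.

SELECT seq.NEXTVAL, seq.NEXTVAL FROM dual;
-- Both columns contain the same value because the sequence is incremented only once per statement.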
NEXTVAL
and CURRVAL
can be used in the following:
The SelectList
of a SELECT
statement, but not the SelectList
of a subquery
The SelectList
of an INSERT...SELECT
statement
The SET
clause of an UPDATE
statement
See "Using sequences" in Oracle TimesTen In-Memory Database Scaleout User's Guide for information on the usage of CURRVAL
and NEXTVAL
in a grid and for examples.
Using CURRVAL and NEXTVAL in TimesTen Classic
To refer to the SEQUENCE
values in a SQL statement, use CURRVAL
and NEXTVAL
.
CURRVAL
returns the value of the last call to NEXTVAL
if there is one in the current session, otherwise it returns an error.
NEXTVAL
increments the current sequence value by the specified increment and returns the value for each row accessed.
The current value of a sequence is a connection-specific value. If there are two concurrent connections to the same database, each connection has its own CURRVAL
of the same sequence set to its last NEXTVAL
reference. When the maximum value is reached, SEQUENCE
either wraps or issues an error statement, depending on the value of the CYCLE
option of the CREATE SEQUENCE
statement. In the case of recovery, sequences are not rolled back. It is possible that the range of values of a sequence can have gaps; however, each sequence value is still unique.
If you execute a single SQL statement with multiple NEXTVAL
references, TimesTen only increments the sequence once, returning the same value for all occurrences of NEXTVAL
. If a SQL statement contains both NEXTVAL
and CURRVAL
, NEXTVAL
is executed first. CURRVAL
and NEXTVAL
have the same value in that SQL statement.
Note:
NEXTVAL
cannot be used in a query on a standby node of an active standby pair.
NEXTVAL
and CURRVAL
can be used in the following:
The SelectList
of a SELECT
statement, but not the SelectList
of a subquery
The SelectList
of an INSERT...SELECT
statement
The SET
clause of an UPDATE
statement
For detailed examples, see "Using sequences" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.
Syntax example:
Command> CREATE SEQUENCE mysequence BATCH 100; Command> describe mysequence; Sequence SAMPLEUSER.MYSEQUENCE: Minimum Value: 1 Maximum Value: 9223372036854775807 Current Value: 1 Increment: 1 Cache: 20 Cycle: Off Batch: 100 1 sequence found.
Create a sequence.
CREATE SEQUENCE mysequence INCREMENT BY 1 MINVALUE 2 MAXVALUE 1000;
This example assumes that tab1
has 1 row in the table and that CYCLE
is used:
CREATE SEQUENCE s1 MINVALUE 2 MAXVALUE 4 CYCLE; SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 2; */ SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 3; */ SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 4; */
After the maximum value is reached, the cycle starts from the minimum value for an ascending sequence.
SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 2; */
To create a sequence and generate a sequence number:
CREATE SEQUENCE seq INCREMENT BY 1; INSERT INTO student VALUES (seq.NEXTVAL, 'Sally');
To use a sequence in an UPDATE SET
clause:
UPDATE student SET studentno = seq.NEXTVAL WHERE name = 'Sally';
To use a sequence in a query:
SELECT seq.CURRVAL FROM student;
The CREATE SYNONYM
statement creates a public or private synonym for a database object. A synonym is an alias for a database object. The object can be a table, view, synonym, sequence, PL/SQL stored procedure, PL/SQL function, PL/SQL package, materialized view or cache group.
A private synonym is owned by a specific user and exists in that user's schema. A private synonym is accessible to users other than the owner only if those users have appropriate privileges on the underlying object and specify the schema along with the synonym name.
A public synonym is accessible to all users as long as the user has appropriate privileges on the underlying object.
CREATE SYNONYM
is a DDL statement.
Synonyms can be used in these SQL statements:
DML statements: SELECT
, DELETE
, INSERT
, UPDATE
, MERGE
Some DDL statements: GRANT
, REVOKE
, CREATE TABLE ... AS SELECT
, CREATE VIEW ... AS SELECT
, CREATE INDEX
, DROP INDEX
Some cache group statements: LOAD CACHE GROUP
, UNLOAD CACHE GROUP
, REFRESH CACHE GROUP
, FLUSH CACHE GROUP
CREATE SYNONYM
(if owner) or CREATE ANY SYNONYM
(if not owner) to create a private synonym.
CREATE PUBLIC SYNONYM
to create a public synonym.
Parameter | Description |
---|---|
[OR REPLACE] |
Specify OR REPLACE to recreate the synonym if it already exists. Use this clause to change the definition of an existing synonym without first dropping it. |
[PUBLIC] |
Specify PUBLIC to create a public synonym. Public synonyms are accessible to all users, but each user must have appropriate privileges on the underlying object in order to use the synonym.
When resolving references to an object, TimesTen uses a public synonym only if the object is not prefaced by a schema name. |
[ Owner1 .] synonym |
Specify the owner of the synonym. You cannot specify an owner if you have specified PUBLIC . If you omit both PUBLIC and Owner1 , TimesTen creates the synonym in your own schema.
Specify the name for the synonym, which is limited to 30 bytes. |
[ Owner2 .] object |
Specify the owner in which the object resides. Specify the object name for which you are creating a synonym. If you do not qualify object with Owner2 , the object is in your own schema. The Owner2 and object do not need to exist when the synonym is created. |
The schema object does not need to exist when its synonym is created.
Do not create a public synonym with the same name as a TimesTen built-in procedure.
To use a synonym, a user must first be granted the appropriate privileges on the object aliased by the synonym.
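For example, assuming a hypothetical owner terry grants access to another user pat, pat can then query the table through a synonym. This is only a sketch; the user, table, and synonym names are not part of any TimesTen sample schema, and pat is assumed to have the CREATE SYNONYM privilege.

GRANT SELECT ON terry.tab TO pat;
-- Connected as pat:
CREATE SYNONYM mytab FOR terry.tab;
SELECT * FROM mytab;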
A private synonym cannot have the same name as tables, views, sequences, PL/SQL packages, functions, procedures, and cache groups that are in the same schema as the private synonym.
A public synonym may have the same name as a private synonym or an object name.
If the PassThrough
attribute is set so that a query needs to be executed in the Oracle database, the query is sent to the Oracle database without any changes. If the query uses a synonym for a table in a cache group, then a synonym with the same name must be defined for the corresponding Oracle database table for the query to be successful.
When an object name is used in the DML and DDL statements in which a synonym can be used, the object name is resolved as follows:
1. Search for a match within the current schema. If no match is found, then:
2. Search for a match with a public synonym name. If no match is found, then:
3. Search for a match in the SYS schema. If no match is found, then:
4. The object does not exist.
TimesTen creates a public synonym for some objects in the SYS
schema. The name of the public synonym is the same as the object name. Thus steps 2 and 3 in the object name resolution can be switched without changing the results of the search.
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
is 2 or greater when you execute CREATE SYNONYM
on the active database, the synonym is replicated to all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
As user ttuser
, create a synonym for the jobs
table. Verify that you can retrieve the information using the synonym. Display the contents of the SYS.USER_SYNONYMS
system view.
Command> CREATE SYNONYM synjobs FOR jobs; Synonym created. Command> SELECT FIRST 2 * FROM jobs; < AC_ACCOUNT, Public Accountant, 4200, 9000 > < AC_MGR, Accounting Manager, 8200, 16000 > 2 rows found. Command> SELECT FIRST 2 * FROM synjobs; < AC_ACCOUNT, Public Accountant, 4200, 9000 > < AC_MGR, Accounting Manager, 8200, 16000 > 2 rows found. Command> SELECT * FROM sys.user_synonyms; < SYNJOBS, TTUSER, JOBS, <NULL> > 1 row found.
Create a public synonym for the employees
table.
Command> CREATE PUBLIC SYNONYM pubemp FOR employees; Synonym created.
Verify that pubemp
is listed as a public synonym in the SYS.ALL_SYNONYMS
system view.
Command> SELECT * FROM sys.all_synonyms; < PUBLIC, TABLES, SYS, TABLES, <NULL> > ... < TTUSER, SYNJOBS, TTUSER, JOBS, <NULL> > < PUBLIC, PUBEMP, TTUSER, EMPLOYEES, <NULL> > 57 rows found.
Create a synonym for the tab
table in the terry
schema. Describe the synonym.
Command> CREATE SYNONYM syntab FOR terry.tab; Synonym created. Command> DESCRIBE syntab; Synonym TTUSER.SYNTAB: For Table TERRY.TAB Columns: COL1 VARCHAR2 (10) INLINE COL2 VARCHAR2 (10) INLINE 1 Synonyms found.
Redefine the synjobs
synonym to be an alias for the employees
table by using the OR REPLACE
clause. Describe synjobs
.
Command> CREATE OR REPLACE SYNONYM synjobs FOR employees; Synonym created. Command> DESCRIBE synjobs; Synonym TTUSER.SYNJOBS: For Table TTUSER.EMPLOYEES Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE UNIQUE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) 1 Synonyms found.
The CREATE TABLE
statement defines a table.
The CREATE
TABLE
statement is supported in TimesTen Scaleout and in TimesTen Classic. However, there are differences in syntax and semantics, so the supported syntax, parameters, description (semantics), and examples are presented separately for TimesTen Scaleout and for TimesTen Classic. Although this introduces some repetition, it allows you to progress from syntax to parameters to semantics to examples for each usage.
Review the required privilege section and then see:
CREATE TABLE
(if owner) or CREATE ANY TABLE
(if not owner).
The owner of the created table must have the REFERENCES
privilege on tables referenced by the REFERENCE
clause.
In TimesTen Classic:
ADMIN
privilege is required if replicating a new table across an active standby pair when DDL_REPLICATION_LEVEL=2
or greater and DDL_REPLICATION_ACTION
=INCLUDE
.
These attributes cause the CREATE TABLE
to implicitly execute an ALTER ACTIVE STANDBY PAIR
... INCLUDE TABLE
statement. See "ALTER SESSION" for more details.
After reviewing this section, see:
CREATE TABLE: Usage with TimesTen Scaleout
This statement is supported with TimesTen Scaleout. Column-based compression and aging are not supported. The distribution clause is not supported for global temporary tables.
See:
SQL syntax for CREATE TABLE: TimesTen Scaleout
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
The syntax for a persistent table:
CREATE TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [DistributionClause] [AS SelectQuery]
The syntax for the distribution clause:
DistributionClause::= DISTRIBUTE BY HASH [(ColumnName [,...])] | DISTRIBUTE BY REFERENCE [(ForeignKeyConstraint)] | DUPLICATE
The distribution clause is not supported for global temporary tables. The syntax is:
CREATE GLOBAL TEMPORARY TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [ON COMMIT { DELETE | PRESERVE } ROWS ]
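For illustration, the following sketch creates a global temporary table that preserves its rows across commits within a connection. The table and column names are hypothetical.

CREATE GLOBAL TEMPORARY TABLE session_totals (
  item_id TT_INTEGER NOT NULL PRIMARY KEY,
  total   NUMBER(10,2))
  ON COMMIT PRESERVE ROWS;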
Parameters for CREATE TABLE: TimesTen Scaleout
Parameter | Description |
---|---|
[ Owner .] TableName |
Name to be assigned to the new table. Two tables cannot have the same owner name and table name.
If you do not specify the owner name, your login name becomes the owner name for the new table. Owners of tables in TimesTen are determined by the user ID settings or login names. Oracle Database table owner names must always match TimesTen table owner names. For rules on creating names, see "Basic names". |
GLOBAL TEMPORARY |
Specifies that the table being created is a global temporary table. A temporary table is similar to a persistent table but it is effectively materialized only when referenced in a connection.
A global temporary table definition is persistent and is visible to all connections, but the table instance is local to each connection. It is created when a command referencing the table is compiled for a connection and dropped when the connection is disconnected. All instances of the same temporary table have the same name but they are identified by an additional connection ID together with the table name. Global temporary tables are allocated in temp space. The contents of a global temporary table cannot be shared between connections. Each connection sees only its own content of the table and compiled commands that reference temporary tables are not shared among connections. Operations on temporary tables do generate log records. The amount of log they generate is less than for permanent tables. The
Local temporary tables are not supported. No object privileges are needed to access global temporary tables. Do not specify the |
ColumnDefinition |
An individual column in a table. Each table must have at least one column.
If you specify the |
ColumnName |
Name of the column in a table. Is used in various clauses of the CREATE TABLE statement.
If the name is used in the primary key definition, it forms the primary key for the table to be created. Up to 16 columns can be specified for the primary key. For a foreign key, the If you specify the |
PRIMARY KEY |
PRIMARY KEY may only be specified once in a table definition. It provides a way of identifying one or more columns that, together, form the primary key of the table. The contents of the primary key have to be unique and NOT NULL . You cannot specify a column as both UNIQUE and a single column PRIMARY KEY . |
CONSTRAINT ForeignKeyName |
Specifies an optional user-defined name for a foreign key. If not provided by the user, the system provides a default name. |
FOREIGN KEY |
This specifies a foreign key constraint between the new table and the referenced table identified by RefTableName . There are two lists of columns specified in the foreign key constraint.
Columns in the first list are columns of the new table and are called the referencing columns. Columns in the second list are columns of the referenced table and are called referenced columns. These two lists must match in data type, including length, precision and scale. The referenced table must already have a primary key or unique index on the referenced column. The column name list of referenced columns is optional. If omitted, the primary index of The declaration of a foreign key creates a range index on the referencing columns. The user cannot drop the referenced table or its referenced index until the referencing table is dropped. The foreign key constraint asserts that each row in the new table must match a row in the referenced table such that the contents of the referencing columns are equal to the contents of the referenced columns. Any TimesTen supports SQL-92 A foreign key can be defined on a global temporary table, but it can only reference a global temporary table. If a parent table is defined with A foreign key cannot reference an active parent table. An active parent table is one that has some instance materialized for a connection. If you specify the |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
UNIQUE |
UNIQUE provides a way of identifying a column where each row must contain a unique value. |
UNIQUE HASH ON |
Hash index for the table. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined. |
HashColumnName |
Column defined in the table that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key.
If you specify the |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for If your estimate for |
[ON COMMIT {DELETE|PRESERVE} ROWS] |
The optional statement specifies whether to delete or preserve rows when a transaction that touches a global temporary table is committed. If not specified, the rows of the temporary table are deleted. |
AS SelectQuery |
If specified, creates a new table from the contents of the result set of the SelectQuery . The rows returned by SelectQuery are inserted into the table.
Data types and data type lengths are derived from
You can specify a statement level optimizer hint after the |
DistributionClause |
Supported in TimesTen Scaleout only. There are three options:
The The The If you do not specify a clause, the default is You must specify the You cannot update the distribution key columns. |
Column definition: TimesTen Scaleout
You can only use the keyword, ENABLE
, when defining columns in the CREATE
TABLE
statement.
The syntax is as follows:
ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [PRIMARY KEY | UNIQUE | NULL [UNIQUE] | NOT NULL [ENABLE] [PRIMARY KEY | UNIQUE] ]
The column definition has the following parameters:
Parameter | Description |
---|---|
ColumnName |
Name to be assigned to one of the columns in the new table. No two columns in the table can be given the same name. A table can have a maximum of 1000 columns.
If you specify the |
ColumnDataType |
Type of data the column can contain. Some data types require that you indicate a length. See Chapter 1, "Data Types" for the data types that can be specified.
If you specify the |
DEFAULT DefaultVal |
Indicates that if a value is not specified for the column in an INSERT statement, the default value DefaultVal is inserted into the column. The default value specified must have a type that is compatible with the data type of the column. A default value can be as long as the data type of the associated column allows. You cannot assign a default value for the ROWID data type or for columns in read-only cache groups. In addition, you cannot use a function within the DEFAULT clause.
The following are legal data types for
If the default value is one of the users, the data type of the column must be either If you specify the |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is greater than 128 bytes are stored out of line. Variable-length columns whose declared column length is less than or equal to 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords.
If you specify the |
NULL |
Indicates that the column can contain NULL values.
If you specify the If you specify |
NOT NULL [ENABLE] |
Indicates that the column cannot contain NULL values. If NOT NULL is specified, any statement that attempts to place a NULL value in the column is rejected.
If you specify the If you specify You can only use the keyword, |
UNIQUE |
A unique constraint placed on the column. No two rows in the table may have the same value for this column. TimesTen creates a unique range index to enforce uniqueness. So a column with a unique constraint can use more memory and time during execution than a column without the constraint. Cannot be used with PRIMARY KEY .
If you specify the |
PRIMARY KEY |
A unique NOT NULL constraint placed on the column. No two rows in the table may have the same value for this column. Cannot be used with UNIQUE .
If you specify the |
Description for CREATE TABLE: TimesTen Scaleout
TimesTen Scaleout distributes data by one of three distribution schemes:
Hash: TimesTen Scaleout distributes data based on the hash of the primary key column(s) or one or more columns you specify in the DISTRIBUTE
BY
HASH
clause. A given row is stored in a replica set. Rows are evenly distributed across the replica sets. Hash is the default distribution scheme as it is appropriate for most tables.
Reference: TimesTen Scaleout distributes data of a child table based on the location of the parent table that is identified by the foreign key. A given row of a child table is present in the same replica set as its parent table. This distribution scheme optimizes joins by distributing related data within a single replica set. You can distribute the parent table by hash or reference. The parent is called the root table if it is distributed by hash. You must define the child (foreign) key columns as NOT
NULL
.
Duplicate: TimesTen Scaleout distributes full identical copies of data to all elements of the database. All rows are present in all elements. This distribution scheme optimizes the performance of reads by storing identical data in every data instance. This distribution scheme is appropriate for tables that are relatively small, frequently read, and infrequently modified.
See "Defining the distribution scheme for tables" and "Defining table distribution schemes" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
For tables with a hash distribution scheme:
The distribution key is used if specified.
The primary key is used if the distribution key is not specified.
A hidden column is used if there is no primary key or distribution key. Data is distributed randomly and evenly.
You should specify a distribution key if there is a primary key defined on the table, but the primary key is not the best way to distribute the data. If there is no primary key, but there is a unique column, then you may want to distribute the data on this unique column. If there is no primary key and no unique column, then do not specify a distribution key. TimesTen Scaleout distributes the data on the hidden column.
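For example, if a table has a primary key but most joins and lookups use a different unique column, you might distribute the data on that column instead. This is only a sketch with hypothetical table and column names.

CREATE TABLE subscriptions (
  sub_id    NUMBER(10,0) NOT NULL PRIMARY KEY,
  acct_code VARCHAR2(12) NOT NULL UNIQUE,
  created   DATE NOT NULL)
  DISTRIBUTE BY HASH (acct_code);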
If the distribution scheme is by reference:
Only a single foreign key constraint can be referenced in the DISTRIBUTE
BY
REFERENCE
clause. There may be multiple foreign key constraints in the child table, but only one can be used to determine the reference distribution.
A referenced foreign key constraint must be named in the constraint clause if there is more than one.
The foreign key constraint in the reference distribution clause must reference the primary key or a unique key of the parent table. If the parent table is the root, the referenced key must be the distribution key.
You can create a foreign key relationship to a non distribution key column of the parent table, but you cannot then distribute by reference based on this foreign key relationship.
You cannot update the foreign key column that is used in the DISTRIBUTE
BY
REFERENCE
clause.
You can use the CREATE
TABLE
...AS
SELECT
statement to create a new table based on the definition of the original table. Note that primary key constraints are not carried over to the new table, so the way the data is distributed changes unless you define a primary key constraint on the new table.
See Example 6-24, "Use CREATE TABLE...AS SELECT" for more information.
You cannot update the distribution key column(s) unless you update the column(s) to the same value.
All columns participating in the primary key are NOT NULL
.
A PRIMARY KEY
that is specified in the ColumnDefinition
can only be specified for one column.
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
For both primary key and foreign key constraints, duplicate column names are not allowed in the constraint column list.
You cannot update primary key column(s) unless you update the column(s) to the same value.
There are performance considerations when you define out of line columns instead of inline columns (see the sketch following this list):
Accessing data is slower because TimesTen does not store data contiguously with out of line columns.
Populating data is slower because TimesTen generates more logging operations.
Deleting data is slower because TimesTen performs more reclaim and logging operations.
Storing a column requires less overhead.
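To override the default storage for a particular column, you can add the INLINE or NOT INLINE keyword to the column definition, as in the following sketch with hypothetical names.

CREATE TABLE notes (
  note_id    NUMBER NOT NULL PRIMARY KEY,
  short_note VARCHAR2(200) INLINE,
  long_note  VARCHAR2(100) NOT INLINE);

Here short_note is stored inline even though its declared length exceeds 128 bytes, and long_note is stored out of line even though its declared length does not.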
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
You cannot create a table that has a foreign key referencing a cached table.
UNIQUE
column constraint and default column values are not supported with materialized views.
Use the ALTER TABLE
statement to change the representation of the primary key index for a table.
If you specify the AS
SelectQuery
clause:
Data types and data type lengths are derived from the SelectQuery
. Do not specify data types on the columns of the table you are creating.
TimesTen defines NOT NULL
constraints on columns in the new table when those constraints were explicitly created on the corresponding columns of the selected table and SelectQuery
selects the column rather than an expression containing the column.
NOT NULL
constraints that were implicitly created by TimesTen on columns of the selected table (for example, primary keys) are carried over to the new table. You can override the NOT NULL
constraint on the selected table by defining the new column as NULL
. For example:
CREATE TABLE newtable (newcol NULL) AS SELECT (col) FROM tab;
NOT INLINE
/INLINE
attributes are carried over to the new table.
Unique keys, foreign keys, indexes and column default values are not carried over to the new table.
If all expressions in SelectQuery
are columns, rather than expressions, then you can omit the column names from the table you are creating. In this case, the names of the columns are the same as the columns in SelectQuery
. If the SelectQuery
contains an expression rather than a simple column reference, either specify a column alias or name the column in the CREATE TABLE
statement, as shown in the sketch after this list.
Do not specify foreign keys on the table you are creating.
Do not specify the SELECT FOR UPDATE
clause in SelectQuery
.
The ORDER BY
clause is not supported when you use the AS
SelectQuery
clause.
SelectQuery
cannot contain set operators UNION
, MINUS
, INTERSECT
.
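As a sketch of the alias and column-naming rule mentioned above (the table and column names are hypothetical), either of the following forms names the column that receives the expression result:

CREATE TABLE pay_summary (total_pay) AS SELECT salary + commission_pct FROM employees;
CREATE TABLE pay_summary2 AS SELECT salary + commission_pct AS total_pay FROM employees;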
By default, a range index is created to enforce the primary key. Use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries using a table's primary key, then choose a range index for that table by omitting the UNIQUE HASH
clause.
If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause.
A hash index is created with a fixed size that remains constant for the life of the table or until the hash index is resized with the ALTER TABLE
statement or when the index is dropped and recreated. A smaller hash index results in more hash collisions. A larger hash index reduces collisions but can waste memory. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.
To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages
parameter of the SET
PAGES
clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).
At most 16 columns are allowed in a hash key.
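As a sketch (with a hypothetical table and an assumed expectation of about 256,000 rows), a table that is queried only by exact primary-key lookups might be created with a sized hash index as follows:

CREATE TABLE lookup_codes (
  code_id NUMBER(10,0) NOT NULL PRIMARY KEY,
  label   VARCHAR2(40) NOT NULL)
  UNIQUE HASH ON (code_id) PAGES = 1000;

The value PAGES = 1000 follows from dividing the expected 256,000 rows by 256.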
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
: Since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths. Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, the following rule is also enforced.
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
The data in a global temporary table is private to the current connection and does not need to be secured between users. Thus, global temporary tables do not require object privileges.
These examples illustrate how to create tables with the duplicate, hash, and reference distribution schemes.
These examples illustrate how to create tables with the DISTRIBUTE
BY
REFERENCE
distribution scheme:
Example 6-20, "DISTRIBUTE BY REFERENCE with one foreign key"
Example 6-22, "Foreign key relationship not on distribution key of the parent table"
Example 6-23, "Using first and second level child foreign key relationship"
Example 6-24, "Use CREATE TABLE...AS SELECT" shows how to use the CREATE
TABLE
...AS SELECT
clause in TimesTen Scaleout.
Example 6-13 Create the account_type table
This example runs ttIsql
to create the account_type
table and uses a duplicate distribution scheme to distribute the data. This table contains few rows, so the duplicate distribution scheme optimizes reads. Copies of the data in the table are distributed to all elements of the database.
Command> CREATE TABLE account_type ( type CHAR(1) NOT NULL PRIMARY KEY, description VARCHAR2(100) NOT NULL) DUPLICATE;
Example 6-14 Create the account_status table
This example runs ttIsql
to create the account_status
table and uses a duplicate distribution scheme. The table is small, so the duplicate distribution scheme optimizes reads. Copies of the data in the table are distributed to all elements of the database.
Command> CREATE TABLE account_status(status NUMBER(2) NOT NULL PRIMARY KEY, description VARCHAR2(100) NOT NULL) DUPLICATE;
Example 6-15 Create the customers table
This example runs ttIsql
to create the customers
table and distributes the table by hash. The data in the table is distributed to each element based on the hash of the cust_id
column (the primary key).
Command> CREATE TABLE customers(cust_id NUMBER(10,0) NOT NULL PRIMARY KEY, first_name VARCHAR2(30) NOT NULL,last_name VARCHAR2(30) NOT NULL, addr1 VARCHAR2(64),addr2 VARCHAR2(64), zipcode VARCHAR2(5), member_since DATE NOT NULL) DISTRIBUTE BY HASH;
Example 6-16 Create the accounts table
This example runs ttIsql
to create the accounts
table and defines three primary/foreign key relationships. The accounts
table is distributed by reference and the data is distributed based on the fk_customer
foreign key constraint. This scheme optimizes the performance of joins by distributing the data in the accounts
table based on the location of the corresponding value of the customers.cust_id
parent column (of the fk_customer
foreign key constraint). The row of a child table exists in the same replica set as the parent table. If the join is performed on the primary or foreign key, the data is stored on one element, so TimesTen Scaleout does not have to access different elements.
Command> CREATE TABLE accounts(account_id NUMBER(10,0) NOT NULL PRIMARY KEY, phone VARCHAR2(15) NOT NULL,account_type CHAR(1) NOT NULL, status NUMBER(2) NOT NULL,current_balance NUMBER(10,2) NOT NULL, prev_balance NUMBER(10,2) NOT NULL,date_created DATE NOT NULL, cust_id NUMBER(10,0) NOT NULL, CONSTRAINT fk_customer FOREIGN KEY (cust_id) REFERENCES customers(cust_id),CONSTRAINT fk_acct_type FOREIGN KEY (account_type) REFERENCES account_type(type), CONSTRAINT fk_acct_status FOREIGN KEY (status) REFERENCES account_status(status) ) DISTRIBUTE BY REFERENCE (fk_customer);
Example 6-17 Create the transactions table
This example runs ttIsql
to create the transactions
table. The transactions
table is distributed by reference and the data is distributed based on the fk_accounts
foreign key constraint. This scheme optimizes the performance of joins by distributing the data in the transaction table based on the location of the corresponding value of the accounts.account_id
parent column (of the fk_accounts
foreign key constraint). The row of a child table exists in the same replica set as the parent table. If the join is performed on the primary or foreign key, the data is stored on one element, so TimesTen Scaleout does not have to access different elements.
The accounts
parent table is also distributed by reference. This defines a two-level distribute by reference hierarchy.
Command> CREATE TABLE transactions(transaction_id NUMBER(10,0) NOT NULL, account_id NUMBER(10,0) NOT NULL , transaction_ts TIMESTAMP NOT NULL, description VARCHAR2(60), optype CHAR(1) NOT NULL, amount NUMBER(6,2) NOT NULL, PRIMARY KEY (account_id, transaction_id, transaction_ts), CONSTRAINT fk_accounts FOREIGN KEY (account_id) REFERENCES accounts(account_id) ) DISTRIBUTE BY REFERENCE (fk_accounts);
Example 6-18 View the tables in the database
This example runs the ttIsql
tables
command to view the tables in the database.
Command> tables; SAMPLEUSER.ACCOUNTS SAMPLEUSER.ACCOUNT_STATUS SAMPLEUSER.ACCOUNT_TYPE SAMPLEUSER.CUSTOMERS SAMPLEUSER.TRANSACTIONS 5 tables found.
Example 6-19 View the definition of the accounts table
This example runs the ttIsql
describe
command to view the definition of the accounts
table.
Command> describe accounts; Table SAMPLEUSER.ACCOUNTS: Columns: *ACCOUNT_ID NUMBER (10) NOT NULL PHONE VARCHAR2 (15) INLINE NOT NULL ACCOUNT_TYPE CHAR (1) NOT NULL STATUS NUMBER (2) NOT NULL CURRENT_BALANCE NUMBER (10,2) NOT NULL PREV_BALANCE NUMBER (10,2) NOT NULL DATE_CREATED DATE NOT NULL CUST_ID NUMBER (10) NOT NULL DISTRIBUTE BY REFERENCE (FK_CUSTOMER) 1 table found. (primary key columns are indicated with *)
Example 6-20 DISTRIBUTE BY REFERENCE with one foreign key
This example illustrates that you do not have to specify the foreign key constraint in the DISTRIBUTE
BY
REFERENCE
clause. There is only one foreign key.
First create the Orders
table and distribute by hash.
Command> CREATE TABLE Orders (OrderId TT_INTEGER NOT NULL PRIMARY KEY, OrderDate DATE NOT NULL, discount BINARY_FLOAT) DISTRIBUTE BY HASH;
Create the OrderDetails
table with one foreign key constraint. There is no need to name the constraint in the distribution clause.
Command> CREATE TABLE OrderDetails (OrderId TT_INTEGER NOT NULL, PartId TT_INTEGER NOT NULL, Quantity TT_INTEGER NOT NULL, FOREIGN KEY (OrderId) REFERENCES Orders (OrderId)) DISTRIBUTE BY REFERENCE;
Run the ttIsql
describe
command to view the tables.
Command> describe Orders; Table SAMPLEUSER.ORDERS: Columns: *ORDERID TT_INTEGER NOT NULL ORDERDATE DATE NOT NULL DISCOUNT BINARY_FLOAT DISTRIBUTE BY HASH (ORDERID) 1 table found. (primary key columns are indicated with *) Command> describe OrderDetails; Table SAMPLEUSER.ORDERDETAILS: Columns: ORDERID TT_INTEGER NOT NULL PARTID TT_INTEGER NOT NULL QUANTITY TT_INTEGER NOT NULL DISTRIBUTE BY REFERENCE 1 table found. (primary key columns are indicated with *)
Example 6-21 Table with more than one foreign key
This example illustrates that if a table contains more than one foreign key constraint, the DISTRIBUTE
BY
REFERENCE
clause must name the foreign key constraint that will be used as the reference. The customers2
table is the parent and is distributed by hash. The OrderDetails2
table contains two foreign key constraints and this table is distributed by reference on the c1_1
constraint. This constraint must be included in the DISTRIBUTE
BY
REFERENCE
clause.
Command> CREATE TABLE customers2 (CustomerId TT_INTEGER NOT NULL PRIMARY KEY, LastOrderDate DATE NOT NULL,PromotionDiscount BINARY_FLOAT) DISTRIBUTE BY HASH; Command> CREATE TABLE OrderDetails2 (OrderId TT_INTEGER NOT NULL, CustomerId TT_INTEGER NOT NULL, Quantity TT_INTEGER NOT NULL, CONSTRAINT c1_1 FOREIGN KEY (OrderId) REFERENCES Orders (OrderId), CONSTRAINT c2_2 FOREIGN KEY (CustomerId) REFERENCES Customers2 (CustomerId)) DISTRIBUTE BY REFERENCE (c1_1);
Example 6-22 Foreign key relationship not on distribution key of the parent table
This example creates the orders2
parent table with the OrderId
primary key and the CouponId
unique key. The table is distributed by hash. Since no distribution key is specified, the data is distributed by hash on the OrderId
primary key. The coupons
child table establishes a foreign key relationship on the CouponId
unique key. Since this key is not the distribution key of the orders2
parent table, TimesTen Scaleout throws an error.
Command> CREATE TABLE Orders2 (OrderId TT_INTEGER NOT NULL PRIMARY KEY, CouponId TT_INTEGER NOT NULL UNIQUE, OrderDate DATE NOT NULL, discount BINARY_FLOAT) DISTRIBUTE BY HASH; Command> CREATE TABLE Coupons (CouponId TT_INTEGER NOT NULL, discount BINARY_FLOAT, CONSTRAINT CouponC1 FOREIGN KEY (CouponId) REFERENCES Orders2 (CouponId) ) DISTRIBUTE BY REFERENCE (CouponC1); 1067: The Parent keys for a distribute by reference table with hash distributed parent must include the distribution keys of the parent. The command failed.
Example 6-23 Using first and second level child foreign key relationship
This example creates the Coupons2
parent table and distributes the data by hash. The Orders3
child table is created as a first level foreign key relationship and the parent table (Coupons2
) is the root table. The OrderDetails3
child table is created as a second level foreign key relationship and the parent table (Orders3
) is a reference table.
Command> CREATE TABLE Coupons2 (CouponId TT_INTEGER NOT NULL PRIMARY KEY, discount BINARY_FLOAT) DISTRIBUTE BY HASH; Command> CREATE TABLE Orders3 (OrderId TT_INTEGER NOT NULL PRIMARY KEY, CouponId TT_INTEGER NOT NULL, OrderDate DATE NOT NULL, discount BINARY_FLOAT, CONSTRAINT c1_coupons FOREIGN KEY (CouponId) REFERENCES Coupons2 (CouponId)) DISTRIBUTE BY REFERENCE (c1_coupons); Command> CREATE TABLE OrderDetails3 (OrderId TT_INTEGER NOT NULL, PartId TT_INTEGER NOT NULL, quantity TT_INTEGER NOT NULL, CONSTRAINT c1_orders FOREIGN KEY (OrderId) REFERENCES Orders3 (OrderId)) DISTRIBUTE BY REFERENCE (C1_orders);
Example 6-24 Use CREATE TABLE...AS SELECT
This example creates the NewCustomers
table based on the customers
table. It defines a primary key constraint to maintain the same distribution scheme and ensure the data is distributed on the primary key.
Command> CREATE TABLE NewCustomers(cust_id PRIMARY KEY, first_name, last_name, addr1, addr2, zipcode, member_since) AS SELECT * FROM customers; 0 rows inserted. Command> describe NewCustomers; Table SAMPLEUSER.NEWCUSTOMERS: Columns: *CUST_ID NUMBER (10) NOT NULL FIRST_NAME VARCHAR2 (30) INLINE NOT NULL LAST_NAME VARCHAR2 (30) INLINE NOT NULL ADDR1 VARCHAR2 (64) INLINE ADDR2 VARCHAR2 (64) INLINE ZIPCODE VARCHAR2 (5) INLINE MEMBER_SINCE DATE NOT NULL DISTRIBUTE BY HASH (CUST_ID) 1 table found. (primary key columns are indicated with *)
Run ttIsql
describe
to view the original customers
table:
Command> describe Customers; Table SAMPLEUSER.CUSTOMERS: Columns: *CUST_ID NUMBER (10) NOT NULL FIRST_NAME VARCHAR2 (30) INLINE NOT NULL LAST_NAME VARCHAR2 (30) INLINE NOT NULL ADDR1 VARCHAR2 (64) INLINE ADDR2 VARCHAR2 (64) INLINE ZIPCODE VARCHAR2 (5) INLINE MEMBER_SINCE DATE NOT NULL DISTRIBUTE BY HASH (CUST_ID) 1 table found. (primary key columns are indicated with *)
SQL syntax for CREATE TABLE: TimesTen Classic
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
The syntax for a persistent table:
CREATE TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [ColumnBasedCompression] [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] [AS SelectQuery]
The syntax for a global temporary table is:
CREATE GLOBAL TEMPORARY TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [ON COMMIT { DELETE | PRESERVE } ROWS]
Parameters for CREATE TABLE: TimesTen Classic
Parameter | Description |
---|---|
[ Owner .] TableName |
Name to be assigned to the new table. Two tables cannot have the same owner name and table name.
If you do not specify the owner name, your login name becomes the owner name for the new table. Owners of tables in TimesTen are determined by the user ID settings or login names. Oracle Database table owner names must always match TimesTen table owner names. For rules on creating names, see "Basic names". |
GLOBAL TEMPORARY |
Specifies that the table being created is a global temporary table. A temporary table is similar to a persistent table but it is effectively materialized only when referenced in a connection.
A global temporary table definition is persistent and is visible to all connections, but the table instance is local to each connection. It is created when a command referencing the table is compiled for a connection and dropped when the connection is disconnected. All instances of the same temporary table have the same name but they are identified by an additional connection ID together with the table name. Global temporary tables are allocated in temp space. The contents of a global temporary table cannot be shared between connections. Each connection sees only its own content of the table and compiled commands that reference temporary tables are not shared among connections. When Temporary tables are automatically excluded from active standby pairs or when the A cache group table cannot be defined as a temporary table. Changes to temporary tables cannot be tracked with XLA. Operations on temporary tables do generate log records. The amount of log they generate is less than for permanent tables. Truncate table is not supported with global temporary tables. Local temporary tables are not supported. No object privileges are needed to access global temporary tables. Do not specify the |
ColumnDefinition |
An individual column in a table. Each table must have at least one column.
If you specify the |
ColumnName |
Name of the column in a table. Is used in various clauses of the CREATE TABLE statement.
If the name is used in the primary key definition, it forms the primary key for the table to be created. Up to 16 columns can be specified for the primary key. For a foreign key, the If you specify the |
PRIMARY KEY |
PRIMARY KEY may only be specified once in a table definition. It provides a way of identifying one or more columns that, together, form the primary key of the table. The contents of the primary key have to be unique and NOT NULL . You cannot specify a column as both UNIQUE and a single column PRIMARY KEY . |
CONSTRAINT ForeignKeyName |
Specifies an optional user-defined name for a foreign key. If not provided by the user, the system provides a default name. |
FOREIGN KEY |
This specifies a foreign key constraint between the new table and the referenced table identified by RefTableName . There are two lists of columns specified in the foreign key constraint.
Columns in the first list are columns of the new table and are called the referencing columns. Columns in the second list are columns of the referenced table and are called referenced columns. These two lists must match in data type, including length, precision and scale. The referenced table must already have a primary key or unique index on the referenced column. The column name list of referenced columns is optional. If omitted, the primary key of the referenced table is used as the referenced columns. The declaration of a foreign key creates a range index on the referencing columns. The user cannot drop the referenced table or its referenced index until the referencing table is dropped. The foreign key constraint asserts that each row in the new table must match a row in the referenced table such that the contents of the referencing columns are equal to the contents of the referenced columns. TimesTen supports SQL-92 semantics for foreign keys. A foreign key can be defined on a global temporary table, but it can only reference a global temporary table. A foreign key cannot reference an active parent table. An active parent table is one that has some instance materialized for a connection. If you specify the |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
ColumnBasedCompression |
Defines compression at the column level, which stores data more efficiently. Eliminates redundant storage of duplicate values within columns and improves the performance of SQL queries that perform full table scans. See "Column-based compression of tables (TimesTen Classic)" for details. |
UNIQUE |
UNIQUE provides a way of identifying a column where each row must contain a unique value. |
UNIQUE HASH ON |
Hash index for the table. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined. |
HashColumnName |
Column defined in the table that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key.
If you specify the |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for PrimaryPages must be a positive integer. If your estimate for PrimaryPages is too small, more hash collisions occur and performance can degrade; if it is too large, memory is wasted (see the discussion of hash index sizing in the description of this statement). |
[ON COMMIT {DELETE|PRESERVE} ROWS] |
The optional statement specifies whether to delete or preserve rows when a transaction that touches a global temporary table is committed. If not specified, the rows of the temporary table are deleted. |
[AGING LRU [ON|OFF]] |
If specified, defines the LRU aging policy for the table. The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF ) and the LRU aging attributes.
Set the aging state to either ON or OFF; the default is ON. LRU attributes are defined by calling the ttAgingLRUConfig built-in procedure rather than in the CREATE TABLE statement. For more information about LRU aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
[AGING USE ColumnName ... [ON|OFF]] |
If specified, defines the time-based aging policy for the table. The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF ) and the time-based aging attributes.
Set the aging state to either ON or OFF; the default is ON. Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and CYCLE clauses. Specify ColumnName as the column to use for time-based aging; the column must be defined as NOT NULL. The values of the column that you use for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL). You can define your aging column with a data type of TIMESTAMP, TT_TIMESTAMP, DATE, or TT_DATE. For more information about time-based aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
LIFETIME Num1 {SECOND[S]| MINUTE[S]|HOUR[S]| DAY[S]} |
LIFETIME is a time-based aging attribute and is a required clause.
The LIFETIME clause specifies how long rows must remain in the table before they become candidates for aging. Specify Num1 as a positive integer constant. The concept of time resolution is supported: the age of a row is measured in whole units of the granularity you specify, so, for example, with LIFETIME 3 DAYS a row whose aging column is 3 days and 22 hours old is not yet a candidate for aging (see the time resolution example in the examples for this statement). |
[CYCLE Num2 {SECOND[S] |MINUTE[S]|HOUR[S]| DAY[S]}] |
CYCLE is a time-based aging attribute and is optional. Specify the CYCLE clause after the LIFETIME clause.
The CYCLE clause determines how often the system examines rows to see whether their lifetime has been exceeded. Specify Num2 as a positive integer constant. If you do not specify the CYCLE clause, the default cycle is five minutes. If the aging state is OFF, rows are not aged automatically, regardless of the cycle setting. |
AS SelectQuery |
If specified, creates a new table from the contents of the result set of the SelectQuery . The rows returned by SelectQuery are inserted into the table.
Data types and data type lengths are derived from SelectQuery. Do not specify data types on the columns of the table you are creating.
You can specify a statement level optimizer hint after the |
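For example, combining the GLOBAL TEMPORARY syntax with the ON COMMIT clause described above (a minimal sketch; the table and column names are hypothetical):

CREATE GLOBAL TEMPORARY TABLE temp_orders
  (order_id TT_INTEGER NOT NULL PRIMARY KEY,
   status CHAR(10))
  ON COMMIT PRESERVE ROWS;

Because ON COMMIT PRESERVE ROWS is specified, each connection keeps its own rows across commits; with ON COMMIT DELETE ROWS (the default), the rows would be deleted when a transaction that touches the table commits.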
Column definition: TimesTen Classic
You can only use the keyword, ENABLE
, when defining columns in the CREATE
TABLE
statement.
For all data types other than LOBs, the syntax is as follows:
ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [PRIMARY KEY | UNIQUE | NULL [UNIQUE] | NOT NULL [ENABLE] [PRIMARY KEY | UNIQUE] ]
For LOB data types, you cannot create a primary key or unique constraint on LOB columns. In addition, LOB data types are stored out of line, so the INLINE
attribute cannot be specified.
LOB data types are not supported with TimesTen Scaleout.
For all LOB data types, the syntax is:
ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] NULL [ENABLE]] | [[NOT] NULL [ENABLE]] [DEFAULT DefaultVal]
The column definition has the following parameters:
Parameter | Description |
---|---|
ColumnName |
Name to be assigned to one of the columns in the new table. No two columns in the table can be given the same name. A table can have a maximum of 1000 columns.
If you specify the |
ColumnDataType |
Type of data the column can contain. Some data types require that you indicate a length. See Chapter 1, "Data Types" for the data types that can be specified.
If you specify the |
DEFAULT DefaultVal |
Indicates that if a value is not specified for the column in an INSERT statement, the default value DefaultVal is inserted into the column. The default value specified must have a type that is compatible with the data type of the column. A default value can be as long as the data type of the associated column allows. You cannot assign a default value for the ROWID data type or for columns in read-only cache groups. In addition, you cannot use a function within the DEFAULT clause.
The following are legal data types for
If the default value is one of the users, the data type of the column must be either If you specify the |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is greater than 128 bytes are stored out of line. Variable-length columns whose declared column length is less than or equal to 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords.
If you specify the |
NULL |
Indicates that the column can contain NULL values.
If you specify the If you specify |
NOT NULL [ENABLE] |
Indicates that the column cannot contain NULL values. If NOT NULL is specified, any statement that attempts to place a NULL value in the column is rejected.
If you specify the If you specify You can only use the keyword, |
UNIQUE |
A unique constraint placed on the column. No two rows in the table may have the same value for this column. TimesTen creates a unique range index to enforce uniqueness. So a column with a unique constraint can use more memory and time during execution than a column without the constraint. Cannot be used with PRIMARY KEY .
If you specify the |
PRIMARY KEY |
A unique NOT NULL constraint placed on the column. No two rows in the table may have the same value for this column. Cannot be used with UNIQUE .
If you specify the |
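As an illustration of the column definition syntax above, the following sketch (hypothetical table and column names) combines a DEFAULT value, the NOT INLINE storage attribute, and NOT NULL with the ENABLE keyword:

CREATE TABLE sample_items
  (item_id NUMBER(6) NOT NULL ENABLE PRIMARY KEY,
   item_name VARCHAR2(200) NOT INLINE,
   qty TT_INTEGER DEFAULT 0);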
Description for CREATE TABLE: TimesTen Classic
All columns participating in the primary key are NOT NULL
.
A PRIMARY KEY
that is specified in the ColumnDefinition
can only be specified for one column.
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
For both primary key and foreign key constraints, duplicate column names are not allowed in the constraint column list.
You cannot update primary key column(s) unless you update the column(s) to the same value.
There are performance considerations when you define out of line columns instead of inline columns:
Accessing data is slower because TimesTen does not store data contiguously with out of line columns.
Populating data is slower because TimesTen generates more logging operations.
Deleting data is slower because TimesTen performs more reclaim and logging operations.
Storing a column requires less overhead.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
You cannot create a table that has a foreign key referencing a cached table.
UNIQUE
column constraint and default column values are not supported with materialized views.
Use the ALTER TABLE
statement to change the representation of the primary key index for a table.
If you specify the AS
SelectQuery
clause:
Data types and data type lengths are derived from the SelectQuery
. Do not specify data types on the columns of the table you are creating.
TimesTen defines NOT NULL constraints on columns in the new table when those constraints were explicitly created on the corresponding columns of the selected table and SelectQuery selects the column rather than an expression containing the column.
NOT NULL
constraints that were implicitly created by TimesTen on columns of the selected table (for example, primary keys) are carried over to the new table. You can override the NOT NULL
constraint on the selected table by defining the new column as NULL
. For example:
CREATE TABLE newtable (newcol NULL) AS SELECT (col) FROM tab;
NOT INLINE
/INLINE
attributes are carried over to the new table.
Unique keys, foreign keys, indexes and column default values are not carried over to the new table.
If all expressions in SelectQuery
are columns, rather than expressions, then you can omit the columns from the table you are creating. In this case, the name of the columns are the same as the columns in SelectQuery
. If the SelectQuery
contains an expression rather than a simple column reference, either specify a column alias or name the column in the CREATE TABLE
statement.
Do not specify foreign keys on the table you are creating.
Do not specify the SELECT FOR UPDATE
clause in SelectQuery
.
The ORDER BY
clause is not supported when you use the AS
SelectQuery
clause.
SelectQuery
cannot contain set operators UNION
, MINUS
, INTERSECT
.
In a replicated environment, be aware of the following.
To include a new table, including global temporary tables, into an active standby pair when the table is created, set DDL_REPLICATION_LEVEL
to 2 or greater and DDL_REPLICATION_ACTION
to INCLUDE
before executing the CREATE TABLE
statement on the active database. In this configuration, the table is included in the active standby pair and is replicated to all databases in the replication scheme.
If DDL_REPLICATION_ACTION
is set to EXCLUDE
, then the new table is not included in the active standby pair but is replicated to all databases in the replication scheme. Any DML issued on that table will not be replicated, as the table will not be part of the replication scheme. To enable DML replication for the table, you must execute the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE
statement to include the table. In this case, the table must be empty and present on all databases before executing ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE
, as the table contents will be truncated when this statement is executed.
See "ALTER SESSION" for more information.
By default, a range index is created to enforce the primary key. Use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries using a table's primary key, then choose a range index for that table by omitting the UNIQUE HASH
clause.
If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause.
A hash index is created with a fixed size that remains constant for the life of the table or until the hash index is resized with the ALTER TABLE
statement or when the index is dropped and recreated. A smaller hash index results in more hash collisions. A larger hash index reduces collisions but can waste memory. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.
To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages
parameter of the SET
PAGES
clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).
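For example, if a table is expected to hold roughly 512,000 rows, 512000/256 = 2000 pages. The following is a sketch with hypothetical names, assuming the hash index is later resized with the ALTER TABLE SET PAGES clause mentioned above:

CREATE TABLE sales.orders
  (order_id TT_INTEGER NOT NULL PRIMARY KEY)
  UNIQUE HASH ON (order_id) PAGES = 2000;
-- Resize later if the expected row count doubles (1024000/256 = 4000):
ALTER TABLE sales.orders SET PAGES = 4000;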
At most 16 columns are allowed in a hash key.
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
: Since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths. Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, the following rule is also enforced.
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
For ON DELETE CASCADE
with replication, the following restrictions apply:
The foreign keys specified with ON DELETE CASCADE
must match between the master and subscriber for replicated tables. Checking is done at runtime. If there is an error, the receiver thread stops working.
All tables in the delete cascade tree have to be replicated if any table in the tree is replicated. This restriction is checked when the replication scheme is created or when a foreign key with ON DELETE CASCADE
is added to one of the replication tables. If an error is found, the operation is aborted. You may be required to drop the replication scheme first before trying to change the foreign key constraint.
You must stop the replication agent before adding or dropping a foreign key on a replicated table.
The data in a global temporary table is private to the current connection and does not need to be secured between users. Thus, global temporary tables do not require object privileges.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
LRU and time-based aging can be combined in one system. If you use only LRU aging, the aging thread wakes up based on the cycle specified for the whole database. If you use only time-based aging, the aging thread wakes up based on an optimal frequency. This frequency is determined by the values specified in the CYCLE
clause for all tables. If you use both LRU and time-based aging, then the thread wakes up based on a combined consideration of both types.
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT ... SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
Call the ttAgingScheduleNow
procedure to schedule the aging process immediately regardless of the aging state.
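For instance, the following ttIsql sketch runs aging immediately on one of the tables from the aging examples later in this section (this assumes the ttAgingScheduleNow built-in procedure takes the table name as its argument):

Command> call ttAgingScheduleNow('agingdemo');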
LRU aging and time-based aging are not supported on detail tables of materialized views.
LRU aging and time-based aging are not supported on global temporary tables.
You cannot drop the column that is used for time-based aging.
The aging policy and aging state must be the same in all sites of replication.
Tables that are related by foreign keys must have the same aging policy.
For LRU aging, if a child row is not a candidate for aging, neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
Column-based compression of tables (TimesTen Classic)
You can compress tables at the column level, which stores data more efficiently. This eliminates redundant storage of duplicate values within columns and improves the performance of SQL queries that perform full table scans.
You can define one or more columns in a table to be compressed together, which is called a compressed column group. You can define one or more compressed column groups in each table.
A dictionary table is created for each compressed column group that contains a column with all the distinct values of the compressed column group. The compressed column group now contains a pointer to the row in the dictionary table for the appropriate value. The width of this pointer can be 1, 2, or 4 bytes long depending on the maximum number of entries you defined for the dictionary table. So if the sum of the widths of the columns in a compressed column group is wider than the 1, 2, or 4 byte pointer width, and if there are a lot of duplicate values of those column values, you have reduced the amount of space used by the table.
Figure 6-1 shows the compressed column group in the table pointing to the appropriate row in the dictionary table.
The dictionary table has a column of pointers to each of the distinct values. When the user configures the maximum number of distinct entries for the compressed column group, the size of the compressed column group is set as follows:
1 byte for a maximum number of entries of 255 (2^8-1). When the maximum number is between 1 and 255, the dictionary size is set to 255 (2^8-1) values and the compressed column group pointer column is 1 byte.
2 bytes for a maximum number of entries of 65,535 (2^16-1). When the maximum number is between 256 and 65,535, the dictionary size is set to 65,535 (2^16-1) values and the compressed column group pointer column is 2 bytes.
4 bytes for a maximum number of entries of 4,294,967,295 (2^32-1). When the maximum number is between 65,536 and 4,294,967,295, the dictionary size is set to 4,294,967,295 (2^32-1) values and the compressed column group pointer column is 4 bytes. This is the default.
Syntax: column-based compression (TimesTen Classic)
The syntax for ColumnBasedCompression
is:
[COMPRESS (CompressColumns [,...])]
The CompressColumns
syntax is as follows:
{ColumnDefinition | (ColumnDefinition [,...])} BY DICTIONARY [MAXVALUES = CompressMax]
ColumnBasedCompression
syntax has the following parameters:
Parameter | Description |
---|---|
COMPRESS ( CompressColumns [,...]) |
Defines a compressed column group for a table that is enabled for compression. This can include one or more columns in the table. However, a column can be included in only one compressed column group.
Each compressed column group is limited to a maximum of 16 columns. |
BY DICTIONARY |
Defines a compression dictionary for each compressed column group. |
MAXVALUES = CompressMax |
CompressMax is the total number of distinct values in the table and sets the size for the compressed column group pointer column to 1, 2, or 4 bytes and sets the size for the maximum number of entries in the dictionary table.
For the dictionary table,
The maximum size defaults to 2^32-1 if the MAXVALUES clause is not specified. |
Description: column-based compression (TimesTen Classic)
Compressed column groups can be added at the time of table creation or added later using ALTER TABLE
. You can drop a compressed column group with the ALTER TABLE
statement, but you must drop the entire group.
You can create indexes on any columns in the table, including columns that exist in separate compressed column groups. However, you cannot create single-column compressed column groups on unique columns or on single-column primary keys. You also cannot create unique indexes or primary keys in which all of the columns of the index or key belong to the same compressed column group.
LOB columns cannot be compressed.
Compression is not supported on columns in replicated tables, cache group tables, or on global temporary tables. You cannot create a table with the CREATE TABLE AS SELECT
statement when defining column-based compression for that table in that statement.
You cannot create materialized views on tables enabled for compression.
Column-based compression is not supported with TimesTen Scaleout.
A range index is created on partnumber
because it is the primary key.
Command> CREATE TABLE price (partnumber INTEGER NOT NULL PRIMARY KEY,
           vendornumber INTEGER NOT NULL, vendpartnum CHAR(20) NOT NULL,
           unitprice DECIMAL(10,2), deliverydays SMALLINT, discountqty SMALLINT);
Command> INDEXES price;
Indexes on table SAMPLEUSER.PRICE:
  PRICE: unique range index on columns:
    PARTNUMBER
  1 index found.
1 index found on 1 table.
A hash index is created on column clubname
, the primary key.
CREATE TABLE recreation.clubs (clubname CHAR(15) NOT NULL PRIMARY KEY, clubphone SMALLINT, activity CHAR(18)) UNIQUE HASH ON (clubname) PAGES = 30;
A range index is created on the two columns membername
and club
because together they form the primary key.
Command> CREATE TABLE recreation.members (membername CHAR(20) NOT NULL,
           club CHAR(15) NOT NULL, memberphone SMALLINT,
           PRIMARY KEY (membername, club));
Command> INDEXES recreation.members;
Indexes on table RECREATION.MEMBERS:
  MEMBERS: unique range index on columns:
    MEMBERNAME
    CLUB
1 index found on 1 table.
No hash index is created on the table recreation.events
.
CREATE TABLE recreation.events (sponsorclub CHAR(15), event CHAR(30), coordinator CHAR(20), results VARBINARY(10000));
A hash index is created on the column vendornumber
.
CREATE TABLE purchasing.vendors (vendornumber INTEGER NOT NULL PRIMARY KEY, vendorname CHAR(30) NOT NULL, contactname CHAR(30), phonenumber CHAR(15), vendorstreet CHAR(30) NOT NULL, vendorcity CHAR(20) NOT NULL, vendorstate CHAR(2) NOT NULL, vendorzipcode CHAR(10) NOT NULL, vendorremarks VARCHAR(60)) UNIQUE HASH ON (vendornumber) PAGES = 101;
A hash index is created on the columns membername
and club
because together they form the primary key.
CREATE TABLE recreation.members (membername CHAR(20) NOT NULL, club CHAR(15) NOT NULL, memberphone SMALLINT, PRIMARY KEY (membername, club)) UNIQUE HASH ON (membername, club) PAGES = 100;
A hash index is created on the columns firstname
and lastname
because together they form the primary key in the table authors
. A foreign key is created on the columns authorfirstname
and authorlastname
in the table books
that references the primary key in the table authors
.
CREATE TABLE authors (firstname VARCHAR(255) NOT NULL, lastname VARCHAR(255) NOT NULL, description VARCHAR(2000), PRIMARY KEY (firstname, lastname)) UNIQUE HASH ON (firstname, lastname) PAGES=20; CREATE TABLE books (title VARCHAR(100), authorfirstname VARCHAR(255), authorlastname VARCHAR(255), price DECIMAL(5,2), FOREIGN KEY (authorfirstname, authorlastname) REFERENCES authors(firstname, lastname));
The following statement overrides the default inline storage of VARCHAR
columns and creates a table where one VARCHAR (10)
column is NOT INLINE
and one VARCHAR (144)
is INLINE
.
CREATE TABLE t1 (c1 VARCHAR(10) NOT INLINE NOT NULL, c2 VARCHAR(144) INLINE NOT NULL);
The following statement creates a table with a UNIQUE
column for book titles.
CREATE TABLE books (title VARCHAR(100) UNIQUE, authorfirstname VARCHAR(255), authorlastname VARCHAR(255), price DECIMAL(5,2), FOREIGN KEY (authorfirstname, authorlastname) REFERENCES authors(firstname, lastname));
The following statement creates a table with a default value of 1 on column x1
and a default value of SYSDATE
on column d
.
CREATE TABLE t1 (x1 INT DEFAULT 1, d TIMESTAMP DEFAULT SYSDATE);
This example creates the rangex
table and defines col1
as the primary key. A range index is created by default.
Command> CREATE TABLE rangex (col1 TT_INTEGER PRIMARY KEY);
Command> INDEXES rangex;
Indexes on table SAMPLEUSER.RANGEX:
  RANGEX: unique range index on columns:
    COL1
  1 index found
1 index found on 1 table.
The following statement illustrates the use of the ON DELETE CASCADE
clause for parent/child tables of the HR
schema. Tables with foreign keys have been altered to enable ON DELETE CASCADE
.
ALTER TABLE countries ADD CONSTRAINT countr_reg_fk FOREIGN KEY (region_id)
  REFERENCES regions(region_id) ON DELETE CASCADE;
ALTER TABLE locations ADD CONSTRAINT loc_c_id_fk FOREIGN KEY (country_id)
  REFERENCES countries(country_id) ON DELETE CASCADE;
ALTER TABLE departments ADD CONSTRAINT dept_loc_fk FOREIGN KEY (location_id)
  REFERENCES locations (location_id) ON DELETE CASCADE;
ALTER TABLE employees ADD CONSTRAINT emp_dept_fk FOREIGN KEY (department_id)
  REFERENCES departments ON DELETE CASCADE;
ALTER TABLE employees ADD CONSTRAINT emp_job_fk FOREIGN KEY (job_id)
  REFERENCES jobs (job_id);
ALTER TABLE job_history ADD CONSTRAINT jhist_job_fk FOREIGN KEY (job_id)
  REFERENCES jobs;
ALTER TABLE job_history ADD CONSTRAINT jhist_emp_fk FOREIGN KEY (employee_id)
  REFERENCES employees ON DELETE CASCADE;
ALTER TABLE job_history ADD CONSTRAINT jhist_dept_fk FOREIGN KEY (department_id)
  REFERENCES departments ON DELETE CASCADE;
This example shows how time resolution works with aging.
If lifetime is three days (resolution is in days):
If (SYSDATE -
ColumnValue
) <= 3
, do not age.
If (SYSDATE -
ColumnValue
) > 3
, then the row is a candidate for aging.
If (SYSDATE -
ColumnValue
) = 3 days, 22 hours
, then the row is not aged out if you specified a lifetime of three days. The row would be aged out if you had specified a lifetime of 72 hours.
This example creates a table with LRU aging. Aging state is ON
by default.
CREATE TABLE agingdemo
  (agingid NUMBER NOT NULL PRIMARY KEY,
   name VARCHAR2 (20))
  AGING LRU;
Command> DESCRIBE agingdemo;
Table USER.AGINGDEMO:
  Columns:
   *AGINGID   NUMBER NOT NULL
    NAME      VARCHAR2 (20) INLINE
  AGING LRU ON
1 table found.
(primary key columns are indicated with *)
This example creates a table with time-based aging. Lifetime is three days. Cycle is not specified, so the default is five minutes. Aging state is OFF
.
CREATE TABLE agingdemo2
  (agingid NUMBER NOT NULL PRIMARY KEY,
   name VARCHAR2 (20),
   agingcolumn TIMESTAMP NOT NULL)
  AGING USE agingcolumn LIFETIME 3 DAYS OFF;
Command> DESCRIBE agingdemo2;
Table USER.AGINGDEMO2:
  Columns:
   *AGINGID       NUMBER NOT NULL
    NAME          VARCHAR2 (20) INLINE
    AGINGCOLUMN   TIMESTAMP (6) NOT NULL
  Aging use AGINGCOLUMN lifetime 3 days cycle 5 minutes off
1 table found.
(primary key columns are indicated with *)
This example generates an error message. It illustrates that after you create an aging policy, you cannot change it. You must drop aging and redefine aging.
CREATE TABLE agingdemo2
  (agingid NUMBER NOT NULL PRIMARY KEY,
   name VARCHAR2 (20),
   agingcolumn TIMESTAMP NOT NULL)
  AGING USE agingcolumn LIFETIME 3 DAYS OFF;
ALTER TABLE agingdemo2 ADD AGING LRU;
2980: Cannot add aging policy to a table with an existing aging policy. Have to drop the old aging first
The command failed.
DROP aging on the table and redefine with LRU aging.
ALTER TABLE agingdemo2 DROP AGING;
ALTER TABLE agingdemo2 ADD AGING LRU;
Command> DESCRIBE agingdemo2;
Table USER.AGINGDEMO2:
  Columns:
   *AGINGID       NUMBER NOT NULL
    NAME          VARCHAR2 (20) INLINE
    AGINGCOLUMN   TIMESTAMP (6) NOT NULL
  Aging lru on
1 table found.
(primary key columns are indicated with *)
Attempt to create a table with time-based aging. Define aging column with data type TT_DATE
and LIFETIME
3 hours. An error is generated because the LIFETIME
unit must be expressed as DAYS
.
Command> CREATE TABLE aging1 (col1 TT_INTEGER PRIMARY KEY, col2 TT_DATE NOT NULL)
           AGING USE col2 LIFETIME 3 HOURS;
2977: Only DAY lifetime unit is allowed with a TT_DATE column
The command failed.
Use AS
SelectQuery
clause to create the table emp
. Select last_name
from the employees
table where employee_id
between 100 and 105. You see six rows inserted into emp
. First issue the SELECT
statement to see rows that should be returned.
Command> SELECT last_name FROM employees WHERE employee_id BETWEEN 100 AND 105;
< King >
< Kochhar >
< De Haan >
< Hunold >
< Ernst >
< Austin >
6 rows found.
Command> CREATE TABLE emp AS SELECT last_name FROM employees
           WHERE employee_id BETWEEN 100 AND 105;
6 rows inserted.
Command> SELECT * FROM emp;
< King >
< Kochhar >
< De Haan >
< Hunold >
< Ernst >
< Austin >
6 rows found.
Use AS
SelectQuery
to create table totalsal
. Sum salary
and insert result into totalsalary
. Define alias s
for SelectQuery
expression.
Command> CREATE TABLE totalsal AS SELECT SUM (salary) s FROM employees;
1 row inserted.
Command> SELECT * FROM totalsal;
< 691400 >
1 row found.
Use AS
SelectQuery
to create table defined with column commission_pct
. Set default to .3. First describe table employees
to show that column commission_pct
is of type NUMBER (2,2)
. For table c_pct
, column commission_pct
inherits type NUMBER (2,2)
from column commission_pct
of employees
table.
Command> DESCRIBE employees;
Table SAMPLEUSER.EMPLOYEES:
  Columns:
   *EMPLOYEE_ID     NUMBER (6) NOT NULL
    FIRST_NAME      VARCHAR2 (20) INLINE
    LAST_NAME       VARCHAR2 (25) INLINE NOT NULL
    EMAIL           VARCHAR2 (25) INLINE UNIQUE NOT NULL
    PHONE_NUMBER    VARCHAR2 (20) INLINE
    HIRE_DATE       DATE NOT NULL
    JOB_ID          VARCHAR2 (10) INLINE NOT NULL
    SALARY          NUMBER (8,2)
    COMMISSION_PCT  NUMBER (2,2)
    MANAGER_ID      NUMBER (6)
    DEPARTMENT_ID   NUMBER (4)
1 table found.
(primary key columns are indicated with *)
Command> CREATE TABLE c_pct (commission_pct DEFAULT .3)
           AS SELECT commission_pct FROM employees;
107 rows inserted.
Command> DESCRIBE c_pct;
Table SAMPLEUSER.C_PCT:
  Columns:
    COMMISSION_PCT  NUMBER (2,2) DEFAULT .3
1 table found.
(primary key columns are indicated with *)
The following example creates the employees
table where the job_id
is compressed.
Command> CREATE TABLE EMPLOYEES (EMPLOYEE_ID NUMBER (6) PRIMARY KEY,
           FIRST_NAME VARCHAR2(20), LAST_NAME VARCHAR2(25) NOT NULL,
           EMAIL VARCHAR2(25) NOT NULL, PHONE_NUMBER VARCHAR2(20),
           HIRE_DATE DATE NOT NULL, JOB_ID VARCHAR2(10) NOT NULL,
           SALARY NUMBER (8,2), COMMISSION_PCT NUMBER (2,2),
           MANAGER_ID NUMBER(6), DEPARTMENT_ID NUMBER(4))
           COMPRESS (JOB_ID BY DICTIONARY);
Command> DESCRIBE EMPLOYEES;
Table MYSCHEMA.EMPLOYEES:
  Columns:
   *EMPLOYEE_ID     NUMBER (6) NOT NULL
    FIRST_NAME      VARCHAR2 (20) INLINE
    LAST_NAME       VARCHAR2 (25) INLINE NOT NULL
    EMAIL           VARCHAR2 (25) INLINE NOT NULL
    PHONE_NUMBER    VARCHAR2 (20) INLINE
    HIRE_DATE       DATE NOT NULL
    JOB_ID          VARCHAR2 (10) INLINE NOT NULL
    SALARY          NUMBER (8,2)
    COMMISSION_PCT  NUMBER (2,2)
    MANAGER_ID      NUMBER (6)
    DEPARTMENT_ID   NUMBER (4)
  COMPRESS ( JOB_ID BY DICTIONARY )
1 table found.
(primary key columns are indicated with *)
The following example shows that there are three dictionary table sizes. The value you specify for the maximum number of entries is rounded up to the next size. For example, specifying 400 as the maximum number of job IDs creates a dictionary table that can have at most 65,535 entries. The default size of 2^32-1 is not shown in the DESCRIBE
output.
Command> CREATE TABLE employees (employee_id NUMBER(6) PRIMARY KEY,
           first_name VARCHAR2(20), last_name VARCHAR2(25),
           email VARCHAR2(25) NOT NULL, job_id VARCHAR2(10) NOT NULL,
           manager_id NUMBER(6), department_id NUMBER(4))
           COMPRESS (last_name BY DICTIONARY MAXVALUES=70000,
                     job_id BY DICTIONARY MAXVALUES=400,
                     department_id BY DICTIONARY MAXVALUES=100);
Command> DESCRIBE employees;
Table MYSCHEMA.EMPLOYEES:
  Columns:
   *EMPLOYEE_ID     NUMBER (6) NOT NULL
    FIRST_NAME      VARCHAR2 (20) INLINE
    LAST_NAME       VARCHAR2 (25) INLINE
    EMAIL           VARCHAR2 (25) INLINE NOT NULL
    JOB_ID          VARCHAR2 (10) INLINE NOT NULL
    MANAGER_ID      NUMBER (6)
    DEPARTMENT_ID   NUMBER (4)
  COMPRESS ( LAST_NAME BY DICTIONARY,
             JOB_ID BY DICTIONARY MAXVALUES=65535,
             DEPARTMENT_ID BY DICTIONARY MAXVALUES=255 )
1 table found.
(primary key columns are indicated with *)
The CREATE USER
statement creates a user of a TimesTen database.
CREATE USER user IDENTIFIED BY {password | "password"} [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}] [PASSWORD EXPIRE]
or
CREATE USER user IDENTIFIED EXTERNALLY [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}]
Parameter | Description |
---|---|
user |
Name of the user. |
IDENTIFIED BY { password | " password "} |
Identification clause for an internal user. You must supply a password for an internal user. |
IDENTIFIED EXTERNALLY |
Identifies an external user (the operating system user). To perform database operations as an external user, the external user name must match the user name authenticated by the operating system or network. A password is not required by TimesTen as the user has been authenticated by the operating system at login time. |
PROFILE profile |
Use the PROFILE clause to specify the name of the profile (designated by profile ) that you want to assign to the user. The profile sets the limits for the password parameters for the user. See "CREATE PROFILE" for information on these password parameters. If you omit the PROFILE clause, TimesTen assigns the DEFAULT profile to the user. If you create an external user (denoted by specifying the EXTERNALLY keyword), you can specify a PROFILE clause, but the password parameters have no effect on external users. Additionally, if you do not specify the PROFILE clause for an external user, TimesTen assigns the DEFAULT profile to the user (but the password parameters have no effect). |
ACCOUNT [LOCK |UNLOCK ] |
Specify ACCOUNT LOCK to lock the user's account and disable connections to the database. Specify ACCOUNT UNLOCK to unlock the user's account and enable connections to the database. The default is ACCOUNT UNLOCK . |
PASSWORD EXPIRE |
Specify PASSWORD EXPIRE if you want the user's password to expire. This setting forces a user with ADMIN privileges to change the password before the user can connect to the database. In order to change the expired password, a user with ADMIN privileges must use the ALTER USER statement with the IDENTIFIED BY clause to change the password. Once the password is changed, the user can log in to the database with the new password. Note that even if the newly created user is granted ADMIN privileges, that newly created user cannot login to the database and therefore cannot initially change the password. See "ALTER USER" for information. This clause is not valid for an externally identified user (as denoted by the IDENTIFIED EXTERNALLY clause). |
Database users can be internal or external.
Internal users are defined for a TimesTen database.
External users are defined by the operating system. External users cannot be assigned a TimesTen password.
Passwords are case-sensitive.
When a user is created, the user has the privileges granted to PUBLIC
and no additional privileges.
Use the PROFILE
clause to assign a profile to a user. If you do not assign a profile to an internal user, a DEFAULT
profile is assigned to that user. See "CREATE PROFILE" for details.
Use the ACCOUNT
LOCK
or ACCOUNT
UNLOCK
to lock or unlock the user account.
Use the PASSWORD
EXPIRE
clause to expire the user's password and force a password change before the user can connect to the database.
You can create a user over a client/server connection if the connection is encrypted with TLS. See "Transport Layer Security for TimesTen Client/Server" in the Oracle TimesTen In-Memory Database Security Guide for details.
In TimesTen, user brad
is the same as user "brad"
. In both cases, the name of the user is created as BRAD
.
User names are TT_CHAR
data type.
This statement is replicated.
Example 1: Create a user and assign a profile
This example creates the user1
user and assigns the profile1
profile to the user.
Command> CREATE USER user1 IDENTIFIED BY user1 PROFILE profile1; User created.
Example 2: Create a user and do not assign a profile
This example creates the user2
user and does not assign a profile. The user2
user is assigned the values of the password parameters in the DEFAULT
profile.
Command> CREATE USER user2 identified by user2; User created.
Query the dba_users
system view to verify the user2
user is assigned the DEFAULT
profile.
Command> SELECT profile FROM dba_users WHERE username='USER2'; < DEFAULT > 1 row found.
Example 3: Create a user and lock the user account
This example creates the user3
user and locks the user3
account. The user3
account must be unlocked by a user with the ADMIN
privilege before the user3
user can connect to the database.
Command> CREATE USER user3 IDENTIFIED BY user3 ACCOUNT LOCK; User created.
Grant the CONNECT
privilege to user3
.
Command> GRANT CONNECT TO user3;
Attempt to connect to the database as user3
. The user3
account is locked so the connection fails.
Command> connect adding "UID=user3;PWD=user3" as user3; 15179: the account is locked The command failed.
As the instance administrator, reconnect to the database and use the ALTER
USER
statement to unlock the user3
account.
none: Command> use database1 database1: Command> ALTER USER user3 ACCOUNT UNLOCK; User altered.
Attempt to connect to the database as the user3
user. The connection succeeds.
database1: Command> connect adding "UID=user3;PWD=user3" as user3; Connection successful: DSN=database1;UID=user3;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Example 4: Create a user. Lock the account and enforce a password change
This example creates the user4
user. The user4
user is assigned the profile1
profile. The user4
account is locked and the password for user4
must be changed before the user4
user can connect to the database.
Command> CREATE USER user4 identified by user4 PROFILE profile1 ACCOUNT LOCK PASSWORD EXPIRE; User created.
Attempt to connect to the database as user4
. The user4
account is locked and the password must be changed before the user4
user can connect to the database.
Command> connect adding "UID=user4;PWD=user4" as user4; 15179: the account is locked The command failed.
As the instance administrator, reconnect to the database and use the ALTER
USER
statement to unlock the user4
account.
none: Command> use database1 database1: Command> ALTER USER user4 ACCOUNT UNLOCK; User altered.
Grant the CONNECT
privilege to user4
. Then change the user4
's expired password. (This example changes the password to user4_changed
.)
database1: Command> GRANT CONNECT TO user4; database1: Command> ALTER USER user4 IDENTIFIED BY user4_changed; User altered.
Attempt to connect to the database as the user4
user. The connection succeeds. The account is unlocked and the password is changed.
database1: Command> connect adding "UID=user4;PWD=user4_changed" as user4; Connection successful: DSN=database1;UID=user4;DataStore=/scratch/database1; DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128; (Default setting AutoCommit=1)
Example 5: Create an external user
This example creates the user5
user as an external user.
Command> CREATE USER user5 IDENTIFIED EXTERNALLY; User created.
The CREATE VIEW
statement creates a view of the tables specified in the SelectQuery
clause. A view is a logical table that is based on one or more detail tables. The view itself contains no data. It is sometimes called a nonmaterialized view to distinguish it from a materialized view, which does contain data that has already been calculated from detail tables.
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
is 3 or greater when you execute CREATE VIEW
on the active database, the view is replicated to all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
The user executing the statement must have the CREATE VIEW
privilege (if owner) or CREATE ANY VIEW
(if not the owner) for another user's view.
The owner of the view must have the SELECT
privilege on the detail tables.
Parameter | Description |
---|---|
[ Owner .] ViewName |
Name of the view. |
SelectQuery |
Selects columns from the detail tables to be used in the view.
You can also create indexes on the view. |
Restrictions on the SELECT query
There are several restrictions on the query that is used to define the view.
A SELECT *
query in a view definition is expanded when the view is created. Any columns added after a view is created do not affect the view.
Do not use the following in a SELECT
statement that is used to create a view:
FIRST
ORDER BY
If used, this is ignored by CREATE VIEW
. The result will not be sorted.
Arguments
Each expression in the select list must have a unique name. A name of a simple column expression would be that column's name unless a column alias is defined. ROWID
is considered an expression and needs an alias.
Do not use SELECT FOR UPDATE
to create a view.
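For example, per the alias rule above, a view that exposes ROWID must give it a column alias (the view name is hypothetical; employees is the table used in the examples below):

CREATE VIEW emp_rowids AS SELECT ROWID rid, employee_id FROM employees;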
Certain TimesTen query restrictions are not checked when a non-materialized view is created. Views that violate those restrictions may be allowed to be created, but an error is returned when the view is referenced later in an executed statement.
When a view is referenced in the FROM
clause of a SELECT
statement, its name is replaced by its definition as a derived table at parsing time. If it is not possible to merge all clauses of a view to the same clause in the original select query to form a legal query without the derived table, the content of this derived table is materialized. For example, if both the view and the referencing select specify aggregates, the view is materialized before its result can be joined with other tables of the select.
Use the DROP VIEW
statement to drop a view.
A view cannot be altered with an ALTER TABLE
statement.
Referencing a view can fail because of dropped or altered detail tables.
Create a nonmaterialized view from the employees
table.
Command> CREATE VIEW v1 AS SELECT employee_id, email FROM employees;
Command> SELECT FIRST 5 * FROM v1;
< 100, SKING >
< 101, NKOCHHAR >
< 102, LDEHAAN >
< 103, AHUNOLD >
< 104, BERNST >
5 rows found.
Create a nonmaterialized view tview
with column max1
from an aggregate query on the table t1
.
CREATE VIEW tview (max1) AS SELECT MAX(x1) FROM t1;
The DELETE
statement deletes rows from a table.
No privilege is required for the table owner.
DELETE
on the table for another user's table.
DELETE [hint] [FIRST NumRows] FROM [Owner.]TableName [CorrelationName] [WHERE SearchCondition] [RETURNING|RETURN Expression[,...]INTO DataItem[,...]]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the DELETE statement. For more information on hints, see "Statement level optimizer hints". |
FIRST NumRows |
Specifies the number of rows to delete. FIRST NumRows is not supported in subquery statements. NumRows must be either a positive INTEGER or a dynamic parameter placeholder. The syntax for a dynamic parameter placeholder is either ? or :DynamicParameter . The value of the dynamic parameter is supplied when the statement is executed. |
[ Owner .] TableName [ CorrelationName ] |
Designates a table from which any rows satisfying the search condition are to be deleted.
|
SearchCondition |
Specifies which rows are to be deleted. If no rows satisfy the search condition, the table is not changed. If the WHERE clause is omitted, all rows are deleted. The search condition can contain a subquery. |
Expression |
Valid expression syntax. See Chapter 3, "Expressions". |
DataItem |
Host variable or PL/SQL variable that stores the retrieved Expression value. |
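For instance, the FIRST NumRows clause limits how many qualifying rows a single statement deletes (a minimal sketch against the purchasing.orderitems table used in the examples below):

DELETE FIRST 10 FROM purchasing.orderitems WHERE quantity < 50;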
If all the rows of a table are deleted, the table is empty but continues to exist until you issue a DROP TABLE
statement.
If your table has out of line columns and there are millions of rows to delete, consider calling the ttCompact
built-in procedure to free memory.
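For example, a large delete followed by a call to the built-in from ttIsql (a minimal sketch):

Command> DELETE FROM purchasing.orderitems WHERE quantity < 50;
Command> call ttCompact;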
The DELETE
operation fails if it violates any foreign key constraint. See "CREATE TABLE" for a description of the foreign key constraint.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
Restrictions on the RETURNING
clause:
Each Expression
must be a simple expression. Aggregate functions are not supported.
You cannot return a sequence number into an OUT
parameter.
ROWNUM
and subqueries cannot be used in the RETURNING
clause.
Parameters in the RETURNING
clause cannot be duplicated anywhere in the DELETE
statement.
Using the RETURNING
clause to return multiple rows requires PL/SQL BULK COLLECT
functionality. See "FORALL and BULK COLLECT operations" in Oracle TimesTen In-Memory Database PL/SQL Developer's Guide for information about BULK COLLECT
.
In PL/SQL, you cannot use a RETURNING
clause with a WHERE CURRENT
operation.
Rows for orders whose quantity is less than 50 are deleted.
DELETE FROM purchasing.orderitems WHERE quantity < 50;
The following query deletes all the duplicate orders assuming that id
is not a primary key:
DELETE FROM orders a WHERE EXISTS (SELECT 1 FROM orders b WHERE a.id = b.id and a.rowid < b.rowid);
The following sequence of statements causes a foreign key violation.
CREATE TABLE master (name CHAR(30), id CHAR(4) NOT NULL PRIMARY KEY);
CREATE TABLE details (masterid CHAR(4), description VARCHAR(200),
  FOREIGN KEY (masterid) REFERENCES master(id));
INSERT INTO master VALUES ('Elephant', '0001');
INSERT INTO details VALUES ('0001', 'A VERY BIG ANIMAL');
DELETE FROM master WHERE id = '0001';
If you attempt to delete a "busy" table, an error results. In this example, t1
is a "busy" table that is a parent table with foreign key constraints based on it.
CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL, PRIMARY KEY (a));
CREATE TABLE t2 (c INT NOT NULL, FOREIGN KEY (c) REFERENCES t1(a));
INSERT INTO t1 VALUES (1,1);
INSERT INTO t2 VALUES (1);
DELETE FROM t1;
An error is returned:
SQL ERROR (3001): Foreign key violation [TTFOREIGN_0] a row in child table T2 has a parent in the delete range.
Delete an employee from employees
. Declare empid
and name
as variables with the same data types as employee_id
and last_name
. Delete the row, returning employee_id
and last_name
into the variables. Verify that the correct row was deleted.
Command> VARIABLE empid NUMBER(6) NOT NULL;
Command> VARIABLE name VARCHAR2(25) INLINE NOT NULL;
Command> DELETE FROM employees WHERE last_name='Ernst'
           RETURNING employee_id, last_name INTO :empid,:name;
1 row deleted.
Command> PRINT empid name;
EMPID : 104
NAME : Ernst
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
This statement drops an active standby pair replication scheme.
The active standby pair is dropped, but all objects such as tables, cache groups, and materialized views still exist on the database on which the statement was issued.
You cannot execute the DROP ACTIVE STANDBY PAIR
statement when Oracle Clusterware is used with TimesTen.
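The statement itself takes no additional clauses; on the database where the scheme is to be dropped:

DROP ACTIVE STANDBY PAIR;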
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The DROP CACHE GROUP
statement drops the table associated with the cache group, and removes the cache group definition from the CACHE_GROUP
system table.
No privilege is required for the cache group owner.
If not the cache group owner, DROP ANY CACHE GROUP
and
DROP ANY TABLE
if at least one table in the cache group is not owned by the current user.
If you attempt to delete a cache group table that is in use, TimesTen returns an error.
Asynchronous writethrough cache groups cannot be dropped while the replication agent is running.
Automatically installed Oracle Database objects for read-only cache groups and cache groups with the AUTOREFRESH
attribute are uninstalled by the cache agent. If the cache agent is not running during the DROP CACHE GROUP
operation, the Oracle Database objects are uninstalled on the next startup of the cache agent.
You cannot execute the DROP CACHE GROUP
statement under the serializable isolation level. An error message is returned if you attempt to do so.
If you issue a DROP CACHE GROUP
statement, and there is an autorefresh operation currently running, then:
If LockWait
interval is 0, the DROP CACHE GROUP
statement fails with a lock timeout error.
If LockWait
interval is nonzero, then the current autorefresh transaction is preempted (rolled back), and the DROP
statement continues. This affects all cache groups with the same autorefresh interval.
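A minimal sketch (the cache group name is hypothetical):

DROP CACHE GROUP cacheadmin.customers_cg;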
The DROP FUNCTION
statement removes a standalone stored function from the database. Do not use this statement to remove a function that is part of a package.
No privilege is required for the function owner.
DROP ANY PROCEDURE
for another user's function.
When you drop a function, TimesTen invalidates objects that depend on the dropped function. If you subsequently reference one of these objects, TimesTen attempts to recompile the object and returns an error message if you have not recreated the dropped function.
Do not use this statement to remove a function that is part of a package. Either drop the package or redefine the package without the function using the CREATE PACKAGE
statement with the OR REPLACE
clause.
To use the DROP FUNCTION
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, an error is thrown.
The following statement drops the function myfunc
and invalidates all objects that depend on myfunc
:
Command> DROP FUNCTION myfunc; Function dropped.
If PL/SQL is not enabled in your database, TimesTen returns an error:
Command> DROP FUNCTION myfunc; 8501: PL/SQL feature not installed in this TimesTen database The command failed.
The DROP INDEX
statement removes the specified index.
No privilege is required for the index owner.
DROP ANY INDEX
for another user's index.
Parameter | Description |
---|---|
[ Owner .] IndexName |
Name of the index to be dropped. It may include the name of the owner of the table that has the index. |
[ Owner .] TableName |
Name of the table upon which the index was created. |
If you attempt to drop a "busy" index—an index that is in use or that enforces a foreign key—an error results. To drop a foreign key and the index associated with it, use the ALTER TABLE
statement.
If an index is created through a UNIQUE
column constraint, it can only be dropped by dropping the constraint with an ALTER TABLE
DROP UNIQUE
statement. See "CREATE TABLE" for more information about the UNIQUE
column constraint.
If a DROP INDEX
operation is or was active in an uncommitted transaction, other transactions doing DML operations that do not access that index are blocked.
If an index is dropped, any prepared statement that uses the index is prepared again automatically the next time the statement is executed.
If no table name is specified, the index name must be unique for the specified or implicit owner. The implicit owner, in the absence of a specified table or owner, is the current user running the program.
If no index owner is specified and a table is specified, the default owner is the table owner.
If a table is specified and no owner is specified for it, the default table owner is the current user running the program.
The table and index owners must be the same.
An index on a temporary table cannot be dropped by a connection if some other connection has an instance of the table that is not empty.
If the index is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 2 or greater, use the DROP INDEX
statement to drop the index from the standby pair in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
Drop index partsorderedindex
which is defined on table orderitems
using one of the following:
DROP INDEX partsorderedindex FROM purchasing.orderitems;
Or:
DROP INDEX purchasing.partsorderedindex;
The DROP MATERIALIZED VIEW
statement removes the specified materialized view, including any hash indexes and any range indexes associated with it.
View owner or DROP ANY MATERIALIZED VIEW
(if not owner) and
Table owner or DROP ANY TABLE
(if not owner) and
Index owner or DROP ANY INDEX
(if not owner) if there is an index on the view.
When you execute a DROP MATERIALIZED VIEW
operation, the detail tables are updated and locked. An error may result if the detail table was already locked by another transaction.
The following statement drops the custorder
materialized view.
DROP MATERIALIZED VIEW custorder;
The DROP PACKAGE
statement removes a stored package from the database. Both the specification and the body are dropped. DROP PACKAGE BODY
removes only the body of the package.
No privilege is required for the package owner.
DROP ANY PROCEDURE
for another user's package.
Parameter | Description |
---|---|
PACKAGE [BODY] |
Specify BODY to drop only the body of the package. Omit BODY to drop both the specification and body of the package. |
[ Owner .] PackageName |
Name of the package to be dropped. |
When you drop only the body of the package, TimesTen does not invalidate dependent objects. However, you cannot execute one of the procedures or stored functions declared in the package specification until you recreate the package body.
TimesTen invalidates any objects that depend on the package specification. If you subsequently reference one of these objects, then TimesTen tries to recompile the object and returns an error if you have not recreated the dropped package.
Do not use this statement to remove a single object from the package. Instead, recreate the package without the object using the CREATE PACKAGE
and CREATE PACKAGE BODY
statements with the OR REPLACE
clause.
To use the DROP PACKAGE [BODY]
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, TimesTen returns an error.
The following statement drops the body of package samplePackage
:
Command> DROP PACKAGE BODY SamplePackage; Package body dropped.
To drop both the specification and body of package samplepackage
:
Command> DROP PACKAGE samplepackage; Package dropped.
The DROP PROCEDURE
statement removes a standalone stored procedure from the database. Do not use this statement to remove a procedure that is part of a package.
No privilege is required for the procedure owner.
DROP ANY PROCEDURE
for another user's procedure.
When you drop a procedure, TimesTen invalidates objects that depend on the dropped procedure. If you subsequently reference one of these objects, TimesTen attempts to recompile the object and returns an error message if you have not recreated the dropped procedure.
Do not use this statement to remove a procedure that is part of a package. Either drop the package or redefine the package without the procedure using the CREATE PACKAGE
statement with the OR REPLACE
clause.
To use the DROP PROCEDURE
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, an error is thrown.
The following statement drops the procedure myproc
and invalidates all objects that depend on myproc
:
Command> DROP PROCEDURE myproc; Procedure dropped.
If PL/SQL is not enabled in your database, TimesTen returns an error:
Command> DROP PROCEDURE myproc;
8501: PL/SQL feature not installed in this TimesTen database
The command failed.
The DROP
PROFILE
statement removes a profile from the database.
Parameter | Description |
---|---|
profile |
Name of the profile to be dropped. |
CASCADE |
Specify CASCADE to de-assign the profile from any users to whom the profile is assigned. TimesTen reassigns the DEFAULT profile to such users. You must specify CASCADE to drop a profile that is currently assigned to users. |
Use this statement to drop an existing profile. You cannot drop the DEFAULT
profile. See "CREATE PROFILE" for information on the DEFAULT
profile.
If the profile is not currently assigned to any user, you do not need to specify CASCADE to drop it. If, however, the profile is currently assigned to a user, you must specify CASCADE to drop it.
This example creates the test_profile
profile and the test_profile_assign_to_user
profile. It then creates the test_user
user and assigns the test_profile_assign_to_user
profile to that user. The example attempts to drop the test_profile
profile. The operation succeeds as there are no users assigned to this profile. The example then attempts to drop the test_profile_assign_to_user
profile. The operation succeeds if CASCADE
is specified. After the test_profile_assign_to_user
profile is dropped, the test_user
user is assigned the DEFAULT
profile.
Create the test_profile
profile. Set FAILED_LOGIN_ATTEMPTS
to a value of 5
.
Command> CREATE PROFILE test_profile LIMIT FAILED_LOGIN_ATTEMPTS 5; Profile created.
Create the test_profile_assign_to_user
profile. Set FAILED_LOGIN_ATTEMPTS
to a value of 3
.
Command> CREATE PROFILE test_profile_assign_to_user LIMIT FAILED_LOGIN_ATTEMPTS 3; Profile created.
Create the test_user
user and assign the test_profile_assign_to_user
profile to this user.
Command> CREATE USER test_user identified by test_user_pwd PROFILE test_profile_assign_to_user; User created.
Drop the test_profile
profile. The DROP
PROFILE
operation succeeds. There are no users assigned to this test_profile
profile.
Command> DROP PROFILE test_profile; Profile dropped.
Attempt to drop the test_profile_assign_to_user profile. The DROP PROFILE operation fails because a user is assigned to this profile. Repeat the DROP PROFILE operation, this time specifying CASCADE. The DROP PROFILE operation succeeds.
Command> DROP PROFILE test_profile_assign_to_user;
15178: Profile TEST_PROFILE_ASSIGN_TO_USER has users assigned, cannot drop without CASCADE
The command failed.
Command> DROP PROFILE test_profile_assign_to_user CASCADE;
Profile dropped.
Query the DBA_USERS
system view to verify that the test_user
user has been assigned the DEFAULT
profile.
Command> SELECT profile FROM dba_users WHERE username = 'TEST_USER'; PROFILE < DEFAULT > 1 row found.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The DROP REPLICATION
statement destroys a classic replication scheme and removes it from the executing database.
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the classic replication scheme. |
Dropping the last replication scheme on a database does not delete the replicated tables. These tables exist and persist at a database whether or not any replication schemes are defined.
The following statement erases the executing database's knowledge of a classic replication scheme, r
:
DROP REPLICATION r;
The DROP SEQUENCE
statement removes an existing sequence number generator.
If the sequence is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 3 or greater, the DROP SEQUENCE
statement drops the sequence from the active standby pair for all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
No privilege is required for the sequence owner.
DROP ANY SEQUENCE
for another user's sequence.
Sequences can be dropped while they are in use.
If you are using TimesTen Scaleout, you can modify the batch value with the ALTER
SEQUENCE
statement. Otherwise, to alter a sequence, use the DROP SEQUENCE
statement and then create a new sequence with the same name. For example, to change the MINVALUE
, drop the sequence and recreate it with the same name and with the desired MINVALUE
.
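For example, a minimal sketch, assuming a sequence named mysequence (a hypothetical name) whose MINVALUE needs to change:
DROP SEQUENCE mysequence;
CREATE SEQUENCE mysequence MINVALUE 100;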
If the sequence is part of a replication scheme, use the ALTER REPLICATION
statement to drop the sequence from the replication scheme. Then use the DROP SEQUENCE
statement to drop the sequence.
The DROP SYNONYM
statement removes a synonym from the database.
If the synonym is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 2 or greater, the DROP SYNONYM
statement drops the synonym from the active standby pair for all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
No privilege is required to drop the private synonym by its owner. The DROP ANY SYNONYM
privilege is required to drop another user's private synonym.
The DROP PUBLIC SYNONYM
privilege is required to drop a PUBLIC
synonym.
To drop a private synonym, use the following syntax:
DROP SYNONYM [Owner.]SynonymName
To drop a public synonym:
DROP PUBLIC SYNONYM SynonymName
Parameter | Description |
---|---|
PUBLIC |
Specify PUBLIC to drop a public synonym. |
Owner |
Optionally, specify the owner for a private synonym. If you omit the owner, the private synonym must exist in the current user's schema. |
SynonymName |
Specify the name of the synonym to be dropped. |
Drop the public synonym pubemp
:
DROP PUBLIC SYNONYM pubemp; Synonym dropped.
Drop the private synjobs
synonym:
DROP SYNONYM synjobs; Synonym dropped.
As user terry
with DROP ANY SYNONYM
privilege, drop the private syntab
synonym owned by ttuser
.
DROP SYNONYM ttuser.syntab; Synonym dropped.
The DROP TABLE
statement removes the specified table, including any hash indexes and any range indexes associated with it.
No privilege is required for the table owner.
DROP ANY TABLE
for another user's table.
If you attempt to drop a table that is in use, an error results.
If DROP TABLE
is or was active in an uncommitted transaction, other transactions doing DML operations that do not access that table are allowed to proceed.
If the table is a replicated table, you can do one of the following:
Use the DROP REPLICATION
statement to drop the replication scheme before issuing the DROP TABLE
statement.
If DDL_REPLICATION_LEVEL
is 2 or greater, the DROP TABLE
statement drops the table from the active standby pair for all databases in the replication scheme.
If DDL_REPLICATION_LEVEL
is 1, stop the replication agent and use the ALTER ACTIVE STANDBY PAIR ... EXCLUDE TABLE
statement to exclude the table from the replication scheme. Then use the DROP TABLE
statement to drop the table.
See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
A temporary table cannot be dropped by a connection while another connection has a non-empty instance of the table.
For example, suppose the vendorperf table and a unique index on it are created as follows:
CREATE TABLE vendorperf (ordernumber INTEGER, delivday TT_SMALLINT, delivmonth TT_SMALLINT, delivyear TT_SMALLINT, delivqty TT_SMALLINT, remarks VARCHAR2(60));
CREATE UNIQUE INDEX vendorperfindex ON vendorperf (ordernumber);
The following statement drops the table and index.
DROP TABLE vendorperf;
The DROP USER
statement removes a user from the database.
Before you can drop a user:
The user must exist either internally or externally in the database.
You must drop objects that the user owns.
When replication is configured, this statement is replicated.
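A minimal sketch, assuming an internal user terry (a hypothetical name) who no longer owns any objects:
DROP USER terry;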
The DROP VIEW
statement removes the specified view.
If the view is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 3 or greater, the DROP VIEW
statement drops the view from the active standby pair for all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
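A minimal sketch, assuming a view named custview (a hypothetical name):
DROP VIEW custview;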
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The FLUSH CACHE GROUP
statement flushes data from TimesTen cache tables to Oracle Database tables. This statement is available only for user managed cache groups. For a description of cache group types, see "User managed and system managed cache groups".
There are two variants to this operation: one that accepts a WHERE
clause, and one that accepts a WITH ID
clause.
FLUSH CACHE GROUP
is meant to be used when commit propagation (from TimesTen to Oracle Database) is turned off. Instead of propagating every transaction upon commit, many transactions can be committed before changes are propagated to Oracle Database. For each cache instance ID, if the cache instance exists in the Oracle database, the operation in the Oracle database consists of an update. If the cache instance does not exist in the Oracle database, TimesTen inserts it.
This is useful, for example, in a shopping cart application in which many changes may be made to the cart, which uses TimesTen as a high-speed cache, before the order is committed to the master Oracle database table.
Note:
Using a WITH ID clause usually results in better system performance than using a WHERE clause.
Only inserts and updates are flushed. Inserts are propagated as inserts if the record does not exist in the Oracle database table, or as updates if the record already exists. It is not possible to flush a delete: if a record is deleted on TimesTen, there is no way to "flush" that delete to the Oracle database table. Deletes must be propagated either manually or by turning commit propagation on. Attempts to flush deleted records are silently ignored; no error or warning is issued. Records from tables that are specified as READ ONLY or PROPAGATE cannot be flushed to the Oracle database tables.
No privilege is required for the cache group owner.
FLUSH
or FLUSH ANY CACHE GROUP
for another user's cache group.
FLUSH CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression]
or
FLUSH CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name of the cache group to be flushed. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
WITH ID ColumnValueList |
The WITH ID clauses enables you to use primary key values to flush the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
WHERE
clauses are generally used to apply the operation to a set of cache instances, rather than to a single cache instance or to all cache instances. The flush operation uses the WHERE
clause to determine which cache instances to send to the Oracle database.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the FLUSH CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name. Without an owner name, all tables referenced by cache group WHERE
clauses are owned by the current login name executing the cache group operation.
When the WHERE clause is omitted, the entire contents of the cache group are flushed to the Oracle database tables. When the WHERE clause is included, it can reference only the root table.
Following the execution of a FLUSH CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were flushed.
Use the WITH ID
clause to specify binding parameters.
Do not use the WITH ID
clause on AWT or SWT cache groups, user managed cache groups with the propagate attribute, or autorefreshed and propagated user managed cache groups unless the cache group is a dynamic cache group.
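The following hedged sketches assume a user managed cache group westcust whose root table oratt.customer has the primary key cust_num (hypothetical names). The first form flushes a range of cache instances; the second flushes a single cache instance by primary key value:
FLUSH CACHE GROUP westcust WHERE (oratt.customer.cust_num BETWEEN 1000 AND 2000);
FLUSH CACHE GROUP westcust WITH ID (1001);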
The GRANT
statement assigns one or more privileges to a user.
ADMIN
to grant system privileges.
ADMIN
or the object owner to grant object privileges.
GRANT {SystemPrivilege [,...] | ALL [PRIVILEGES]} [...] TO {user | PUBLIC} [,...]
or
GRANT {{ObjectPrivilege [,...] | ALL [PRIVILEGES]} ON {[Owner.]object}[,...]} TO {user | PUBLIC} [,...]
The following parameters are for granting system privileges:
Parameter | Description |
---|---|
SystemPrivilege |
This is the system privilege to grant. See "System privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Assigns all system privileges to the user. |
user |
Name of the user to whom privileges are being granted. The user name must first have been introduced to the TimesTen database by a CREATE USER statement. |
PUBLIC |
Specifies that the privilege is granted to all users. |
The following parameters are for granting object privileges:
Parameter | Description |
---|---|
ObjectPrivilege |
This is the object privilege to grant. See "Object privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Assigns all object privileges to the user. |
[ Owner .] object |
object is the name of the object on which privileges are being granted. Owner is the owner of the object. If Owner is not specified, the user who is granting the privilege is the owner. |
user |
Name of the user to whom privileges are being granted. The user must exist in the database. |
PUBLIC |
Specifies that the privilege is granted to all users. |
One or more system privileges can be granted to a user by a user with ADMIN
privilege.
One or more object privileges can be granted to a user by the owner of the object.
One or more object privileges can be granted to a user on any object by a user with ADMIN
privilege.
To remove a privilege from a user, use the REVOKE
statement.
You cannot grant system privileges and object privileges in the same statement.
Only one object can be specified in an object privilege statement.
When replication is configured, this statement is replicated.
Grant the ADMIN
privilege to the user terry
:
GRANT admin TO terry;
Assuming the grantor has ADMIN
privilege, grant the SELECT
privilege to user terry
on the customers
table owned by user pat
:
GRANT SELECT ON pat.customers TO terry;
Grant an object privilege to user terry
:
GRANT SELECT ON emp_details_view TO terry;
The INSERT
statement adds rows to a table.
The following expressions can be used in the VALUES
clause of an INSERT
statement:
Sequence NEXTVAL
and Sequence CURRVAL
DEFAULT
INSERT [hint] INTO [Owner.]TableName [(Column [,...])] VALUES (SingleRowValues) [RETURNING|RETURN Expression[,...] INTO DataItem[,...]]
The SingleRowValues
parameter has the syntax:
{NULL|{?|:DynamicParameter}|{Constant}| DEFAULT}[,...]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the INSERT statement. For more information on hints, see "Statement level optimizer hints". |
Owner |
The owner of the table into which data is inserted. |
TableName |
Name of the table into which data is inserted. |
Column |
Each column in this list is assigned a value from SingleRowValues .
If you omit one or more of the table's columns from this list, then the value of the omitted column in the inserted row is the column default value as specified when the table was created or last altered. If any omitted column has a NOT NULL constraint and no default value, the INSERT statement fails.
If you omit a list of columns completely, then you must specify values for all columns in the table. |
? or :DynamicParameter |
Placeholder for a dynamic parameter in a prepared SQL statement. The value of the dynamic parameter is supplied when the statement is executed. |
Constant |
A specific value. See "Constants". |
DEFAULT |
Specifies that the column should be updated with the default value. |
Expression |
Valid expression syntax. See Chapter 3, "Expressions". |
DataItem |
Host variable or PL/SQL variable that stores the retrieved Expression value. |
If you omit any of the table's columns from the column name list, the INSERT
statement places the default value in the omitted columns. If the table definition specifies NOT NULL
for any of the omitted columns and there is no default value, the INSERT
statement fails.
BINARY
and VARBINARY
data can be inserted in character or hexadecimal format:
Character format requires single quotes.
Hexadecimal format requires the prefix 0x
before the value.
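For example, a hedged sketch assuming a table typetest with a single VARBINARY(10) column (hypothetical names); both statements insert the same three bytes:
INSERT INTO typetest VALUES ('abc');
INSERT INTO typetest VALUES (0x616263);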
The INSERT
operation fails if it violates a foreign key constraint. See "CREATE TABLE" for a description of the foreign key constraint.
Restrictions on the RETURNING
clause:
Each Expression
must be a simple expression. Aggregate functions are not supported.
You cannot return a sequence number into an OUT
parameter.
ROWNUM
and subqueries cannot be used in the RETURNING
clause.
Parameters in the RETURNING
clause cannot be duplicated anywhere in the INSERT
statement.
In PL/SQL, you cannot use a RETURNING
clause with a WHERE CURRENT
operation.
A new single row is added to the purchasing.vendors
table.
INSERT INTO purchasing.vendors VALUES (9016, 'Secure Systems, Inc.', 'Jane Secret', '454-255-2087', '1111 Encryption Way', 'Hush', 'MD', '00007', 'discount rates are secret');
For dynamic parameters :pno
and :pname
, values are supplied at runtime.
INSERT INTO purchasing.parts (partnumber, partname) VALUES (:pno, :pname);
Return the annual salary
and job_id
of a new employee. Declare the variables sal12
and jobid
with the same data types as salary
and job_id
. Insert the row into employees
. Print the variables for verification.
Command> VARIABLE sal12 NUMBER(8,2);
Command> VARIABLE jobid VARCHAR2(10) INLINE NOT NULL;
Command> INSERT INTO employees(employee_id, last_name, email, hire_date, job_id, salary) VALUES (211,'Doe','JDOE',sysdate,'ST_CLERK',2400) RETURNING salary*12, job_id INTO :sal12,:jobid;
1 row inserted.
PRINT sal12 jobid;
SAL12 : 28800
JOBID : ST_CLERK
The INSERT...SELECT
statement inserts the results of a query into a table.
No privilege is required for the object owner.
INSERT
and SELECT
for another user's object.
Parameter | Description |
---|---|
[ Owner .] TableName |
Table to which data is to be added. |
ColumnName |
Column for which values are supplied. If you omit any of the table's columns from the column name list, the INSERT...SELECT statement places the default value in the omitted columns. If the table definition specifies NOT NULL , without a default value, for any of the omitted columns, the INSERT...SELECT statement fails. You can omit the column name list if you provide values for all columns of the table in the same order the columns were specified in the CREATE TABLE statement. If too few values are provided, the remaining columns are assigned default values. |
InsertQuery |
Any supported SELECT query. See "SELECT". You can specify a statement level optimizer hint after the SELECT verb. For more information on statement level optimizer hints, see "Statement level optimizer hints". |
The column types of the result set must be compatible with the column types of the target table.
You can specify a sequence CURRVAL
or NEXTVAL
when inserting values. See "Using CURRVAL and NEXTVAL in TimesTen Classic" for more details.
In the InsertQuery, the ORDER BY clause is allowed and can be used to sort the result set, but the order in which the rows are inserted into the target table is not guaranteed.
The INSERT
operation fails if there is an error in the InsertQuery
.
A RETURNING
clause cannot be used in an INSERT...SELECT
statement.
Each SELECT subquery combined with UNION, UNION ALL, MINUS, or INTERSECT must project the same number of expressions.
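A minimal sketch, assuming the purchasing.parts table used in the earlier INSERT examples and an additional purchasing.obsolete_parts table with compatible partnumber and partname columns and a discontinued flag column (the second table and the flag column are assumptions):
INSERT INTO purchasing.obsolete_parts (partnumber, partname)
  SELECT partnumber, partname
  FROM purchasing.parts
  WHERE discontinued = 1;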
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The LOAD CACHE GROUP
statement loads data from Oracle database tables into a TimesTen cache group. The load operation is local.
No privilege is required for the cache group owner.
LOAD CACHE GROUP
or LOAD ANY CACHE GROUP
for another user's cache group.
LOAD CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression] COMMIT EVERY n ROWS [PARALLEL NumThreads [READERS NumReaders]]
or
LOAD CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name assigned to the cache group. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
COMMIT EVERY n ROWS |
Use the COMMIT EVERY n ROWS clause to indicate the frequency (based on the number of rows that are loaded into the cache group) at which a commit is issued during the load operation. This clause is required if you do not specify the WITH ID clause.
|
[ PARALLEL NumThreads ] |
Provides parallel loading for cache group tables. Specifies the number of loading threads to run concurrently. One thread performs the bulk fetch from the Oracle database and the other threads (NumThreads - 1 threads) perform the inserts into TimesTen. Each thread uses its own connection or transaction.
The minimum value for |
[ READERS NumReaders ] |
This option specifies the total number of threads from the NumThreads parameter to use for bulk fetching from the Oracle database.
For example, if you specify a NumThreads parameter of Express NumReaders as an integer where |
WITH ID ColumnValueList |
The WITH ID clauses enables you to use primary key values to load the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
LOAD CACHE GROUP
loads all new cache instances from the Oracle database that satisfy the cache group definition and are not yet present in the cache group.
Before issuing the LOAD CACHE GROUP
statement, ensure that the replication agent is running if the cache group is replicated or is an AWT cache group. Make sure the cache agent is running.
LOAD CACHE GROUP
is executed in its own transaction, and must be the first operation in a transaction.
LOAD CACHE GROUP
only loads new (inserted) rows on the Oracle database tables into the corresponding TimesTen cache tables.
Errors cause a rollback. When cache instances are committed periodically, errors abort the remainder of the load. The load is rolled back to the last commit.
If the LOAD CACHE GROUP
statement fails when you specify COMMIT EVERY
n
ROWS
(where n
>= 0
), the content of the target cache group could be in an inconsistent state since some loaded rows are already committed. Some cache instances may be partially loaded. Use the UNLOAD CACHE GROUP
statement to unload the cache group, then reload the cache group.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the LOAD CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
When loading a read-only cache group:
The AUTOREFRESH
state must be paused.
The LOAD CACHE GROUP
statement cannot have a WHERE
clause (except on a dynamic cache group).
The cache group must be empty.
The automatic refresh state of a cache group may change after a LOAD
CACHE
GROUP
operation completes. See "Loading and refreshing a dynamic cache group with autorefresh" in the Oracle TimesTen Application-Tier Database Cache User's Guide for information.
Following the execution of a LOAD CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were loaded.
Use the WITH ID
clause as follows:
In place of the WHERE
clause for faster loading of the cache instance
To specify binding parameters
To roll back the load transaction upon failure
Do not reference child tables in the WHERE
clause.
Do not specify the PARALLEL
clause in the following circumstances:
With the WITH ID
clause
With the COMMIT EVERY 0 ROWS
clause
When database level locking is enabled (connection attribute LockLevel
is set to 1)
Do not use the WITH ID
clause when loading these types of cache groups:
Explicitly loaded read-only cache group
Explicitly loaded user managed cache group with the autorefresh attribute
User managed cache group with the AUTOREFRESH
and PROPAGATE
attributes
Do not use the WITH ID
clause with the COMMIT EVERY
n
ROWS
clause.
CREATE CACHE GROUP recreation.cache FROM recreation.clubs ( clubname CHAR(15) NOT NULL, clubphone SMALLINT, activity CHAR(18), PRIMARY KEY(clubname)) WHERE (recreation.clubs.activity IS NOT NULL);
LOAD CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
Use the HR
schema to illustrate the use of the PARALLEL
clause with the LOAD CACHE GROUP
statement. The COMMIT EVERY
n
ROWS
clause is required. Issue the CACHEGROUPS
command. You see cache group cg2
is defined and the autorefresh state is paused. Unload cache group cg2
, then specify the LOAD CACHE GROUP
statement with the PARALLEL
clause to provide parallel loading. You see 25 cache instances loaded.
Command> CACHEGROUPS;
Cache Group SAMPLEUSER.CG2:
Cache Group Type: Read Only
Autorefresh: Yes
Autorefresh Mode: Incremental
Autorefresh State: Paused
Autorefresh Interval: 1.5 Minutes
Root Table: SAMPLEUSER.COUNTRIES
Table Type: Read Only
Child Table: SAMPLEUSER.LOCATIONS
Table Type: Read Only
Child Table: SAMPLEUSER.DEPARTMENTS
Table Type: Read Only
1 cache group found.
Command> UNLOAD CACHE GROUP cg2;
25 cache instances affected.
Command> COMMIT;
Command> LOAD CACHE GROUP cg2 COMMIT EVERY 10 ROWS PARALLEL 2;
25 cache instances affected.
Command> COMMIT;
The following example loads only the cache instances for customers whose customer number is greater than or equal to 5000 into the TimesTen cache tables in the new_customers
cache group from the corresponding Oracle database tables:
LOAD CACHE GROUP new_customers WHERE (oratt.customer.cust_num >= 5000) COMMIT EVERY 256 ROWS;
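A hedged sketch of the WITH ID form, assuming new_customers is a dynamic cache group whose root table's primary key is the customer number (the dynamic attribute is an assumption; as noted above, WITH ID is restricted for explicitly loaded cache groups with autorefresh):
LOAD CACHE GROUP new_customers WITH ID (5000);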
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The MERGE
statement enables you to select rows from one or more sources for update or insertion into a target table. You can specify conditions that are used to evaluate which rows are updated or inserted into the target table.
Use this statement to combine multiple INSERT
and UPDATE
statements.
MERGE
is a deterministic statement: You cannot update the same row of the target table multiple times in the same MERGE
statement.
No privilege is required for the owner of the target table and the source table.
INSERT
or UPDATE
on a target table owned by another user and SELECT
on a source table owned by another user.
MERGE [hint] INTO [Owner.]TargetTableName [Alias]
USING {[Owner.]SourceTableName | (Subquery)} [Alias]
ON (Condition)
{MergeUpdateClause MergeInsertClause |
 MergeInsertClause MergeUpdateClause |
 MergeUpdateClause |
 MergeInsertClause}
The syntax for MergeUpdateClause
is as follows:
WHEN MATCHED THEN UPDATE SET SetClause [WHERE Condition1]
The syntax for MergeInsertClause
is as follows:
WHEN NOT MATCHED THEN INSERT [Columns [,...]] VALUES ( {{Expression | DEFAULT|NULL} [,...] }) [WHERE Condition2]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the MERGE statement. For more information on hints, see "Statement level optimizer hints". |
[ Owner .] TargetTableName |
Name of the target table. This is the table in which rows are either updated or inserted. |
[ Alias ] |
You can optionally specify an alias name for the target or source table. |
USING {[ Owner .] SourceTableName | ( Subquery )} [ Alias ] |
The USING clause indicates the table name or the subquery that is used for the source of the data. Use a subquery to use joins or aggregates. Optionally, you can specify an alias for the table name or the subquery. |
ON ( Condition ) |
Specify the condition used to evaluate each row of the target table to determine if the row should be considered for either a merge insert or a merge update. If the condition is true when evaluated, then the MergeUpdateClause is considered for the target row using the matching row from the SourceTableName . An error is generated if more than one row in the source table matches the same row in the target table. If the condition is not true when evaluated, then the MergeInsertClause is considered for that row. |
SET SetClause |
Clause used with the UPDATE statement. For information on the UPDATE statement, see "UPDATE". |
[ WHERE Condition1 ] |
For each row that matches the ON ( Condition ) , Condition1 is evaluated. If the condition is true when evaluated, the row is updated. You can refer to either the target table or the source table in this clause. You cannot use a subquery. The clause is optional. |
INSERT [ Columns [,...]]VALUES ({{ Expression |DEFAULT|NULL} [,...]}) |
Columns to insert into the target table. For more information on the INSERT statement, see "INSERT". |
[WHERE Condition2 ] |
If specified, Condition2 is evaluated. If the condition is true when evaluated, the row is inserted into the target table. The condition can refer to the source table only. You cannot use a subquery. |
You can specify the MergeUpdateClause
and MergeInsertClause
together or separately. If you specify both, they can be in either order.
If DUAL
is the only table specified in the USING
clause and it is not referenced elsewhere in the MERGE
statement, specify DUAL
as a simple table rather than using it in a subquery. In this simple case, to help performance, specify a key condition on a unique index of the target table in the ON
clause.
Restrictions on the MergeUpdateClause
:
You cannot update a column that is referenced in the ON
condition clause.
You cannot update source table columns.
Restrictions on the MergeInsertClause
:
You cannot insert values of target table columns.
Other restrictions:
Do not use the set operators in the subquery of the source table.
Do not use a subquery in the WHERE
condition of either the MergeUpdateClause
or the MergeInsertClause
.
The target table cannot be a detail table of a materialized view.
The RETURNING
clause cannot be used in a MERGE
statement.
In this example, dual
is specified as a simple table. There is a key condition on the UNIQUE
index of the target table specified in the ON
clause. The DuplicateBindMode connection attribute is set to 0 (the default) in this example, so both occurrences of the :v1 parameter are bound as a single parameter.
Command> CREATE TABLE mergedualex (col1 TT_INTEGER NOT NULL, col2 TT_INTEGER, PRIMARY KEY (col1));
Command> MERGE INTO mergedualex USING dual ON (col1 = :v1) WHEN MATCHED THEN UPDATE SET col2 = col2 + 1 WHEN NOT MATCHED THEN INSERT VALUES (:v1, 1);
Type '?' for help on entering parameter values.
Type '*' to end prompting and abort the command.
Type '-' to leave the parameter unbound.
Type '/;' to leave the remaining parameters unbound and execute the command.
Enter Parameter 1 'V1' (TT_INTEGER) > 10
1 row merged.
Command> SELECT * FROM mergedualex;
< 10, 1 >
1 row found.
In this example, a table called contacts
is created with columns employee_id
and manager_id
. One row is inserted into contacts
with values 101 and NULL
for employee_id
and manager_id
, respectively. The MERGE
statement is used to insert rows into contacts
using the data in the employees
table. A SELECT FIRST 3
rows is used to illustrate that in the case where employee_id
is equal to 101, manager_id
is updated to 100. The remaining 106 rows from the employees
table are inserted into contacts
:
Command> CREATE TABLE contacts (employee_id NUMBER (6) NOT NULL PRIMARY KEY, manager_id NUMBER (6)); Command> SELECT employee_id, manager_id FROM employees WHERE employee_id =101; < 101, 100 > 1 row found. Command> INSERT INTO contacts VALUES (101,null); 1 row inserted. Command> SELECT COUNT (*) FROM employees; < 107 > 1 row found. Command> MERGE INTO contacts c USING employees e ON (c.employee_id = e.employee_id) WHEN MATCHED THEN UPDATE SET c.manager_id = e.manager_id WHEN NOT MATCHED THEN INSERT (employee_id, manager_id) VALUES (e.employee_id, e.manager_id); 107 rows merged. Command> SELECT COUNT (*) FROM contacts; < 107 > 1 row found. Command> SELECT FIRST 3 employee_id,manager_id FROM employees; < 100, <NULL> > < 101, 100 > < 102, 100 > 3 rows found. Command> SELECT FIRST 3 employee_id, manager_id FROM contacts; < 100, <NULL> > < 101, 100 > < 102, 100 > 3 rows found.
This statement is not supported in TimesTen Scaleout.
In TimesTen Classic:
The REFRESH CACHE GROUP
statement replaces data in the TimesTen cache tables with the most current committed data from the Oracle database cached tables.
CREATE SESSION
on the Oracle Database schema and SELECT
on the Oracle Database tables.
No privilege for the cache group is required for the cache group owner.
REFRESH CACHE GROUP
or REFRESH ANY CACHE GROUP
for another user's cache group.
REFRESH CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression] COMMIT EVERY n ROWS [PARALLEL NumThreads]
or
REFRESH CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name assigned to the cache group. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
COMMIT EVERY n ROWS |
Use the COMMIT EVERY n ROWS clause to indicate the frequency (based on the number of rows that are refreshed in the cache group) at which a commit is issued during the refresh operation. This clause is required if you do not specify the WITH ID clause.
|
[PARALLEL NumThreads ] |
Provides parallel loading for cache group tables. Specifies the number of loading threads to run concurrently. One thread performs the bulk fetch from the Oracle database and the other threads (NumThreads - 1 threads) perform the inserts into TimesTen. Each thread uses its own connection or transaction.
The minimum value for |
WITH ID ColumnValueList |
The WITH ID clauses enables you to use primary key values to refresh the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
A REFRESH CACHE GROUP
statement must be executed in its own transaction.
Before issuing the REFRESH CACHE GROUP
statement, ensure that the replication agent is running if the cache group is replicated or is an AWT cache group. Make sure the cache agent is running.
The REFRESH
CACHE
GROUP
statement replaces data in the TimesTen cached tables with the most current committed data from the cached Oracle database tables, including data that already exists in the TimesTen cached tables. For an explicitly loaded cache group, a refresh operation is equivalent to issuing an UNLOAD CACHE GROUP
statement followed by a LOAD CACHE GROUP
statement. Operations on all rows in the Oracle database tables including inserts, updates, and deletes are applied to the cache tables. For dynamic cache groups, a refresh operation refreshes only rows that are updated or deleted on the Oracle database tables into the cache tables. For more information on explicitly loaded and dynamic cache groups, see "Loading data into a cache group: Explicitly loaded and dynamic cache groups" in Oracle TimesTen Application-Tier Database Cache User's Guide.
When refreshing a read-only cache group:
The AUTOREFRESH state must be paused.
If the cache group is a read-only dynamic cache group, do not use the PARALLEL
clause.
If the automatic refresh state of a cache group (dynamic or explicitly loaded) is PAUSED
, the state is changed to ON
after an unconditional REFRESH CACHE GROUP
statement issued on the cache group completes.
If the automatic refresh state of a dynamic cache group is PAUSED
, the state remains PAUSED
after a REFRESH CACHE GROUP...WITH ID
statement completes.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the REFRESH CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
If the REFRESH CACHE GROUP
statement fails when you specify COMMIT EVERY
n
ROWS
(where n
>= 0
), the content of the target cache group could be in an inconsistent state since some loaded rows are already committed. Some cache instances may be partially loaded. Use the UNLOAD CACHE GROUP
statement to unload the cache group, then use the LOAD CACHE GROUP
statement to reload the cache group.
Following the execution of a REFRESH CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were refreshed.
Use the WITH ID
clause:
In place of the WHERE
clause for faster refreshing of the cache instance
To specify binding parameters
To roll back the refresh transaction upon failure
Do not specify the PARALLEL
clause:
With the WITH ID
clause
With the COMMIT
EVERY
n
ROWS
clause
When database level locking is enabled (connection attribute LockLevel
is set to 1)
For read-only dynamic cache groups
Do not use the WITH ID
clause when refreshing these types of cache groups:
Explicitly loaded read-only cache groups
Explicitly loaded user managed cache groups with the autorefresh attribute
User managed cache groups with the autorefresh and propagate attributes
Do not use the WITH ID
clause with the COMMIT EVERY
n
ROWS
clause.
Do not use the WHERE
clause with dynamic or read-only cache groups.
The following statement:
REFRESH CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
is equivalent to:
UNLOAD CACHE GROUP recreation.cache;
LOAD CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
Use the HR
schema to illustrate the use of the PARALLEL
clause with the REFRESH CACHE GROUP
statement. The COMMIT EVERY
n
ROWS
is required. Issue the CACHEGROUPS
command. You see cache group cg2
is defined and the autorefresh state is paused. Specify the REFRESH CACHE GROUP
statement with the PARALLEL
clause to provide parallel loading. You see 25 cache instances refreshed.
Command> CACHEGROUPS;
Cache Group SAMPLEUSER.CG2:
Cache Group Type: Read Only
Autorefresh: Yes
Autorefresh Mode: Incremental
Autorefresh State: Paused
Autorefresh Interval: 1.5 Minutes
Root Table: SAMPLEUSER.COUNTRIES
Table Type: Read Only
Child Table: SAMPLEUSER.LOCATIONS
Table Type: Read Only
Child Table: SAMPLEUSER.DEPARTMENTS
Table Type: Read Only
1 cache group found.
Command> REFRESH CACHE GROUP cg2 COMMIT EVERY 20 ROWS PARALLEL 2;
25 cache instances affected.
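A hedged sketch of the WITH ID form, assuming a dynamic cache group new_customers whose root-table primary key is the customer number (hypothetical names):
REFRESH CACHE GROUP new_customers WITH ID (5000);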
The REVOKE
statement removes one or more privileges from a user.
ADMIN
to revoke system privileges.
ADMIN
or object owner to revoke object privileges.
REVOKE {SystemPrivilege [,...] | ALL [PRIVILEGES]} FROM {User | PUBLIC} [,...]
or
REVOKE {{ObjectPrivilege [,...] | ALL [PRIVILEGES]} ON {[Owner.]Object}} [,...] FROM {User | PUBLIC} [,...]
The following parameters are for revoking system privileges:
Parameter | Description |
---|---|
SystemPrivilege |
This is the system privilege to revoke. See "System privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Revokes all system privileges from the user. |
User |
Name of the user from whom privileges are being revoked. The user name must first have been introduced to the TimesTen database by a CREATE USER statement. |
PUBLIC |
Specifies that the privilege is revoked for all users. |
The following parameters are for revoking object privileges:
Parameter | Description |
---|---|
ObjectPrivilege |
This is the object privilege to revoke. See "Object privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Revokes all object privileges from the user. |
User |
Name of the user from whom privileges are to be revoked. The user name must first have been introduced to the TimesTen database through a CREATE USER statement. |
[ Owner .] Object |
Object is the name of the object on which privileges are being revoked. Owner is the owner of the object. If Owner is not specified, the user who is revoking the privilege is assumed to be the owner. |
PUBLIC |
Specifies that the privilege is revoked for all users. |
Privileges on objects cannot be revoked from the owner of the objects.
Any user who can grant a privilege can revoke the privilege even if they were not the user who originally granted the privilege.
Privileges must be revoked at the same level they were granted. You cannot revoke an object privilege from a user who has the associated system privilege. For example, if you grant SELECT ANY TABLE
to a user and then try to revoke SELECT ON BOB.TABLE1
, the revoke fails unless you have specifically granted SELECT ON BOB.TABLE1
in addition to SELECT ANY TABLE
.
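For example, a hedged sketch following this rule (the user and table names are illustrative): the first REVOKE fails because the privilege was granted at the system level, while the second succeeds.
GRANT SELECT ANY TABLE TO terry;
REVOKE SELECT ON bob.table1 FROM terry;
REVOKE SELECT ANY TABLE FROM terry;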
If a user has been granted all system privileges, you can revoke a specific privilege. For example, you can revoke ALTER ANY TABLE
from a user who has been granted all system privileges.
If a user has been granted all object privileges, you can revoke a specific privilege on a specific object from the user. For example, you can revoke the DELETE
privilege on table CUSTOMERS
from user TERRY
even if TERRY
has previously been granted all object privileges.
You can revoke all privileges from a user even if the user has not previously been granted all privileges.
You cannot revoke a specific privilege from a user who has not been granted the privilege.
You cannot revoke privileges on objects owned by a user.
You cannot revoke system privileges and object privileges in the same statement.
You can specify only one object in an object privilege statement.
Revoking the SELECT
privilege on a detail table or a system privilege that includes the SELECT
privilege from user2
on a detail table owned by user1
causes associated materialized views owned by user2
to be marked invalid. See "Invalid materialized views".
When replication is configured, this statement is replicated.
Revoke the ADMIN
and DDL
privileges from the user terry
:
REVOKE admin, ddl FROM terry;
Assuming the revoker has ADMIN
privilege, revoke the UPDATE
privilege from terry
on the customers
table owned by pat
:
REVOKE update ON pat.customers FROM terry;
Use the ROLLBACK
statement to undo work done in the current transaction.
The ROLLBACK
statement enables the following optional keyword:
Parameter | Description |
---|---|
[WORK] |
Optional clause supported for compliance with the SQL standard. ROLLBACK and ROLLBACK WORK are equivalent. |
When the PassThrough
connection attribute is specified with a value greater than zero, the Oracle database transaction will also be rolled back.
A rollback closes all open cursors.
Insert a row into the regions
table of the HR
schema and then roll back the transaction. First set AUTOCOMMIT
to 0:
Command> SET AUTOCOMMIT 0;
Command> INSERT INTO regions VALUES (5,'Australia');
1 row inserted.
Command> SELECT * FROM regions;
< 1, Europe >
< 2, Americas >
< 3, Asia >
< 4, Middle East and Africa >
< 5, Australia >
5 rows found.
Command> ROLLBACK;
Command> SELECT * FROM regions;
< 1, Europe >
< 2, Americas >
< 3, Asia >
< 4, Middle East and Africa >
4 rows found.
The SELECT
statement retrieves data from one or more tables. The retrieved data is presented in the form of a table that is called the result table, result set, or query result.
No privilege is required for the object owner.
SELECT
for another user's object.
SELECT...FOR UPDATE
also requires UPDATE
privilege for another user's object.
The general syntax for a SELECT
statement is the following:
[WithClause]
SELECT [hint] [FIRST NumRows | ROWS m TO n] [ALL | DISTINCT] SelectList
FROM TableSpec [,...]
[WHERE SearchCondition]
[GROUP BY GroupByClause [,...] [HAVING SearchCondition]]
[ORDER BY OrderByClause [,...]]
[FOR UPDATE [OF [[Owner.]TableName.]ColumnName [,...]] [NOWAIT | WAIT Seconds]]
The syntax for a SELECT
statement that contains the set operators UNION
, UNION ALL
, MINUS
, or INTERSECT
is as follows:
SELECT [hint] [ROWS m TO n] [ALL] SelectList
FROM TableSpec [,...]
[WHERE SearchCondition]
[GROUP BY GroupByClause [,...] [HAVING SearchCondition] [,...]]
{UNION [ALL] | MINUS | INTERSECT}
SELECT [ROWS m TO n] [ALL] SelectList
FROM TableSpec [,...]
[WHERE SearchCondition]
[GROUP BY GroupByClause [,...] [HAVING SearchCondition [,...]]]
[ORDER BY OrderByClause [,...]]
The syntax for OrderByClause
is as follows:
{ColumnID|ColumnAlias|Expression} [ASC|DESC] [NULLS { FIRST|LAST }]
Parameter | Description |
---|---|
[ WithClause ] |
The WITH clause, also known as subquery factoring, enables you to assign a name to a subquery block, which can subsequently be referenced multiple times within the top-level SELECT statement. The syntax of the WITH clause is presented under "WithClause". |
hint |
Specifies a statement level optimizer hint for the SELECT statement. For more information on hints, see "Statement level optimizer hints". |
FIRST NumRows |
Specifies the number of rows to retrieve. NumRows must be either a positive INTEGER value or a dynamic parameter placeholder. The syntax for a dynamic parameter placeholder is either ? or :DynamicParameter . The value of the dynamic parameter is supplied when the statement is executed. |
ROWS m TO n |
Specifies the range of rows to retrieve where m is the first row to be selected and n is the last row to be selected. Row counting starts at row 1. The query SELECT ROWS 1 TO n returns the same rows as SELECT FIRST NumRows assuming the queries are ordered and n and NumRows have the same value.
Use either a positive |
ALL |
Prevents elimination of duplicate rows from the query result. If neither ALL nor DISTINCT is specified, ALL is the default. |
DISTINCT |
Ensures that each row in the query result is unique. All NULL values are considered equal for this comparison. Duplicate rows are not evaluated.
You cannot use |
SelectList |
Specifies how the columns of the query result are to be derived. The syntax of select list is presented under "SelectList". |
FROM TableSpec |
Identifies the tables referenced in the SELECT statement. The maximum number of tables per query is 24.
|
WHERE SearchCondition |
The WHERE clause determines the set of rows to be retrieved. Normally, rows for which SearchCondition is FALSE or NULL are excluded from processing, but SearchCondition can be used to specify an outer join in which rows from an outer table that do not have SearchCondition evaluated to TRUE with respect to any rows from the associated inner table are also returned, with projected expressions referencing the inner table set to NULL .
The unary (+) operator may follow column references of the inner table in SearchCondition to specify an outer join. See Chapter 5, "Search Conditions" for more information on search conditions. |
GROUP BY GroupByClause [,...] |
The GROUP BY clause identifies one or more expressions to be used for grouping when aggregate functions are specified in the select list and when you want to apply the function to groups of rows. The syntax and description for the GROUP BY clause is described in "GROUP BY clause". |
HAVING SearchCondition |
The HAVING clause can be used in a SELECT statement to filter groups of an aggregate result. The existence of a HAVING clause in a SELECT statement turns the query into an aggregate query. All columns referenced outside the sources of aggregate functions in any clause except the WHERE clause must be included in the GROUP BY clause.
Subqueries can be specified in the |
(+) |
A simple join (also called an inner join) returns a row for each pair of rows from the joined tables that satisfy the join condition specified in SearchCondition . Outer joins are an extension of this operator in which all rows of the outer table are returned, whether or not matching rows from the joined inner table are found. In the case no matching rows are found, any projected expressions referencing the inner table are given the value NULL . |
ORDER BY OrderByClause [,...] |
Sorts the query result rows in order by specified columns or expressions. Specify the sort key columns in order from major sort key to minor sort key.
The |
ColumnID |
Must correspond to a column in the select list. You can identify a column to be sorted by specifying its name or its ordinal number. The first column in the select list is column number 1. It is better to use a column number when referring to columns in the select list if they are not simple columns. Some examples are aggregate functions, arithmetic expressions, and constants.
A
|
ColumnAlias |
Used in an ORDER BY clause, the column alias must correspond to a column in the select list. The same alias can identify multiple columns.
|
ASC|DESC |
For each column designated in the ORDER BY clause, you can specify whether the sort order is to be ascending or descending. If neither ASC (ascending) nor DESC (descending) is specified, ascending order is used. All character data types are sorted according to the current value of the NLS_SORT session parameter. |
NULLS { FIRST|LAST } |
Valid with ORDER BY clause and is optional. If you specify ASC or DESC , NULLS FIRST or NULLS LAST must follow ASC or DESC .
Specify If you specify the |
FOR UPDATE
|
FOR UPDATE
|
SelectQuery1
|
Specifies that the results of SelectQuery1 and SelectQuery2 are to be combined, where SelectQuery1 and SelectQuery2 are general SELECT statements with some restrictions.
The The The The data type of corresponding selected entries in both The length of a column in the result is the longer length of correspondent selected values for the column. The column names of the final result are the column names of the leftmost select. You can combine multiple queries using the set operators One or both operands of a set operator can be a set operator. Multiple or nested set operators are evaluated from left to right. The set operators can be mixed in the same query. Restrictions on the
|
When you use a correlation name, the correlation name must conform to the syntax rules for a basic name. (See "Basic names".) All correlation names within one SELECT
statement must be unique. Correlation names are useful when you join a table to itself. Define multiple correlation names for the table in the FROM
clause and use the correlation names in the select list and the WHERE
clause to qualify columns from that table. See "TableSpec" for more information about correlation names.
SELECT...FOR UPDATE
is supported in a SELECT
statement that specifies a subquery, but it can be specified only in the outermost query.
If your query specifies either FIRST
NumRows
or ROWS
m
TO
n
, ROWNUM
may not be used to restrict the number of rows returned.
FIRST
NumRows
and ROWS
m
TO
n
cannot be used together in the same SELECT
statement.
Use the SELECT...INTO
statement in PL/SQL. If you use the SELECT...INTO
statement outside of PL/SQL, TimesTen accepts, but silently ignores, the syntax.
This example shows the use of a column alias (max_salary
) in the SELECT
statement:
SELECT MAX(salary) AS max_salary FROM employees WHERE employees.hire_date > '2000-01-01 00:00:00'; < 10500 > 1 row found.
This example uses two tables, orders
and lineitems
.
The orders
table and lineitems
table are created as follows:
CREATE TABLE orders(orderno INTEGER, orderdate DATE, customer CHAR(20)); CREATE TABLE lineitems(orderno INTEGER, lineno INTEGER, qty INTEGER, unitprice DECIMAL(10,2));
Thus for each order, there is one record in the orders
table and a record for each line of the order in lineitems
.
To find the total value of all orders entered since the beginning of the year, use the HAVING
clause to select only those orders that were entered on or after January 1, 2000:
SELECT o.orderno, customer, orderdate, SUM(qty * unitprice) FROM orders o, lineitems l WHERE o.orderno=l.orderno GROUP BY o.orderno, customer, orderdate HAVING orderdate >= DATE '2000-01-01';
Consider this query:
SELECT * FROM tablea, tableb WHERE tablea.column1 = tableb.column1 AND tableb.column2 > 5 FOR UPDATE;
The query locks all rows in tablea
where:
The value of tablea
.column1
equals at least one tableb
.column1
value where tableb
.column2
is greater than 5.
The query also locks all rows in tableb
where:
The value of tableb
.column2
is greater than 5.
The value of tableb
.column1
equals at least one tablea
.column1
value.
If no WHERE
clause is specified, all rows in both tables are locked.
This example demonstrates the (+) join operator:
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+);
The following query returns an error because an outer join condition cannot be connected by OR
.
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) OR t3.y = 5;
The following query is valid:
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND (t3.y = 4 OR t3.y = 5);
A condition cannot use the IN
operator to compare a column marked with (+). For example, the following query returns an error.
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND t2.y(+) IN (4,5);
The following query is valid:
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND t1.y IN (4,5);
The following query results in an inner join. The condition without the (+) operator is treated as an inner join condition.
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+) AND t1.y = t2.y;
In the following query, the WHERE
clause contains a condition that compares an inner table column of an outer join with a constant. The (+) operator is not specified and hence the condition is treated as an inner join condition.
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+) AND t2.y = 3;
For more join examples, see "JoinedTable".
The following example returns the current sequence value in the student
table.
SELECT SEQ.CURRVAL FROM student;
The following query produces a derived table because it contains a SELECT
statement in the FROM
clause.
SELECT * FROM t1, (SELECT MAX(x2) maxx2 FROM t2) tab2 WHERE t1.x1 = tab2.maxx2;
The following query joins the results of two SELECT
statements.
SELECT * FROM t1 WHERE x1 IN (SELECT x2 FROM t2) UNION SELECT * FROM t1 WHERE x1 IN (SELECT x3 FROM t3);
In the following, select all orders that have the same price as the highest price in their category.
SELECT * FROM orders WHERE price = (SELECT MAX(price) FROM stock WHERE stock.cat=orders.cat);
The next example illustrates the use of the INTERSECT
set operator. There is a department_id
value in the employees
table that is NULL
. In the departments
table, the department_id
is defined as a NOT NULL
primary key. The rows returned from using the INTERSECT
set operator do not include the row in the departments
table whose department_id
value is NULL
.
Command> SELECT department_id FROM employees INTERSECT SELECT department_id FROM departments; < 10 > < 20 > < 30 > < 40 > < 50 > < 60 > < 70 > < 80 > < 90 > < 100 > < 110 > 11 rows found. Command> SELECT DISTINCT department_id FROM employees; < 10 > < 20 > < 30 > < 40 > < 50 > < 60 > < 70 > < 80 > < 90 > < 100 > < 110 > < <NULL> > 12 rows found.
The next example illustrates the use of the MINUS
set operator by combining rows returned by the first query but not the second. The row containing the NULL
department_id
value in the employees
table is the only row returned.
Command> SELECT department_id FROM employees MINUS SELECT department_id FROM departments; < <NULL> > 1 row found.
The following example illustrates the use of the SUBSTR
expression in a GROUP BY
clause and the use of a subquery in a HAVING
clause. The first 10 rows are returned.
Command> SELECT ROWS 1 TO 10 SUBSTR (job_id, 4,10), department_id, manager_id, SUM (salary) FROM employees GROUP BY SUBSTR (job_id,4,10),department_id, manager_id HAVING (department_id, manager_id) IN (SELECT department_id, manager_id FROM employees x WHERE x.department_id = employees.department_id) ORDER BY SUBSTR (job_id, 4,10),department_id,manager_id; < ACCOUNT, 100, 108, 39600 > < ACCOUNT, 110, 205, 8300 > < ASST, 10, 101, 4400 > < CLERK, 30, 114, 13900 > < CLERK, 50, 120, 22100 > < CLERK, 50, 121, 25400 > < CLERK, 50, 122, 23600 > < CLERK, 50, 123, 25900 > < CLERK, 50, 124, 23000 > < MAN, 20, 100, 13000 > 10 rows found.
The following example locks the employees
table for update and waits 10 seconds for the lock to be available. An error is returned if the lock is not acquired in 10 seconds. The first five rows are selected.
Command> SELECT FIRST 5 last_name FROM employees FOR UPDATE WAIT 10; < King > < Kochhar > < De Haan > < Hunold > < Ernst > 5 rows found.
The next example locks the departments
table for update. If the selected rows are locked by another process, an error is returned if the lock is not available. This is because NOWAIT
is specified.
Command> SELECT FIRST 5 last_name e FROM employees e, departments d WHERE e.department_id = d.department_id FOR UPDATE OF d.department_id NOWAIT; < Whalen > < Hartstein > < Fay > < Raphaely > < Khoo > 5 rows found.
The following example uses the HR schema to illustrate the use of a subquery with the FOR UPDATE clause.
Command> SELECT employee_id, job_id FROM job_history WHERE (employee_id, job_id) NOT IN (SELECT employee_id, job_id FROM employees) FOR UPDATE;
< 101, AC_ACCOUNT >
< 101, AC_MGR >
< 102, IT_PROG >
< 114, ST_CLERK >
< 122, ST_CLERK >
< 176, SA_MAN >
< 200, AC_ACCOUNT >
< 201, MK_REP >
8 rows found.
The following examples use dynamic parameter placeholders for SELECT ROWS m TO n and SELECT FIRST.
Command> SELECT ROWS ? TO ? employee_id FROM employees;
Type '?' for help on entering parameter values.
Type '*' to end prompting and abort the command.
Type '-' to leave the parameter unbound.
Type '/;' to leave the remaining parameters unbound and execute the command.
Enter Parameter 1 (TT_INTEGER) > 1
Enter Parameter 2 (TT_INTEGER) > 3
< 100 >
< 101 >
< 102 >
3 rows found.
Command> SELECT ROWS :a TO :b employee_id FROM employees;
Type '?' for help on entering parameter values.
Type '*' to end prompting and abort the command.
Type '-' to leave the parameter unbound.
Type '/;' to leave the remaining parameters unbound and execute the command.
Enter Parameter 1 (TT_INTEGER) > 1
Enter Parameter 2 (TT_INTEGER) > 3
< 100 >
< 101 >
< 102 >
3 rows found.
Command> SELECT FIRST ? employee_id FROM employees;
Type '?' for help on entering parameter values.
Type '*' to end prompting and abort the command.
Type '-' to leave the parameter unbound.
Type '/;' to leave the remaining parameters unbound and execute the command.
Enter Parameter 1 (TT_INTEGER) > 3
< 100 >
< 101 >
< 102 >
3 rows found.
The following example illustrates the use of NULLS LAST in the ORDER BY clause. Query the employees table to find employees with a commission percentage greater than .30 or a commission percentage that is NULL. Select the first seven employees and order by commission_pct and last_name. Order commission_pct in descending order and use NULLS LAST to display rows with NULL values last in the query. Output commission_pct and last_name.
Command> SELECT FIRST 7 commission_pct,last_name FROM employees where commission_pct > .30 OR commission_pct IS NULL ORDER BY commission_pct DESC NULLS LAST,last_name;
< .4, Russell >
< .35, King >
< .35, McEwen >
< .35, Sully >
< <NULL>, Atkinson >
< <NULL>, Austin >
< <NULL>, Baer >
7 rows found.
WithClause has the following syntax:
WITH QueryName AS ( Subquery ) [, QueryName AS ( Subquery )] ...
WithClause has the following parameter:
Parameter | Description |
---|---|
QueryName AS (Subquery) | Specifies an alias for a subquery that can be used multiple times within the SELECT statement. |
Subquery factoring provides the WITH clause that enables you to assign a name to a subquery block, which can subsequently be referenced multiple times within the main SELECT query. The query name is visible to the main query and any subquery contained in the main query.
The WITH clause can only be defined as a prefix to the main SELECT statement.
Subquery factoring is useful in simplifying complex queries that use duplicate or complex subquery blocks in one or more places. In addition, TimesTen uses subquery factoring to optimize the query by evaluating and materializing the subquery block once and providing the result for each reference in the SELECT statement.
You can specify the set operators UNION, MINUS, and INTERSECT in the main query.
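For illustration, the following sketch applies UNION to two branches of the main query that both reference the same factored block. It uses the HR schema employees and departments tables; the dept_costs query name, the 150000 threshold, and the 'Executive' comparison are illustrative only.

WITH dept_costs AS (
  SELECT department_name, SUM(salary) dept_total
  FROM employees e, departments d
  WHERE e.department_id = d.department_id
  GROUP BY department_name)
SELECT department_name FROM dept_costs WHERE dept_total > 150000
UNION
SELECT department_name FROM dept_costs WHERE department_name = 'Executive';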
Restrictions on using the WITH clause:
Do not use the WITH clause in a view or materialized view definition.
Recursive subquery factoring is not supported.
Do not use the WITH clause in subqueries or derived tables.
You cannot provide a column parameter list for the query alias. For example, TimesTen does not support: WITH w1 (c1, c2) AS ...
The following example creates the query names dept_costs and avg_cost for the initial query block, then uses these names in the body of the main query.
Command> WITH dept_costs AS ( SELECT department_name, SUM(salary) dept_total FROM employees e, departments d WHERE e.department_id = d.department_id GROUP BY department_name), avg_cost AS ( SELECT SUM(dept_total)/COUNT(*) avg FROM dept_costs) SELECT * FROM dept_costs WHERE dept_total > (SELECT avg FROM avg_cost) ORDER BY department_name;
DEPARTMENT_NAME                 DEPT_TOTAL
-------------------------------
Sales                           304500
Shipping                        156400
The SelectList parameter of the SELECT statement has the following syntax:
{* | [Owner.]TableName.* | { Expression | [[Owner.]TableName.]ColumnName | [[Owner.]TableName.]ROWID | NULL } [[AS] ColumnAlias] } [,...]
The SelectList parameter of the SELECT statement has the following parameters:
Parameter | Description |
---|---|
* | Includes, as columns of the query result, all columns of all tables specified in the FROM clause. |
[Owner.]TableName.* | Includes all columns of the specified table in the result. |
Expression | An aggregate query includes a GROUP BY clause or an aggregate function. When the query is not an aggregate query, a column reference in the select list must reference a table in the FROM clause. A column reference in the select list of an aggregate query must reference a column in the GROUP BY list or be an argument of an aggregate function. |
[[Owner.]Table.]ColumnName | Includes a particular column from the named owner's indicated table. You can also specify the CURRVAL or NEXTVAL column of a sequence. See "Using CURRVAL and NEXTVAL in TimesTen Classic" for more details. |
[[Owner.]Table.]ROWID | Includes the ROWID pseudocolumn from the named owner's indicated table. |
NULL | When NULL is specified, the default for the resulting data type is VARCHAR(0). You can use the CAST function to convert the result to a different data type, as shown in the example after this table. NULL can be specified in the ORDER BY clause. |
ColumnAlias | Used in an ORDER BY clause, the column alias must correspond to a column in the select list. The same alias can identify multiple columns. |
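For instance, a NULL select list item can be cast to another data type. The following is a minimal sketch that uses the purchasing.parts table from the examples later in this section; the discount column alias is illustrative only.

SELECT partnumber, CAST(NULL AS NUMBER(8,2)) AS discount FROM purchasing.parts;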
The clauses must be specified in the order given in the syntax.
TimesTen does not support subqueries in the select list.
A result column in the select list can be derived in any of the following ways.
A result column can be taken directly from one of the tables listed in the FROM clause.
Values in a result column can be computed, using an arithmetic expression, from values in a specified column of a table listed in the FROM clause.
Values in several columns of a single table can be combined in an arithmetic expression to produce the result column values.
Aggregate functions (AVG, MAX, MIN, SUM, and COUNT) can be used to compute result column values over groups of rows. Aggregate functions can be used alone or in an expression. You can specify aggregate functions containing the DISTINCT qualifier that operate on different columns in the same table. If the GROUP BY clause is not specified, the function is applied over all rows that satisfy the query. If the GROUP BY clause is specified, the function is applied once for each group defined by the GROUP BY clause. When you use aggregate functions with the GROUP BY clause, the select list can contain aggregate functions, arithmetic expressions, and columns in the GROUP BY clause. For more details on the GROUP BY clause, see "GROUP BY clause".
A result column containing a fixed value can be created by specifying a constant or an expression involving only constants.
In addition to specifying how the result columns are derived, the select list also controls their relative position from left to right in the query result. The first result column specified by the select list becomes the leftmost column in the query result, and so on.
Result columns in the select list are numbered from left to right. The leftmost column is number 1. Result columns can be referred to by column number in the ORDER BY clause. This is especially useful to refer to a column defined by an arithmetic expression or an aggregate.
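For example, the following sketch, based on the purchasing.supplyprice query shown later in this section, orders the result by the aggregate in the second select list position:

SELECT partnumber, AVG(unitprice) FROM purchasing.supplyprice GROUP BY partnumber ORDER BY 2 DESC;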
To join a table with itself, define multiple correlation names for the table in the FROM clause and use the correlation names in the select list and the WHERE clause to qualify columns from that table.
When you use the GROUP BY clause, one answer is returned per group in accordance with the select list, as follows:
The WHERE clause eliminates rows before groups are formed.
The GROUP BY clause groups the resulting rows. See "GROUP BY clause" for more details.
The select list aggregate functions are computed for each group.
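Combining these steps, the following sketch first eliminates rows with the WHERE clause, then groups the remaining rows and computes an aggregate per group. It uses the purchasing.supplyprice table from the examples that follow; the 20-day filter is illustrative only.

SELECT partnumber, MIN(unitprice) FROM purchasing.supplyprice WHERE deliverydays < 20 GROUP BY partnumber;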
In the following example, one value, the average number of days you wait for a part, is returned:
SELECT AVG(deliverydays) FROM purchasing.supplyprice;
The part number and delivery time for all parts that take fewer than 20 days to deliver are returned by the following statement.
SELECT partnumber, deliverydays FROM purchasing.supplyprice WHERE deliverydays < 20;
Multiple rows may be returned for a single part.
The part number and average price of each part are returned by the following statement.
SELECT partnumber, AVG(unitprice) FROM purchasing.supplyprice GROUP BY partnumber;
In the following example, the join returns names and locations of California suppliers. Rows are returned in ascending order by partnumber values. Rows containing duplicate part numbers are returned in ascending order by vendorname values. The FROM clause defines two correlation names (v and s), which are used in both the select list and the WHERE clause. The vendornumber column is the only common column between vendors and supplyprice.
SELECT partnumber, vendorname, s.vendornumber,vendorcity FROM purchasing.supplyprice s, purchasing.vendors v WHERE s.vendornumber = v.vendornumber AND vendorstate = 'CA' ORDER BY partnumber, vendorname;
The following query joins table purchasing.parts to itself to determine which parts have the same sales price as the part whose serial number is '1133-P-01'.
SELECT q.partnumber, q.salesprice FROM purchasing.parts p, purchasing.parts q WHERE p.salesprice = q.salesprice AND p.serialnumber = '1133-P-01';
The next example shows how to retrieve the rowid of a specific row. The retrieved rowid value can be used later for another SELECT, DELETE, or UPDATE statement.
SELECT rowid FROM purchasing.vendors WHERE vendornumber = 123;
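For example, the retrieved rowid might later be supplied as a bound parameter to an UPDATE. This is a hypothetical sketch: :rid is an assumed placeholder holding the previously retrieved rowid value, and the city value is illustrative only.

UPDATE purchasing.vendors SET vendorcity = 'San Jose' WHERE rowid = :rid;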
The following example shows how to use a column alias to retrieve data from the table employees.
SELECT MAX(salary) AS max_salary FROM employees;
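A column alias can also be referenced in the ORDER BY clause, as described for ColumnAlias above. The following is a minimal sketch; the total_sal alias is illustrative only.

SELECT department_id, SUM(salary) AS total_sal FROM employees GROUP BY department_id ORDER BY total_sal DESC;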
The TableSpec parameter of the SELECT statement has the following syntax:
TableNameSyntax | JoinedTable | DerivedTable

TableNameSyntax::=
[Owner.]TableName [CorrelationName] |
([Owner.]TableName) [CorrelationName] |
([Owner.]TableName [CorrelationName])
A simple table specification has the following syntax:
[Owner.]TableName or ([Owner.]TableName)
The TableSpec parameter of the SELECT statement has the following parameters:
Parameter | Description |
---|---|
TableNameSyntax | Identifies a table to be referenced. Parentheses are optional. |
CorrelationName | Specifies an alias for the immediately preceding table. When accessing columns of that table elsewhere in the SELECT statement, use the correlation name instead of the actual table name. The scope of the correlation name is the SQL statement in which it is used. The correlation name must conform to the syntax rules for a basic name. See "Basic names". All correlation names within one statement must be unique. |
JoinedTable | Specifies the query that defines the table join. The syntax of JoinedTable is presented under "JoinedTable". |
DerivedTable | Specifies a table derived from the evaluation of a SELECT statement. No FIRST NumRows or ROWS m TO n clauses are allowed in this SELECT statement. The syntax of DerivedTable is presented under "DerivedTable". |
The JoinedTable parameter specifies a table derived from CROSS JOIN, INNER JOIN, LEFT OUTER JOIN or RIGHT OUTER JOIN.
The syntax for JoinedTable is as follows:
{CrossJoin | QualifiedJoin}
Where CrossJoin is:
TableSpec1 CROSS JOIN TableSpec2
And QualifiedJoin is:
TableSpec1 [JoinType] JOIN TableSpec2 ON SearchCondition
In the QualifiedJoin parameter, JoinType syntax is as follows:
{INNER | LEFT [OUTER] | RIGHT [OUTER]}
The JoinedTable parameter of the TableSpec clause of a SELECT statement has the following parameters:

Parameter | Description |
---|---|
TableSpec1, TableSpec2 | The table specifications for the tables being joined. |
JoinType | Specifies the type of join: INNER, LEFT [OUTER], or RIGHT [OUTER]. |
SearchCondition | Specifies the join condition for a qualified join. |
FULL OUTER JOIN is not supported.
A joined table can be used to replace a table in a FROM clause anywhere except in a statement that defines a materialized view. Thus, a joined table can be used in UNION, MINUS, INTERSECT, a subquery, a nonmaterialized view, or a derived table.
A subquery cannot be specified in the operand of a joined table. For example, the following statement is not supported:
SELECT * FROM regions INNER JOIN (SELECT * FROM countries) table2 ON regions.region_id=table2.region_id;
A view can be specified as an operand of a joined table.
A temporary table cannot be specified as an operand of a joined table.
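For illustration, the following sketch follows the QualifiedJoin syntax above using the HR schema regions and countries tables; the column names are assumed from the standard HR sample schema.

SELECT r.region_name, c.country_name
FROM regions r LEFT OUTER JOIN countries c
ON r.region_id = c.region_id;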