6 SQL Statements

This chapter provides information about the SQL statements available in TimesTen.

SQL statements are generally considered to be either data manipulation language (DML) statements or data definition language (DDL) statements.

DML statements modify the data in database objects. INSERT, UPDATE, and DELETE are examples of DML statements. The SELECT statement retrieves data from one or more tables or views.

DDL statements modify the database schema. CREATE TABLE and DROP TABLE are examples of DDL statements.

In addition to an alphabetical listing of all statements, this chapter also contains:

Summary of SQL statements supported in TimesTen

Table 6-1 summarizes the SQL statements in TimesTen and indicates whether each statement is supported in TimesTen Scaleout (Y) or not (N). Every statement except ALTER SEQUENCE is supported in TimesTen Classic.

Table 6-1 SQL statements supported in TimesTen

ALTER ACTIVE STANDBY PAIR: N
ALTER CACHE GROUP: Y
ALTER FUNCTION: N
ALTER PACKAGE: N
ALTER PROFILE: Y
ALTER PROCEDURE: N
ALTER REPLICATION: N
ALTER SEQUENCE: Y (not supported in TimesTen Classic)
ALTER SESSION: Y
ALTER TABLE: Y
  Unsupported clauses: aging and column-based compression.
  Unsupported data types: LOB columns are not supported in tables. LOB variables are supported in PL/SQL programs.
ALTER USER: Y
CALL: Y
COMMIT: Y
CREATE ACTIVE STANDBY PAIR: N
CREATE CACHE GROUP: Y, static read-only with incremental autorefresh
CREATE FUNCTION: Y
CREATE INDEX: Y
CREATE MATERIALIZED VIEW: Y, with restrictions
CREATE PACKAGE: Y
CREATE PACKAGE BODY: Y
CREATE PROCEDURE: Y
CREATE PROFILE: Y
CREATE REPLICATION: N
CREATE SEQUENCE: Y, with the TimesTen Scaleout-specific BATCH clause
CREATE SYNONYM: Y
CREATE TABLE: Y, including CREATE TABLE...AS SELECT
  Unsupported clauses: aging and column-based compression.
  Unsupported data types: LOBs and ROWID.
  The distribution clause is not supported for global temporary tables.
CREATE USER: Y
CREATE VIEW: Y
DELETE: Y
DROP ACTIVE STANDBY PAIR: N
DROP CACHE GROUP: Y, static read-only cache groups with incremental autorefresh
DROP FUNCTION: Y
DROP INDEX: Y
DROP MATERIALIZED VIEW: Y
DROP PACKAGE [BODY]: Y
DROP PROCEDURE: Y
DROP PROFILE: Y
DROP REPLICATION: N
DROP SEQUENCE: Y
DROP SYNONYM: Y
DROP TABLE: Y
DROP USER: Y
DROP VIEW: Y
FLUSH CACHE GROUP: N
GRANT: Y
INSERT: Y
INSERT...SELECT: Y
LOAD CACHE GROUP: Y, static read-only cache groups with incremental autorefresh
MERGE: N
REFRESH CACHE GROUP: Y, static read-only cache groups with incremental autorefresh
REVOKE: Y
ROLLBACK: Y
SELECT: Y
TRUNCATE TABLE: Y, but TRUNCATE TABLE is similar to a DDL statement that invalidates all commands that depend on the table being truncated. It is preferable to use the DELETE statement rather than the TRUNCATE statement to delete all rows in a table.
UNLOAD CACHE GROUP: Y, static read-only cache groups with incremental autorefresh
UPDATE: Y

Comments within SQL statements

A comment can appear between keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:

  • Begin the comment with a slash and an asterisk (/*). Proceed with the text of the comment. The text can span multiple lines. End the comment with an asterisk and a slash (*/). You do not need to separate the opening and terminating characters from the text by a space or line break.

  • Begin the comment with two hyphens (--). Proceed with the text of the comment. The text cannot extend to a new line. End the comment with a line break.
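
For example, both comment styles appear in the following statements, which assume the employees table used in the examples later in this chapter:

Command> SELECT /* return last names only */ last_name FROM employees;
Command> SELECT last_name        -- return last names only
           FROM employees;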

Optimizer hints

Optimizer hints are instructions that are passed to the TimesTen query optimizer. The optimizer considers these hints when choosing the best execution plan for your query. Most of the hints are supported both in TimesTen Scaleout and in TimesTen Classic. There are also hints that are supported only in TimesTen Scaleout. See "Optimizer hints supported in TimesTen Scaleout only" for information.

TimesTen supports three levels of optimizer hints:

  • Statement level optimizer hints: When specified, the optimizer considers the hint for the particular statement. See "Statement level optimizer hints" for details.

  • Transaction level optimizer hints: When specified (by calling the appropriate built-in procedure), the optimizer considers the hint for the entire transaction. See "Use optimizer hints to modify the execution plan" in the Oracle TimesTen In-Memory Database Operations Guide.

  • Connection level optimizer hints: When specified, the optimizer considers the hint for the entire connection. See "Use optimizer hints to modify the execution plan" in the Oracle TimesTen In-Memory Database Operations Guide and "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for details.

The order of precedence for optimizer hints is statement level hints, transaction level hints and then connection level hints. Table 6-2 provides a summary of the statement, transaction, and connection level optimizer hints.

Table 6-2 Summary of statement, transaction, and connection level optimizer hints

How you specify the hint:

  • Statement level: You specify the hint within the comment syntax and after a SQL VERB in a SQL statement.

  • Transaction level: You specify the hint by calling the ttOptSetFlag, or the ttOptSetOrder, or the ttOptUseIndex built-in procedure.

  • Connection level: You specify the hint in the OptimizerHint general connection attribute.

Scope of the hint:

  • Statement level: The hint is scoped to the SQL statement.

  • Transaction level: The hint is scoped to the transaction.

  • Connection level: The hint is scoped to the connection.

Effect of the autocommit setting:

  • Statement level: The autocommit setting has no effect. After the statement containing the hint is executed, the hint has no effect on future statements or queries.

  • Transaction level: The autocommit setting has an effect. You must set autocommit to off. Doing so ensures the hint is in effect for the duration of your transaction (until you issue a commit or rollback). If you do not set autocommit to off, the statement is executed in its own transaction and the hint only has an effect on the statement.

  • Connection level: The autocommit setting has no effect. The hint is in effect for the duration of the connection.

Statements for which the optimizer considers the hint:

  • Statement level: The optimizer considers the hint for the statement only.

  • Transaction level: The optimizer considers the hint for all statements in the transaction.

  • Connection level: The optimizer considers the hint for all statements in the connection.

Support in the CREATE TABLE...AS SELECT statement:

  • Statement level: The hint is supported in the CREATE TABLE...AS SELECT statement.

  • Transaction level: The hint is not supported in the CREATE TABLE...AS SELECT statement. This is a DDL statement and TimesTen implicitly commits a DDL statement.

  • Connection level: The hint is not supported in the CREATE TABLE...AS SELECT statement. This is a DDL statement and TimesTen implicitly commits a DDL statement.

Interaction with hints specified at other levels:

  • Statement level: If you specify the hint in a transaction in which transaction level optimizer hints or connection level optimizer hints are specified, the statement level optimizer hint overrides the transaction level hint or the connection level hint for the SQL statement. After TimesTen executes the SQL statement, the original transaction level optimizer hint remains in effect for the duration of the transaction, or the original connection level optimizer hint remains in effect for the duration of the connection.

  • Transaction level: The hint is in effect for the duration of the transaction. If you specify a statement level optimizer hint in a SQL statement, the statement level optimizer hint is in effect for the statement and the optimizer does not use the transaction level hint for the statement. After TimesTen executes the statement, the original transaction level optimizer hint remains in effect for the duration of the transaction. A hint specified at this level overrides the same hint specified at the connection level.

  • Connection level: The hints are in effect for the duration of the connection. The order of precedence is statement level, transaction level, and then connection level.

When to use the hint:

  • Statement level: You use the statement level optimizer hints if you want to influence the optimizer for a specific statement. You must specify the hint for each statement in which you want to influence the optimizer. This could result in multiple alterations to your statements.

  • Transaction level: You use the transaction level optimizer hints to influence the optimizer for all statements in a transaction. You do not have to specify a hint for each statement. The hint applies to all statements in the transaction. The hint can be overridden by specifying the hint at the statement level.

  • Connection level: You use the connection level optimizer hint to influence the optimizer for all statements in the connection. The hint can be overridden by specifying the hint at the transaction or at the statement level.

Statement level optimizer hints

Statement level optimizer hints are comments in a SQL statement that pass instructions to the TimesTen query optimizer. The optimizer considers these hints when choosing the best execution plan for your query. It analyzes the SQL statements and generates a query plan which is then used by the SQL execution engine to execute the query and return the data.

See "Use optimizer hints to modify the execution plan" in Oracle TimesTen In-Memory Database Operations Guide for information about statement level optimizer hints.

SQL Syntax

A SQL statement can have one comment that includes one or more statement level optimizer hints.
These hints are only supported in TimesTen Cache:
  • TT_DynamicLoadMultiplePKs
  • TT_DynamicLoadRootTbl
  • TT_DynamicPassThrough

Some hints are not supported in certain SQL statements:

  • TT_CommitDMLOnSuccess is supported in the DELETE, INSERT, and UPDATE statements. It is also valid in the INSERT...SELECT statement and must follow the SELECT keyword. This hint is supported in TimesTen Scaleout only.

  • The TT_GridQueryExec and TT_PartialResult hints are supported in the SELECT, INSERT...SELECT, and CREATE TABLE... AS SELECT SQL statements only and these hints must follow the SELECT keyword. These hints are supported in TimesTen Scaleout only.

  • The remaining hints are supported in the DELETE, INSERT, MERGE, SELECT, UPDATE, INSERT...SELECT, and CREATE TABLE...AS SELECT SQL statements and these hints must follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword.

Table 6-3 shows the proper placement of hints in a SQL statement.

You embed statement level optimizer hints in comment syntax. TimesTen supports hints in comments that span a single line and in comments that span multiple lines. If the comment that contains the hint spans one or more lines, use the comment syntax /*+...*/. If the comment that contains the hint fits on a single line, you can instead use the comment syntax --+.

Syntax:

SQL VERB {/*+ [CommentText] hint [{hint|CommentText} [...]] */ |
          --+ [CommentText] hint [{hint|CommentText} [...]] }
hint::= ScaleoutHint | CacheHint | JoinOrderHint | IndexHint | FlagHint
ScaleoutHint::= TT_CommitDMLOnSuccess({0|1})|TT_GridQueryExec({LOCAL|GLOBAL})|
                TT_PartialResult({0|1})
CacheHint::= TT_DynamicLoadMultiplePKs({0|1})|TT_DynamicLoadRootTbl({0|1})|
             TT_DynamicPassThrough(N)
JoinOrderHint::= TT_JoinOrder (CorrelationName CorrelationName [...])
IndexHint::= TT_Index (CorrelationName,IndexName,{0|1} [;...])
FlagHint::= FlagName ({0|1})
FlagName::= TT_BranchAndBound|TT_CountAsInt|TT_DynamicLoadEnable|
            TT_DynamicLoadErrorMode|TT_FirstRow|TT_ForceCompile|
            TT_GenPlan|TT_HashGb|TT_HashScan|TT_IndexedOr|TT_MergeJoin|
            TT_NestedLoop|TT_NoRemRowIdOpt|TT_Range|TT_Rowid|TT_RowLock|
            TT_ShowJoinOrder|TT_TblLock|TT_TblScan|TT_TmpHash|TT_TmpRange|
            TT_TmpTable|TT_UseBoyerMooreStringSearch

Parameters

Parameter Description

SQL VERB

SQL VERB refers to one of the keywords: DELETE, INSERT, MERGE, SELECT, or UPDATE. You embed a statement level optimizer hint in comment syntax and if the comment syntax contains a statement level optimizer hint, the comment syntax must follow the SQL VERB.

The TT_GridQueryExec and TT_PartialResult hints are valid for the SELECT keyword only.

/*+ hint */

One or more hints that are embedded in comment syntax. The comment syntax can span one or more lines. The plus sign (+) denotes the start of a statement level optimizer hint.

Make sure there is no space between the star (*) and the plus sign (+).

--+ hint

One or more hints that are embedded in comment syntax. The comment syntax can only span one line. The plus sign (+) denotes the start of a statement level optimizer hint.

Make sure there is no space between the dash (-) and the plus sign (+).

hint

A statement level optimizer hint. A SQL statement supports one or more statement level optimizer hints as one comment string. For one SQL statement, you can specify one comment that contains one or more hints and that comment must follow a DELETE, INSERT, MERGE, SELECT, or UPDATE keyword (or for TT_GridQueryExec and TT_PartialResult, the SELECT keyword). TT_CommitDMLOnSuccess must follow a DELETE, INSERT, or UPDATE keyword and in the INSERT...SELECT statement, it must follow the SELECT keyword.

If you specify more than one hint within the comment, make sure there is a space between the hints.

Statement level optimizer hints are scoped to a SQL statement and have per query semantics.

For hints other than TT_GridQueryExec, TT_PartialResult, or TT_CommitDMLOnSuccess:

  • The name and type of statement level optimizer hints map to the transaction level optimizer hints. Transaction level optimizer hints are invoked by calling the built-in procedures ttOptSetFlag, ttOptSetOrder, or ttOptUseIndex.

  • Transaction level hints are scoped to the transaction and have transaction semantics. You must set autocommit to 0 and the transaction level optimizer hint is in effect for the duration of your transaction.

  • For more information, see "ttOptSetFlag," "ttOptSetOrder" or "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference.

CommentText

Text within a comment string. You can use both statement level optimizer hints and commenting text within one comment. Make sure to include a space between the hint and the commenting text.

ScaleoutHint

ScaleoutHint refers to the TT_CommitDMLOnSuccess statement level hint as well as the TT_GridQueryExec and the TT_PartialResult statement level optimizer hints. These hints are supported in TimesTen Scaleout only.

  • See "TT_CommitDMLOnSuccess optimizer hint" for detailed information.

  • TT_GridQueryExec (LOCAL|GLOBAL) returns query results for the local element (if LOCAL) or for all elements (if GLOBAL). If K-safety is set to 2 and GLOBAL is specified, the results include the data in all elements in a replica set.

    LOCAL: Queries are executed in the local element only. If the local element does not have a full copy of the data, TimesTen Scaleout returns partial results.

    GLOBAL: TimesTen Scaleout retrieves data from all elements, including copies of the rows from all tables from all replica sets to generate the results. This results in duplicate data returned if K-safety is set to 2 or if tables have a duplicate distribution scheme.

    The default is neither local nor global. If you do not specify this hint, the query is executed in one logical data space. Exactly one full copy of the data is used to compute the query.

  • TT_PartialResult (0|1) returns an error (if 0) or partial results (if 1) when data is not available.

    0: Returns an error if the required data is not available because all elements in a replica set are unavailable. If at least one element from each replica set is available, or if the data required by the query is available, the optimizer returns the query result correctly without error. This is the default.

    1: Returns partial results if all elements in a replica set are unavailable.

SELECT /*+TT_GridQueryExec(LOCAL)*/ COUNT(*), elementId# FROM t
  GROUP BY elementId#;
SELECT /*+TT_GridQueryExec(GLOBAL)*/ COUNT(*), elementId# FROM t
  GROUP BY elementId#;
SELECT /*+TT_PartialResult(0)*/ COUNT(*), elementId# FROM t
  GROUP BY elementId#;
SELECT /*+TT_PartialResult(1)*/ COUNT(*), elementId# FROM t
  GROUP BY elementId#;

CacheHint

CacheHint refers to the supported optimizer hints for TimesTen Cache. These hints are TT_DynamicLoadMultiplePKs, TT_DynamicLoadRootTbl, and TT_DynamicPassThrough. These hints are described later in this table (in alphabetical order).
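
For example, a minimal sketch of placing a cache hint at the statement level (the cache group root table root_tab and its non-key column status are hypothetical, and the SELECT must qualify for a dynamic load as described for each hint later in this table):

Command> SELECT /*+ TT_DynamicLoadRootTbl (1) */ * FROM root_tab
           WHERE status = 'OPEN';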

JoinOrderHint::= TT_JoinOrder ( CorrelationName CorrelationName [...] )

JoinOrderHint refers to the syntax for the TT_JoinOrder statement level optimizer hint. The TT_JoinOrder hint instructs the optimizer to join your tables in a specified order. The join order is in effect for the statement only.

Specify TT_JoinOrder and two or more correlation names. Make sure to enclose the correlation names in parentheses.

CorrelationName refers to the alias of your table specified in the query or subquery of your SQL statement. Make sure to use the correlation name rather than the actual table name when using this hint.

For example, if you are joining the employees and departments tables and you specify the emps correlation name for the employees table and the depts correlation name for the departments table, then to specify the TT_JoinOrder hint in ttIsql:

Command> SELECT /*+ TT_JoinOrder
           (EMPS DEPTS)*/...

If your TT_JoinOrder hint contains syntax errors, TimesTen ignores the hint. If your TT_JoinOrder hint does not contain a closing parenthesis, then the remainder of the comment string is ignored. So if you specify additional statement level optimizer hints after the TT_JoinOrder hint, and the TT_JoinOrder hint is missing the closing parenthesis, these additional statement level optimizer hints are ignored.

You can execute the built-in procedure, ttOptSetOrder, to specify a join order for the duration of your transaction. The TT_JoinOrder statement level optimizer hint is in effect for the statement only.
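
For example, a sketch of the transaction level equivalent of the join order above, using the same EMPS and DEPTS correlation names:

Command> CALL ttOptSetOrder ('EMPS DEPTS');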

For more information on ttOptSetOrder, see "ttOptSetOrder" in the Oracle TimesTen In-Memory Database Reference.

IndexHint::= TT_INDEX (CorrelationName,IndexName,{0|1} [;...])

IndexHint refers to the syntax for the TT_INDEX statement level optimizer hint. Use the TT_INDEX hint to direct the optimizer to use or not use an index for your table. The index hint is in effect for the statement only.

CorrelationName refers to the alias of your table specified in the query or subquery of your SQL statement. Make sure to use the correlation name rather than the actual table name. If you omit the correlation name, the setting affects all tables with the specified index name.

IndexName refers to the index you want the optimizer to consider or not consider for the table. If you omit the index name, the setting applies to all indexes of the specified correlation.

Specify a value of 0 to ask the optimizer not to consider the index. Specify a value of 1 to ask the optimizer to consider the index.

For example, to direct the optimizer to use the index emp_name_ix for a query on the employees table, where you specified the correlation name e for the employees table, specify the TT_INDEX hint in ttIsql as follows:

Command> SELECT /*+ TT_INDEX
          (E,EMP_NAME_IX,1) */ ...

Use a semicolon (;) to include more than one CorrelationName, IndexName, {0|1} for the TT_INDEX hint. You must specify each of the parameters: the CorrelationName, the IndexName, and either 0 or 1.

If your TT_Index hint contains syntax errors, TimesTen ignores the hint. If your TT_Index hint does not contain a closing parenthesis, then the remainder of the comment string is ignored. So if you specify additional statement level optimizer hints after the TT_Index hint and the TT_Index hint is missing the closing parenthesis, these additional statement level optimizer hints are ignored.

You can execute the built-in procedure, ttOptUseIndex, to specify an index hint for the duration of your transaction. The TT_INDEX statement level optimizer hint is in effect for the statement only.

For more information on ttOptUseIndex, see "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference.

FlagHint

FlagHint refers to statement level optimizer flags that you enable or disable to influence the execution plan of the TimesTen query optimizer. These flags map to the flags used in the ttOptSetFlag built-in procedure.

Statement level optimizer hint flags are in effect for the statement only whereas transaction level optimizer hint flags are in effect for the duration of your transaction.
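
For example, the following sketch disables the merge join flag first for a single statement and then, through ttOptSetFlag, for the remainder of the transaction (the rest of the query is elided):

Command> SELECT /*+ TT_MergeJoin (0) */ ...
Command> CALL ttOptSetFlag ('MergeJoin', 0);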

TT_BranchAndBound

Flag that maps to the flag BranchAndBound in the ttOptSetFlag built-in procedure.

TT_CountAsInt

This hint controls the return data type for the COUNT function (used in a query). Specify 1 to have the return data type be TT_INTEGER. Specify 0 to have the return data type be TT_BIGINT. If you do not specify this hint, the default return data type is TT_BIGINT. This hint is supported at the statement and at the connection levels. See "COUNT" for information on the COUNT function.

This hint is provided for backward compatibility. If you specify the hint with a value of 1, it may result in an unexpected integer overflow. New applications should not specify this hint. This ensures TimesTen uses the default return data type of TT_BIGINT for the COUNT function.

This example specifies a value of 1 for the hint. The return data type is TT_INTEGER:

Command> describe SELECT /*+TT_CountAsInt(1)*/ COUNT (*) FROM dual;

Prepared Statement:
  Columns:
    EXP                             TT_INTEGER NOT NULL

This example specifies a value of 0 for the hint. The return data type is TT_BIGINT.

Command> describe SELECT /*+TT_CountAsInt(0)*/ COUNT (*) FROM dual;

Prepared Statement:
  Columns:
    EXP                             TT_BIGINT NOT NULL

This example does not set the optimizer hint. The default return data type is TT_BIGINT.

Command> describe SELECT COUNT (*) FROM dual;

Prepared Statement:
  Columns:
    EXP                             TT_BIGINT NOT NULL

TT_DynamicLoadEnable

Flag that maps to the flag DynamicLoadEnable in the ttOptSetFlag built-in procedure.

TT_DynamicLoadErrorMode

Flag that maps to the flag DynamicLoadErrorMode in the ttOptSetFlag built-in procedure.

TT_DynamicLoadMultiplePKs ({0|1})

TimesTen Cache optimizer hint, supported in TimesTen Classic. This hint enables (if set to 1) or disables (if set to 0) the ability to dynamically load multiple cache instances on a single table cache group. The dynamic load operation must be triggered by a qualified SELECT statement that contains a WHERE clause, in which the WHERE clause references multiple primary key values of the root table of the cache group. The default is 1. When both the TT_DynamicLoadMultiplePKs and the TT_DynamicLoadRootTbl hints are specified, the TT_DynamicLoadMultiplePKs hint takes precedence. See "Dynamically loading multiple cache instances with multiple primary keys" in the Oracle TimesTen In-Memory Database Cache Guide for details.

TT_DynamicLoadRootTbl ({0|1})

TimesTen Cache optimizer hint, supported in TimesTen Classic. This hint enables (if set to 1) or disables (if set to 0) the ability to dynamically load multiple cache instances on a single table cache group. The dynamic load operation must be triggered by a qualified SELECT statement that contains a WHERE clause, in which the WHERE clause does not reference multiple primary key values of the root table of the cache group. The default is 0. When both the TT_DynamicLoadMultiplePKs and the TT_DynamicLoadRootTbl hints are specified, the TT_DynamicLoadMultiplePKs hint takes precedence. See "Dynamically loading multiple cache instances without multiple primary keys" in the Oracle TimesTen In-Memory Database Cache Guide for details.

TT_DynamicPassThrough (N)

TimesTen Cache optimizer hint, supported in TimesTen Classic. If specified, this hint limits the number of rows that can be dynamically loaded into a TimesTen cache instance. Specifically, if a dynamic load operation triggered by a qualified SELECT statement results in a number of rows that is greater than the specified N row limit, the cache instance is not loaded and instead the query is passed to the Oracle database. The dynamic load must be triggered by a qualified SELECT statement and the cache group must not have a WHERE clause. The hint is ignored for non-SELECT statements. Set this hint to the maximum number of rows you want dynamically loaded. If you set the hint to a value less than or equal to 0, or if you do not specify the hint, the dynamic load has no row limit; there is no limit on the number of rows that can be loaded into the cache instance. See "Automatic passthrough of dynamic load to the Oracle database" in the Oracle TimesTen In-Memory Database Cache Guide for details.

TT_FirstRow

Flag that maps to the flag FirstRow in the ttOptSetFlag built-in procedure.

TT_ForceCompile

Flag that maps to the flag ForceCompile in the ttOptSetFlag built-in procedure.

TT_GenPlan

Flag that maps to the flag GenPlan in the ttOptSetFlag built-in procedure.

TT_HashGb

Flag that maps to the flag HashGb in the ttOptSetFlag built-in procedure.

TT_HashScan

Flag that maps to the flag Hash in the ttOptSetFlag built-in procedure.

TT_IndexedOr

Flag that maps to the flag IndexedOr in the ttOptSetFlag built-in procedure.

TT_MergeJoin

Flag that maps to the flag MergeJoin in the ttOptSetFlag built-in procedure.

TT_NestedLoop

Flag that maps to the flag NestedLoop in the ttOptSetFlag built-in procedure.

TT_NoRemRowIdOpt

Flag that maps to the flag NoRemRowIdOpt in the ttOptSetFlag built-in procedure.

TT_Range

Flag that maps to the flag Range in the ttOptSetFlag built-in procedure.

TT_Rowid

Flag that maps to the flag Rowid in the ttOptSetFlag built-in procedure.

TT_RowLock

Flag that maps to the flag Rowlock in the ttOptSetFlag built-in procedure.

TT_ShowJoinOrder

Flag that maps to the flag ShowJoinOrder in the ttOptSetFlag built-in procedure.

TT_TblLock

Flag that maps to the flag TblLock in the ttOptSetFlag built-in procedure.

TT_TblScan

Flag that maps to the flag Scan in the ttOptSetFlag built-in procedure.

TT_TmpHash

Flag that maps to the flag TmpHash in the ttOptSetFlag built-in procedure.

TT_TmpRange

Flag that maps to the flag TmpRange in the ttOptSetFlag built-in procedure.

TT_TmpTable

Flag that maps to the flag TmpTable in the ttOptSetFlag built-in procedure.

TT_UseBoyerMooreStringSearch

Flag that maps to the flag UseBoyerMooreStringSearch in the ttOptSetFlag built-in procedure.

Note:

For descriptions of flags discussed in the preceding table, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference.

Description

  • Embed statement level optimizer hints in comment syntax. Begin the comment with either /* or --. Follow the beginning comment syntax with a plus sign (+). The plus sign (+) signals TimesTen to interpret the comment as a list of hints. The plus sign (+) must follow immediately after the comment delimiter. (For example, after /* or after --). No space is permitted between the comment delimiter and the plus sign (+).

    In the following example, there is a space between the star (*) and the plus sign (+), so the hint is ignored:

    Command> SELECT /* + TT_TblScan (1) This hint is ignored because there is a
              space between the star (*) and the plus (+) sign. */ ...
    
  • A hint is one of the statement level optimizer hints supported by TimesTen. There can be a space between the plus sign (+) and the hint. If the comment contains multiple hints, separate the hints by at least one space. For example, to specify two hints on one line:

    Command> SELECT --+ TT_MergeJoin (0) TT_NestedLoop (1) 
              ...
    
  • You can intersperse commenting text with hints in a comment. For example,

    Command> SELECT /*+ TT_HashScan (1) This demonstrates a hint followed by a
               comment string. */ ...
    
  • TimesTen ignores hints and does not return an error if:

    • Your hint does not follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword (or, for TT_GridQueryExec or TT_PartialResult, the SELECT keyword). TT_CommitDMLOnSuccess must follow the DELETE, INSERT, or UPDATE keyword and, for INSERT...SELECT, it must follow the SELECT keyword.

    • Your hint contains misspellings or syntax errors. If you have hints that are within the same comment and some hints are correct syntactically and some hints are incorrect syntactically, TimesTen ignores the incorrect hints and accepts the correct hints.

    • You use either the TT_JoinOrder or TT_Index hint and do not supply a closing parenthesis. In this case, the remainder of the hint string is ignored.

  • For hints that conflict with each other, TimesTen uses the rightmost hint in the comment. For example, if the comment string is /*+TT_TblScan (0)...TT_TblScan (1) */, the rightmost hint, TT_TblScan(1), is used.

  • Statement level optimizer hints override conflicting transaction level optimizer hints. If you specify a transaction level optimizer hint that conflicts with a statement level optimizer hint, the statement level optimizer hint overrides the conflicting transaction level optimizer hint. For example, if you call ttOptSetFlag to enable the Range flag and then issue a SQL query that disables the statement level optimizer flag TT_Range, TimesTen disables the Range flag for that query. After the query is executed, the original Range flag setting that was in place in the transaction before the query was executed remains in effect for the duration of the transaction. For more information, see "Using statement level optimizer hints for a SELECT query". The TT_GridQueryExec, TT_PartialResult, TT_CommitDMLOnSuccess, and TT_CountAsInt hints are not supported at the transaction level.

  • Do not use statement level optimizer hints in a subquery.

  • The TimesTen query optimizer does not recognize statement level optimizer hints for passthrough statements. TimesTen passes the SQL text for passthrough statements to the Oracle database and the SQL text is processed according to the SQL rules of the Oracle database. Passthrough statements are not supported in TimesTen Scaleout.

SQL statements that support statement level optimizer hints

You can specify statement level optimizer hints in SQL statements. Not all hints are supported in all statements. You must specify the hint within comment syntax and the comment syntax must immediately follow the SQL VERB. (For example, SELECT /*+ hint */...) Table 6-3 shows the correct placement of the statement level hint. It also indicates if a hint is not supported in the statement.

Table 6-3 Placement of statement level hints in SQL statements

CREATE TABLE...AS SELECT
  Placement: CREATE TABLE...AS SELECT [hint]...
  Do not use transaction level hints with the CREATE TABLE...AS SELECT statement.
  TT_CommitDMLOnSuccess is not supported in this statement.

DELETE
  Placement: DELETE [hint]...
  The TT_GridQueryExec and TT_PartialResult hints are not supported in this statement.

INSERT
  Placement: INSERT [hint]...
  The TT_GridQueryExec and TT_PartialResult hints are not supported in this statement.

INSERT...SELECT
  Placement: INSERT...SELECT [hint]...

MERGE
  Placement: MERGE [hint]...
  The TT_GridQueryExec and TT_PartialResult hints are not supported.

SELECT
  Placement: SELECT [hint]...
  Do not specify a hint in a subquery.
  The TT_CommitDMLOnSuccess hint is not supported in this statement.

SELECT...{UNION|MINUS|INTERSECT} SELECT...
  Placement: SELECT [hint] {UNION|MINUS|INTERSECT} SELECT...
  The TT_CommitDMLOnSuccess hint is not supported in this statement.

UPDATE
  Placement: UPDATE [hint]...
  The TT_GridQueryExec and TT_PartialResult hints are not supported in this statement.
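
For example, a sketch of the placement in an INSERT...SELECT statement, where the hint follows the SELECT keyword (target_tab and source_tab are hypothetical tables):

Command> INSERT INTO target_tab
           SELECT /*+ TT_TblScan (1) */ * FROM source_tab;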

Understanding hints

Use optimizer hints to influence the TimesTen query optimizer in determining the choice of the execution plan for your query.

TT_GridQueryExec, TT_PartialResult and TT_CommitDMLOnSuccess are supported at the connection and statement levels only. This section is not valid for these hints.

To view the current transaction level optimizer hint settings, execute the built-in procedure, ttOptGetFlag. For more information on the built-in procedure, ttOptGetFlag, see "ttOptGetFlag" in the Oracle TimesTen In-Memory Database Reference.

Examples

For TT_CommitDMLOnSuccess examples, see "TT_CommitDMLOnSuccess optimizer hint" for information.

For TT_GridQueryExec and TT_PartialResult examples, see "TT_GridQueryExec optimizer hint" and "TT_PartialResult optimizer hint".

The following examples illustrate usages of statement level and transaction level optimizer hints. The TimesTen optimizer is a cost-based query optimizer and generates what it determines to be the optimal execution plan for your statement. This plan can differ from release to release. The plan is based on the indexes that exist on the referenced tables as well as the column and table statistics that are available. When you recompute statistics or change indexes, the TimesTen optimizer may change the execution plan based on the recomputed statistics and index changes. Because the execution plan may vary, these examples are included for demonstration purposes only. Examples include:

Using statement level optimizer hints for a SELECT query

View the execution plan for a query. Then use statement level optimizer hints to influence the optimizer to choose a different execution plan. Consider the query:

Command> SELECT r.region_name, c.country_name
         FROM regions r, countries c
         WHERE r.region_id = c.region_id
         ORDER BY c.region_id;

Use the ttIsql EXPLAIN command to view the plan generated by the optimizer. Note:

  • The optimizer performs two range scans using table level locking for both scans.

  • The optimizer uses the MergeJoin operation to join the two tables.

Command> EXPLAIN SELECT r.region_name, c.country_name
         FROM regions r, countries c
         WHERE r.region_id = c.region_id
         ORDER BY c.region_id;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             COUNTRIES
  IXNAME:              COUNTR_REG_FK
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
 
  STEP:                2
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             REGIONS
  IXNAME:              REGIONS
  INDEXED CONDITION:   R.REGION_ID >= C.REGION_ID
  NOT INDEXED:         <NULL>
 
  STEP:                3
  LEVEL:               1
  OPERATION:           MergeJoin
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   C.REGION_ID = R.REGION_ID
  NOT INDEXED:         <NULL>

Now use statement level optimizer hints to direct the optimizer to perform the scans using row level locking and to use a NestedLoop operation to join the tables. Set autocommit to on to illustrate that the autocommit setting has no effect because statement level optimizer hints are scoped to the SQL statement.

Command> autocommit on;
Command> EXPLAIN SELECT /*+ TT_RowLock (1), TT_TblLock (0), TT_MergeJoin (0),
         TT_NestedLoop (1) */
         r.region_name, c.country_name
         FROM regions r, countries c
         WHERE r.region_id = c.region_id
         ORDER BY c.region_id;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               3
  OPERATION:           RowLkRangeScan
  TBLNAME:             REGIONS
  IXNAME:              REGIONS
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                2
  LEVEL:               3
  OPERATION:           RowLkRangeScan
  TBLNAME:             COUNTRIES
  IXNAME:              COUNTR_REG_FK
  INDEXED CONDITION:   C.REGION_ID = R.REGION_ID
  NOT INDEXED:         <NULL>
 
  STEP:                3
  LEVEL:               2
  OPERATION:           NestedLoop
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                4
  LEVEL:               1
  OPERATION:           OrderBy
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>

Prepare the query again without statement level optimizer hints. The optimizer reverts back to the original execution plan because statement level optimizer hints are scoped to the SQL statement.

Command> EXPLAIN SELECT r.region_name, c.country_name
         FROM regions r, countries c
         WHERE r.region_id = c.region_id
         ORDER BY c.region_id;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             COUNTRIES
  IXNAME:              COUNTR_REG_FK
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                2
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             REGIONS
  IXNAME:              REGIONS
  INDEXED CONDITION:   R.REGION_ID >= C.REGION_ID
  NOT INDEXED:         <NULL>
 
  STEP:                3
  LEVEL:               1
  OPERATION:           MergeJoin
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   C.REGION_ID = R.REGION_ID
  NOT INDEXED:         <NULL>

Using on and off hinting

This example illustrates the importance of directing the optimizer to specifically enable or disable hints that perform a similar function. For example, the hash and range hints direct the optimizer to use either a hash or range access path for the table. In order to ensure the optimizer chooses the specific access path, enable one hint and disable all other related hints.

Create a table and create a hash index on the first column of the table and a range index on the second column.

Command> CREATE TABLE test (col1 NUMBER, col2 NUMBER);
Command> CREATE HASH INDEX h_index ON test (col1);
Command> CREATE INDEX hr_index ON test (col2);

Set autocommit to off and execute the built-in procedure, ttOptGetFlag, to review the current transaction level optimizer hint settings for the transaction. A setting of 1 means the flag is enabled.

Command> autocommit off;
Command> CALL ttOptGetFlag ('Hash');
< Hash, 1 >
1 row found.
Command> CALL ttOptGetFlag ('Scan');
< Scan, 1 >
1 row found.

Use the ttIsql EXPLAIN command to review the plan for a SELECT query using a WHERE clause and dynamic parameters. The optimizer uses a hash scan.

Command> EXPLAIN SELECT * FROM test WHERE col1 = ? and col2 = ?;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkHashScan
  TBLNAME:             TEST
  IXNAME:              H_INDEX
  INDEXED CONDITION:   TEST.COL1 = _QMARK_1
  NOT INDEXED:         TEST.COL2 = _QMARK_2

Use the statement level optimizer hint TT_Range to direct the optimizer to use a range scan. Note that the optimizer ignores the TT_Range hint and uses a hash scan because you did not direct the optimizer to disable the hash scan. Alter the statement and direct the optimizer to use a range scan and not use a hash scan. To accomplish this, enable the statement level optimizer hint TT_Range and disable the statement level optimizer hint TT_HashScan. The optimizer no longer ignores the TT_Range hint.

Command> EXPLAIN SELECT --+ TT_Range (1) Single line comment to set TT_Range
         * FROM TEST WHERE col1 = ? and col2 = ?;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkHashScan
  TBLNAME:             TEST
  IXNAME:              H_INDEX
  INDEXED CONDITION:   TEST.COL1 = _QMARK_1
  NOT INDEXED:         TEST.COL2 = _QMARK_2

Command> EXPLAIN SELECT /*+ TT_Range (1) TT_HashScan (0)
          Multiple line comment to enable TT_Range and disable TT_HashScan */
          * FROM TEST WHERE col1 = ? and col2 = ?;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkRangeScan
  TBLNAME:             TEST
  IXNAME:              HR_INDEX
  INDEXED CONDITION:   TEST.COL2 = _QMARK_2
  NOT INDEXED:         TEST.COL1 = _QMARK_1

Prepare the query again without using statement level optimizer hints and without issuing a commit or rollback. The optimizer uses the transaction level optimizer hints settings that were in effect before executing the query. The optimizer uses transaction level optimizer hints because statement level optimizer hints are scoped to the SQL statement.

Command> EXPLAIN SELECT * FROM TEST WHERE col1 = ? and col2 = ?;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkHashScan
  TBLNAME:             TEST
  IXNAME:              H_INDEX
  INDEXED CONDITION:   TEST.COL1 = _QMARK_1
  NOT INDEXED:         TEST.COL2 = _QMARK_2

Using TT_JoinOrder to specify a join order

Use the statement level optimizer hint TT_JoinOrder to direct the optimizer to use a specific join order. First use a transaction level optimizer hint to direct the optimizer to use a specific join order for the transaction. Then use a statement level optimizer hint to direct the optimizer to change the join order for the statement only.

Command> CALL ttOptSetOrder ('e d j');
Command> EXPLAIN SELECT *
         FROM employees e, departments d, job_history j
         WHERE e.department_id = d.department_id
         AND e.hire_date = j.start_date;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             EMPLOYEES
  IXNAME:              EMP_DEPT_FK
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                2
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             DEPARTMENTS
  IXNAME:              DEPARTMENTS
  INDEXED CONDITION:   D.DEPARTMENT_ID >= E.DEPARTMENT_ID
  NOT INDEXED:         <NULL>
 
  STEP:                3
  LEVEL:               2
  OPERATION:           MergeJoin
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   E.DEPARTMENT_ID = D.DEPARTMENT_ID
  NOT INDEXED:         <NULL>
 
  STEP:                4
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             JOB_HISTORY
  IXNAME:              JOB_HISTORY
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         E.HIRE_DATE = J.START_DATE
 
  STEP:                5
  LEVEL:               1
  OPERATION:           NestedLoop
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>

Use the statement level optimizer hint, TT_JoinOrder, to direct the optimizer to override the transaction level join order optimizer hint for the SQL statement only.

Command> EXPLAIN SELECT --+ TT_JoinOrder (e j d)
          *
          FROM employees e, departments d, job_history j
          WHERE e.department_id = d.department_id
          AND e.hire_date = j.start_date;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             EMPLOYEES
  IXNAME:              EMP_DEPT_FK
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                2
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             JOB_HISTORY
  IXNAME:              JOB_HISTORY
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         E.HIRE_DATE = J.START_DATE
 
  STEP:                3
  LEVEL:               2
  OPERATION:           NestedLoop
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                4
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             DEPARTMENTS
  IXNAME:              DEPARTMENTS
  INDEXED CONDITION:   D.DEPARTMENT_ID >= E.DEPARTMENT_ID
  NOT INDEXED:         <NULL>
 
  STEP:                5
  LEVEL:               1
  OPERATION:           MergeJoin
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   E.DEPARTMENT_ID = D.DEPARTMENT_ID
  NOT INDEXED:         <NULL>

Prepare the query again to verify that the join order that was in effect for the transaction remains in effect.

Command> EXPLAIN SELECT *
          FROM employees e, departments d, job_history j
          WHERE e.department_id = d.department_id
          AND e.hire_date = j.start_date;
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             EMPLOYEES
  IXNAME:              EMP_DEPT_FK
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
 
  STEP:                2
  LEVEL:               3
  OPERATION:           TblLkRangeScan
  TBLNAME:             DEPARTMENTS
  IXNAME:              DEPARTMENTS
  INDEXED CONDITION:   D.DEPARTMENT_ID >= E.DEPARTMENT_ID
  NOT INDEXED:         <NULL>
 
  STEP:                3
  LEVEL:               2
  OPERATION:           MergeJoin
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   E.DEPARTMENT_ID = D.DEPARTMENT_ID
  NOT INDEXED:         <NULL>
 
  STEP:                4
  LEVEL:               2
  OPERATION:           TblLkRangeScan
  TBLNAME:             JOB_HISTORY
  IXNAME:              JOB_HISTORY
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         E.HIRE_DATE = J.START_DATE
 
  STEP:                5
  LEVEL:               1
  OPERATION:           NestedLoop
  TBLNAME:             <NULL>
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>

Using the statement level optimizer hint TT_INDEX

Perform a query on the employees table that uses the index, emp_name_ix. Then use the statement level optimizer hint TT_INDEX to direct the optimizer not to use this index. First run the ttIsql command, indexes, to view the indexes for the employees table.

Command> indexes employees;
 
Indexes on table TESTUSER.EMPLOYEES:
  EMPLOYEES: unique range index on columns:
    EMPLOYEE_ID
    (referenced by foreign key index JHIST_EMP_FK on table TESTUSER.JOB_HISTORY)
  TTUNIQUE_0: unique range index on columns:
    EMAIL
  EMP_DEPT_FK: non-unique range index on columns:
    DEPARTMENT_ID
    (foreign key index references table TESTUSER.DEPARTMENTS(DEPARTMENT_ID))
  EMP_JOB_FK: non-unique range index on columns:
    JOB_ID
    (foreign key index references table TESTUSER.JOBS(JOB_ID))
  EMP_NAME_IX: non-unique range index on columns:
    LAST_NAME
    FIRST_NAME
  5 indexes found.
 
5 indexes found on 1 table.

Use the ttIsql command, EXPLAIN, to view the execution plan for a SELECT query on the employees table that uses a WHERE clause on the last_name column.

Command> EXPLAIN SELECT e.first_name
           FROM employees e
           WHERE e.last_name BETWEEN 'A' AND 'B';
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkRangeScan
  TBLNAME:             EMPLOYEES
  IXNAME:              EMP_NAME_IX
  INDEXED CONDITION:   E.LAST_NAME >= 'A' AND E.LAST_NAME <= 'B'
  NOT INDEXED:         <NULL>

Use the statement level optimizer hint, TT_INDEX, to direct the optimizer not to use the index, emp_name_ix.

Command> EXPLAIN SELECT --+ TT_INDEX (E,EMP_NAME_IX,0)
           e.first_name
           FROM employees e
           WHERE e.last_name BETWEEN 'A' AND 'B';
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           TblLkRangeScan
  TBLNAME:             EMPLOYEES
  IXNAME:              EMPLOYEES
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         E.LAST_NAME <= 'B' AND E.LAST_NAME >= 'A'

Optimizer hints supported in TimesTen Scaleout only

These optimizer hints are only supported in TimesTen Scaleout. They are valid at the statement and at the connection levels.

See "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on hints at the connection level and "Statement level optimizer hints" in this book for information on statement level optimizer hints.

TT_GridQueryExec optimizer hint

The TT_GridQueryExec optimizer hint enables you to specify whether the query should return data from the local element or from all elements, including the elements in a replica set when K-safety is set to 2.

If you do not specify this hint, the query is executed in one logical data space. It is neither local nor global. This means that exactly one full copy of the data is used to compute the query. Use this hint in cases where obtaining some result is more important than obtaining the correct result (for example, where one or more replica sets are unavailable). Valid options for this hint are LOCAL and GLOBAL.

For more information, see:

  • "TT_GridQueryExec" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for information on using this hint.

  • "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using this hint at the connection level.

  • "Statement level optimizer hints" for information on using this hint at the statement level.

This example illustrates how to use the TT_GridQueryExec(GLOBAL) hint on the dual table to determine the ids of all elements, replica sets, and dataspaces.

Command> SELECT /*+TT_GridQueryExec(GLOBAL)*/ elementId#, replicasetId#,
           dataspaceId# FROM dual ORDER BY elementId#,replicasetId#,dataspaceId#;
 
ELEMENTID#, REPLICASETID#, DATASPACEID#
< 1, 1, 1 >
< 2, 1, 2 >
< 3, 2, 1 >
< 4, 2, 2 >
< 5, 3, 1 >
< 6, 3, 2 >
6 rows found.

See "TT_GridQueryExec" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more examples.

TT_PartialResult optimizer hint

The TT_PartialResult optimizer hint enables you to specify whether the query should return partial results if some data is not available.

Use TT_PartialResult(1) to direct the query to return partial results if all elements in a replica set are unavailable.

Use TT_PartialResult(0) to direct the query to return an error if the required data is not available because all elements in a replica set are unavailable. If at least one element from each replica set is available, or if the data required by the query is available, the optimizer returns the query result correctly without error.

The default is TT_PartialResult(0).

For more information, see:

  • "TT_PartialResult" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for information on using this hint and for examples.

  • "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using this hint at the connection level.

  • "Statement level optimizer hints" for information on using this hint at the statement level.

TT_CommitDMLOnSuccess optimizer hint

Use the TT_CommitDMLOnSuccess hint to enable or disable a commit operation as part of DML execution.

  • At the statement level, TT_CommitDMLOnSuccess is used in a DML statement (DELETE, INSERT, INSERT... SELECT, and UPDATE) to enable or disable the commit behavior of the transaction when the DML operation is executed. For the INSERT...SELECT statement, specify TT_CommitDMLOnSuccess after the SELECT keyword.

    TT_CommitDMLOnSuccess is valid in DML operations only. It is not valid for queries or DDL operations and, if specified in a non-DML statement, is ignored and no error is returned. See "Statement level optimizer hints" for information on the syntax and semantics.

  • At the connection level, TT_CommitDMLOnSuccess is also used to enable or disable the commit behavior of the transaction when a DML operation is executed. However, you specify TT_CommitDMLOnSuccess as a parameter to the OptimizerHint connection attribute. See "OptimizerHint" in the Oracle TimesTen In-Memory Database Reference for information on using TT_CommitDMLOnSuccess at the connection level.

At both levels, valid options are 0 and 1. If you do not specify TT_CommitDMLOnSuccess, there are no changes to the normal commit behavior. The order of precedence is statement level followed by connection level.

The TT_CommitDMLOnSuccess commit behavior at the statement level is:

  • TT_CommitDMLOnSuccess(1) commits the current transaction if the DML statement in which the hint is specified is executed successfully. If there are open cursors at commit time, all cursors are closed and the transaction is committed. If the statement with this hint fails, the transaction is not committed.

  • TT_CommitDMLOnSuccess(0) disables the commit of the current transaction if the DML statement in which the hint is specified is executed successfully.
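
For example, a minimal sketch at the statement level, assuming the mytable table created in the example at the end of this section and autocommit set to 0:

Command> INSERT /*+TT_CommitDMLOnSuccess(1)*/ INTO mytable
           VALUES (1, 'abc');

If the INSERT succeeds, the current transaction is committed even though autocommit is set to 0. If the INSERT fails, the transaction is not committed.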

Table 6-4 shows the commit behavior when not setting TT_CommitDMLOnSuccess as well as setting TT_CommitDMLOnSuccess to 0 and 1 at the statement and connection levels. The table shows the commit behavior when autocommit is set to 0.

Table 6-5 shows the commit behavior when not setting TT_CommitDMLOnSuccess as well as setting TT_CommitDMLOnSuccess to 0 and 1 at the statement and connection levels. The table shows the commit behavior when autocommit is set to 1.

Table 6-4 TT_CommitDMLOnSuccess commit behavior: Autocommit 0

Not set at the statement level, not set at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is not committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

Not set at the statement level, set to 0 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is not committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

Not set at the statement level, set to 1 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed due to the connection level setting of 1.

  • If there are open cursors and DML execution is successful, the cursors are closed and the transaction is committed due to the connection level setting of 1.

  • If DML execution is not successful, the transaction is not committed.

Set to 0 at the statement level, not set at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is not committed up until the DML statement that includes the hint due to the autocommit 0 setting. This DML statement is not committed due to disabling the commit behavior at the statement level. (This is the same behavior as when TT_CommitDMLOnSuccess is not set at the statement level.)

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed. (This is the same behavior as when TT_CommitDMLOnSuccess is not set at the statement level.)

  • If DML execution is not successful, the transaction is not committed.

Set to 0 at the statement level, set to 0 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is not committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 0 at the statement level, set to 1 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed up until the DML statement that includes the hint. This DML statement is not committed due to disabling the commit behavior at the statement level.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 1 at the statement level, not set at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 1 at the statement level, set to 0 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 1 at the statement level, set to 1 at the connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

Table 6-5 TT_CommitDMLOnSuccess commit behavior: Autocommit 1

Blank Not set at connection level Set to 0 at connection level Set to 1 at connection level

Not set at statement level

  Not set at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

  Set to 0 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

  Set to 1 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, the cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 0 at statement level

  Not set at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed. (This is the same behavior as when TT_CommitDMLOnSuccess is not set at the statement level.)

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed. (This is the same behavior as when TT_CommitDMLOnSuccess is not set at the statement level.)

  • If DML execution is not successful, the transaction is not committed.

  Set to 0 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

  Set to 1 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, the cursors are not closed and the transaction is not committed.

  • If DML execution is not successful, the transaction is not committed.

Set to 1 at statement level

  Not set at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

  Set to 0 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.

  Set to 1 at connection level:

  • If there are no open cursors, and DML execution is successful, the transaction is committed.

  • If there are open cursors and DML execution is successful, cursors are closed and the transaction is committed.

  • If DML execution is not successful, the transaction is not committed.


These examples illustrate the use of the TT_CommitDMLOnSuccess optimizer hint:

Setting TT_CommitDMLOnSuccess to 1

This example first creates the mytable table. It then sets autocommit to 0 and inserts a row into the mytable table. A second connection (conn2) connects to the database and issues a SELECT query against the mytable table. The query returns 0 rows. The ttIsql use command returns the application to the first connection (database1), which issues a second INSERT operation with TT_CommitDMLOnSuccess set to 1 at the statement level. A second ttIsql use command returns the application to the conn2 connection, where a SELECT query shows that two rows have been inserted into the mytable table. This example illustrates that specifying TT_CommitDMLOnSuccess(1) commits the transaction after the successful execution of the second INSERT operation (the statement that sets the hint).

Command> CREATE TABLE mytable (col1 TT_INTEGER, col2 VARCHAR2(4000));
Command> autocommit 0;
Command> INSERT INTO mytable VALUES (10, 'ABC');
1 row inserted.

Establish a second connection (conn2)

Command> connect as conn2;
Using the connection string of connection database1 to connect...
...
(Default setting AutoCommit=1)

Issue a SELECT query and expect 0 rows, because the INSERT on the first connection has not yet been committed (autocommit is set to 0 on that connection).

conn2: Command> SELECT * FROM mytable;
0 rows found.

Return to the first connection (database1) and issue an INSERT operation with TT_CommitDMLOnSuccess set to 1.

conn2: Command> use database1;
database1: Command> INSERT /*+TT_CommitDMLOnSuccess(1)*/ 
                       INTO mytable VALUES (10, 'ABC');
1 row inserted.

Return to the second connection (conn2) and issue a SELECT query. Expect two rows (one for each INSERT statement). The transaction, which contains both INSERT operations, is committed because the TT_CommitDMLOnSuccess statement-level hint is set to 1 on the second INSERT and that INSERT executes successfully.

database1: Command> use conn2
conn2: Command> SELECT * FROM mytable;
< 10, ABC >
< 10, ABC >
2 rows found.

Using TT_CommitDMLOnSuccess at connection level

This example first creates the mytable table. It then uses PL/SQL to insert 1000 rows into the table. A second connection (conn2) is then established with TT_CommitDMLOnSuccess set to 1 at the connection level. Various operations are performed to illustrate the behavior of TT_CommitDMLOnSuccess at both the statement and connection levels.

Command>  CREATE TABLE mytable (col1 TT_INTEGER NOT NULL PRIMARY KEY, 
            col2 VARCHAR2 (4000));
Command> BEGIN
           FOR i in 1..1000
           LOOP
            INSERT INTO mytable VALUES (i,i);
          END LOOP;
        END;
        /
PL/SQL procedure successfully completed.

Establish a second connection (conn2) and connect setting TT_CommitDMLOnSuccess at the connection level to 1.

Command> CONNECT adding "OptimizerHint=TT_CommitDMLOnSuccess(1)" as conn2;
Connection successful: 
...

Set autocommit to 0 and issue a DELETE operation.

conn2: Command> autocommit 0;
conn2: Command> DELETE FROM mytable WHERE col1=1000;
1 row deleted.

Return to the original connection (database1) and issue a SELECT query to see if the DELETE operation was committed. The operation was committed due to the TT_CommitDMLOnSuccess setting of 1 at the connection level.

conn2: Command> use database1;
database1: Command> SELECT * FROM mytable WHERE col1=1000;
0 rows found.

Return to the second connection (conn2) and issue an INSERT operation. Then return to the original connection (database1). The transaction containing the INSERT operation was committed.

database1: Command> use conn2;
conn2: Command> INSERT INTO mytable VALUES (1000,1000);
1 row inserted.
conn2: Command> use database1
database1: Command> SELECT * FROM mytable WHERE col1=1000;
< 1000, 1000 >
1 row found.

Return to the second connection (conn2) and issue a DELETE operation, followed by an INSERT operation, and then a second INSERT operation with TT_CommitDMLOnSuccess set to 0 at the statement level.

database1: Command> use conn2;
conn2: Command> DELETE FROM mytable WHERE col1=1000;
1 row deleted.
conn2: Command> INSERT INTO mytable VALUES (1001,1001);
1 row inserted.
conn2: Command> INSERT /*+TT_CommitDMLOnSuccess(0)*/ INTO mytable 
                  VALUES (1002,1002);
1 row inserted.

Issue a SELECT query and notice the results of the query. The one DELETE operation and the two INSERT operations were successful.

conn2: Command> SELECT * FROM mytable where col1 >= 1000;
< 1001, 1001 >
< 1002, 1002 >
2 rows found.

Return to the original connection (database1) and issue the same SELECT query. Observe that the one DELETE statement and the first INSERT operation were committed. This is due to the TT_CommitDMLOnSuccess setting of 1 at the connection level. The second INSERT statement was not committed because TT_CommitDMLOnSuccess is set to 0 at the statement level for that statement.

conn2: Command> use database1;
database1: Command> SELECT * FROM mytable where col1 >= 1000;
< 1001, 1001 >
1 row found.

Return to the second connection (conn2) and issue a third INSERT operation. Then issue a SELECT query and observe the results.

database1: Command> use conn2;
conn2: Command> INSERT INTO mytable VALUES (1003,1003);
1 row inserted.
conn2: Command> SELECT * FROM mytable where col1 >= 1000 ORDER BY col1;
< 1001, 1001 >
< 1002, 1002 >
< 1003, 1003 >
3 rows found.

Return to the original connection (database1) and issue the same SELECT query. Note the results are the same as in the conn2 connection. The transaction is committed due to the TT_CommitDMLOnSuccess setting of 1 at the connection level and the successful execution of the second and third INSERT operations.

conn2: Command> use database1
database1: Command> SELECT * FROM mytable where col1 >= 1000 ORDER BY col1;
< 1001, 1001 >
< 1002, 1002 >
< 1003, 1003 >
3 rows found.

ALTER ACTIVE STANDBY PAIR

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

You can change an active standby pair by:

  • Adding or dropping a subscriber database

  • Altering store attributes

    Only the PORT and TIMEOUT attributes can be set for subscribers.

  • Including tables, sequences or cache groups in the replication scheme

  • Excluding tables, sequences or cache groups from the replication scheme

See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide.

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

ALTER ACTIVE STANDBY PAIR { 
      SubscriberOperation | 
      StoreOperation | InclusionOperation |
      NetworkOperation } [...]

Syntax for SubscriberOperation:

{ADD | DROP } SUBSCRIBER FullStoreName

Syntax for StoreOperation:

ALTER STORE FullStoreName SET StoreAttribute

Syntax for InclusionOperation:

    [{ INCLUDE | EXCLUDE }{TABLE [[Owner.]TableName [,...]]|
         CACHE GROUP [[Owner.]CacheGroupName [,...]]|
         SEQUENCE [[Owner.]SequenceName [,...]]} [,...]]

Syntax for NetworkOperation:

ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
      { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost }
          PRIORITY Priority } [...] 
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
      { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...] 

Parameters

Parameter Description

ADD SUBSCRIBER FullStoreName

Indicates a subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

DROP SUBSCRIBER FullStoreName

Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

ALTER STORE FullStoreName SET StoreAttribute

Indicates changes to the attributes of a database. Only the PORT and TIMEOUT attributes can be set for subscribers. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

See "CREATE ACTIVE STANDBY PAIR" For information on StoreAttribute clauses.

FullStoreName

The database, specified as one of the following:

  • SELF

  • The prefix of the database file name

For example, if the database path is directory/subdirectory/data.ds0, then data is the database name that should be used.

This is the database file name specified in the DataStore attribute of the DSN description with optional host ID in the form:

DataStoreName [ON Host]

Host can be either an IP address or a literal host name assigned to one or more IP addresses, as described in "Configuring the network" in Oracle TimesTen In-Memory Database Replication Guide. Host names containing special characters must be surrounded by double quotes. For example: "MyHost-500".

{INCLUDE | EXCLUDE}
{TABLE [[Owner.]TableName [,...]] |
 CACHE GROUP [[Owner.]CacheGroupName [,...]] |
 SEQUENCE [[Owner.]SequenceName [,...]]} [,...]

Includes in or excludes from replication the tables, sequences or cache groups listed.

INCLUDE adds the tables, sequences or cache groups to the replication scheme. Use one INCLUDE clause for each object type (table, sequence or cache group).

EXCLUDE removes the tables, sequences or cache groups from replication. Use one EXCLUDE clause for each object type (table, sequence or cache group).

You cannot use the EXCLUDE clause for AWT cache groups.

ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName

Adds NetworkOperation to replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ADD ROUTE clause, each master database is a subscriber of the other master database and each read-only subscriber is a subscriber of both master databases.

Can be specified more than once.

For FullStoreName, "ON host" must be specified.

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName

Drops NetworkOperation from replication scheme.

Can be specified more than once.

For FullStoreName, "ON host" must be specified.

MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost

MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify the address in dot notation or canonical format, or in colon notation for IPv6.

Clause can be specified more than once. Valid for both ADD and DROP ROUTE MASTER.

PRIORITY Priority

Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.

The PRIORITY clause is required in the NetworkOperation syntax and follows the MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost clause.

Description

  • You must stop the replication agent before altering an active standby pair. The exceptions are for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL and DDL_REPLICATION_ACTION attributes. See "ALTER SESSION" for more information.

  • You may only alter the active standby pair replication scheme on the active database. See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide for more information.

  • You may not use ALTER ACTIVE STANDBY PAIR when using Oracle Clusterware with TimesTen. See "Restricted commands and SQL statements" in Oracle TimesTen In-Memory Database Replication Guide for more information.

    Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.

  • Use ADD SUBSCRIBER FullStoreName to add a subscriber to the replication scheme.

  • Use DROP SUBSCRIBER FullStoreName to drop a subscriber from the replication scheme.

  • Use the INCLUDE or EXCLUDE clause to include the listed tables, sequences or cache groups in the replication scheme or to exclude them from the replication scheme. Use one INCLUDE or EXCLUDE clause for each object type (table, sequence or cache group). The ALTER ACTIVE STANDBY statement is not necessary for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL and DDL_REPLICATION_ACTION attributes. See "ALTER SESSION" for more information. However, if DDL_REPLICATION_LEVEL is 2 or greater and DDL_REPLICATION_ACTION="EXCLUDE", use the INCLUDE clause to include replicated objects into the replication scheme.

  • Do not use the EXCLUDE clause for AWT cache groups.

  • When DDL_REPLICATION_LEVEL is 2 or greater, the INCLUDE clause can only be used with empty tables on the active database. The contents of the corresponding tables on the standby and any subscribers will be truncated before the table is added to the replication scheme.

Examples

Add a subscriber to the replication scheme.

ALTER ACTIVE STANDBY PAIR
    ADD SUBSCRIBER rep4;

Drop two subscribers from the replication scheme.

ALTER ACTIVE STANDBY PAIR
    DROP SUBSCRIBER rep3
    DROP SUBSCRIBER rep4;

Alter the store attributes of the rep3 and rep4 databases.

ALTER ACTIVE STANDBY PAIR
    ALTER STORE rep3 SET PORT 23000 TIMEOUT 180
    ALTER STORE rep4 SET PORT 23500 TIMEOUT 180;

Add a table, a sequence and two cache groups to the replication scheme.

ALTER ACTIVE STANDBY PAIR
    INCLUDE TABLE my.newtab
    INCLUDE SEQUENCE my.newseq
    INCLUDE CACHE GROUP my.newcg1, my.newcg2;

Add NetworkOperation clause to active standby pair:

ALTER ACTIVE STANDBY PAIR
ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2"
MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1;

ALTER CACHE GROUP

The ALTER CACHE GROUP statement modifies the state, interval and mode of AUTOREFRESH for a cache group.

Updates on the Oracle Database tables can be propagated back to the TimesTen cache group with the use of AUTOREFRESH. AUTOREFRESH can be enabled when the cache group is a user managed cache group or is defined as READONLY with an AUTOREFRESH clause.

Any values or states set by ALTER CACHE GROUP are persistent. They are stored in the database and survive daemon and cache agent restarts.

Required privilege

No privilege is required for the cache group owner.

ALTER ANY CACHE GROUP for another user's cache group.

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

This statement changes the AUTOREFRESH mode of the cache group, which determines which rows are updated during an autorefresh operation. You cannot use the ALTER CACHE GROUP...SET AUTOREFRESH MODE clause in TimesTen Scaleout.

ALTER CACHE GROUP [Owner.]GroupName
        SET AUTOREFRESH MODE
        {INCREMENTAL | FULL}

This statement changes the AUTOREFRESH interval on the cache group. You cannot use the ALTER CACHE GROUP...SET AUTOREFRESH INTERVAL clause in TimesTen Scaleout.

ALTER CACHE GROUP [Owner.]GroupName 
        SET AUTOREFRESH INTERVAL IntervalValue 
        {MINUTE[S] | SECOND[S] | MILLISECOND[S]}

This statement alters the AUTOREFRESH state:

ALTER CACHE GROUP [Owner.]GroupName 
        SET AUTOREFRESH STATE
        {ON | OFF | PAUSED}

Parameters

Parameter Description

[Owner.]GroupName

Name of the cache group to be altered.

AUTOREFRESH

Indicates that changes to the Oracle Database tables should be automatically propagated to TimesTen.

MODE

Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle Database since the last propagation. If the FULL clause is specified, TimesTen updates all rows in the cache with each autorefresh. The default mode is INCREMENTAL.

INTERVAL

IntervalValue

An integer value that specifies how often AUTOREFRESH should be scheduled, in minutes, seconds or milliseconds. The default value is five minutes. An autorefresh interval set to 0 milliseconds enables continuous autorefresh, where the next autorefresh cycle is scheduled immediately after the last autorefresh cycle has ended. See "Automatically refreshing a cache group" in the Oracle TimesTen In-Memory Database Cache Guide for more information.

If the specified interval is not long enough for an AUTOREFRESH to complete, a runtime warning is generated and the next AUTOREFRESH waits until the current one finishes. An informational message is generated in the support log if the wait queue reaches 10.

STATE

Specifies whether AUTOREFRESH should be changed to on, off or paused. By default, the AUTOREFRESH STATE is ON.

ON

AUTOREFRESH is scheduled to occur at the specified interval.

OFF

A scheduled AUTOREFRESH is canceled, and TimesTen does not try to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, the first refresh is FULL.

PAUSED

A scheduled AUTOREFRESH is canceled, but TimesTen tries to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, a full refresh may not be necessary.

Description

  • A refresh does not occur immediately after issuing ALTER CACHE GROUP...SET AUTOREFRESH STATE. This statement only changes the state of AUTOREFRESH. When the transaction that contains the ALTER CACHE GROUP statement is committed, the cache agent is notified to schedule an AUTOREFRESH immediately, but the commit goes through without waiting for the completion of the refresh. The scheduling of the autorefresh operation is part of the transaction, but the refresh itself is not.

  • If you issue an ALTER CACHE GROUP... SET AUTOREFRESH STATE OFF statement and there is an autorefresh operation currently running, then:

    • If LockWait interval is 0, the ALTER statement fails with a lock timeout error.

    • If LockWait interval is nonzero, then the current autorefresh transaction is rolled back, and the ALTER statement continues. This affects all cache groups with the same autorefresh interval.

  • Replication cannot occur between cache groups with AUTOREFRESH and cache groups without AUTOREFRESH.

  • If the ALTER CACHE GROUP statement is part of a transaction that is being replicated, and if the replication scheme has the RETURN TWOSAFE attribute, the transaction may fail.

  • You cannot execute the ALTER CACHE GROUP statement under the serializable isolation level. An error message is returned if you attempt to do so.
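
Examples

The following statements are an illustrative sketch; they assume a read-only cache group named cacheadmin.customers_cg (a hypothetical name) that was created with the AUTOREFRESH clause. Following the syntax above, each SET clause is issued in its own ALTER CACHE GROUP statement. The statements pause autorefresh, change the autorefresh interval to 30 seconds, and then turn autorefresh back on.

ALTER CACHE GROUP cacheadmin.customers_cg
    SET AUTOREFRESH STATE PAUSED;

ALTER CACHE GROUP cacheadmin.customers_cg
    SET AUTOREFRESH INTERVAL 30 SECONDS;

ALTER CACHE GROUP cacheadmin.customers_cg
    SET AUTOREFRESH STATE ON;

Because the state was set to PAUSED rather than OFF, TimesTen tries to maintain the information necessary for an INCREMENTAL refresh, so a full refresh may not be necessary when the state is set back to ON.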

ALTER FUNCTION

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

The ALTER FUNCTION statement recompiles a standalone stored function. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.

To recompile a function that is part of a package, recompile the package using the ALTER PACKAGE statement.

Required privilege

No privilege is required for the PL/SQL function owner.

ALTER ANY PROCEDURE for another user's function.

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

ALTER FUNCTION [Owner.]FunctionName COMPILE
      [CompilerParametersClause [...]] 
      [REUSE SETTINGS]

Parameters

Parameter Description

[Owner.]FunctionName

Name of the function to be recompiled.

COMPILE

Required keyword that causes recompilation of the function. If the function does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages.

CompilerParametersClause

Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL and NLS_LENGTH_SEMANTICS.

You can specify each parameter once in the statement.

If you omit a parameter from this clause and you specify REUSE SETTINGS, then if a value was specified for the parameter in an earlier compilation, TimesTen uses that earlier value. If you omit a parameter and either you do not specify REUSE SETTINGS or no value has been specified for the parameter in an earlier compilation, then TimesTen obtains the value for the parameter from the session environment.

REUSE SETTINGS

Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS, TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified.

Description

  • The ALTER FUNCTION statement does not change the declaration or definition of an existing function. To redeclare or redefine a function, use the CREATE FUNCTION statement.

  • TimesTen first recompiles objects upon which the function depends, if any of those objects are invalid.

  • TimesTen also invalidates any objects that depend on the function, such as functions that call the recompiled function or package bodies that define functions that call the recompiled function.

  • If TimesTen recompiles the function successfully, then the function becomes valid. If recompiling the function results in compilation errors, then TimesTen returns an error and the function remains invalid. Use the ttIsql command SHOW ERRORS to display compilation errors.

  • During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS clause.
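
Examples

This example is an illustrative sketch; it assumes a standalone function named get_balance (a hypothetical name) owned by the current user. The statement recompiles the function, raises the PL/SQL optimization level, and reuses any other compiler settings from the earlier compilation:

ALTER FUNCTION get_balance COMPILE PLSQL_OPTIMIZE_LEVEL = 3 REUSE SETTINGS;

If the recompilation results in errors, use the ttIsql command SHOW ERRORS to display the compiler error messages.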

See also

ALTER PACKAGE

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

The ALTER PACKAGE statement explicitly recompiles a package specification, package body, or both. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors.

This statement recompiles all package objects together. You cannot use the ALTER PROCEDURE or ALTER FUNCTION statement to individually recompile a procedure or function that is part of a package.

Required privilege

No privilege is required for the package owner.

ALTER ANY PROCEDURE for another user's package.

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

ALTER PACKAGE [Owner.]PackageName COMPILE
      [PACKAGE|SPECIFICATION|BODY]
      [CompilerParametersClause [...]] 
      [REUSE SETTINGS]

Parameters

Parameter Description

[Owner.]PackageName

Name of the package to be recompiled.

COMPILE

Required clause used to force the recompilation of the package specification, package body, or both.

[PACKAGE|SPECIFICATION|BODY]

Specify PACKAGE to recompile both the package specification and the body. Specify SPECIFICATION to recompile the package specification. Specify BODY to recompile the package body.

PACKAGE is the default.

CompilerParametersClause

Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL and NLS_LENGTH_SEMANTICS.

You can specify each parameter once in the statement.

If you omit a parameter from this clause and you specify REUSE SETTINGS, then if a value was specified for the parameter in an earlier compilation, TimesTen uses that earlier value. If you omit a parameter and either you do not specify REUSE SETTINGS or no value has been specified for the parameter in an earlier compilation, then TimesTen obtains the value for the parameter from the session environment.

REUSE SETTINGS

Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS, TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified.

Description

  • When you recompile a package specification, TimesTen invalidates local objects that depend on the specification, such as procedures that call procedures or functions in the package. The body of the package also depends on the specification. If you subsequently reference one of these dependent objects without first explicitly recompiling it, then TimesTen recompiles it implicitly at runtime.

  • When you recompile a package body, TimesTen does not invalidate objects that depend on the package specification. TimesTen first recompiles objects upon which the body depends, if any of those objects are invalid. If TimesTen recompiles the body successfully, then the body becomes valid.

  • When you recompile a package, both the specification and the body are explicitly recompiled. If there are no compilation errors, then the specification and body become valid. If there are compilation errors, then TimesTen returns an error and the package remains invalid.
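
Examples

These statements are an illustrative sketch; they assume a package named emp_mgmt (a hypothetical name) owned by the current user. The first statement recompiles both the package specification and the package body (PACKAGE is the default). The second statement recompiles only the package body and sets a PL/SQL persistent compiler parameter:

ALTER PACKAGE emp_mgmt COMPILE PACKAGE;

ALTER PACKAGE emp_mgmt COMPILE BODY PLSQL_OPTIMIZE_LEVEL = 2;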

See also

ALTER PROCEDURE

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

The ALTER PROCEDURE statement recompiles a standalone stored procedure. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.

To recompile a procedure that is part of a package, recompile the package using the ALTER PACKAGE statement.

Required privilege

No privilege is required for the procedure owner.

ALTER ANY PROCEDURE for another user's procedure.

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

ALTER PROCEDURE [Owner.]ProcedureName COMPILE
      [CompilerParametersClause [...]] 
      [REUSE SETTINGS]

Parameters

Parameter Description

[Owner.]ProcedureName

Name of the procedure to be recompiled.

COMPILE

Required keyword that causes recompilation of the procedure. If the procedure does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages.

CompilerParametersClause

Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL and NLS_LENGTH_SEMANTICS.

You can specify each parameter once in the statement.

If you omit a parameter from this clause and you specify REUSE SETTINGS, then if a value was specified for the parameter in an earlier compilation, TimesTen uses that earlier value. If you omit a parameter and either you do not specify REUSE SETTINGS or no value has been specified for the parameter in an earlier compilation, then TimesTen obtains the value for the parameter from the session environment.

REUSE SETTINGS

Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS, TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified.

Description

  • The ALTER PROCEDURE statement does not change the declaration or definition of an existing procedure. To redeclare or redefine a procedure, use the CREATE PROCEDURE statement.

  • TimesTen first recompiles objects upon which the procedure depends, if any of those objects are invalid.

  • TimesTen also invalidates any objects that depend on the procedure, such as procedures that call the recompiled procedure or package bodies that define procedures that call the recompiled procedure.

  • If TimesTen recompiles the procedure successfully, then the procedure becomes valid. If recompiling the procedure results in compilation errors, then TimesTen returns an error and the procedure remains invalid. Use the ttIsql command SHOW ERRORS to display compilation errors.

  • During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS clause.

Examples

Query the system view USER_PLSQL_OBJECT_SETTINGS to check PLSQL_OPTIMIZE_LEVEL for procedure query_emp. Alter query_emp by changing PLSQL_OPTIMIZE_LEVEL to 3. Verify results.

Command> SELECT PLSQL_OPTIMIZE_LEVEL FROM user_plsql_object_settings WHERE  name = 'QUERY_EMP';
< 2 >
1 row found.

Command> ALTER PROCEDURE query_emp COMPILE PLSQL_OPTIMIZE_LEVEL = 3;
 
Procedure altered.
 
Command> SELECT PLSQL_OPTIMIZE_LEVEL FROM user_plsql_object_settings WHERE  name = 'QUERY_EMP';
< 3 >
1 row found.

See also

ALTER PROFILE

The ALTER PROFILE statement adds, modifies, or removes one or more password parameters in a profile.

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

ALTER PROFILE profile LIMIT password_parameters

password_parameters::=
[FAILED_LOGIN_ATTEMPTS password_parameter_options]
[PASSWORD_LIFE_TIME password_parameter_options]
[PASSWORD_REUSE_TIME password_parameter_options]
[PASSWORD_REUSE_MAX password_parameter_options]
[PASSWORD_LOCK_TIME password_parameter_options]
[PASSWORD_GRACE_TIME password_parameter_options]
[{PASSWORD_COMPLEXITY_CHECKER|PASSWORD_VERIFY_FUNCTION} password_checker_options]

password_parameter_options::=
UNLIMITED|DEFAULT|constant

password_checker_options::=
function|NULL|DEFAULT

function::=
TT_VERIFY_FUNCTION|TT_STRONG_VERIFY_FUNCTION|TT_STIG_VERIFY_FUNCTION

Parameters

Parameter Description

profile

Name of the profile.

LIMIT password_parameters

The LIMIT clause sets the limits for the password parameters. The LIMIT keyword is required.

The password parameters consist of the name of the password parameter and the value (or limit) for the password parameter. This includes the password complexity checker functions. All the parameters (with the exception of FAILED_LOGIN_ATTEMPTS and PASSWORD_REUSE_MAX) set lengths of time and are interpreted in number of days. You can use a decimal value (for example, you can use .0833 to denote approximately two hours). The minimum value is 1 second. The maximum value is 106,751,991 days. The constant value must be expressed in days. For example, to set a value of 5 minutes, specify the constant value of 0.0034722222222222 (5/1440 days). For FAILED_LOGIN_ATTEMPTS and PASSWORD_REUSE_MAX, you must specify an integer.

If you do not specify a password parameter after the LIMIT clause, the limit for that password parameter is based on the limit defined in the DEFAULT profile. In addition, if you only specify the LIMIT keyword with no additional parameters, the limits for the profile are based on the limits of the DEFAULT profile.

FAILED_LOGIN_ATTEMPTS

Specifies the number of consecutive failed attempts to connect to the database by a user before that user's account is locked.

PASSWORD_LIFE_TIME

Specifies the number of days that a user can use the same password for authentication. If you also set a value for PASSWORD_GRACE_TIME, then the password expires if it is not changed within the grace period. In such a situation, future connections to the database are rejected.

PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX

These two parameters must be used together.

  • PASSWORD_REUSE_TIME specifies the number of days that must pass before a user can reuse a password. For example, if you specify a value of 30, then after 30 days the user can reuse a previous password.

  • PASSWORD_REUSE_MAX specifies the number of password changes that are required before the current password can be reused.

You must specify a value for both parameters for them to have any effect. Specifically:

  • If you specify a value for both parameters: A user cannot reuse a password until the password has been changed the number of times specified for PASSWORD_REUSE_MAX during the number of days specified for PASSWORD_REUSE_TIME. For example, if you specify a value of 30 for PASSWORD_REUSE_TIME and a value of 10 for PASSWORD_REUSE_MAX, then the user can reuse the password after 30 days if the password has been changed 10 times.

  • If you specify a value for one parameter and specify a value of UNLIMITED for the second parameter, then the user can never reuse a password.

  • If you specify a value of UNLIMITED for both parameters, then TimesTen ignores both values, indicating that the password can be reused.

PASSWORD_LOCK_TIME

Specifies the number of days the user account is locked after the specified number of consecutive failed connection attempts.

PASSWORD_GRACE_TIME

Specifies the number of days after the grace period begins during which TimesTen issues a warning, but allows the connection to the database. If the password is not changed during the grace period, the password expires. This parameter is associated with the PASSWORD_LIFE_TIME parameter.

UNLIMITED

Indicates that there is no limit for the password parameter. If you specify UNLIMITED, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS UNLIMITED.

DEFAULT

Indicates that you want to omit a limit for the password parameter in this profile. A user that is assigned this profile is subject to the limit defined in the DEFAULT profile for this password parameter.

If you specify DEFAULT, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS DEFAULT.

constant

Indicates the value of the password parameter if you do not specify UNLIMITED or DEFAULT. If specified, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS 3.

{PASSWORD_COMPLEXITY_CHECKER| PASSWORD_VERIFY_FUNCTION} {function|NULL|DEFAULT}

Indicates if password verification is done on passwords and, if so, the function used for verification. You can specify either the PASSWORD_COMPLEXITY_CHECKER or the PASSWORD_VERIFY_FUNCTION password parameter. They are synonymous.

function refers to one of the three supported password complexity checker functions. Specify one of these functions to direct TimesTen to perform password verification. Valid values:
  • TT_VERIFY_FUNCTION
  • TT_STRONG_VERIFY_FUNCTION
  • TT_STIG_VERIFY_FUNCTION

NULL indicates that there is not a password verification function assigned for the profile.

DEFAULT indicates that the user is subject to the limits defined by the DEFAULT profile. The DEFAULT profile initially has a value of NULL.

If you do not specify the PASSWORD_COMPLEXITY_CHECKER password parameter, the value defaults to the limits defined for the DEFAULT profile.

Description

  • Use the ALTER PROFILE statement to modify a previously created profile. See "CREATE PROFILE" for information on creating a profile.

  • Changes made using the ALTER PROFILE statement take effect the next time an affected user connects to the database. The exception is when you modify the PASSWORD_COMPLEXITY_CHECKER password parameter. Password verification is only done on newly created passwords (on the password provided in the IDENTIFIED BY clause of the CREATE USER or ALTER USER statement). Therefore, a user can connect to the database with an old password. See "ALTER the PASSWORD_COMPLEXITY_CHECKER password parameter" for an example.

  • You can alter the DEFAULT profile. However, you cannot drop the DEFAULT profile. See "Alter the DEFAULT profile" for an example of altering the DEFAULT profile.

  • You cannot alter the password parameters of the SYSTEM profile. This profile is assigned to system users, including the instance administrator.

  • You can alter the profile to change the password verification that is done on the passwords of users that are assigned the profile. See "About password complexity checker verification" for information on password verification and the password complexity checker verification functions.

Examples

ALTER the PASSWORD_COMPLEXITY_CHECKER password parameter

This example creates the myprofile_alterpw1 profile and specifies TT_VERIFY_FUNCTION for the PASSWORD_COMPLEXITY_CHECKER password parameter. The example then creates the sampleuser_alterpw1 user and assigns the myprofile_alterpw1 profile to the sampleuser_alterpw1 user. The example alters the profile, specifying TT_STIG_VERIFY_FUNCTION for the PASSWORD_COMPLEXITY_CHECKER password parameter. The sampleuser_alterpw1 user attempts to connect to the database with the original password. The connection is successful. TimesTen does not perform password verification on old passwords. The example then uses the ALTER USER statement to change the sampleuser_alterpw1 user password to meet the requirements of the TT_STIG_VERIFY_FUNCTION. The ALTER USER statement succeeds and the user's password is changed.

Command> CREATE PROFILE myprofile_alterpw1 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_VERIFY_FUNCTION;

Profile created.

Command> CREATE USER sampleuser_alterpw1 
           IDENTIFIED BY "%aabb2L90" PROFILE myprofile_alterpw1;

User created.

Alter the myprofile_alterpw1 profile, changing the value of PASSWORD_COMPLEXITY_CHECKER to TT_STIG_VERIFY_FUNCTION. Connect to the database as the sampleuser_alterpw1 user. The connection succeeds.

Command> ALTER PROFILE myprofile_alterpw1 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STIG_VERIFY_FUNCTION;

Profile altered.

Command> GRANT CONNECT TO sampleuser_alterpw1;
Command> connect adding "UID=sampleuser_alterpw1;PWD=%aabb2L90" as sampleuser;
Connection successful: DSN=access1;UID=sampleuser_alterpw1;
DataStore=/scratch/sampleuser/mydatabase1;DatabaseCharacterSet=AL32UTF8;
ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Alter the sampleuser_alterpw1 user, specifying the same password. The ALTER USER statement fails. The newly created password does not meet the requirements of the TT_STIG_VERIFY_FUNCTION function. Alter the sampleuser_alterpw1 user again, specifying a password that meets the requirements of the TT_STIG_VERIFY_FUNCTION function. The ALTER USER statement succeeds. See "TT_STIG_VERIFY_FUNCTION" for information on the TT_STIG_VERIFY_FUNCTION function.

Command> ALTER USER sampleuser_alterpw1 
           IDENTIFIED BY  "%aabb2L90";
15186: Password complexity check for the specified password failed
15188: TT-20001: Password length less than 15
The command failed.

Command> ALTER USER sampleuser_alterpw1 
           IDENTIFIED BY  "%aabb2L##mf5Fn!";

User altered.

Alter the DEFAULT profile

This example verifies the values of the password parameters in the DEFAULT profile. It then alters the profile with different values. Users that are assigned the DEFAULT profile will inherit the modified values at the user's next connection to the database.

Command> SELECT * FROM dba_profiles WHERE profile='DEFAULT' AND 
           resource_type='PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.

Create the user1 user and do not specify a profile. User1 is assigned the DEFAULT profile. Use the ALTER PROFILE statement to change the value of the FAILED_LOGIN_ATTEMPTS password parameter to 5 and the value of the PASSWORD_LOCK_TIME password parameter to 1 for the DEFAULT profile. Enclose DEFAULT in double quotation marks as DEFAULT is a reserved word. Connect to the database five times as user1 supplying an incorrect password each time. On the sixth attempt, the user1 account is locked.

Command> CREATE USER user1 IDENTIFIED BY user1;
 
User created.
Command> GRANT CONNECT TO user1;

Query the dba_users system view to verify that user1 is assigned the DEFAULT profile.

Command> SELECT profile FROM dba_users WHERE username='USER1';
< DEFAULT >
1 row found.

Use the ALTER PROFILE statement to modify the DEFAULT profile.

Command> ALTER PROFILE "DEFAULT" LIMIT 
           FAILED_LOGIN_ATTEMPTS 5 
           PASSWORD_LOCK_TIME 1;
 
Profile altered.
 

Query the dba_profiles system view to verify that the values are changed.

Command> SELECT * FROM dba_profiles WHERE profile='DEFAULT' AND 
   resource_type='PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 5 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, 1 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.

Attempt to connect to the database as user1. Supply an incorrect password. On the sixth attempt, the user1 account is locked.

Command> connect adding "uid=user1;pwd=user1_test1" as user1;
 7001: User authentication failed
The command failed.
none: Command> connect adding "uid=user1;pwd=user1_test2" as user1;
 7001: User authentication failed
The command failed.
none: Command> connect adding "uid=user1;pwd=user1_test3" as user1;
 7001: User authentication failed
The command failed.
none: Command> connect adding "uid=user1;pwd=user1_test4" as user1;
 7001: User authentication failed
The command failed.
none: Command> connect adding "uid=user1;pwd=user1_test5" as user1;
 7001: User authentication failed
The command failed.
none: Command> connect adding "uid=user1;pwd=user1_test6" as user1;
15179: the account is locked
The command failed.

Create a profile then alter the profile

This example creates the profile1 profile and specifies values for the FAILED_LOGIN_ATTEMPTS, the PASSWORD_LIFE_TIME, the PASSWORD_LOCK_TIME, and the PASSWORD_GRACE_TIME password parameters. It then alters the profile1 profile to modify the PASSWORD_REUSE_TIME and the PASSWORD_REUSE_MAX password parameters.

Command> CREATE PROFILE profile1 LIMIT 
           FAILED_LOGIN_ATTEMPTS 3 
           PASSWORD_LIFE_TIME 90 
           PASSWORD_LOCK_TIME 30 
           PASSWORD_GRACE_TIME 10;
 
Profile created.

Query the dba_profiles system view to verify the values for the password parameters. Note that the PASSWORD_REUSE_TIME and the PASSWORD_REUSE_MAX password parameters each have a value of DEFAULT. These password parameters were not specified in the CREATE PROFILE definition, so TimesTen assigns a value of DEFAULT to each parameter. The values for these parameters are derived from the values in the DEFAULT profile.

Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND
           resource_type= 'PASSWORD';
< PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 3 >
< PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 90 >
< PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 30 >
< PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 >
7 rows found.

Alter the profile1 profile, specifying a value of 20 for the PASSWORD_REUSE_TIME password parameter and a value of 15 for the PASSWORD_REUSE_MAX password parameter. A user assigned this profile can reuse the same password after 20 days if the password has been changed 15 times.

Command> ALTER PROFILE profile1 LIMIT 
           PASSWORD_REUSE_TIME 20 
           PASSWORD_REUSE_MAX 15;
 
Profile altered.

Query the dba_profiles system view to verify that the values for the password parameters are changed.

Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND 
           resource_type= 'PASSWORD';
< PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 3 >
< PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 90 >
< PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, 20 >
< PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, 15 >
< PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 30 >
< PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 >
7 rows found.

ALTER REPLICATION

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

The ALTER REPLICATION statement adds, alters, or drops replication elements and changes the replication attributes of participating databases involved in a classic replication scheme.

Most ALTER REPLICATION operations are supported only when the replication agent is stopped (ttAdmin -repStop). However, it is possible to dynamically add a subscriber database to a replication scheme while the replication agent is running. See "Altering a Classic Replication Scheme" in Oracle TimesTen In-Memory Database Replication Guide for more information.

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

The ALTER REPLICATION statement has the syntax:

ALTER REPLICATION [Owner.]ReplicationSchemeName
  ElementOperation [...] | StoreOperation |
  NetworkOperation [...]

Specify ElementOperation one or more times:

ADD ELEMENT ElementName
  { DATASTORE | 
  { TABLE [Owner.]TableName [CheckConflicts] } | 
    SEQUENCE [Owner.]SequenceName }
  { MASTER | PROPAGATOR } FullStoreName
  { SUBSCRIBER FullStoreName [,... ] [ReturnServiceAttribute] } [ ... ] 
  { INCLUDE | EXCLUDE } { TABLE [[Owner.]TableName[,...]] | 
    SEQUENCE [[Owner.]SequenceName[,...]] } [,...]

ALTER ELEMENT { ElementName | * IN FullStoreName }
  ADD SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] |
  ALTER SUBSCRIBER FullStoreName [,...]
    SET [ReturnServiceAttribute] |
  DROP SUBSCRIBER FullStoreName [,...]

ALTER ELEMENT * IN FullStoreName
  SET { MASTER | PROPAGATOR } FullStoreName

ALTER ELEMENT ElementName
  {SET NAME NewElementName | SET CheckConflicts}

ALTER ELEMENT ElementName 
  { INCLUDE | EXCLUDE } { TABLE [Owner.]TableName |
    SEQUENCE [Owner.]SequenceName }[,...] 

DROP ELEMENT { ElementName | * IN FullStoreName }

CheckConflicts can only be set when replicating TABLE elements. See "CHECK CONFLICTS" for syntax requirements.

Syntax for ReturnServiceAttribute is:

{ RETURN RECEIPT [BY REQUEST] | NO RETURN }

StoreOperation clauses:

ADD STORE FullStoreName [StoreAttribute [... ]]
ALTER STORE FullStoreName SET StoreAttribute [... ]

Syntax for the StoreAttribute is:

DISABLE RETURN {SUBSCRIBER | ALL} NumFailures
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED
DURABLE COMMIT {ON | OFF}
RESUME RETURN Milliseconds 
LOCAL COMMIT ACTION {NO ACTION | COMMIT}
RETURN WAIT TIME Seconds
COMPRESS TRAFFIC {ON | OFF} 
PORT PortNumber 
TIMEOUT Seconds 
FAILTHRESHOLD Value
CONFLICT REPORTING SUSPEND AT Value 
CONFLICT REPORTING RESUME AT Value 
TABLE DEFINITION CHECKING {EXACT|RELAXED}

Specify NetworkOperation one or more times:

ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost }
      PRIORITY Priority } [...] 

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...] 

Parameters

Parameter Description

[Owner.]ReplicationSchemeName

Name assigned to the classic replication scheme.

ADD ELEMENT ElementName

Adds a new element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 characters.

If the element is a DATASTORE, all tables in the database are included. SEQUENCE elements that are part of the database do not have their return services modified by this statement.

ADD ELEMENT ElementName DATASTORE

{INCLUDE | EXCLUDE}

{TABLE [[Owner.]TableName [,...]]|

SEQUENCE [[Owner.]SequenceName[,...]]} [,...]

Adds a new DATASTORE element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 characters.

INCLUDE includes in the database only the tables and sequences listed. Use one INCLUDE clause for each object type (table or sequence).

EXCLUDE includes in the database all tables and sequences except the tables and sequences listed. Use one EXCLUDE clause for each object type (table or sequence).

If the element is a sequence, RETURN attributes are not applied, no conflict checking is supported and sequences that cycle return an error.

ADD SUBSCRIBER FullStoreName

Indicates an additional subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

ALTER ELEMENT * IN FullStoreName

SET { MASTER | PROPAGATOR } FullStoreName

Makes a change to all elements for which FullStoreName is the MASTER or PROPAGATOR. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

This syntax can be used on a set of element names to:

  • Add, alter, or drop subscribers.

  • Set the MASTER or PROPAGATOR status of the element set.

SEQUENCE elements that are part of the database being altered do not have their return services modified by this statement.

ALTER ELEMENT ElementName

Name of the element to which a subscriber is to be added or dropped.

ALTER ELEMENT

ElementName1

SET NAME ElementName2

Renames ElementName1 with the name ElementName2. You can only rename elements of type TABLE.

ALTER ELEMENT ElementName

{INCLUDE|EXCLUDE}

{TABLE [Owner.]TableName |

SEQUENCE [Owner.]SequenceName} [,...]

ElementName is the name of the element to be altered.

INCLUDE adds to the database the tables and sequences listed. Use one INCLUDE clause for each object type (table or sequence).

EXCLUDE removes from the database the tables and sequences listed. Use one EXCLUDE clause for each object type (table or sequence).

If the element is a sequence, RETURN attributes are not applied, no conflict checking is supported and sequences that cycle return an error.

ALTER SUBSCRIBER FullStoreName

SET RETURN RECEIPT

[BY REQUEST]|NO RETURN

Indicates an alteration to a subscriber database to enable, disable, or change the return receipt service. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

CheckConflicts

Check for replication conflicts when simultaneously writing to bidirectionally replicating TABLE elements between databases. You cannot check for conflicts when replicating elements of type DATASTORE. See "CHECK CONFLICTS".

COMPRESS TRAFFIC {ON | OFF}

Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details.

CONFLICT REPORTING SUSPEND AT Value

Suspends conflict resolution reporting.

Value is a non-negative integer. Conflict reporting is suspended when the rate of conflict exceeds Value. The default is 0, which means that the conflict reporting is always on and will not be suspended.

This clause is valid for table level replication.

CONFLICT REPORTING RESUME AT Value

Resumes conflict resolution reporting.

Value is a non-negative integer. Conflict reporting is resumed when the rate of conflict falls below Value. The default is 1.

This clause is valid for table level replication.

DISABLE RETURN {SUBSCRIBER | ALL} NumFailures

Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures. Selecting SUBSCRIBER applies this policy only to the subscriber that fails to acknowledge replicated updates within the set timeout period. ALL applies this policy to all subscribers should any of the subscribers fail to respond. This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.

If DISABLE RETURN is specified but RESUME RETURN is not specified, the return services remain off until the replication agent for the database has been restarted.

DURABLE COMMIT {ON | OFF}

Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped.

DROP ELEMENT * IN FullStoreName

Deletes the replication description of all elements for which FullStoreName is the MASTER. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

DROP ELEMENT ElementName

Deletes the replication description of ElementName.

DROP SUBSCRIBER FullStoreName

Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the classic replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

FAILTHRESHOLD Value

The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state.

The value 0 means "No Limit." This is the default.

See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information.

FullStoreName

The database, specified as one of the following:

  • SELF

  • The prefix of the database file name

For example, if the database path is directory/subdirectory/data.ds0, then data is the database name.

This is the database file name specified in the DataStore attribute of the DSN description with optional host ID in the form:

DataStoreName [ON Host]

Host can be either an IP address or a literal host name assigned to one or more IP addresses, as described in "Configuring the network" in Oracle TimesTen In-Memory Database Replication Guide. Host names containing special characters must be surrounded by double quotes. For example: "MyHost-500".

LOCAL COMMIT ACTION {NO ACTION | COMMIT}

Specifies the default action to be taken for a RETURN TWOSAFE transaction in the event of a timeout.

NO ACTION: On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can only reissue the commit. The transaction may not be rolled back. This is the default.

COMMIT: On timeout, the commit function attempts to perform a COMMIT to end the transaction locally. No more operations are possible on the same transaction.

This setting can be overridden for specific transactions by calling the ttRepSyncSet procedure with the localAction parameter.

MASTER FullStoreName

The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

NO RETURN

Specifies that no return service is to be used. This is the default.

For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

PORT PortNumber

The TCP/IP port number on which the replication agent on this database listens for connections. If not specified, the replication agent allocates a port number automatically.

All TimesTen databases that replicate to each other must use the same port number.

PROPAGATOR FullStoreName

The database that receives replicated updates and passes them on to other databases.

RESUME RETURN Milliseconds

If return service blocking has been disabled by DISABLE RETURN, this attribute sets the policy for when to re-enable return service blocking. Return service blocking is re-enabled as soon as the failed subscriber acknowledges a replicated update within the specified number of milliseconds (Milliseconds).

If DISABLE RETURN is specified but RESUME RETURN is not specified, the return services remain off until the replication agent for the database has been restarted.
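
For illustration only, a hedged sketch of setting both attributes using the ALTER STORE ... SET form shown in the examples later in this section (the scheme name r1, the store west, and the host "westcoast" are assumptions):

ALTER REPLICATION r1
   ALTER STORE west ON "westcoast" SET DISABLE RETURN SUBSCRIBER 5;
ALTER REPLICATION r1
   ALTER STORE west ON "westcoast" SET RESUME RETURN 200;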

RETURN RECEIPT [BY REQUEST]

Enables the return receipt service, so that applications that commit a transaction to a master database are blocked until the transaction is received by all subscribers.

RETURN RECEIPT applies the service to all transactions. If you specify RETURN RECEIPT BY REQUEST, you can use the ttRepSyncSet procedure to enable the return receipt service for selected transactions. For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED

Sets return services on or off when replication is disabled (stopped or paused state).

OFF disables return services when replication is disabled and is the default for RETURN RECEIPT service. ON allows return services to continue to be enabled when replication is disabled and is the default for RETURN TWOSAFE service.

RETURN TWOSAFE [BY REQUEST]

Enables the return twosafe service, so that applications that commit a transaction to a master database are blocked until the transaction is committed on all subscribers.

RETURN TWOSAFE applies the service to all transactions. If you specify RETURN TWOSAFE BY REQUEST, you can use the ttRepSyncSet procedure to enable the return twosafe service for selected transactions. For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

RETURN WAIT TIME Seconds

Specifies the number of seconds to wait for return service acknowledgment. The default value is 10 seconds. A value of 0 (zero) means there is no timeout. Your application can override this timeout setting by calling the ttRepSyncSet procedure with the returnWait parameter.

SET {MASTER | PROPAGATOR} FullStoreName

Sets the given database to be the MASTER or PROPAGATOR of the given elements. The FullStoreName must be the database's file base name.

SUBSCRIBER FullStoreName

A database that receives updates from the MASTER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

TABLE DEFINITION CHECKING {EXACT|RELAXED}

Specifies the type of table definition checking that occurs on the subscriber:

  • EXACT - The tables must be identical on master and subscriber.

  • RELAXED - The tables must have the same key definition, number of columns and column data types.

The default is RELAXED.

Note: If you use TABLE DEFINITION CHECKING EXACT, use ttMigrate -exactUpgrade if you migrate the database. If you use TABLE DEFINITION CHECKING RELAXED, use ttMigrate -relaxedUpgrade if you migrate the database.
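
A hedged sketch of changing this attribute with the same ALTER STORE ... SET form (the scheme name r1, the store west, and the host "westcoast" are assumptions):

ALTER REPLICATION r1
   ALTER STORE west ON "westcoast" SET TABLE DEFINITION CHECKING RELAXED;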

TIMEOUT Seconds

The maximum number of seconds the replication agent waits for a response from remote replication agents. The default is 120 seconds.

Note: For large transactions that may cause a delayed response from the remote replication agent, the agent scales the timeout to increasingly larger values, as needed, based on the size of the transaction. This scaling will not occur, and the agent may time out waiting for responses, if you set TIMEOUT to less than or equal to 60 seconds for large transactions. Also see "Setting wait timeout for response from remote replication agents" in Oracle TimesTen In-Memory Database Replication Guide.

ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName

Adds a NetworkOperation clause to the classic replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores.

Can be specified more than once.

For FullStoreName, ON "host" must be specified.

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName

Drops NetworkOperation from the classic replication scheme.

Can be specified more than once.

For FullStoreName, ON "host" must be specified.

MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost

MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify the address in dot notation or canonical format, or in colon notation for IPv6.

Clause can be specified more than once. Valid for both ADD and DROP ROUTE MASTER.

PRIORITY Priority

Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.

This clause is a required part of the NetworkOperation syntax. It follows the MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost clause.

Description

  • ALTER ELEMENT DROP SUBSCRIBER deletes a subscriber for a particular replication element.

  • ALTER ELEMENT SET NAME may be used to change the name of a replication element when it conflicts with one already defined at another database. SET NAME does not admit the use of * IN FullStoreName. The FullStoreName must be the database's file base name. For example, if the database file name is data.ds0, then data is the file base name.

  • ALTER ELEMENT SET MASTER may be used to change the master database for replication elements. The * IN FullStoreName option must be used for the MASTER operation. That is, a master database must transfer ownership of all of its replication elements, thereby giving up its master role entirely. Typically, this option is used in ALTER REPLICATION statements requested at SUBSCRIBER databases after the failure of a (common) MASTER.

  • To transfer ownership of the master elements to the subscriber (a sketch follows this list):

    1. Manually drop the replicated elements by executing an ALTER REPLICATION DROP ELEMENT statement for each replicated table.

    2. Use ALTER REPLICATION ADD ELEMENT to add each table back to the replication scheme, with the newly designated MASTER / SUBSCRIBER roles.

  • ALTER REPLICATION ALTER ELEMENT SET MASTER does not automatically retain the old master as a subscriber in the scheme. If this is desired, execute an ALTER REPLICATION ALTER ELEMENT ADD SUBSCRIBER statement.

    Note:

    There is no ALTER ELEMENT DROP MASTER. Each replication element must have exactly one MASTER database, and the currently designated MASTER cannot be deleted from the replication scheme.

  • Stop the replication agent before you use the NetworkOperation clause.

  • You cannot alter the following replication schemes with the ALTER REPLICATION statement:

    • Any active standby pair. Instead, use ALTER ACTIVE STANDBY PAIR.

    • A Clusterware-managed active standby pair. Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.
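
The two-step ownership transfer described in this list can be sketched as follows. This is an illustration only; the scheme r1, element e1, table t1, and the east and west stores are assumptions.

ALTER REPLICATION r1
   DROP ELEMENT e1;
ALTER REPLICATION r1
   ADD ELEMENT e1 TABLE t1
     MASTER east ON "eastcoast"
     SUBSCRIBER west ON "westcoast";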

Examples

This example sets up a classic replication scheme for an additional table westleads that is updated on database west and replicated to database east.

ALTER REPLICATION r1
   ADD ELEMENT e3 TABLE westleads
     MASTER west ON "westcoast"
     SUBSCRIBER east ON "eastcoast";

This example adds an additional subscriber (backup) to table westleads.

ALTER REPLICATION r1
   ALTER ELEMENT e3
     ADD SUBSCRIBER backup ON "backupserver";

This example changes the element name of table westleads from e3 to newelementname.

ALTER REPLICATION r1
   ALTER ELEMENT e3
     SET NAME newelementname;

This example makes newwest the master for all elements for which west currently is the master.

ALTER REPLICATION r1
   ALTER ELEMENT * IN west
     SET MASTER newwest;

This example changes the port number for east.

ALTER REPLICATION r1
   ALTER STORE east ON "eastcoast" SET PORT 22251;

This example adds my.tab1 table to the ds1 database element in my.rep1 replication scheme.

ALTER REPLICATION my.rep1
  ALTER ELEMENT ds1 DATASTORE
     INCLUDE TABLE my.tab1;

This example adds ds1 database to my.rep1 replication scheme. Include my.tab2 table in the database.

ALTER REPLICATION my.rep1
  ADD ELEMENT ds1 DATASTORE
     MASTER rep2
     SUBSCRIBER rep1, rep3
     INCLUDE TABLE my.tab2;

This example adds ds2 database to a replication scheme but excludes my.tab1 table.

ALTER REPLICATION my.rep1
  ADD ELEMENT ds2 DATASTORE
     MASTER rep2
     SUBSCRIBER rep1
     EXCLUDE TABLE my.tab1;

Add NetworkOperation clause:

ALTER REPLICATION r
ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2"
MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2"
    PRIORITY 1
MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4" PRIORITY 2;

Drop NetworkOperation clause:

ALTER REPLICATION r
DROP ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2"
MASTERIP "1.1.1.1" SUBSCRIBERIP "2.2.2.2"
MASTERIP "3.3.3.3" SUBSCRIBERIP "4.4.4.4";

ALTER SEQUENCE

This statement is supported in TimesTen Scaleout only.

Use the ALTER SEQUENCE statement to change the batch value of a sequence.

Required privilege

No privilege is required for the sequence owner.

ALTER ANY SEQUENCE privilege for another user's sequence.

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

ALTER SEQUENCE [Owner.]SequenceName BATCH BatchValue

Parameters

Parameter Description

SEQUENCE [Owner.]SequenceName

Name of the sequence to be altered.

BATCH BatchValue

Valid with TimesTen Scaleout only. Configures the range of unique sequence values that are stored at each element of the grid. The default value is 10 million.

Description

  • Use this statement to change the batch value for a sequence in TimesTen Scaleout. The change affects future sequence numbers.

  • This statement cannot be used to alter any other values supported in the CREATE SEQUENCE statement. In this case, use the DROP SEQUENCE statement and then create a new sequence with the same name. For example, to change the MINVALUE, drop the sequence and recreate it with the same name and with the desired MINVALUE.
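
A minimal sketch of changing MINVALUE in this way (the sequence name and values are hypothetical):

DROP SEQUENCE myseq;
CREATE SEQUENCE myseq MINVALUE 100 BATCH 2000;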

See "Using sequences" in Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.

Examples

To change the batch value for the sequence:

ALTER SEQUENCE myseq BATCH 2000;
Sequence altered

ALTER SESSION

The ALTER SESSION statement changes session parameters dynamically. This overrides the setting of the equivalent connection attribute for the current session, as applicable.

Required privilege

None

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout. However, these parameters are not supported:

  • DDL_REPLICATION_ACTION

  • DDL_REPLICATION_LEVEL

  • REPLICATION_TRACK

SQL syntax

ALTER SESSION SET
  {COMMIT_BUFFER_SIZE_MAX = n |
   DDL_REPLICATION_ACTION={'INCLUDE'|'EXCLUDE'} | 
   DDL_REPLICATION_LEVEL={1|2|3} |
   ISOLATION_LEVEL = {SERIALIZABLE | READ COMMITTED} |
   NLS_SORT = {BINARY| SortName} |
   NLS_LENGTH_SEMANTICS = {BYTE|CHAR} |
   NLS_NCHAR_CONV_EXCP = {TRUE|FALSE} |
   PLSQL_TIMEOUT = n |
   PLSQL_OPTIMIZE_LEVEL = {0|1|2|3}|
   PLSQL_CONN_MEM_LIMIT = n |
   PLSQL_CCFLAGS = 'name1:value1, name2:value2,..., nameN:valueN' |
   PLSQL_SESSION_CACHED_CURSORS = n |
   REPLICATION_TRACK = TrackNumber
   }

Parameters

Parameter Description

COMMIT_BUFFER_SIZE_MAX= n

Changes the maximum size of the commit buffer for the current connection. n is expressed as an integer and represents the maximum size of the commit buffer (in MB).

Change takes effect starting with the next transaction.

Call the ttConfiguration built-in procedure to see the currently configured maximum size of the commit buffer. A value of 0 means the buffer is configured with a default size. The default size is 128 KB.

For more information on the commit buffer and transaction reclaim operations, see "Transaction reclaim operations" in the Oracle TimesTen In-Memory Database Operations Guide and "CommitBufferSizeMax" in the Oracle TimesTen In-Memory Database Reference.

Note: The equivalent connection attribute is CommitBufferSizeMax.

DDL_REPLICATION_ACTION={'INCLUDE'|'EXCLUDE'}

To include a table or sequence in the active standby pair when either is created, set DDL_REPLICATION_ACTION to INCLUDE. If you do not want to include a table or sequence in the active standby pair when either is created, set DDL_REPLICATION_ACTION to EXCLUDE. The default is INCLUDE.

If set to EXCLUDE:

  • A subsequent ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE statement must be executed on the active database to add the table to the replication scheme. The table must be empty on the active database, the standby database, and all subscribers, because the table contents are truncated when this statement is executed.

  • A subsequent ALTER ACTIVE STANDBY PAIR ... INCLUDE SEQUENCE statement must be executed on the active database to add the sequence to the replication scheme.

This attribute is valid only if DDL_REPLICATION_LEVEL is 2 or greater.

See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.

Note: The equivalent connection attribute is DDLReplicationAction.

DDL_REPLICATION_LEVEL={1|2|3}

Indicates whether DDL is replicated across all databases in an active standby pair. The value can be one of the following:

  • 1: Default. Add or drop a column to or from a replicated table on the active database using ALTER TABLE. The change is replicated to the table in the standby database.

  • 2: Supports replication of the creation or dropping of tables, synonyms or indexes from the active database to the standby database. This does include creating or dropping global temporary tables, but does not include CREATE TABLE AS SELECT. A CREATE INDEX statement is replicated only when the index is created on an empty table.

  • 3: Supports replication of all DDL supported by level 2 as well as replication of creation or dropping of views and sequences (not including materialized views) and changes to the cache administration user ID and password settings when you call the ttCacheUidPwdSet built-in procedure.

    Note: After you have defined cache groups, you cannot change the cache administration user ID, but can still change the cache administration password.

See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.

Note: The equivalent connection attribute is DDLReplicationLevel.

ISOLATION_LEVEL = {SERIALIZABLE|READ COMMITTED}

Sets isolation level. Change takes effect starting with the next transaction.

For a description of the isolation levels, see "Transaction isolation levels" in the Oracle TimesTen In-Memory Database Operations Guide.

Note: The equivalent connection attribute is Isolation.
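
For example, a minimal illustration of switching the current session to READ COMMITTED:

Command> ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED;
Session altered.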

NLS_SORT={BINARY| SortName}

Indicates which collation sequence to use for linguistic comparisons.

Append _CI or _AI to either BINARY or the SortName value to do case-insensitive or accent-insensitive sorting.

If you do not specify NLS_SORT, the default is BINARY.

For a complete list of supported values for SortName, see "Linguistic sorts" in Oracle TimesTen In-Memory Database Operations Guide.

For more information on case-insensitive or accent-insensitive sorting, see "Case-insensitive and accent-insensitive linguistic sorts" in Oracle TimesTen In-Memory Database Operations Guide.

NLS_LENGTH_SEMANTICS ={BYTE|CHAR}

Sets the default length semantics configuration. BYTE indicates byte length semantics. CHAR indicates character length semantics. The default is BYTE.

For more information on length semantics, see "Length semantics and data storage" in Oracle TimesTen In-Memory Database Operations Guide.

NLS_NCHAR_CONV_EXCP = {TRUE|FALSE}

Determines whether an error should be reported when there is data loss during an implicit or explicit character type conversion between NCHAR/NVARCHAR2 data and CHAR/VARCHAR2 data. Specify TRUE to enable error reporting. Specify FALSE to not report errors. The default is FALSE.

PLSQL_TIMEOUT= n

Controls how long PL/SQL procedures run before being automatically terminated. n represents the time, in seconds. Specify any positive integer, or 0 for no time limit. The default is 30.

When you modify this value, the new value impacts PL/SQL program units that are currently running as well as any other program units subsequently executed in the same connection.

See "Choose SQL and PL/SQL timeout values" in the Oracle TimesTen In-Memory Database Operations Guide for information on setting timeout values.

PLSQL_OPTIMIZE_LEVEL = {0|1|2|3}

Specifies the optimization level used to compile PL/SQL library units. The higher the setting, the more effort the compiler makes to optimize PL/SQL library units. Possible values are 0, 1, 2 or 3. The default is 2.

For more information, see "PLSQL_OPTIMIZE_LEVEL" in Oracle TimesTen In-Memory Database Reference.

PLSQL_CONN_MEM_LIMIT = n

Specifies the maximum amount of process heap memory that PL/SQL can use for this connection, where n is an integer expressed in MB. The default is 100.

For more information, see "PLSQL_CONN_MEM_LIMIT" in Oracle TimesTen In-Memory Database Reference.

PLSQL_CCFLAGS = 'name1:value1, name2:value2, ..., nameN:valueN'

Specifies inquiry directives to control conditional compilation of PL/SQL units, which enables you to customize the functionality of a PL/SQL program depending on conditions that are checked. For example, to activate debugging features:

PLSQL_CCFLAGS = 'DEBUG:TRUE'
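
As a hedged sketch, a flag set this way can then be tested with PL/SQL conditional compilation directives (the procedure name and message are hypothetical):

Command> ALTER SESSION SET PLSQL_CCFLAGS = 'DEBUG:TRUE';
Session altered.
Command> CREATE OR REPLACE PROCEDURE debug_demo AS
         BEGIN
           $IF $$DEBUG $THEN
             DBMS_OUTPUT.PUT_LINE('Debug tracing is on');
           $END
           NULL;
         END;
         /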

PLSQL_SESSION_CACHED_CURSORS= n

Specifies the maximum number of session cursors to cache. The default is 50. The range of values is 1 to 65535.

The PLSQL_SESSION_CACHED_CURSORS setting in TimesTen behaves the same as the SESSION_CACHED_CURSORS setting in Oracle RDBMS.

REPLICATION_TRACK = TrackNumber

When managing track-based parallel replication, you can assign a connection to a replication track. All transactions issued by the connection are assigned to this track, unless the track is altered.

If the number specified is for a non-existent replication track X, the transaction is assigned to a track number computed as X modulo ReplicationParallelism.

You cannot change tracks in the middle of a transaction unless all preceding operations have been read operations.

For more information, see "Specifying replication tracks within an automatic parallel replication environment" in Oracle TimesTen In-Memory Database Replication Guide.

The equivalent connection attribute is ReplicationTrack.

Description

  • The ALTER SESSION statement affects commands that are subsequently executed by the session. ALTER SESSION does not do an implicit commit.

  • In cases of client failover, if an ALTER SESSION statement is issued in the failed connection, the setting is not seen or carried over to the new connection. You must re-issue the ALTER SESSION statement and re-specify the value for that parameter. For more information on client failover, in TimesTen Classic, see "Using automatic client failover" in the Oracle TimesTen In-Memory Database Operations Guide and, in TimesTen Scaleout, see "Client connection failover" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

  • Operations involving character comparisons support linguistically sensitive collating sequences. Case-insensitive sorts may affect the interpretation of DISTINCT values.

  • Implicit and explicit conversions between CHAR and NCHAR are supported.

  • You can use the SQL string functions with the supported character sets. For example, UPPER and LOWER functions support non-ASCII CHAR and VARCHAR2 characters as well as NCHAR and NVARCHAR2 characters.

  • Choice of character set could have an impact on memory consumption for CHAR and VARCHAR2 column data.

  • The character sets of all databases involved in a replication scheme must match.

  • To add an existing table to an active standby pair, set DDL_REPLICATION_LEVEL to 2 or greater and DDL_REPLICATION_ACTION to INCLUDE. Alternatively, you can use the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE statement if DDL_REPLICATION_ACTION is set to EXCLUDE. In this case, the table must be empty and present on all databases before executing the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE statement as the table contents will be truncated when this statement is executed.

  • To add an existing sequence or view to an active standby pair, set DDL_REPLICATION_LEVEL to 3. To include the sequence in the replication scheme, DDL_REPLICATION_ACTION must be set to INCLUDE. This does not apply to materialized views.

  • Objects are replicated only when the receiving database is of a TimesTen release that supports that level of replication, and is configured for an active standby pair replication scheme. For example, replication of sequences (requiring DDL_REPLICATION_LEVEL=3) to a database release prior to 11.2.2.7.0 is not supported. The receiving database must be of at least release 11.2.1.8.0 for replication of objects supported by DDL_REPLICATION_LEVEL=2.

Examples

Use the ALTER SESSION statement to change COMMIT_BUFFER_SIZE_MAX to 500 MB. First, call ttConfiguration to display the current connection setting. Use the ALTER SESSION statement to change the COMMIT_BUFFER_SIZE_MAX setting to 500. Call ttConfiguration to display the new setting.

Command>  CALL ttConfiguration ('CommitBufferSizeMax');
< CommitBufferSizeMax, 0 >
1 row found.
Command> ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 500;
 
Session altered.
 
Command> CALL ttConfiguration ('CommitBufferSizeMax');
< CommitBufferSizeMax, 500 >
1 row found.

Use the ALTER SESSION statement to change PLSQL_TIMEOUT to 60 seconds. Use a second ALTER SESSION statement to change PLSQL_OPTIMIZE_LEVEL to 3. Then call ttConfiguration to display the new values.

Command> ALTER SESSION SET PLSQL_TIMEOUT = 60;
Session altered.
Command> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;
Session altered.

Command> CALL TTCONFIGURATION ();
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
...
< PLSQL_OPTIMIZE_LEVEL, 3 >
< PLSQL_TIMEOUT, 60 >
...
47 rows found.

In this example, set PLSQL_TIMEOUT to 20 seconds. Attempt to execute a program that loops indefinitely. In 20 seconds, execution is terminated and an error is returned.

Command> ALTER SESSION SET PLSQL_TIMEOUT = 20;

Command> DECLARE v_timeout NUMBER;
         BEGIN
           LOOP
             v_timeout :=0;
             EXIT WHEN v_timeout < 0;
           END LOOP;
         END;
         /
 8509: PL/SQL execution terminated; PLSQL_TIMEOUT exceeded

The following example uses the ALTER SESSION statement to change the NLS_SORT setting from BINARY to BINARY_CI to BINARY_AI. The database and connection character sets are WE8ISO8859P1.

Command> connect "dsn=cs;ConnectionCharacterSet=WE8ISO8859P1";
Connection successful: DSN=cs;UID=user;DataStore=/datastore/user/cs;
DatabaseCharacterSet=WE8ISO8859P1;
ConnectionCharacterSet=WE8ISO8859P1;PermSize=32;
(Default setting AutoCommit=1)
Command> -- Create the Table
Command> CREATE TABLE collatingdemo (letter VARCHAR2 (10));
Command> -- Insert values
Command> INSERT INTO collatingdemo VALUES ('a');
1 row inserted.
Command> INSERT INTO collatingdemo VALUES ('A');
1 row inserted.
Command> INSERT INTO collatingdemo VALUES ('Y');
1 row inserted.
Command> INSERT INTO collatingdemo VALUES ('ä');
1 row inserted.
Command> -- SELECT
Command> SELECT * FROM collatingdemo;
< a >
< A >
< Y >
< ä >
4 rows found.
Command> --SELECT with ORDER BY
Command> SELECT * FROM collatingdemo ORDER BY letter;
< A >
< Y >
< a >
< ä >
4 rows found.
Command> -- Set NLS_SORT to BINARY_CI and SELECT
Command> ALTER SESSION SET NLS_SORT = BINARY_CI;
Command> SELECT * FROM collatingdemo ORDER BY letter;
< a >
< A >
< Y >
< ä >
4 rows found.
Command> -- Set NLS_SORT to BINARY_AI and SELECT
Command> ALTER SESSION SET NLS_SORT = BINARY_AI;
Command> SELECT * FROM collatingdemo ORDER BY letter;
< ä >
< a >
< A >
< Y >
4 rows found.

The following example enables automatic parallel replication with disabled commit dependencies. It uses the ALTER SESSION statement to change the replication track number to 5 for the current connection. To enable automatic parallel replication with disabled commit dependencies for replication schemes, set ReplicationApplyOrdering to 2. Then, always set REPLICATION_TRACK to a number less than or equal to ReplicationParallelism. For example, the ReplicationParallelism connection attribute could be set to 6, which is higher than the value of 5 set for REPLICATION_TRACK.

Command> ALTER SESSION SET REPLICATION_TRACK = 5;
Session altered.

The following example enables replication of adding and dropping columns, tables, synonyms and indexes by setting the following on the active database in an active standby pair: DDL_REPLICATION_LEVEL set to 2 and DDL_REPLICATION_ACTION set to 'INCLUDE'.

Command > ALTER SESSION SET DDL_REPLICATION_LEVEL=2;
Session altered.

Command > ALTER SESSION SET DDL_REPLICATION_ACTION='INCLUDE';
Session altered.

Note:

The equivalent connection attributes for DDL_REPLICATION_LEVEL and DDL_REPLICATION_ACTION are DDLReplicationLevel and DDLReplicationAction, respectively.

ALTER TABLE

The ALTER TABLE statement changes an existing table definition.

The ALTER TABLE statement is supported in TimesTen Scaleout and in TimesTen Classic. However, there are differences in syntax and semantics. For simplicity, the supported syntax, parameters, description (semantics), and examples are presented separately for TimesTen Scaleout and for TimesTen Classic. Although this introduces some repetition, it allows you to progress from syntax to parameters to semantics to examples for each usage.

Review the required privilege section and then see:

Required privilege

No privilege is required for the table owner.

ALTER ANY TABLE for another user's table.

For ALTER TABLE...ADD FOREIGN KEY, the owner of the altered table must have the REFERENCES privilege on the table referenced by the foreign key clause.

After reviewing this section, see:

SQL syntax for ALTER TABLE: TimesTen Scaleout

To change the distribution key in TimesTen Scaleout:

ALTER TABLE [Owner.]TableName DistributionClause

To add a primary key constraint and optionally specify a global or local index:

Note: The (CreateIndexStmt) is the clause used to represent the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a primary key constraint. See "CREATE INDEX" for details.

ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName
  PRIMARY KEY (ColumnName [,... ]) [(UsingIndexClause2)]

UsingIndexClause2 ::= USING INDEX {GLOBAL | LOCAL} [USE HASH INDEX PAGES = RowPages | CURRENT] |
  USING INDEX (CreateIndexStmt)

Note:

You cannot use ALTER TABLE to drop a primary key constraint. To drop the constraint, you must drop and recreate the table.

To add a unique constraint on a column and optionally specify a global or local index:

Note: The (CreateIndexStmt) is the clause used to represent the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a unique constraint. See CREATE INDEX for details.

ALTER TABLE [Owner.]TableName
ADD UNIQUE (ColumnName)
[UsingIndexClause1]

UsingIndexClause1 ::= USING INDEX {GLOBAL | LOCAL} | USING INDEX (CreateIndexStmt)

To add one column:

ALTER TABLE [Owner.]TableName 
  ADD [COLUMN] ColumnName ColumnDataType
    [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL]

To add multiple columns:

ALTER TABLE [Owner.]TableName 
 ADD (ColumnName ColumnDataType 
     [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [,... ] )

To add a NOT NULL column (note that the DEFAULT clause is required):

ALTER TABLE [Owner.]TableName
  ADD [COLUMN] ColumnName ColumnDataType
    NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE]

To add multiple NOT NULL columns (note that the DEFAULT clause is required):

ALTER TABLE [Owner.]TableName
  ADD (ColumnName ColumnDataType
    NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [,...])

To remove columns:

ALTER TABLE [Owner.]TableName 
  DROP {[COLUMN] ColumnName | (ColumnName [,... ] )}

To add a foreign key and optionally add ON DELETE CASCADE:

ALTER TABLE [Owner.]TableName 
ADD [CONSTRAINT ForeignKeyName] FOREIGN KEY
    (ColumnName [,...]) REFERENCES RefTableName
       [(ColumnName [,...])] [ON DELETE CASCADE]

To remove a foreign key:

ALTER TABLE [Owner.]TableName 
DROP CONSTRAINT ForeignKeyName

To resize a hash index:

ALTER TABLE [Owner.]TableName
SET PAGES = RowPages | CURRENT

To change the primary key to use a hash index:

ALTER TABLE [Owner.]TableName
USE HASH INDEX PAGES = RowPages | CURRENT

To change the primary key to use a range index with the USE RANGE INDEX clause:

ALTER TABLE [Owner.]TableName
USE RANGE INDEX

To change the default value of a column:

ALTER TABLE [Owner.]TableName
MODIFY (ColumnName DEFAULT DefaultVal)

To drop a unique constraint on a column:

ALTER TABLE [Owner.]TableName
DROP UNIQUE (ColumnName)

To remove the default value of a column that is nullable, by changing it to NULL:

ALTER TABLE [Owner.]TableName
MODIFY (ColumnName DEFAULT NULL)
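
For illustration, a minimal sketch of changing and then dropping a column default using this syntax (the table mytab1 and column b are assumptions):

ALTER TABLE mytab1 MODIFY (b DEFAULT 0);
ALTER TABLE mytab1 MODIFY (b DEFAULT NULL);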

Parameters for ALTER TABLE ADD CONSTRAINT PRIMARY KEY: TimesTen Scaleout

Parameter Description
ALTER TABLE [Owner.]TableName Start of the ALTER TABLE statement. The table name is required; Owner is optional.
ADD CONSTRAINT ConstraintName PRIMARY KEY Clause indicating that the table is to be altered by adding a primary key constraint. ConstraintName is the name of the constraint. Once you add the primary key constraint, you cannot drop it. You must drop the table.
(ColumnName [,…]) (ColumnName) is required and specifies the column(s) to use for the primary key constraint.
[UsingIndexClause2] UsingIndexClause2 is optional and is described in the remainder of this table. You cannot specify two USING INDEX clauses in the ALTER TABLE definition.
USING INDEX {GLOBAL|LOCAL} Part of [UsingIndexClause2]: If specified, indicates if a global or local index is to be created for the primary key.
USE HASH INDEX PAGES = RowPages|CURRENT Part of the USING INDEX {GLOBAL|LOCAL} clause and is optional. If specified, indicates a unique hash index is to be created for the primary key. If not specified, a unique range index is created. Can be used for both global and local indexes.

The PAGES clause is required and enables you to specify the expected page count value for the table.

If you specify RowPages, the number of pages is used to calculate the page count value. To determine the value for RowPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).

The value for RowPages must be a positive constant and must be greater than 0.

If you specify CURRENT, the current number of rows in the table is used to calculate the page count value.

TimesTen recommends that you do not specify PAGES=CURRENT if there are no rows in your table.

USING INDEX (CreateIndexStmt) Part of the [UsingIndexClause2] clause. When this USING INDEX clause is specified, the (CreateIndexStmt) clause indicates that you want to define the index according to the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a primary key constraint. See "CREATE INDEX" for information on the CREATE INDEX statement.

Parameters for ALTER TABLE ADD UNIQUE CONSTRAINT: TimesTen Scaleout

Parameter Description
ALTER TABLE [Owner.]TableName Start of ALTER TABLE statement. Name of table required. Owner is optional.
ADD UNIQUE (ColumnName [,…]) Clause indicating that the table is to be altered by adding a unique constraint. (ColumnName) is required and specifies the column(s) to be used for the unique constraint.
[UsingIndexClause1] UsingIndexClause1 is optional and is described in the remainder of this table. You cannot specify two USING INDEX clauses in the ALTER TABLE definition. This clause enables you to define a global or local index for the unique constraint.
USING INDEX {GLOBAL|LOCAL} Part of [UsingIndexClause1]. If specified, indicates if a global or local index is to be created for the unique constraint.
USING INDEX (CreateIndexStmt) Part of the [UsingIndexClause1] clause. When this USING INDEX clause is specified, the (CreateIndexStmt) clause indicates that you want to define the index according to the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a unique constraint. See "CREATE INDEX" for information on the CREATE INDEX statement.

Additional parameters for ALTER TABLE: TimesTen Scaleout

Parameter Description

[Owner.] TableName

Identifies the table to be altered.

DistributionClause

See "CREATE TABLE" for information on syntax.

UNIQUE

Specifies that in the column ColumnName each row must contain a unique value.

MODIFY

Specifies that an attribute of a given column is to be changed to a new value.

DEFAULT {DefaultVal | NULL}

Specifies that the column has a default value, DefaultVal. DEFAULT NULL specifies that the default value of the column is to be dropped. If a column with a default value of SYSDATE is added, the value of the column for existing rows is the system date at the time the column was added. If the default value is one of the USER functions, the column value is the user value of the session that executed the ALTER TABLE statement. Currently, you cannot assign a default value for the ROWID data type.

Altering the default value of a column has no impact on existing rows.

Note: To add a NOT NULL column to a table that is part of a replication scheme, DDL_REPLICATION_LEVEL must be 3 or greater.

ColumnName

Name of the column participating in the ALTER TABLE statement. A new column cannot have the same name as an existing column or another new column. If you add a NOT NULL column, you must include the DEFAULT clause.

ColumnDataType

Type of the column to be added. Some types require additional parameters. See "Data Types" for the data types that can be specified.

NOT NULL [ENABLE]

If you add a column, you can specify NOT NULL. If you specify NOT NULL, then you must include the DEFAULT clause. Optionally, you can specify ENABLE after the NOT NULL clause. Because NOT NULL constraints are always enabled, you are not required to specify ENABLE.

INLINE|NOT INLINE

By default, variable-length columns whose declared column length is > 128 bytes are stored out of line. Variable-length columns whose declared column length is <= 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords.

CONSTRAINT

Specifies that a foreign key is to be dropped. Optionally specifies that an added foreign key is named by the user.

DROP UNIQUE (ColumnName) Indicates that a unique constraint is to be dropped. ColumnName is the column on which the unique constraint was defined.

ForeignKeyName

Name of the foreign key to be added or dropped. All foreign keys are assigned a default name by the system if the name was not specified by the user. Either the user-provided name or system name can be specified in the DROP FOREIGN KEY clause.

FOREIGN KEY

Specifies that a foreign key is to be added.

REFERENCES

Specifies that the foreign key references another table.

RefTableName

The name of the table that the foreign key references.

[ON DELETE CASCADE]

Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted.

USE HASH INDEX PAGES = RowPages | CURRENT

Changes primary key to use a hash index. If the primary key already uses a hash index, then this clause is equivalent to the SET PAGES clause.

USE RANGE INDEX

Changes primary key to use a range index. If the primary key already uses a range index, TimesTen ignores this clause.

SET PAGES = RowPages | CURRENT

Resizes the hash index to reflect the expected number of pages in the table. If you specify CURRENT, the current number of rows in the table is used to calculate the page count value. If you specify RowPages, the number of pages is used. To determine the value for RowPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).

The value for RowPages must be a positive constant and must be greater than 0.

TimesTen recommends that you do not specify PAGES=CURRENT if there are no rows in your table.

If your estimate is too small, performance may be degraded. See "Column definition: TimesTen Scaleout" for more information on hash indexes.
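
For illustration, a hedged sketch of these clauses (a table mytab1 with a primary key is assumed; 1000 pages corresponds to roughly 256,000 rows using the rows/256 rule above):

ALTER TABLE mytab1 USE HASH INDEX PAGES = 1000;
ALTER TABLE mytab1 SET PAGES = 2000;
ALTER TABLE mytab1 USE RANGE INDEX;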

Description for ALTER TABLE ADD PRIMARY KEY: TimesTen Scaleout

You have the option of specifying an additional clause after the PRIMARY KEY clause in your ALTER TABLE definition. This clause enables you to specify a global or local index for the primary key constraint.
  • The USING INDEX {GLOBAL | LOCAL} clause is one option that enables you to specify a global or local index for the primary key constraint. You must specify the GLOBAL or the LOCAL keyword. You can optionally specify the USE HASH INDEX clause after the USING INDEX {GLOBAL | LOCAL} clause if you want to define a hash index.

  • The USING INDEX (CreateIndexStmt) clause is your other option for specifying a global or local index. The (CreateIndexStmt) clause indicates that you want to define the index according to the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a primary key constraint. See "CREATE INDEX" for information on the CREATE INDEX statement.

Note:

You cannot use both the USING INDEX {GLOBAL | LOCAL} and the USING INDEX (CreateIndexStmt) in the ALTER TABLE definition. Specify one clause or the other or specify neither.

See "CREATE INDEX" for information on global and local indexes and their use in TimesTen Scaleout.

Description for ALTER TABLE ADD UNIQUE: TimesTen Scaleout

You have the option of specifying an additional clause after the UNIQUE clause in your ALTER TABLE definition. This clause enables you to specify a global or local index for the unique constraint.
  • The USING INDEX {GLOBAL | LOCAL} clause is one option that enables you to specify a global or local index for the unique constraint. You must specify the GLOBAL or the LOCAL keyword.

  • The USING INDEX (CreateIndexStmt) clause is your other option for specifying a global or local index. The (CreateIndexStmt) clause indicates that you want to define the index according to the TimesTen CREATE INDEX statement. The parentheses ( ) are required. You must create a unique index as that is the requirement for a unique constraint. See "CREATE INDEX" for information on the CREATE INDEX statement.

Note:

You cannot use both the USING INDEX {GLOBAL | LOCAL} and the USING INDEX (CreateIndexStmt) in the ALTER TABLE definition. Specify one clause or the other or specify neither.

See CREATE INDEX for information on global and local indexes and their use in TimesTen Scaleout.

Additional ALTER TABLE information: TimesTen Scaleout

  • You can alter tables to change defaults or add and drop columns and constraints. However, you cannot change the distribution scheme unless the table is empty. In addition, you cannot drop a constraint that is named in the DISTRIBUTE BY REFERENCE clause. See "CREATE TABLE" for information on the distribution schemes. See "Altering tables" in Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.

  • The ALTER TABLE statement cannot be used to alter a temporary table.

  • The ALTER TABLE ADD [COLUMN] ColumnName statement adds one or more new columns to an existing table. When you add one or more columns, the new columns are added to the end of all existing rows of the table in one new partition. (A sketch of adding and dropping columns follows this list.)

  • Columns referenced by materialized views cannot be dropped.

  • You cannot use the ALTER TABLE statement to add a column, drop a column, or add a constraint for cache group tables.

  • Only one partition is added to the table per statement regardless of the number of columns added.

  • You can ALTER a table to add a NOT NULL column with a default value. The DEFAULT clause is required. Restrictions include:

    • You cannot use the column as a primary key column. Specifically, you cannot specify the column in the statement: ALTER TABLE ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,...]).

  • NULL is the initial value for all added columns, unless a default value is specified for the new column.

  • The total number of columns in the table cannot exceed 1000. In addition, the total number of partitions in a table cannot exceed 1000, one of which is used by TimesTen.

  • Do not specify the ADD CONSTRAINT ... PRIMARY KEY clause on a global temporary table.

  • As the result of an ALTER TABLE ADD statement, an additional read occurs for each new partition during queries. Therefore, altered tables may have slightly degraded performance. The performance can only be restored by dropping and recreating the table, or by using the ttMigrate create -c -relaxedUpgrade command, and restoring the table using the ttRestore -r -relaxedUpgrade command. Dropping the added column does not recover the lost performance or decrease the number of partitions.

  • When you use the ALTER TABLE DROP statement to remove one or more columns from an existing table, dropped columns are removed from all current rows of the table. Subsequent SQL statements must not attempt to make any use of the dropped columns. You cannot drop columns that are in the table's primary key. You cannot drop columns that are in any of the table's foreign keys until you have dropped all foreign keys. You cannot drop columns that are indexed until all indexes on the column have been dropped. ALTER TABLE cannot be used to drop all of the columns of a table. Use DROP TABLE instead.

  • When a column is dropped from a table, all commands referencing that table need to be recompiled. An error may result at recompilation time if a dropped column was referenced. The application must re-prepare those commands, and rebuild any parameters and result columns. When a column is added to a table, the commands that contain a SELECT * statement are invalidated. Only these commands must be re-prepared. All other commands continue to work as expected.

  • When you drop a column, the column space is not freed.

  • When you add a UNIQUE constraint, there is overhead incurred (in terms of additional space and additional time). This is because an index is created to maintain the UNIQUE constraint. You cannot use the DROP INDEX statement to drop an index used to maintain the UNIQUE constraint.

  • A UNIQUE constraint and its associated index cannot be dropped if it is being used as a unique index on a replicated table.

  • Use ALTER TABLE...USE RANGE INDEX if your application performs range queries over a table's primary key.

  • Use ALTER TABLE...USE HASH INDEX if your application performs exact match lookups on a table's primary key.

  • An error is generated if a table has no primary key and either the USE HASH INDEX clause or the USE RANGE INDEX clause is specified.

  • If ON DELETE CASCADE is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE privilege without requiring explicit DELETE privilege on the child table.

  • To change the ON DELETE CASCADE triggered action, drop then redefine the foreign key constraint.

  • ON DELETE CASCADE is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.

  • The total number of rows reported by the DELETE statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE action.

  • For ON DELETE CASCADE, since different paths may lead from a parent table to a child table, the following rule is enforced:

  • Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths.

    • Specify ON DELETE CASCADE on all child tables on the "delete" path.

    • This rule does not apply to paths from one parent to different children or from different parents to the same child.

  • For ON DELETE CASCADE, a second rule is also enforced:

  • If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.

  • The ALTER TABLE ADD/DROP CONSTRAINT statement has the following restrictions:

    • When a foreign key is dropped, TimesTen also drops the index associated with the foreign key. Attempting to drop an index associated with a foreign key using the regular DROP INDEX statement results in an error.

    • Foreign keys cannot be added or dropped on views or temporary tables.

    • You cannot use ALTER TABLE to drop a primary key constraint. You would have to drop and recreate the table in order to drop the constraint.
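
As an illustration of the column rules above, a minimal sketch (the table mytab1 and the column names are hypothetical):

ALTER TABLE mytab1 ADD COLUMN note VARCHAR2 (64);
ALTER TABLE mytab1 ADD COLUMN created_on DATE NOT NULL DEFAULT SYSDATE;
ALTER TABLE mytab1 DROP COLUMN note;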

Examples: Add primary key constraint using global indexes in TimesTen Scaleout

These examples show various uses of the syntax for using global indexes with ALTER TABLE ADD PRIMARY KEY.

Create a table. Alter the table and add a primary key constraint. Specify the USING INDEX GLOBAL clause. Drop the table.
Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD CONSTRAINT pk PRIMARY KEY (c,b) USING INDEX GLOBAL;
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  PK: global unique range index on columns:
    C
    B
  1 index found.

1 index found on 1 table.
Command> DROP TABLE mytab1;

Create a table. Alter the table adding a primary key constraint. Specify the USING INDEX GLOBAL with the USE HASH INDEX PAGES clause. Drop the table.

Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD CONSTRAINT pk PRIMARY KEY (c,b) 
           USING INDEX GLOBAL USE HASH INDEX PAGES =200;
Command> INDEXES mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  PK: global unique hash index on columns:
    C
    B
  1 index found.

1 index found on 1 table.
Command> DROP TABLE mytab1;

Create a table. Alter the table adding a primary key constraint. Specify the USING INDEX (CreateIndexStmt) clause. The (CreateIndexStmt) clause is the TimesTen CREATE INDEX statement. See "CREATE INDEX" for information on this statement.

Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD CONSTRAINT pk PRIMARY KEY (c,b) 
           USING INDEX (CREATE GLOBAL UNIQUE HASH INDEX myglobalix ON mytab1 (c,b) PAGES =200);
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  MYGLOBALIX: global unique hash index on columns:
    C
    B
  1 index found.

1 index found on 1 table.
Command> DROP TABLE mytab1;

This example illustrates that you cannot use the USING INDEX GLOBAL|LOCAL clause with the USING INDEX (CreateIndexStmt) clause.

Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD CONSTRAINT pk PRIMARY KEY (c,b) 
           USING INDEX GLOBAL USE HASH INDEX PAGES = 200 
           USING INDEX (CREATE GLOBAL UNIQUE HASH INDEX myglobalix ON mytab1 (c,b) PAGES =200);
 1001: Syntax error in SQL statement  before or at: "USING", character position: 102
...USING INDEX GLOBAL USE HASH INDEX PAGES = 200 USING INDEX (CREATE G...
                                                 ^^^^^
The command failed.

Examples: Add unique constraint using global indexes in TimesTen Scaleout

These examples show various uses of the syntax for using global indexes with ALTER TABLE ADD UNIQUE CONSTRAINT.

Create a table. Alter the table adding a unique constraint. Drop the table. Create the table again adding a unique constraint and specifying the USING INDEX GLOBAL clause.

Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD UNIQUE (a);
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  TTUNIQUE_6E6: unique range index on columns:
    A
  1 index found.

1 index found on 1 table.
Command> DROP TABLE mytab1;
Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD UNIQUE (a) USING INDEX GLOBAL;
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  $GUA8C5B4ECE6D8: global unique range index on columns:
    A
  1 index found.

1 index found on 1 table.
Command> DROP TABLE mytab1;

Create a table. Alter the table adding a unique constraint and use the USING INDEX (CreateIndexStmt) clause to create a local unique index. Alter the table a second time adding another unique constraint. Use the USING INDEX (CreateIndexStmt) clause to create a global unique index.

Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           a TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> ALTER TABLE mytab1 ADD UNIQUE (b) USING INDEX (CREATE UNIQUE INDEX myuniqueidxB ON mytab1 (b));
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  MYUNIQUEIDXB: unique range index on columns:
    B
  1 index found.

1 index found on 1 table.
Command> ALTER TABLE mytab1 ADD UNIQUE (c) USING INDEX (CREATE GLOBAL UNIQUE INDEX myuniqueidxC ON mytab1 (c));
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  MYUNIQUEIDXB: unique range index on columns:
    B
  MYUNIQUEIDXC: global unique range index on columns:
    C
  2 indexes found.

2 indexes found on 1 table.
Command> DROP TABLE mytab1;

Additional examples for ALTER TABLE: TimesTen Scaleout

Table 6-6 shows the rules associated with altering tables. Supporting examples follow.

Table 6-6 ALTER TABLE rules

ALTER statement Comment
ALTER TABLE t1 ADD CONSTRAINT c1
 PRIMARY KEY (p);

The primary key constraint is added to the table. The distribution key is not changed.

CREATE TABLE t1 (c1 NUMBER, 
 c2 VARCHAR2 (10));

ALTER TABLE t1 
 DISTRIBUTE BY HASH (c1);

The operation succeeds if the table is empty. If the table is not empty, the operation fails because the distribution key cannot be changed on tables that are not empty.

ALTER TABLE t1 ADD CONSTRAINT c1
 FOREIGN KEY (f1) REFERENCES t2 (c2);

The operation succeeds. The distribution of the t1 table is not related to the c1 constraint.

CREATE TABLE t1...CONSTRAINT fk1...
 DISTRIBUTE BY REFERENCE(fk1);

ALTER TABLE t1 DROP CONSTRAINT fk1;

The operation fails. The foreign key is used to distribute the table.

The following examples illustrate the rules in Table 6-6:

Use ALTER TABLE to add a primary key constraint

This example creates the mytable table without a primary key or distribution clause. The table is distributed by hash on a hidden column. Then the ALTER TABLE statement is used to add a primary key constraint. The operation succeeds but the distribution key is not changed.

Command> CREATE TABLE mytable (col1 NUMBER NOT NULL, col2 VARCHAR2 (32));
Command> describe mytable;
 
Table SAMPLEUSER.MYTABLE:
  Columns:
    COL1                            NUMBER NOT NULL
    COL2                            VARCHAR2 (32) INLINE
  DISTRIBUTE BY HASH
 
1 table found.
(primary key columns are indicated with *)

Now alter the table to add the primary key. The operation succeeds. The distribution scheme and distribution key do not change.

Command> ALTER TABLE mytable ADD CONSTRAINT c1 PRIMARY KEY (col1);
Command> describe mytable;
 
Table SAMPLEUSER.MYTABLE:
  Columns:
   *COL1                            NUMBER NOT NULL
    COL2                            VARCHAR2 (32) INLINE
  DISTRIBUTE BY HASH
 
1 table found.
(primary key columns are indicated with *)

Add a primary key constraint on table distributed on unique column

This example creates the mytab table and distributes the data by hash on the id2 unique column. The example then alters the mytab table adding the primary key constraint on the id column. A ttIsql describe command shows the table remains distributed by hash on the id2 column.

Command> CREATE TABLE mytab (id TT_INTEGER NOT NULL, id2 TT_INTEGER UNIQUE, 
           id3 TT_INTEGER) distribute by hash (id2);
Command> ALTER TABLE mytab ADD CONSTRAINT c1 PRIMARY KEY (id);
Command> describe mytab;
 
Table SAMPLEUSER.MYTAB:
  Columns:
   *ID                              TT_INTEGER NOT NULL
    ID2                             TT_INTEGER UNIQUE
    ID3                             TT_INTEGER
  DISTRIBUTE BY HASH (ID2)
 
1 table found.
(primary key columns are indicated with *)

Use ALTER TABLE to change the distribution key

This example shows that you can use the ALTER TABLE statement to change the distribution key, but only if the table is empty.

Command> CREATE TABLE mytable2 (col1 NUMBER NOT NULL, col2 VARCHAR2 (32))
         DISTRIBUTE BY HASH (col1,col2);
Command> describe mytable2;
 
Table SAMPLEUSER.MYTABLE2:
  Columns:
    COL1                            NUMBER NOT NULL
    COL2                            VARCHAR2 (32) INLINE
  DISTRIBUTE BY HASH (COL1, COL2)
 
1 table found.
(primary key columns are indicated with *)

Use the ALTER TABLE statement to change the distribution key to col1. The operation succeeds because the table is empty.

Command> ALTER TABLE mytable2 DISTRIBUTE BY HASH (col1);
Command> describe mytable2;
 
Table SAMPLEUSER.MYTABLE2:
  Columns:
    COL1                            NUMBER NOT NULL
    COL2                            VARCHAR2 (32) INLINE
  DISTRIBUTE BY HASH (COL1)
 
1 table found.
(primary key columns are indicated with *)

Insert a row of data and attempt to change the distribution key back to col1, col2. The operation fails because the table is not empty.

Command> INSERT INTO mytable2 VALUES (10, 'test');
1 row inserted.
Command> commit;
Command> ALTER TABLE mytable2 DISTRIBUTE BY HASH (col1,col2);
 1069: Table not empty. Alter table distribution is only permitted on empty
tables.
The command failed.

Add a foreign key constraint that is not part of the distribution key

This example first describes the accounts and accounts2 tables. The example then alters the accounts2 table, adding a foreign key constraint. Since this constraint is not part of the accounts2 table distribution, the operation succeeds.

Command> describe accounts;
 
Table SAMPLEUSER.ACCOUNTS:
  Columns:
   *ACCOUNT_ID                      NUMBER (10) NOT NULL
    PHONE                           VARCHAR2 (15) INLINE NOT NULL
    ACCOUNT_TYPE                    CHAR (1) NOT NULL
    STATUS                          NUMBER (2) NOT NULL
    CURRENT_BALANCE                 NUMBER (10,2) NOT NULL
    PREV_BALANCE                    NUMBER (10,2) NOT NULL
    DATE_CREATED                    DATE NOT NULL
    CUST_ID                         NUMBER (10) NOT NULL
  DISTRIBUTE BY REFERENCE (FK_CUSTOMER)
 
1 table found.
(primary key columns are indicated with *)

Command> describe accounts2;
 
Table SAMPLEUSER.ACCOUNTS2:
  Columns:
   *ACCOUNTS2_ID                    NUMBER (10) NOT NULL
    ACCOUNT_ORIG_ID                 NUMBER (10) NOT NULL
    STATUS                          NUMBER (2) NOT NULL
  DISTRIBUTE BY HASH (ACCOUNTS2_ID)
 
1 table found.
(primary key columns are indicated with *)

Command> ALTER TABLE accounts2 ADD CONSTRAINT accounts2_fk FOREIGN KEY 
           (account_orig_id) REFERENCES accounts (account_id);

Use the ttIsql indexes command to show the accounts2_fk constraint is created successfully.

Command> indexes accounts2;
 
Indexes on table SAMPLEUSER.ACCOUNTS2:
  ACCOUNTS2: unique range index on columns:
    ACCOUNTS2_ID
  ACCOUNTS2_FK: non-unique range index on columns:
    ACCOUNT_ORIG_ID
    (foreign key index references table SAMPLEUSER.ACCOUNTS(ACCOUNT_ID))
  2 indexes found.
 
2 indexes found on 1 table.

Attempt to drop a foreign key constraint used as a distribution key

This example attempts to drop the fk_accounts constraint. Since the constraint is used as the distribution key, the operation fails.

Command> describe transactions;
 
Table SAMPLEUSER.TRANSACTIONS:
  Columns:
   *TRANSACTION_ID                  NUMBER (10) NOT NULL
   *ACCOUNT_ID                      NUMBER (10) NOT NULL
   *TRANSACTION_TS                  TIMESTAMP (6) NOT NULL
    DESCRIPTION                     VARCHAR2 (60) INLINE
    OPTYPE                          CHAR (1) NOT NULL
    AMOUNT                          NUMBER (6,2) NOT NULL
  DISTRIBUTE BY REFERENCE (FK_ACCOUNTS)
 
1 table found.
(primary key columns are indicated with *)

Command> ALTER TABLE transactions DROP CONSTRAINT fk_accounts;
 1072: Dropping a table's reference by distribution foreign key is not allowed.
The command failed.

SQL syntax for ALTER TABLE: TimesTen Classic

To add one column:

ALTER TABLE [Owner.]TableName 
  ADD [COLUMN] ColumnName ColumnDataType
    [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL]
  [COMPRESS (CompressColumns [,...])]

To add multiple columns:

ALTER TABLE [Owner.]TableName 
 ADD (ColumnName ColumnDataType 
      [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [,... ] )
  [COMPRESS (CompressColumns [,...])]

To add a NOT NULL column (note that the DEFAULT clause is required):

ALTER TABLE [Owner.]TableName
  ADD [COLUMN] ColumnName ColumnDataType
    NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE]
  [COMPRESS (CompressColumns [,...])]

To add multiple NOT NULL columns (note that the DEFAULT clause is required):

ALTER TABLE [Owner.]TableName
  ADD (ColumnName ColumnDataType
       NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [,...])
  [COMPRESS (CompressColumns [,...])]

The CompressColumns syntax is as follows:

{ColumnDefinition | (ColumnDefinition [,...])} BY DICTIONARY 
   [MAXVALUES = CompressMax]

To remove columns:

ALTER TABLE [Owner.]TableName 
  DROP {[COLUMN] ColumnName | (ColumnName [,... ] )}

Note:

If removing columns in a compressed column group, all columns in the compressed column group must be specified.

To add a primary key constraint using a range index:

ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName
  PRIMARY KEY (ColumnName [,... ])

To add a primary key constraint using a hash index:

ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName
  PRIMARY KEY (ColumnName [,... ])
  USE HASH INDEX PAGES = RowPages | CURRENT

To add a foreign key and optionally add ON DELETE CASCADE:

ALTER TABLE [Owner.]TableName 
ADD [CONSTRAINT ForeignKeyName] FOREIGN KEY
    (ColumnName [,...]) REFERENCES RefTableName
       [(ColumnName [,...])] [ON DELETE CASCADE]

To remove a foreign key:

ALTER TABLE [Owner.]TableName 
DROP CONSTRAINT ForeignKeyName

Note:

You cannot use ALTER TABLE to drop a primary key constraint. To drop the constraint, drop and recreate the table.

To resize a hash index:

ALTER TABLE [Owner.]TableName
SET PAGES = RowPages | CURRENT
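
For example, a minimal sketch that assumes a table named orders whose primary key already uses a hash index and that is expected to hold roughly 512,000 rows (512000/256 = 2000 pages); the table name and row estimate are illustrative:

ALTER TABLE orders SET PAGES = 2000;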

To change the primary key to use a hash index:

ALTER TABLE [Owner.]TableName
USE HASH INDEX PAGES = RowPages | CURRENT

To change the primary key to use a range index with the USE RANGE INDEX clause:

ALTER TABLE [Owner.]TableName
USE RANGE INDEX

To change the default value of a column:

ALTER TABLE [Owner.]TableName
MODIFY (ColumnName DEFAULT DefaultVal)

To add or drop a unique constraint on a column:

ALTER TABLE [Owner.]TableName
{ADD | DROP} UNIQUE (ColumnName)
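
For example, assuming the books table with a title column used in the examples later in this section, a unique constraint can be added and later dropped:

ALTER TABLE books ADD UNIQUE (title);
ALTER TABLE books DROP UNIQUE (title);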

To remove the default value of a column that is nullable, by changing it to NULL:

ALTER TABLE [Owner.]TableName
MODIFY (ColumnName DEFAULT NULL)

To add LRU aging:

ALTER TABLE [Owner.]TableName
ADD AGING LRU [ON | OFF]

To add time-based aging:

ALTER TABLE [Owner.]TableName
ADD AGING USE ColumnName LIFETIME num1
    {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
     [CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S] }]
    [ON | OFF]

To change the aging state:

ALTER TABLE [Owner.]TableName
SET AGING {ON | OFF}

To drop aging:

ALTER TABLE [Owner.]TableName
DROP AGING

To change the lifetime for time-based aging:

ALTER TABLE [Owner.]TableName
SET AGING LIFETIME num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

To change the cycle for time-based aging:

ALTER TABLE [Owner.]TableName
SET AGING CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Parameters for ALTER TABLE: TimesTen Classic

Parameter Description

[Owner.] TableName

Identifies the table to be altered.

UNIQUE

Specifies that in the column ColumnName each row must contain a unique value.

MODIFY

Specifies that an attribute of a given column is to be changed to a new value.

DEFAULT [DefaultVal |NULL]

Specifies that the column has a default value, DefaultVal. If NULL is specified, the default value of the column is dropped. If a column with a default value of SYSDATE is added, the value of the column for existing rows is the system date at the time the column was added. If the default value is one of the USER functions, the column value is the user value of the session that executed the ALTER TABLE statement. Currently, you cannot assign a default value for the ROWID data type.

Altering the default value of a column has no impact on existing rows.

Note: To add a NOT NULL column to a table that is part of a replication scheme, DDL_REPLICATION_LEVEL must be 3 or greater.

ColumnName

Name of the column participating in the ALTER TABLE statement. A new column cannot have the same name as an existing column or another new column. If you add a NOT NULL column, you must include the DEFAULT clause.

ColumnDataType

Type of the column to be added. Some types require additional parameters. See "Data Types" for the data types that can be specified.

NOT NULL [ENABLE]

If you add a column, you can specify NOT NULL. If you specify NOT NULL, then you must include the DEFAULT clause. Optionally, you can specify ENABLE after the NOT NULL clause. Because NOT NULL constraints are always enabled, you are not required to specify ENABLE.

INLINE|NOT INLINE

By default, variable-length columns whose declared column length is > 128 bytes are stored out of line. Variable-length columns whose declared column length is <= 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords.
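
The ADD COLUMN syntax shown earlier accepts the same keywords, so the storage behavior can also be overridden when a column is added. A minimal sketch, with an illustrative table and column name:

ALTER TABLE customer ADD (notes VARCHAR2 (100) NOT INLINE);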

COMPRESS (CompressColumns [,...])

Defines a compressed column group for a table that is enabled for compression. This can include one or more columns in the table.

If you define multiple columns for a compression group, you must specify the columns as INLINE. An out-of-line column cannot be in a multi-column compression group.

Each compressed column group is limited to a maximum of 16 columns.

See "Column-based compression of tables (TimesTen Classic)" for details on compression columns.

BY DICTIONARY

Defines a compression dictionary for each compressed column group.

MAXVALUES = CompressMax

CompressMax is the total number of distinct values in the table. It determines the size of the compressed column group pointer column (1, 2, or 4 bytes) and the maximum number of entries in the dictionary table.

For the dictionary table, NULL is counted as one unique value.

CompressMax takes an integer between 1 and 2^32-1.

If the MAXVALUES clause is omitted, the maximum size defaults to 2^32-1, which uses 4 bytes for the pointer column. An error is thrown if the value is greater than 2^32-1.

See "Column-based compression of tables (TimesTen Classic)" for details on maximum sizing for compression dictionaries.

ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ]) [USE HASH INDEX PAGES = RowPages | CURRENT]

Adds a primary key constraint to the table. Columns of the primary key must be defined as NOT NULL.

Specify ConstraintName as the name of the index used to enforce the primary key constraint. Specify ColumnName as the name(s) of the NOT NULL column(s) used for the primary key.

Specify the USE HASH INDEX clause to use a hash index for the primary key. If not specified, a range index is used for the primary key constraint.

If you specify CURRENT, the current number of rows in the table is used to calculate the page count value. If you specify RowPages, the number of pages is used. To determine the value for RowPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).

The value for RowPages must be a positive constant and must be greater than 0.

TimesTen recommends that you do not specify PAGES=CURRENT if there are no rows in your table.

If your estimate is too small, performance may be degraded. See "Column definition: TimesTen Classic" for more information on hash indexes.

Note: Before you use ADD CONSTRAINT to add a named primary key constraint, be aware that you cannot use ALTER TABLE to drop a primary key constraint. You would have to drop and recreate the table in order to drop the constraint.

CONSTRAINT

In the DROP clause, specifies that a foreign key is to be dropped. In the ADD clause, optionally specifies a user-supplied name for the foreign key being added.

ForeignKeyName

Name of the foreign key to be added or dropped. All foreign keys are assigned a default name by the system if the name was not specified by the user. Either the user-provided name or the system-assigned name can be specified in the DROP CONSTRAINT clause.

FOREIGN KEY

Specifies that a foreign key is to be added.

REFERENCES

Specifies that the foreign key references another table.

RefTableName

The name of the table that the foreign key references.

[ON DELETE CASCADE]

Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted.

USE HASH INDEX PAGES = RowPages | CURRENT

Changes primary key to use a hash index. If the primary key already uses a hash index, then this clause is equivalent to the SET PAGES clause.

USE RANGE INDEX

Changes primary key to use a range index. If the primary key already uses a range index, TimesTen ignores this clause.

SET PAGES = RowPages | CURRENT

Resizes the hash index to reflect the expected number of pages in the table. If you specify CURRENT, the current number of rows in the table is used to calculate the page count value. If you specify RowPages, the number of pages is used. To determine the value for RowPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).

The value for RowPages must be a positive constant and must be greater than 0.

TimesTen recommends that you do not specify PAGES=CURRENT if there are no rows in your table.

If your estimate is too small, performance may be degraded. See "Column definition: TimesTen Classic" for more information on hash indexes.

ADD AGING LRU [ON | OFF]

Adds least recently used (LRU) aging to an existing table that has no aging policy defined.

The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF) and the LRU aging attributes.

Set the aging state to either ON or OFF. ON indicates that the aging state is enabled and aging is done automatically. OFF indicates that the aging state is disabled and aging is not done automatically. In both cases, the aging policy is defined. The default is ON.

LRU attributes are defined by calling the ttAgingLRUConfig and/or the ttAgingTableLRUConfig built-in procedures. LRU attributes are not defined at the SQL level. See "ttAgingLRUConfig" and "ttAgingTableLRUConfig" in the Oracle TimesTen In-Memory Database Reference and "Implementing an aging policy in your tables" in the Oracle TimesTen In-Memory Database Operations Guide for more information.

ADD AGING USE ColumnName... [ON | OFF]

Adds time-based aging to an existing table that has no aging policy defined.

The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF) and the time-based aging attributes.

Set the aging state to either ON or OFF. ON indicates that the aging state is enabled and aging is done automatically. OFF indicates that the aging state is disabled and aging is not done automatically. In both cases, the aging policy is defined. The default is ON.

Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and CYCLE clauses.

Specify ColumnName as the name of the column used for time-based aging. Define the column as NOT NULL and of data type TIMESTAMP or DATE. The value of this column is subtracted from SYSDATE, truncated using the specified unit (minute, hour, day) and then compared to the LIFETIME value. If the result is greater than the LIFETIME value, then the row is a candidate for aging.

The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL).

You can define your aging column with a data type of TT_TIMESTAMP or TT_DATE. If you choose data type TT_DATE, then you must specify the LIFETIME unit as days.

For more information about time-based aging, see "Implementing an aging policy in your tables" in Oracle TimesTen In-Memory Database Operations Guide.

LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Specify the LIFETIME clause after the ADD AGING USE ColumnName clause if you are adding the time-based aging policy to an existing table. Specify the LIFETIME clause after the SET AGING clause to change the LIFETIME setting.

The LIFETIME clause specifies the minimum amount of time data is kept in cache.

Specify Num1 as a positive integer constant indicating the amount of time, expressed in seconds, minutes, hours, or days, that rows should be kept in cache. Rows that exceed the LIFETIME value are aged out (deleted from the table). If you define your aging column with data type TT_DATE, then you must specify DAYS as the LIFETIME unit.

The concept of time resolution is supported. If DAYS is specified as the time resolution, then all rows whose timestamp belongs to the same day are aged out at the same time. If HOURS is specified as the time resolution, then all rows with timestamp values within that hour are aged at the same time. A LIFETIME of 3 days is different from a LIFETIME of 72 hours (3*24) or a LIFETIME of 4320 minutes (3*24*60).

CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Specify the optional CYCLE clause after the LIFETIME clause if you are adding the time-based aging policy to an existing table.

CYCLE is a time-based aging attribute.

The CYCLE clause indicates how often the system should examine rows to see if data exceeds the specified LIFETIME value and should be aged out (deleted).

Specify Num2 as a positive integer constant.

If you do not specify the CYCLE clause, then the default value is 5 minutes. If you specify 0 for Num2, then the aging thread wakes up every second.

If the aging state is OFF, then aging is not done automatically and the CYCLE clause is ignored.

Specify the CYCLE clause after the SET AGING clause to change the CYCLE setting.

SET AGING {ON|OFF}

Changes the aging state. The aging policy must be previously defined. ON enables automatic aging. OFF disables automatic aging. To control aging with an external scheduler, disable aging and invoke the ttAgingScheduleNow built-in procedure.

DROP AGING

Drops the aging policy from the table. After you define an aging policy, you cannot alter it. Drop aging, then redefine.

SET AGING LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Use this clause to change the lifetime for time-based aging.

Num1 must be a positive integer constant.

If you defined your aging column with data type TT_DATE, then you must specify DAYS as the LIFETIME unit.

SET AGING CYCLE Num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Use this clause to change the cycle for time-based aging.

Num2 must be a positive integer constant.
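
For example, assuming the agingdemo4 table used later in this section (defined with time-based aging), the lifetime and cycle settings could be changed as follows; the new values are illustrative:

ALTER TABLE agingdemo4 SET AGING LIFETIME 5 DAYS;
ALTER TABLE agingdemo4 SET AGING CYCLE 1 HOUR;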

Description for ALTER TABLE: TimesTen Classic

  • The ALTER TABLE statement cannot be used to alter a temporary table.

  • The ALTER TABLE ADD [COLUMN] ColumnName statement adds one or more new columns to an existing table. When you add one or more columns, the new columns are added to the end of all existing rows of the table in one new partition.

  • The ALTER TABLE ADD or DROP COLUMN statement can be used to add or drop columns from replicated tables.

    Do not use ALTER TABLE to alter a replicated table that is part of a TWOSAFE BY REQUEST transaction.

  • Columns referenced by materialized views cannot be dropped.

  • You cannot use the ALTER TABLE statement to add a column, drop a column, or add a constraint for cache group tables.

  • Only one partition is added to the table per statement regardless of the number of columns added.

  • You can ALTER a table to add a NOT NULL column with a default value. The DEFAULT clause is required. Restrictions include:

    • You cannot use the column as a primary key column. Specifically, you cannot specify the column in the statement: ALTER TABLE ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,...]).

    • You cannot use the column for time-based aging. Specifically, you cannot specify the column in the statement ALTER TABLE ADD AGING USE ColumnName.

      Note:

      To add a NOT NULL column to a table that is part of a replication scheme, DDL_REPLICATION_LEVEL must be 3 or greater.
  • NULL is the initial value for all added columns, unless a default value is specified for the new column.

  • The total number of columns in the table cannot exceed 1000. In addition, the total number of partitions in a table cannot exceed 1000, one of which is used by TimesTen.

  • Use the ADD CONSTRAINT ... PRIMARY KEY clause to add a primary key constraint to a regular table or to a detail or materialized view table. Do not use this clause on a table that already has a primary key.

  • If you use the ADD CONSTRAINT... PRIMARY KEY clause to add a primary key constraint, and you do not specify the USE HASH INDEX clause, then a range index is used for the primary key constraint.

  • If a table is replicated and the replication agent is active, you cannot use the ADD CONSTRAINT ... PRIMARY KEY clause. Stop the replication agent first.

  • Do not specify the ADD CONSTRAINT ... PRIMARY KEY clause on a global temporary table.

  • Do not specify the ADD CONSTRAINT ... PRIMARY KEY clause on a cache group table because cache group tables defined with a primary key must be defined in the CREATE CACHE GROUP statement.

  • As the result of an ALTER TABLE ADD statement, an additional read occurs for each new partition during queries. Therefore, altered tables may have slightly degraded performance. The performance can only be restored by dropping and recreating the table, or by using the ttMigrate create -c -relaxedUpgrade command, and restoring the table using the ttRestore -r -relaxedUpgrade command. Dropping the added column does not recover the lost performance or decrease the number of partitions.

  • When you use the ALTER TABLE DROP statement to remove one or more columns from an existing table, dropped columns are removed from all current rows of the table. Subsequent SQL statements must not attempt to make any use of the dropped columns. You cannot drop columns that are in the table's primary key. You cannot drop columns that are in any of the table's foreign keys until you have dropped all foreign keys. You cannot drop columns that are indexed until all indexes on the column have been dropped. ALTER TABLE cannot be used to drop all of the columns of a table. Use DROP TABLE instead.

  • When a column is dropped from a table, all commands referencing that table need to be recompiled. An error may result at recompilation time if a dropped column was referenced. The application must re-prepare those commands, and rebuild any parameters and result columns. When a column is added to a table, the commands that contain a SELECT * statement are invalidated. Only these commands must be re-prepared. All other commands continue to work as expected.

  • When you drop a column, the column space is not freed.

  • When you add a UNIQUE constraint, there is overhead incurred (in terms of additional space and additional time). This is because an index is created to maintain the UNIQUE constraint. You cannot use the DROP INDEX statement to drop an index used to maintain the UNIQUE constraint.

  • A UNIQUE constraint and its associated index cannot be dropped if it is being used as a unique index on a replicated table.

  • Use ALTER TABLE...USE RANGE INDEX if your application performs range queries over a table's primary key.

  • Use ALTER TABLE...USE HASH INDEX if your application performs exact match lookups on a table's primary key.

  • An error is generated if a table has no primary key and either the USE HASH INDEX clause or the USE RANGE INDEX clause is specified.

  • Make sure to stop the replication agent before adding or dropping a foreign key on a replicated table.

  • If ON DELETE CASCADE is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE privilege without requiring explicit DELETE privilege on the child table.

  • To change the ON DELETE CASCADE triggered action, drop then redefine the foreign key constraint.

  • ON DELETE CASCADE is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.

  • The total number of rows reported by the DELETE statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE action.

  • For ON DELETE CASCADE, since different paths may lead from a parent table to a child table, the following rule is enforced:

    • Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths.

    • Specify ON DELETE CASCADE on all child tables on the "delete" path.

    • This rule does not apply to paths from one parent to different children or from different parents to the same child.

  • For ON DELETE CASCADE, a second rule is also enforced: if a table is reached by a "delete" path, then all its children are also reached by a "delete" path.

  • For ON DELETE CASCADE with replication, the following restrictions apply:

    • The foreign keys specified with ON DELETE CASCADE must match between the master and subscriber for replicated tables. Checking is done at runtime. If there is an error, the receiver thread stops working.

    • All tables in the delete cascade tree have to be replicated if any table in the tree is replicated. This restriction is checked when the replication scheme is created or when a foreign key with ON DELETE CASCADE is added to one of the replication tables. If an error is found, the operation is aborted. You may be required to drop the replication scheme first before trying to change the foreign key constraint.

  • The ALTER TABLE ADD/DROP CONSTRAINT statement has the following restrictions:

    • When a foreign key is dropped, TimesTen also drops the index associated with the foreign key. Attempting to drop an index associated with a foreign key using the regular DROP INDEX statement results in an error.

    • Foreign keys cannot be added or dropped on tables in a cache group.

    • Foreign keys cannot be added or dropped on views or temporary tables.

    • You cannot use ALTER TABLE to drop a primary key constraint. You would have to drop and recreate the table in order to drop the constraint.

  • After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.

  • The aging policy must be defined to change the aging state.

  • The following rules determine if a row is accessed or referenced for LRU aging:

    • Any rows used to build the result set of a SELECT statement.

    • Any rows used to build the result set of an INSERT ... SELECT statement.

    • Any rows that are about to be updated or deleted.

  • Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.

  • Call the ttAgingScheduleNow procedure to schedule the aging process right away, regardless of whether the aging state is ON or OFF.

  • For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL column.

  • Aging restrictions:

    • You cannot drop the column that is used for time-based aging.

    • Tables that are related by foreign keys must have the same aging policy.

    • For LRU aging, if a child row is not a candidate for aging, neither this child row nor its parent row are deleted. ON DELETE CASCADE settings are ignored.

    • For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE (whether specified or not) is ignored.

  • Restrictions for column-based compression of tables:

    • You can add compressed column groups with the ALTER TABLE statement only if the table was enabled for compression at table creation. You can add uncompressed columns to any table, including tables enabled for compression. Refer to "Column-based compression of tables (TimesTen Classic)" for more details on adding compressed column groups to a table.

    • You cannot modify columns of a compressed column group.

    • You can drop all columns within a compressed column group with the ALTER TABLE command; when removing columns in a compressed column group, all columns in the compressed column group must be specified for removal.

    • You cannot use ALTER TABLE to modify an existing uncompressed column to make it compressed. For example:

      Command> create table mytab (a varchar2 (30), b int, c int) compress ((a,b)
                 by dictionary);
      Command> alter table mytab add (d int) compress (c by dictionary);
      2246: Cannot change compression clause for already defined column C 
      The command failed.

Understanding partitions when using ALTER TABLE in TimesTen

When you create a table, an initial partition is created. If you ALTER the table to add columns, secondary partitions are created; one secondary partition is created for each ALTER TABLE statement. You cannot create a primary key constraint on a column in a secondary partition, nor can you use such a column for time-based aging.

You can use ttMigrate -r -relaxedUpgrade to condense multiple partitions. This means the initial partition plus one or more secondary partitions are condensed into a single partition called the initial partition. Once you condense the partitions, you can then ALTER the table and add a primary key constraint on the column or use the column for time-based aging. This is because the columns are no longer in secondary partitions but are now in the initial partition.

If your database is involved in replication and you want to condense multiple partitions, you must use the StoreAttribute TABLE DEFINITION CHECKING RELAXED (of the CREATE REPLICATION statement). Run ttMigrate -r -relaxedUpgrade on both the master and subscriber or on either the master or subscriber by using -duplicate.

Use ttSchema to view partition numbers for columns. ttSchema displays secondary partition number 1 as partition 1, secondary partition number 2 as partition 2 and so on.

As an example, create a table MyTab with 2 columns. Then ALTER the table adding 2 columns (Col3 and Col4) with the NOT NULL DEFAULT clause.

Command> CREATE TABLE MyTab (Col1 NUMBER, Col2 VARCHAR2 (30));
Command> ALTER TABLE MyTab ADD (Col3 NUMBER NOT NULL DEFAULT 10, Col4 TIMESTAMP
            NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');

Use ttSchema to verify Col3 and Col4 are in secondary partition 1.

ttschema -DSN sampledb_1122
-- Database is in Oracle type mode
create table TESTUSER.MYTAB (
        COL1 NUMBER,
        COL2 VARCHAR2(30 BYTE) INLINE,
        COL3 NUMBER NOT NULL DEFAULT 10,
        COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');
-- column COL3 partition 1
-- column COL4 partition 1

Attempt to add a primary key constraint on Col3 and time-based aging on Col4. You see errors because you can neither add a primary key constraint nor add time-based aging to a column that is not in the initial partition.

Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3);
 2419: All columns in a primary key constraint must be in the initial partition;
column COL3 was added by ALTER TABLE
The command failed.

Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS;
 3023: Aging column must be in the initial partition; column COL4 was added by
ALTER TABLE
The command failed.

Use ttMigrate with the -relaxedUpgrade option to condense the partitions. Then use ttSchema to verify the partitions are condensed and there are no columns in secondary partition 1.

ttMigrate -c dsn=sampledb_1122 test.migrate
 
Saving user PUBLIC
User successfully saved.
 
Saving table TESTUSER.MYTAB
  Saving rows...
  0/0 rows saved.
Table successfully saved.

ttDestroy sampledb_1122

ttMigrate -r -relaxedUpgrade
 dsn=sampledb_1122 test.migrate
 
Restoring table TESTUSER.MYTAB
  Restoring rows...
  0/0 rows restored.
Table successfully restored.

ttSchema DSN=sampledb_1122
-- Database is in Oracle type mode
create table TESTUSER.MYTAB (
        COL1 NUMBER,
        COL2 VARCHAR2(30 BYTE) INLINE,
        COL3 NUMBER NOT NULL DEFAULT 10,
        COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');

Now add a primary key constraint on Col3 and time-based aging on Col4. The results are successful because Col3 and Col4 are in the initial partition as a result of ttMigrate. Use ttSchema to verify results.

Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3);
Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS;

ttschema sampledb_1122
-- Database is in Oracle type mode
create table TESTUSER.MYTAB (
        COL1 NUMBER,
        COL2 VARCHAR2(30 BYTE) INLINE,
        COL3 NUMBER NOT NULL DEFAULT 10,
        COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00')
    AGING USE COL4 LIFETIME 3 days CYCLE 5 minutes ON;
 
    alter table TESTUSER.MYTAB add constraint PRIKEY primary key (COL3);

Examples for ALTER TABLE: TimesTen Classic

Add returnrate column to parts table.

ALTER TABLE parts ADD COLUMN returnrate DOUBLE;

Add numassign and prevdept columns to contractor table.

ALTER TABLE contractor
  ADD ( numassign INTEGER, prevdept CHAR(30) );

Remove addr1 and addr2 columns from employee table.

ALTER TABLE employee DROP ( addr1, addr2 );

Drop the UNIQUE constraint on the title column of the books table.

ALTER TABLE books DROP UNIQUE (title);

Add the x1 column to the t1 table with a default value of 5:

ALTER TABLE t1 ADD (x1 INT DEFAULT 5);

Change the default value of column x1 to 2:

ALTER TABLE t1 MODIFY (x1 DEFAULT 2);

Alter table primarykeytest to add the primary key constraint c1. Use the ttIsql INDEXES command to show that the primary key constraint c1 is created and a range index is used:

Command> CREATE TABLE primarykeytest (col1 TT_INTEGER NOT NULL);
Command> ALTER TABLE primarykeytest ADD CONSTRAINT c1 
         PRIMARY KEY (col1);
Command> INDEXES primarykeytest;

Indexes on table SAMPLEUSER.PRIMARYKEYTEST:
  C1: unique range index on columns:
    COL1
  1 index found.

1 index found on 1 table.

Alter table prikeyhash to add the primary key constraint c2 using a hash index. Use the ttIsql INDEXES command to show that the primary key constraint c2 is created and a hash index is used:

Command> CREATE TABLE prikeyhash (col1 NUMBER (3,2) NOT NULL);
Command> ALTER TABLE prikeyhash ADD CONSTRAINT c2
           PRIMARY KEY (col1) USE HASH INDEX PAGES = 20;
Command> INDEXES prikeyhash;

Indexes on table SAMPLEUSER.PRIKEYHASH:
  C2: unique hash index on columns:
    COL1
  1 index found.

1 table found.

Attempt to add a primary key constraint on a table already defined with a primary key. You see an error:

Command> CREATE TABLE oneprikey (col1 VARCHAR2 (30) NOT NULL, 
         col2 TT_BIGINT NOT NULL, col3 CHAR (15) NOT NULL, 
         PRIMARY KEY (col1,col2));
Command> ALTER TABLE oneprikey ADD CONSTRAINT c2 
         PRIMARY KEY (col1,col2);
 2235: Table can have only one primary key
The command failed.

Attempt to add a primary key constraint on a column that is not defined as NOT NULL. You see an error:

Command> CREATE TABLE prikeynull (col1 CHAR (30));
Command> ALTER TABLE prikeynull ADD CONSTRAINT c3 
         PRIMARY KEY (col1);
 2236: Nullable column cannot be part of a primary key
The command failed.

This example illustrates the use of range and hash indexes. It creates the pkey table with col1 as the primary key. A range index is created by default. The table is then altered to change the index on col1 to a hash index. The table is altered again to change the index back to a range index.

Command> CREATE TABLE pkey (col1 TT_INTEGER PRIMARY KEY, col2 VARCHAR2 (20));
Command> INDEXES pkey;
Indexes on table SAMPLEUSER.PKEY:
   PKEY: unique range index on columns:
    COL1
 1 index found.
1 index found on 1 table.

Alter the pkey table to use a hash index:

Command> ALTER TABLE pkey USE HASH INDEX PAGES = CURRENT;
Command> INDEXES pkey;
Indexes on table SAMPLEUSER.PKEY:
  PKEY: unique hash index on columns:
    COL1
  1 index found.
1 table found.

Alter the pkey table to use a range index with the USE RANGE INDEX clause:

Command> ALTER TABLE pkey USE RANGE INDEX;
Command> INDEXES pkey;
Indexes on table SAMPLEUSER.PKEY:
  PKEY: unique range index on columns:
    COL1
  1 index found.
1 table found.

This example generates an error when attempting to alter a table to define either a range or hash index on a column without a primary key.

Command> CREATE TABLE myindex (Col1 CHAR (20));
Command> ALTER TABLE myindex USE RANGE INDEX;
 2810: The table has no primary key so cannot change its index type
The command failed.
Command> ALTER TABLE myindex USE HASH INDEX PAGES = CURRENT;
 2810: The table has no primary key so cannot change its index type
The command failed.

These examples show how time resolution works with aging. In this example, lifetime is three days.

  • If (SYSDATE - ColumnValue) <= 3, do not age out the row.

  • If (SYSDATE - ColumnValue) > 3, then the row is a candidate for aging.

  • If (SYSDATE - ColumnValue) = 3 days, 22 hours, then the row is not aged out because the lifetime was specified in days. The row would be aged out if the lifetime had been specified as 72 hours.

This example alters a table by adding LRU aging. The table has no previous aging policy. The aging state is ON by default.

ALTER TABLE agingdemo3 ADD AGING LRU;
Command> DESCRIBE agingdemo3;
Table USER.AGINGDEMO3:
  Columns:
   *AGINGID                         NUMBER NOT NULL
    NAME                            VARCHAR2 (20) INLINE
  Aging lru on
1 table found.
(primary key columns are indicated with *)

This example alters a table by adding time-based aging. The table has no previous aging policy. The agingcolumn column is used for aging. LIFETIME is 2 days. CYCLE is 30 minutes.

ALTER TABLE agingdemo4
       ADD AGING USE agingcolumn LIFETIME 2 DAYS CYCLE 30 MINUTES;
Command> DESCRIBE agingdemo4;
Table USER.AGINGDEMO4:
  Columns:
   *AGINGID                         NUMBER NOT NULL
    NAME                            VARCHAR2 (20) INLINE
    AGINGCOLUMN                     TIMESTAMP (6) NOT NULL
  Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on

This example illustrates that after you create an aging policy, you cannot change it. You must drop aging and redefine.

CREATE TABLE agingdemo5
       (agingid NUMBER NOT NULL PRIMARY KEY
       ,name VARCHAR2 (20)
       ,agingcolumn TIMESTAMP NOT NULL
       )
       AGING USE agingcolumn LIFETIME 3 DAYS OFF;
ALTER TABLE agingdemo5
      ADD AGING LRU;
 2980: Cannot add aging policy to a table with an existing aging policy. Have to
 drop the old aging first
The command failed.

Drop aging on the table and redefine with LRU aging.

ALTER TABLE agingdemo5
      DROP AGING;
ALTER TABLE agingdemo5
       ADD AGING LRU;
Command> DESCRIBE agingdemo5;
Table USER.AGINGDEMO5:
  Columns:
   *AGINGID                         NUMBER NOT NULL
    NAME                            VARCHAR2 (20) INLINE
    AGINGCOLUMN                     TIMESTAMP (6) NOT NULL
  Aging lru on
1 table found.
(primary key columns are indicated with *)

This example alters a table by setting the aging state to OFF. The table has been defined with a time-based aging policy. If you set the aging state to OFF, aging is not done automatically. This is useful if you want to use an external scheduler to control the aging process. Set the aging state to OFF and then call the ttAgingScheduleNow procedure to start the aging process.

Command> DESCRIBE agingdemo4;
Table USER.AGINGDEMO4:
  Columns:
   *AGINGID                         NUMBER NOT NULL
    NAME                            VARCHAR2 (20) INLINE
    AGINGCOLUMN                     TIMESTAMP (6) NOT NULL
  Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on

ALTER TABLE AgingDemo4
       SET AGING OFF;

Note that when you describe agingdemo4, the aging policy remains defined but the aging state is now set to OFF.

Command> DESCRIBE agingdemo4;
Table USER.AGINGDEMO4:
  Columns:
   *AGINGID                         NUMBER NOT NULL
    NAME                            VARCHAR2 (20) INLINE
    AGINGCOLUMN                     TIMESTAMP (6) NOT NULL
  Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes off
1 table found.
(primary key columns are indicated with *)

Call ttAgingScheduleNow to invoke aging with an external scheduler:

Command> CALL ttAgingScheduleNow ('agingdemo4');

Attempt to alter a table adding the aging column and then use that column for time-based aging. An error is generated.

Command> DESCRIBE x;
Table USER1.X:
  Columns:
   *ID                              TT_INTEGER NOT NULL
1 table found.
(primary key columns are indicated with *)
Command> ALTER TABLE x ADD COLUMN t TIMESTAMP;
Command> ALTER TABLE x ADD AGING USE t LIFETIME 2 DAYS;
 2993: Aging column cannot be nullable
The command failed.

Attempt to alter the LIFETIME clause for a table defined with time-based aging. The aging column is defined with data type TT_DATE. An error is generated because the LIFETIME unit is not expressed in DAYS.

Command> CREATE TABLE aging1 (col1 TT_DATE NOT NULL) AGING USE 
         col1 LIFETIME 2 DAYS;
Command> ALTER TABLE aging1 SET AGING LIFETIME 2 HOURS;
 2977: Only DAY lifetime unit is allowed with a TT_DATE column
The command failed.

Alter the employees table to add a new compressed column, state, which contains the full name of the state. Note that the employees table already has a compressed column group consisting of job_id and manager_id.

Command> ALTER TABLE employees 
         ADD COLUMN state VARCHAR2(20) 
         COMPRESS (state BY DICTIONARY);

Command> DESCRIBE employees; 
Table MYSCHEMA.EMPLOYEES:
  Columns:
   *EMPLOYEE_ID                     NUMBER (6) NOT NULL
    FIRST_NAME                      VARCHAR2 (20) INLINE
    LAST_NAME                       VARCHAR2 (25) INLINE NOT NULL
    EMAIL                           VARCHAR2 (25) INLINE NOT NULL
    PHONE_NUMBER                    VARCHAR2 (20) INLINE
    HIRE_DATE                       DATE NOT NULL
    JOB_ID                          VARCHAR2 (10) INLINE NOT NULL
    SALARY                          NUMBER (8,2)
    COMMISSION_PCT                  NUMBER (2,2)
    MANAGER_ID                      NUMBER (6)
    DEPARTMENT_ID                   NUMBER (4)
    STATE                           VARCHAR2 (20) INLINE
  COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY,
             STATE BY DICTIONARY )
 
1 table found.
(primary key columns are indicated with *)

The following example drops the compressed column state from the employees table:

Command> ALTER TABLE employees
 DROP state;
Command> DESCRIBE employees; 
Table MYSCHEMA.EMPLOYEES:
  Columns:
   *EMPLOYEE_ID                     NUMBER (6) NOT NULL
    FIRST_NAME                      VARCHAR2 (20) INLINE
    LAST_NAME                       VARCHAR2 (25) INLINE NOT NULL
    EMAIL                           VARCHAR2 (25) INLINE NOT NULL
    PHONE_NUMBER                    VARCHAR2 (20) INLINE
    HIRE_DATE                       DATE NOT NULL
    JOB_ID                          VARCHAR2 (10) INLINE NOT NULL
    SALARY                          NUMBER (8,2)
    COMMISSION_PCT                  NUMBER (2,2)
    MANAGER_ID                      NUMBER (6)
    DEPARTMENT_ID                   NUMBER (4)
 COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY )
 
1 table found.
(primary key columns are indicated with *)

ALTER USER

The ALTER USER statement enables you to change a user's password. It also enables you to change the profile for the user, to lock or unlock the user's account, and to expire the user's password. A user with the ADMIN privilege can perform these operations.

This statement also enables you to change a user from internal to external or from external to internal.

Required privilege

No privilege is required to change the user's own password.

ADMIN privilege is required for all other operations.

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

This is the syntax for ALTER USER...IDENTIFIED BY. Ensure that you specify at least one of these clauses: IDENTIFIED BY, PROFILE, ACCOUNT, or PASSWORD EXPIRE.

ALTER USER user [IDENTIFIED BY {password | "password"}]
  [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}] [PASSWORD EXPIRE]

This is the syntax for ALTER USER...IDENTIFIED EXTERNALLY. Ensure that you specify at least one of these clauses: IDENTIFIED EXTERNALLY, PROFILE, or ACCOUNT.

ALTER USER user [IDENTIFIED EXTERNALLY]
  [PROFILE profile] [ACCOUNT {LOCK|UNLOCK}]

Parameters

Parameter Description

user

Name of the user to alter.

IDENTIFIED BY password|"password"

Specifies an internal user and the password for the internal user.

The password you can specify is dependent on the profile assigned to the user. Specifically, the value of the PASSWORD_COMPLEXITY_CHECKER password parameter determines the complexity of the password. If the value is TT_VERIFY_FUNCTION, TT_STRONG_VERIFY_FUNCTION, or TT_STIG_VERIFY_FUNCTION, the password must meet specific password verification requirements. For example, if the value is TT_VERIFY_FUNCTION, the password cannot contain the name of the database. See "About password complexity checker verification" for details.

IDENTIFIED EXTERNALLY

Specifies the user is an external user.

PROFILE profile

Use the PROFILE clause to specify the name of the profile (designated by profile) that you want to assign to the user. The profile sets the limits for the password parameters for the user. See "CREATE PROFILE" for information on these password parameters. You can specify a PROFILE clause for external users, but the password parameters have no effect for these users.

ACCOUNT [LOCK|UNLOCK]

Specify ACCOUNT LOCK to lock the user's account and disable connections to the database. Specify ACCOUNT UNLOCK to unlock the user's account and enable connections to the database. The default is ACCOUNT UNLOCK.

PASSWORD EXPIRE

Specify PASSWORD EXPIRE if you want the user's password to expire. This setting forces a user with ADMIN privileges to change the password before the user can connect to the database. This clause is not valid for an externally identified user (as denoted by the IDENTIFIED EXTERNALLY clause).

Description

  • Database users can be internal or external.

    • Internal users are defined for a TimesTen database.

    • External users are defined by the operating system. External users cannot be assigned a TimesTen password.

  • Password requirements:
    • Cannot exceed 30 characters.
    • Is case-sensitive.
    • Must start with a letter. A password cannot start with a digit or a special character unless the password is enclosed in double quotation marks.
    • If a special character is used, the password must be contained in double quotation marks. The exceptions are the # and the @ special characters. A password that contains the # or the @ special character does not need to be enclosed in double quotation marks.
    • Cannot contain a semi-colon (;) or a double quotation mark (").
  • Use the PROFILE clause to change the profile for a user. See "CREATE PROFILE" for details.

  • Use the ACCOUNT LOCK or ACCOUNT UNLOCK to change the lock settings for the user account.

  • Use the PASSWORD EXPIRE clause to expire the user's password and force a password change before the user can connect to the database.

  • You can alter a user over a client/server connection if the connection is encrypted with TLS. See "Transport Layer Security for TimesTen Client/Server" in the Oracle TimesTen In-Memory Database Security Guide for details.

  • When replication is configured, this statement is replicated.

Examples

Illustrate password verification when altering user

This example creates the myprofile_strongpw profile and specifies a value of TT_STRONG_VERIFY_FUNCTION for the PASSWORD_COMPLEXITY_CHECKER password parameter. The example then creates the sampleuser_pwchange user and assigns the myprofile_strongpw profile to this user. The specified password meets the requirements of the TT_STRONG_VERIFY_FUNCTION function and the user is created. See "TT_STRONG_VERIFY_FUNCTION" for more information on the TT_STRONG_VERIFY_FUNCTION function.

Command> CREATE PROFILE myprofile_strongpw LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STRONG_VERIFY_FUNCTION;

Profile created.

Command> CREATE USER sampleuser_pwchange 
           IDENTIFIED BY "5&AbbN*60" PROFILE myprofile_strongpw;

User created.

Now alter the myprofile_strongpw profile, changing the value of the PASSWORD_COMPLEXITY_CHECKER password parameter to TT_STIG_VERIFY_FUNCTION. Use the ALTER USER statement to expire the password for the sampleuser_pwchange user. Attempt to connect to the database as the sampleuser_pwchange user. The connection fails, as the password is expired.

Command> ALTER PROFILE myprofile_strongpw LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STIG_VERIFY_FUNCTION;

Profile altered.

Command> ALTER USER sampleuser_pwchange PASSWORD EXPIRE;

User altered.

Command> GRANT CONNECT TO sampleuser_pwchange;
Command> connect adding "UID=sampleuser_pwchange;PWD=5&AbbN*60" as sampleuser;
15180: the password has expired
The command failed.

Use the ALTER USER statement to change the password for the sampleuser_pwchange user. The ALTER USER statement succeeds, as the password meets the requirements of the TT_STIG_VERIFY_FUNCTION function. Attempt to connect to the database as the sampleuser_pwchange user. The connection is successful. See "TT_STIG_VERIFY_FUNCTION" for more information on the TT_STIG_VERIFY_FUNCTION function.

access1: Command> ALTER USER sampleuser_pwchange 
           IDENTIFIED BY "bd@<!BCvvKASn67";

User altered.

Command> connect adding "UID=sampleuser_pwchange;PWD=bd@<!BCvvKASn67" 
           as sampleuser;
Connection successful: DSN=access1;UID=sampleuser_pwchange;
DataStore=/scratch/sampleuser/mydatabase1;DatabaseCharacterSet=AL32UTF8;
ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Change the user's profile

This example creates the user1 user and assigns the user1 user the profile1 profile. The example then uses the ALTER USER statement to change the user1 user's profile to profile2.

Command> CREATE USER user1 IDENTIFIED BY user1 PROFILE profile1;
 
User created.
 
Command> ALTER USER user1 PROFILE profile2;
 
User altered.

Query the dba_users system view to verify the user1 profile has been changed to profile2.

Command> SELECT profile FROM dba_users WHERE username = 'USER1';
< PROFILE2 >
1 row found.

Lock and unlock a user's account

This example creates the user2 user. It then uses the ALTER USER statement to lock and then unlock the user2 user's account.

Command> CREATE USER user2 IDENTIFIED BY user2 PROFILE profile1;
 
User created.
 
Command> ALTER USER user2 ACCOUNT LOCK;
 
User altered.
 

Grant the CONNECT privilege to user2.

Command> GRANT CONNECT TO user2;

Attempt to connect to the database as user2. The user2 account is locked so the connection fails.

Command> connect adding "UID=user2;PWD=user2" as user2;
15179: the account is locked
The command failed.

As the instance administrator, reconnect to the database and use the ALTER USER statement to unlock the user2 account.

none: Command> use database1
database1: Command> ALTER USER user2 ACCOUNT UNLOCK;
 
User altered.
 

Attempt to connect to the database as the user2 user. The connection succeeds.

database1: Command> connect adding "UID=user2;PWD=user2" as user2;
Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Expire a user's password

This example uses the ALTER USER statement to change the user2 user's account to expire the password. A user with ADMIN privilege must change the user2 password before user2 can connect to the database.

Command> ALTER USER user2 PASSWORD EXPIRE;
 
User altered.
 

Attempt to connect to the database as user2. The user2 password must be changed before the user2 user can connect to the database.

Command>  connect adding "UID=user2;PWD=user2" as user2;
15180: the password has expired
The command failed.

As the instance administrator, reconnect to the database and use the ALTER USER statement to change the user2 password.

none: Command> use database1
database1: Command>  ALTER USER user2 IDENTIFIED BY newuser2password;
 
User altered.
 

Attempt to connect to the database as the user2 user. The connection succeeds.

database1: Command> connect adding "UID=user2;PWD=newuser2password" as user2;
Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Change a user from external to internal and internal to external

This example uses the ALTER USER statement to change the user2 internal user to an external user and then back to an internal user.

Command> ALTER USER user2 IDENTIFIED EXTERNALLY;
 
User altered.

Use the ALTER USER statement to change the user2 external user back to an internal user.

Command> ALTER USER user2 IDENTIFIED BY user2_password_change;
 
User altered.

CALL

Use the CALL statement to execute a TimesTen built-in procedure or to execute a PL/SQL procedure or function that is standalone or part of a package from within SQL.

Required privilege

The privileges required for executing each TimesTen built-in procedure are listed in the description of each procedure in the "Built-In Procedures" section in the Oracle TimesTen In-Memory Database Reference.

No privileges are required for an owner calling its own PL/SQL procedure or function that is standalone or part of a package using the CALL statement. For all other users, the EXECUTE privilege on the procedure or function or on the package in which it is defined is required.

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

To call a TimesTen built-in procedure:

CALL TimesTenBuiltIn [( arguments )]

When calling PL/SQL procedures or functions that are standalone or part of a package, you can either call these by name or as the result of an expression.

To call a PL/SQL procedure:

CALL [Owner.][Package.]ProcedureName [( arguments )]

To call a PL/SQL function and return the result into a parameter:

CALL [Owner.][Package.]FunctionName [( arguments )] INTO :return_param

Note:

A user's own PL/SQL procedure or function takes precedence over a TimesTen built-in procedure with the same name.

Parameters

Parameter Description

TimesTenBuiltIn

Name of the TimesTen built-in procedure. For a full list of TimesTen built-in procedures, see "Built-In Procedures" in the Oracle TimesTen In-Memory Database Reference.

[Owner.]ProcedureName

Name of the PL/SQL procedure. You can optionally specify the owner of the procedure.

[Owner.]FunctionName

Name of the PL/SQL function. You can optionally specify the owner of the function.

arguments

Specify 0 or more arguments for the PL/SQL procedure or function.

INTO

If the routine is a function, the INTO clause is required.

return_param

Specify the host variable that stores the return value of the function.

Description

Detailed information on how to execute PL/SQL procedures or functions with the CALL statement in TimesTen is provided in "Executing procedures and functions" in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide, "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database C Developer's Guide, or "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database Java Developer's Guide.

Examples

The following is the definition of the mytest function:

create or replace function mytest return number is
begin
  return 1;
end;
/

Perform the following to execute the mytest function in a CALL statement:

Command> variable n number;
Command> call mytest() into :n;
Command> print n;
N                    : 1

The following example creates a function that returns the salary of the employee whose employee ID is specified as input, then calls the function and displays the result that was returned.

Command> CREATE OR REPLACE FUNCTION get_sal
          (p_id employees.employee_id%TYPE) RETURN NUMBER IS
            v_sal employees.salary%TYPE := 0;
         BEGIN
           SELECT salary INTO v_sal FROM employees
             WHERE employee_id = p_id;
           RETURN v_sal;
         END get_sal;
         /
 
Function created.
 
Command> variable n number;
Command> call get_sal(100) into :n;
Command> print n;
N                    : 24000
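
The CALL statement also executes TimesTen built-in procedures directly. As an illustrative sketch (the procedure chosen here is just one example; see "Built-In Procedures" in the Oracle TimesTen In-Memory Database Reference for the full list), call the ttVersion built-in procedure from ttIsql to display release information; the rows returned depend on the release:

Command> CALL ttVersion;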

COMMIT

The COMMIT statement ends the current transaction and makes permanent all changes performed in the transaction.

Required privilege

None

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

COMMIT [WORK]

Parameters

The COMMIT statement enables the following optional keyword:

Parameter Description

[WORK]

Optional clause supported for compliance with the SQL standard. COMMIT and COMMIT WORK are equivalent.

Description

  • Until you commit a transaction:

    • You can see any changes you have made during the transaction but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.

    • You can roll back (undo) changes made during the transaction with the ROLLBACK statement.

  • This statement releases transaction locks.

  • For passthrough, the Oracle Database transaction will also be committed.

  • A commit closes all open cursors.

Examples

Insert a row into regions table of the HR schema and commit transaction. First set autocommit to 0:

Command> SET AUTOCOMMIT 0;
Command> INSERT INTO regions VALUES (5,'Australia');
1 row inserted.
Command> COMMIT;
Command> SELECT * FROM regions;
< 1, Europe >
< 2, Americas >
< 3, Asia >
< 4, Middle East and Africa >
< 5, Australia >
5 rows found.

See also

ROLLBACK

CREATE ACTIVE STANDBY PAIR

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

This statement creates an active standby pair. It includes an active master database, a standby master database, and may also include one or more read-only subscribers. The active master database replicates updates to the standby master database, which propagates the updates to the subscribers.

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.

SQL syntax

CREATE ACTIVE STANDBY PAIR
  FullStoreName, FullStoreName [ReturnServiceAttribute]
    [SUBSCRIBER FullStoreName [,...]]
    [STORE FullStoreName [StoreAttribute [...]]]
    [NetworkOperation [...] ]
    [{ INCLUDE | EXCLUDE }{TABLE [[Owner.]TableName [,...]]|
         CACHE GROUP [[Owner.]CacheGroupName [,...]]|
         SEQUENCE [[Owner.]SequenceName [,...]]} [,...]]

Syntax for ReturnServiceAttribute:

{ RETURN RECEIPT [BY REQUEST] |
  RETURN TWOSAFE [BY REQUEST] |
  NO RETURN }

Syntax for StoreAttribute:

  DISABLE RETURN {SUBSCRIBER | ALL} NumFailures 
  RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED 
  DURABLE COMMIT {ON | OFF}
  RESUME RETURN Milliseconds 
  LOCAL COMMIT ACTION {NO ACTION | COMMIT} 
  RETURN WAIT TIME Seconds 
  COMPRESS TRAFFIC {ON | OFF}
  PORT PortNumber 
  TIMEOUT Seconds 
  FAILTHRESHOLD Value 
  TABLE DEFINITION CHECKING {RELAXED|EXACT}

Syntax for NetworkOperation:

ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost }
      PRIORITY Priority } [...]

Parameters

Parameter Description

FullStoreName

The database, specified as one of the following:

  • SELF

  • The prefix of the database file name

For example, if the database path is directory/subdirectory/data.ds0, then data is the database name that should be used.

This is the database file name specified in the DataStore attribute of the DSN description with optional host ID in the form:

DataStoreName [ON Host]

Host can be either an IP address or a literal host name assigned to one or more IP addresses, as described in "Configuring the network" in Oracle TimesTen In-Memory Database Replication Guide. Host names containing special characters must be surrounded by double quotes. For example: "MyHost-500".

RETURN RECEIPT [BY REQUEST]

Enables the return receipt service, so that applications that commit a transaction to an active master database are blocked until the transaction is received by the standby master database.

Specifying RETURN RECEIPT applies the service to all transactions. If you specify RETURN RECEIPT BY REQUEST, you can use the ttRepSyncSet procedure to enable the return receipt service for selected transactions. For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

RETURN TWOSAFE [BY REQUEST]

Enables the return twosafe service, so that applications that commit a transaction to an active master database are blocked until the transaction is committed on the standby master database.

Specifying RETURN TWOSAFE applies the service to all transactions. If you specify RETURN TWOSAFE BY REQUEST, you can use the ttRepSyncSet procedure to enable the return receipt service for selected transactions.

For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

DISABLE RETURN {SUBSCRIBER | ALL} NumFailures

Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures.

Specifying SUBSCRIBER is the same as specifying ALL. Both settings refer to the standby master database.

This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.

RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED

Sets return services on or off when replication is disabled (stopped or paused state).

OFF disables return services when replication is disabled and is the default for RETURN RECEIPT service. ON allows return services to continue to be enabled when replication is disabled and is the default for RETURN TWOSAFE service.

See "Establishing return service failure/recovery policies" in Oracle TimesTen In-Memory Database Replication Guide.

RESUME RETURN Milliseconds

If DISABLE RETURN has disabled return service blocking, this attribute sets the policy for when to re-enable the return service.

NO RETURN

Specifies that no return service is to be used. This is the default.

For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide.

RETURN WAIT TIME Seconds

Specifies the number of seconds to wait for return service acknowledgment. A value of 0 (zero) means that there is no waiting. The default value is 10 seconds.

The application can override this timeout setting by using the returnWait parameter in the ttRepSyncSet built-in procedure.

SUBSCRIBER FullStoreName [,...]

A database that receives updates from a master database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

STORE FullStoreName [StoreAttribute [...]]

Defines the attributes for the specified database. Attributes include PORT, TIMEOUT and FAILTHRESHOLD. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.

TABLE DEFINITION CHECKING {EXACT|RELAXED}

StoreAttribute clause.

Specifies type of table definition checking that occurs on the subscriber:

  • EXACT - The tables must be identical on master and subscriber.

  • RELAXED - The tables must have the same key definition, number of columns and column data types.

The default is RELAXED.

Note: If you use TABLE DEFINITION CHECKING EXACT, use ttMigrate -exactUpgrade if you migrate the database. If you use TABLE DEFINITION CHECKING RELAXED, use ttMigrate -relaxedUpgrade if you migrate the database.
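
For illustration only, the following sketch (the rep1 and rep2 store names are hypothetical) sets exact table definition checking on both master databases through the STORE clause:

CREATE ACTIVE STANDBY PAIR rep1, rep2
  STORE rep1 TABLE DEFINITION CHECKING EXACT
  STORE rep2 TABLE DEFINITION CHECKING EXACT;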

{INCLUDE | EXCLUDE}

{TABLE [[Owner.]TableName[,...]]|

CACHE GROUP

[[Owner.]CacheGroupName

[,...]]|

SEQUENCE

[[Owner.]SequenceName

[,...]]}

[,...]

An active standby pair replicates an entire database by default.

INCLUDE includes only the listed tables, sequences or cache groups for the replication scheme. Use one INCLUDE clause for each object type (table, sequence or cache group).

EXCLUDE removes tables or sequences or cache groups from the replication scheme. Use one EXCLUDE clause for each object type (table, sequence or cache group).

Do not use the EXCLUDE clause for AWT cache groups.

COMPRESS TRAFFIC {ON | OFF}

Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details.

DURABLE COMMIT {ON | OFF}

Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. It also enables durable commits when the ttRepStateSave built-in procedure has marked the standby database as failed.

FAILTHRESHOLD Value

The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state. The value 0 means "No Limit" and is the default.

See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information.

LOCAL COMMIT ACTION {NO ACTION | COMMIT}

Specifies the default action to be taken for a return twosafe transaction in the event of a timeout.

Note: This attribute is valid only when the RETURN TWOSAFE or RETURN TWOSAFE BY REQUEST attribute is set in the SUBSCRIBER clause.

NO ACTION: On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can only reissue the commit. The transaction may not be rolled back. This is the default.

COMMIT: On timeout, the commit function attempts to perform a COMMIT to end the transaction locally. No more operations are possible on the same transaction.

This setting can be overridden for specific transactions by specifying the localAction parameter in a call to the ttRepSyncSet built-in procedure.

MASTER FullStoreName

The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description.

PORT PortNumber

The TCP/IP port number on which the replication agent for the database listens for connections. If not specified, the replication agent automatically allocates a port number.

In an active standby pair, the standby master database listens for updates from the active master database. Read-only subscribers listen for updates from the standby master database.

ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName

Denotes the NetworkOperation clause. If specified, enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ROUTE clause, you can define the following:

  • A route for the active database to the standby database and for the standby database to the active database for when failover occurs

  • A route for a read-only subscriber to the active and standby databases

For an active standby pair with no read-only subscribers, specify ROUTE at least twice: once for each direction between the active and standby databases. Specify ROUTE twice more for each read-only subscriber of the active standby pair.

For FullStoreName, ON "host" must be specified.

MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost

MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify the address in dot notation or canonical format, or in colon notation for IPv6.

Clause can be specified more than once.

PRIORITY Priority

Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.

Required syntax of NetworkOperation clause. Follows MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost clause.

TIMEOUT Seconds

The maximum number of seconds the replication agent waits for a response from remote replication agents. The default is 120 seconds.

In an active standby pair, the active master database sends messages to the standby master database. The standby master database sends messages to the read-only subscribers.

Note: For large transactions that may cause a delayed response from the remote replication agent, the agent scales the timeout based on the size of the transaction. This scaling is disabled if you set TIMEOUT to less than or equal to 60 seconds. Also see "Setting wait timeout for response from remote replication agents" in Oracle TimesTen In-Memory Database Replication Guide.

Description

  • After you create an active standby pair, make one of your databases the active database. To accomplish this, call ttRepStateSet ('ACTIVE'). Then use ttRepAdmin to duplicate the active database to the second database. When the operation is successful, the second database becomes the standby database. For more information, see "Setting up an active standby pair with no cache groups" in Oracle TimesTen In-Memory Database Replication Guide.

  • The SUBSCRIBER clause lists one or more read-only subscriber databases. You can designate up to 127 subscriber databases.

  • Replication between the active master database and the standby master database can be RETURN TWOSAFE, RETURN RECEIPT, or asynchronous. RETURN TWOSAFE ensures no transaction loss.

  • Use the INCLUDE and EXCLUDE clauses to exclude the listed tables, sequences and cache groups from replication, or to include only the listed tables, sequences and cache groups, excluding all others.

  • If the active standby pair has the RETURN TWOSAFE attribute and replicates a cache group, a transaction may fail if:

    • The transaction that is being replicated contains an ALTER TABLE statement or an ALTER CACHE GROUP statement.

    • The transaction contains an INSERT, UPDATE or DELETE statement on a replicated table, replicated cache group or an asynchronous writethrough cache group.

  • You can use an active standby pair to replicate read-only cache groups and asynchronous writethrough (AWT) cache groups. You cannot use an active standby pair to replicate synchronous writethrough (SWT) cache groups or user managed cache groups.

  • You cannot use the EXCLUDE clause for AWT cache groups.

  • You cannot execute the CREATE ACTIVE STANDBY PAIR statement when Oracle Clusterware is used with TimesTen.

Examples

This example creates an active standby pair whose master databases are rep1 and rep2. There is one subscriber, rep3. The type of replication is RETURN RECEIPT. The statement also sets PORT and TIMEOUT attributes for the master databases.

CREATE ACTIVE STANDBY PAIR rep1, rep2 RETURN RECEIPT
  SUBSCRIBER rep3
  STORE rep1 PORT 21000 TIMEOUT 30
  STORE rep2 PORT 22000 TIMEOUT 30;

Specify NetworkOperation clause to control network interface:

CREATE ACTIVE STANDBY PAIR rep1,rep2
ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2"
MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1;
ROUTE MASTER rep2 ON "machine2" SUBSCRIBER rep1 ON "machine1"
MASTERIP "2.2.2.2" PRIORITY 1 SUBSCRIBERIP "1.1.1.1" PRIORITY 1;

CREATE CACHE GROUP

The CREATE CACHE GROUP statement:

  • Creates the table defined by the cache group.

  • Loads all new information associated with the cache group in the appropriate system tables.

A cache group is a set of tables related through foreign keys that cache data from tables in an Oracle database. There is one root table that does not reference any of the other tables. All other cache tables in the cache group reference exactly one other table in the cache group. In other words, the foreign key relationships form a tree.

A cache table is a set of rows satisfying the conditions:

  • The rows constitute a subset of the rows of a vertical partition of an Oracle database table.

  • The rows are stored in a TimesTen table with the same name as the Oracle database table.

If a database has more than one cache group, the cache groups must correspond to different Oracle database (and TimesTen) tables.

Cache group instance refers to a row in the root table and all the child table rows related directly or indirectly to the root table rows.

A cache group can be either system managed or user managed.

A system managed cache group is fully managed by TimesTen and has fixed properties. System managed cache group types include:

  • Read-only cache groups are updated in the Oracle database, and the updates are propagated from the Oracle database to the cache.

  • Asynchronous writethrough (AWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions continue executing on the cache without waiting for a commit on the Oracle database.

  • Synchronous writethrough (SWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions are committed on the cache after notification that a commit has occurred on the Oracle database.

Because TimesTen manages system managed cache groups, including loading and unloading the cache group, certain statements and clauses cannot be used in the definition of these cache groups, including:

  • WHERE clauses in AWT and SWT cache group definitions

  • READONLY, PROPAGATE and NOT PROPAGATE in cache table definitions

  • AUTOREFRESH in AWT and SWT cache group definitions

The FLUSH CACHE GROUP and REFRESH CACHE GROUP operations are not allowed for AWT and SWT cache groups.

You must stop the replication agent before creating an AWT cache group.

A user managed cache group must be managed by the application or user. PROPAGATE in a user managed cache group is synchronous. The table-level READONLY keyword can only be used for user managed cache groups.

In addition, both TimesTen and Oracle Database must be able to parse all WHERE clauses.

Cache groups can be explicitly or dynamically loaded.

In cache groups that are explicitly loaded, new cache instances are loaded manually into the TimesTen cache tables from the Oracle database tables using a LOAD CACHE GROUP or REFRESH CACHE GROUP statement or automatically using an autorefresh operation.

In a dynamic cache group, new cache instances can be loaded manually into the TimesTen cache tables by using a LOAD CACHE GROUP statement, or on demand using a dynamic load operation. In a dynamic load operation, data is automatically loaded into the TimesTen cache tables from the cached Oracle database tables when a SELECT, UPDATE, DELETE or INSERT statement is issued on one of the cache tables and the data is not present in the cache table but does exist in the cached Oracle database table. A manual refresh or automatic refresh operation on a dynamic cache group can result in the updating or deleting of existing cache instances, but not in the loading of new cache instances.
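
For illustration, the following sketch (the customer table and its columns are hypothetical) creates a dynamic read-only cache group and then issues a SELECT whose primary key predicate can trigger a dynamic load if the requested row is not yet in the cache table:

CREATE DYNAMIC READONLY CACHE GROUP d_customers
AUTOREFRESH INTERVAL 10 MINUTES
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       PRIMARY KEY(custid));

SELECT name FROM customer WHERE custid = 100;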

Any cache group type (read-only, asynchronous writethrough, synchronous writethrough, user managed) can be defined as an explicitly loaded cache group.

Any cache group type can be defined as a dynamic cache group except a user managed cache group that has both the AUTOREFRESH cache group attribute and the PROPAGATE cache table attribute.

Data in a dynamic cache group is aged out because LRU aging is defined by default. Use the ttAgingLRUConfig and/or the ttAgingTableLRUConfig built-in procedures to override the space usage thresholds for LRU aging. You can also define time-based aging on a dynamic cache group to override LRU aging.

For more information on static and dynamic cache groups, see "Cache groups and cache tables" in Oracle TimesTen In-Memory Database Cache Guide.

Required privilege

CREATE CACHE GROUP or CREATE ANY CACHE GROUP and CREATE TABLE (if all tables in the cache group are owned by the current user) or CREATE ANY TABLE (if at least one of the tables in the cache group is not owned by the current user).

Usage with TimesTen Scaleout

Static read-only cache groups with incremental autorefresh are supported.

SQL syntax: TimesTen Scaleout

For static read-only cache groups in TimesTen Scaleout:
CREATE READONLY CACHE GROUP [Owner.]GroupName
 [AUTOREFRESH 
    [MODE INCREMENTAL]
    [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S]}]
    [STATE {ON|OFF|PAUSED}]
  ]
  FROM
    [Owner.]TableName (ColumnDefinition[,...][,PRIMARY KEY(ColumnName[,...])])
  [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
  [ParentDistributionClause]
    [WHERE ExternalSearchCondition]
  [,[Owner.]TableName (ColumnDefinition[,...]
    [,PRIMARY KEY(ColumnName[,...])]
    [,FOREIGN KEY(ColumnName[,...])
        REFERENCES RefTableName (ColumnName [,...])[ON DELETE CASCADE]])
 [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
 [ChildDistributionClause]
   [WHERE ExternalSearchCondition]
 [,...]
 ]

The syntax for the distribution clause for a parent:
ParentDistributionClause::= DISTRIBUTE BY HASH [(ColumnName [,...])] | DUPLICATE

The syntax for the distribution clause for a child:
ChildDistributionClause::= DISTRIBUTE BY HASH [(ColumnName [,...])] |
 DISTRIBUTE BY REFERENCE [(ForeignKeyConstraint)] | DUPLICATE

SQL syntax: TimesTen Classic

For read-only cache groups:

CREATE [DYNAMIC] [HYBRID] READONLY CACHE GROUP [Owner.]GroupName
 [AUTOREFRESH
  [MODE {INCREMENTAL | FULL}]
  [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S] }]
  [STATE {ON|OFF|PAUSED}]
 ]
 FROM
  {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [,FOREIGN KEY(ColumnName [,...])
            REFERENCES RefTableName (ColumnName [,...])]
                    [ON DELETE CASCADE])
 [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
 [AGING {LRU|
         USE ColumnName
         LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}
         [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}]
        }[ON|OFF]
 ]
 [WHERE ExternalSearchCondition]
} [,...]

For asynchronous writethrough cache groups:

CREATE [DYNAMIC] [ASYNCHRONOUS] WRITETHROUGH CACHE GROUP   [Owner.]GroupName
  FROM
   {[Owner.]TableName (
     {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
     [FOREIGN KEY(ColumnName [,...])
          REFERENCES RefTableName (ColumnName [,...])]
      [ON DELETE CASCADE])
 [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
 [AGING {LRU|
     USE ColumnName
          LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}
         [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}]
     }[ON|OFF]
 ]
} [,...]

For synchronous writethrough cache groups:

CREATE [DYNAMIC] SYNCHRONOUS WRITETHROUGH
CACHE GROUP [Owner.]GroupName
 FROM 
   {[Owner.]TableName (
     {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
     [FOREIGN KEY(ColumnName [,...])
            REFERENCES RefTableName (ColumnName [,...])]
   [ON DELETE CASCADE])
 [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
 [AGING {LRU|
     USE ColumnName
         LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}
         [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}]
     }[ON|OFF]
 ]
} [,...]

For user managed cache groups:

CREATE [DYNAMIC][USERMANAGED] CACHE GROUP [Owner.]GroupName
 [AUTOREFRESH
   [MODE {INCREMENTAL | FULL}]
   [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S] }]
   [STATE {ON|OFF|PAUSED}]
 ]
  FROM 
   {[Owner.]TableName (
    {ColumnDefinition[,...]}
    [,PRIMARY KEY(ColumnName[,...])]
    [FOREIGN KEY(ColumnName[,...])
          REFERENCES RefTableName (ColumnName [,...])]
    [ON DELETE CASCADE]
    [,{READONLY | PROPAGATE | NOT PROPAGATE}])
 [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages]
 [AGING {LRU|
         USE ColumnName
             LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}
             [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}]
        }[ON|OFF]
 ]
 [WHERE ExternalSearchCondition]
} [,...]

Parameters

Following are the parameters for the cache group definition before the FROM keyword:

Parameter Description

[Owner.]GroupName

Owner and name assigned to the new cache group.

DYNAMIC

Supported in TimesTen Classic only.

If specified, a dynamic cache group is created.

HYBRID

Supported in TimesTen Classic only.

If specified, a dynamic read-only cache group is created in which the root table does not exist in the Oracle database.

AUTOREFRESH

The AUTOREFRESH parameter automatically propagates changes from the Oracle database to the cache group.

MODE [INCREMENTAL | FULL]

Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle database since the last propagation. If the FULL clause is specified, TimesTen updates all rows in the cache with each autorefresh. The default autorefresh mode is INCREMENTAL.

In TimesTen Scaleout, MODE INCREMENTAL is supported.

INTERVAL IntervalValue

Indicates the interval at which autorefresh occurs. IntervalValue is an integer that specifies how often autorefresh is scheduled, in minutes, seconds, or milliseconds. The default IntervalValue is 5 minutes. An autorefresh interval of 0 milliseconds enables continuous autorefresh, where the next autorefresh cycle is scheduled immediately after the last autorefresh cycle has ended. See "Automatically refreshing a cache group" in the Oracle TimesTen In-Memory Database Cache Guide for more information.

If the specified interval is not long enough for an autorefresh to complete, a runtime warning is generated and the next autorefresh waits until the current one finishes. An informational message is generated in the support log if the wait queue reaches 10.

STATE [ON | OFF | PAUSED]

Specifies whether autorefresh should be ON or OFF or PAUSED when the cache group is created. You can alter this setting later by using the ALTER CACHE GROUP statement. By default, the AUTOREFRESH state is PAUSED.

FROM

Designates one or more table definitions for the cache group.

Everything after the FROM keyword comprises the definitions of the Oracle database tables cached in the cache group. The syntax for each table definition is similar to that of a CREATE TABLE statement. However, primary key constraints are required for the cache group table.

Table definitions have the following parameters.

Parameter Description

[Owner.]TableName

Owner and name to be assigned to the new table. If you do not specify the owner name, your login becomes the owner name for the new table.

ColumnDefinition

Name of an individual column in a table, its data type and whether it is nullable. Each table must have at least one column.

PRIMARY KEY (ColumnName[,...])

Specifies that the table has a primary key. Primary key constraints are required for a cache group. ColumnName is the name of the column that forms the primary key for the table to be created. Up to 16 columns can be specified for the primary key. Cannot be specified with UNIQUE in one specification.

FOREIGN KEY (ColumnName[,...])

Specifies that the table has a foreign key. ColumnName is the name of the column that forms the foreign key for the table to be created.

REFERENCES RefTableName (ColumnName[,...])

Specifies the table which the foreign key is associated with. RefTableName is the name of the referenced table and ColumnName is the name of the column referenced in the table.

[ON DELETE CASCADE]

Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted.

READONLY

Specifies that changes cannot be made on the cached table.

PROPAGATE|NOT PROPAGATE

Supported in TimesTen Classic only.

Specifies whether changes to the cached table are automatically propagated to the corresponding Oracle database table at commit time.

UNIQUE HASH ON (HashColumnName)

Specifies that a hash index is created on this table. HashColumnName identifies the column that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key.

PAGES = PrimaryPages

Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).

The value for PrimaryPages must be a positive constant and must be greater than 0.

If your estimate for PrimaryPages is too small, performance may be degraded.

For more information on hash indexes, see "CREATE TABLE".

WHERE ExternalSearchCondition

The WHERE clause evaluated by the Oracle database for the cache group table. This WHERE clause is added to every LOAD and REFRESH operation on the cache group. It may not directly reference other tables. It is parsed by both TimesTen and Oracle Database. See "Using a WHERE clause" in Oracle TimesTen In-Memory Database Cache Guide.

ParentDistributionClause

In TimesTen Scaleout, distribution clause for a parent table in a static read-only cache group with incremental autorefresh. These distribution schemes are supported for parent tables:

  • DISTRIBUTE BY HASH [(ColumnName [,...])]

  • DUPLICATE

ChildDistributionClause

In TimesTen Scaleout, distribution clause for a child table in a static read-only cache group with incremental autorefresh. These distribution schemes are supported for child tables:

  • DISTRIBUTE BY HASH [(ColumnName [,...])]

  • DISTRIBUTE BY REFERENCE [(ForeignKeyConstraint)]

  • DUPLICATE

AGING LRU [ON | OFF]

Supported in TimesTen Classic only.

If specified, defines the LRU aging policy on the root table. The LRU aging policy applies to all tables in the cache group. The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF) and the LRU aging attributes.

Set the aging state to either ON or OFF. ON indicates that the aging state is enabled and aging is done automatically. OFF indicates that the aging state is disabled and aging is not done automatically. In both cases, the aging policy is defined. The default is ON.

In dynamic cache groups, LRU aging is ON by default. However, you can specify time-based aging or set LRU aging to OFF at the syntax level.

LRU aging cannot be specified on a cache group with the autorefresh attribute, unless the cache group is dynamic.

LRU attributes are defined by calling the ttAgingLRUConfig and/or the ttAgingTableLRUConfig built-in procedures. LRU attributes are not defined at the SQL level. See "ttAgingLRUConfig" and "ttAgingTableLRUConfig" in the Oracle TimesTen In-Memory Database Reference and "Implementing an aging policy in your tables" in the Oracle TimesTen In-Memory Database Operations Guide for more information.

AGING USE ColumnName...[ON|OFF]

Supported in TimesTen Classic only.

If specified, defines the time-based aging policy on the root table. The time-based aging policy applies to all tables in the cache group. The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF) and the time-based aging attributes.

Set the aging state to either ON or OFF. ON indicates that the aging state is enabled and aging is done automatically. OFF indicates that the aging state is disabled and aging is not done automatically. In both cases, the aging policy is defined. The default is ON.

Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and CYCLE clauses.

Specify ColumnName as the name of the column used for time-based aging. Define the column as NOT NULL and of data type TIMESTAMP or DATE. The value of this column is subtracted from SYSDATE, truncated using the specified unit (second, minute, hour, day) and then compared to the LIFETIME value. If the result is greater than the LIFETIME value, then the row is a candidate for aging.

The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL).

For more information about time-based aging, see "Implementing aging in a cache group for TimesTen Classic" in Oracle TimesTen In-Memory Database Cache Guide.

LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}

Supported in TimesTen Classic only.

LIFETIME is a time-based aging attribute and is a required clause.

Specify the LIFETIME clause after the AGING USE ColumnName clause.

The LIFETIME clause specifies the minimum amount of time data is kept in cache.

Specify Num1 as a positive integer constant to indicate the unit of time expressed in seconds, minutes, hours or days that rows should be kept in cache. Rows that exceed the LIFETIME value are aged out (deleted from the table).

The concept of time resolution is supported. If DAYS is specified as the time resolution, then all rows whose timestamp belongs to the same day are aged out at the same time. If HOURS is specified as the time resolution, then all rows with timestamp values within that hour are aged out at the same time. Thus a LIFETIME of 3 days is different from a LIFETIME of 72 hours (3*24) or a LIFETIME of 4320 minutes (3*24*60).

[CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S]|DAY[S]}]

Supported in TimesTen Classic only.

CYCLE is a time-based aging attribute and is optional. Specify the CYCLE clause after the LIFETIME clause.

The CYCLE clause indicates how often the system should examine rows to see if data exceeds the specified LIFETIME value and should be aged out (deleted).

Specify Num2 as a positive integer constant.

If you do not specify the CYCLE clause, then the default value is 5 minutes. If you specify 0 for Num2, then the aging thread wakes up every second.

If the aging state is OFF, then aging is not done automatically and the CYCLE clause is ignored.

Cache groups in TimesTen Scaleout

TimesTen Scaleout supports static read-only cache groups with incremental autorefresh. You can specify a distribution scheme on a parent table and on one or more child tables. The distribution scheme specifies how data is distributed across the elements of the database.

A distribution scheme is denoted by the DISTRIBUTE BY clause:
  • For a single table cache group, the default distribution scheme is HASH.
  • If you do not specify a column in the DISTRIBUTE BY clause, the primary key columns are used as the key columns for the distribution scheme.
  • For a multiple table cache group, you can specify either the HASH or the DUPLICATE distribution scheme on the parent table. If you define the DUPLICATE distribution scheme, you can only specify HASH or DUPLICATE on the child tables.
  • For a multiple table cache group, HASH is the default distribution scheme for the parent table and all child tables default to the REFERENCE distribution scheme. If you specify DUPLICATE on the parent table, and do not specify a distribution scheme for the child tables, the default distribution scheme for the child tables is DUPLICATE.
  • If the foreign key on a child table is identical to the primary key on the parent table, the HASH distribution scheme is used for the child table as an optimization.
  • It is best practice to distribute child tables by reference.

See "Distribution schemes for TimesTen Cache in TimesTen Scaleout" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information on distribution schemes.

The following are not supported:
  • Full autorefresh mode
  • Aging
  • Materialized views
  • Global indexes

For more information on static read-only cache groups in TimesTen Scaleout, see "Using Cache Groups in TimesTen Scaleout" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

Cache groups in TimesTen Classic

Dynamic hybrid read-only cache groups

A dynamic hybrid read-only cache group is a dynamic read-only cache group where the root table does not exist in the Oracle database. The root table is automatically created in the TimesTen database from the cache group definition. The cache group definition includes the description of this root table, as if it existed in the Oracle database.

Description and restrictions:
  • The root table must not exist in the Oracle database.
  • The root table in the TimesTen database must have a primary key.
  • The root table can only contain columns in the primary key. The primary key must be referenced by at least one child table.
  • For dynamic load triggering, you can use a derived table in the FROM clause of the SELECT statement. You can also specify more than one table of the same hybrid cache group in the SELECT query.
  • If you issue a SELECT query on the root table in the TimesTen database, this SELECT operation does not trigger a dynamic load.
  • You cannot specify time-based aging on this type of cache group. LRU aging is enabled by default.
  • The WHERE clause in the cache group definition is not supported.
  • You cannot use the LOAD CACHE GROUP statement to manually load the cache group.
  • The UNLOAD CACHE GROUP ...WITH ID statement is not supported.

See "Hybrid cache group" in the Oracle TimesTen In-Memory Database Cache Guide for more information on dynamic hybrid read-only cache groups.

Description of cache groups

  • Two cache groups cannot have the same owner name and group name. If you do not specify the owner name, your schema becomes the owner name for the new cache group.

  • Neither a cache table name nor a cache group name can contain #.

  • Dynamic parameters are not allowed in the WHERE clause.

  • Oracle Database temporary tables cannot be cached.

  • Each table must correspond to a table in the Oracle database.

  • In the Oracle database, you can define a parent/child relationship and then insert a null value into the foreign key column of the child table. This means this row in the child table references a null parent. You can then create a cache group and cache the parent/child relationship of the Oracle database tables. However, if you load data from the Oracle database tables into the cache group, the row that contains the null value of the foreign key column is not loaded. TimesTen recommends that you do not create cache groups if the tables you cache define a parent/child relationship in which the foreign key represents a null parent.

  • You cannot use lowercase delimited identifiers to name your cache tables. Table names in TimesTen are case-insensitive and are stored as uppercase. The name of the cache table must be the same as the Oracle database table name. Uppercase table names on TimesTen will not match mixed case table names on the Oracle database. As a workaround, create a synonym for your table in the Oracle database and use that synonym as the table name for the cache group. This workaround is not available for read-only cache groups or cache groups with the AUTOREFRESH parameter set.

  • Each column in the cache table must match each column in the Oracle database table, both in name and in data type. See "Mappings between Oracle Database and TimesTen data types" in Oracle TimesTen In-Memory Database Cache Guide. In addition, each column name must be fully qualified with an owner and table name when referenced in a WHERE clause.

  • The WHERE clause can only directly refer to the cache group table. Tables that are not in the cache group can only be referenced with a subquery.

  • Generally, you do not have to fully qualify the column names in the WHERE clause of the CREATE CACHE GROUP, LOAD CACHE GROUP, UNLOAD CACHE GROUP, REFRESH CACHE GROUP or FLUSH CACHE GROUP statements. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.

  • By default, a range index is created to enforce the primary key for a cache group table. Use the UNIQUE HASH clause if you want to specify a hash index for the primary key.

    • If your application performs range queries over a cache group table's primary key, then choose a range index for that cache group table by omitting the UNIQUE HASH clause.

    • If, however, your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH clause. See "CREATE TABLE" for more information on the UNIQUE HASH clause.

    • Use ALTER TABLE to change the representation of the primary key index for a table.

  • For cache group tables with the PROPAGATE attribute and for tables of SWT and AWT cache groups, foreign keys specified with ON DELETE CASCADE must be a proper subset of foreign keys with ON DELETE CASCADE in the Oracle database tables.

  • You cannot execute the CREATE CACHE GROUP statement under the serializable isolation level. An error message is returned if you attempt to do so.

The AUTOREFRESH parameter automatically propagates changes from the Oracle database to TimesTen cache groups. For static cache groups, deletes, updates and inserts are automatically propagated from the Oracle database to the cache group. For dynamic cache groups, only deletes and updates are propagated. Inserts to the specified Oracle database tables are not propagated to dynamic cache groups. They are dynamically loaded into TimesTen Cache when referenced by the application. They can also be explicitly loaded by the application.

To use autorefresh with a cache group, you must specify AUTOREFRESH when you create the cache group. You can change the MODE, STATE and INTERVAL AUTOREFRESH settings after a cache group has been created by using the ALTER CACHE GROUP statement. Once a cache group has been specified as either AUTOREFRESH or PROPAGATE, you cannot change these attributes. If you are creating a read-only cache group, you do not need to specify the autorefresh clause. A read-only cache group defaults to incremental autorefresh.
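
For example, the following sketch (the cache group name is illustrative) turns the autorefresh state on after the cache group has been created:

ALTER CACHE GROUP customerorders SET AUTOREFRESH STATE ON;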

TimesTen supports FULL or INCREMENTAL AUTOREFRESH. In FULL mode, the entire cache is periodically unloaded and then reloaded. In INCREMENTAL mode, TimesTen installs triggers in the Oracle database to track changes and periodically updates only the rows that have changed in the specified Oracle database tables. The first incremental refresh is always a full refresh, unless the autorefresh state is PAUSED. The default mode is INCREMENTAL.

FULL AUTOREFRESH is more efficient when most of the Oracle database table rows have been changed. INCREMENTAL AUTOREFRESH is more efficient when there are fewer changes.

TimesTen schedules an autorefresh operation when the transaction that contains a statement with AUTOREFRESH specified is committed. The statement types that cause autorefresh to be scheduled are:

  • A CREATE CACHE GROUP statement in which AUTOREFRESH is specified, and the AUTOREFRESH state is specified as ON.

  • An ALTER CACHE GROUP statement in which the AUTOREFRESH state has been changed to ON.

  • A LOAD CACHE GROUP statement on an empty cache group whose autorefresh state is PAUSED.

The specified interval determines how often autorefresh occurs.

The current STATE of AUTOREFRESH can be ON, OFF or PAUSED. By default, the autorefresh state is PAUSED.

The NOT PROPAGATE attribute cannot be used with the AUTOREFRESH attribute.

Aging in cache groups:

  • You can implement sliding windows with time-based aging. See "Configuring a sliding window in TimesTen Classic" in Oracle TimesTen In-Memory Database Cache Guide.

  • After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.

  • The aging policy must be defined to change the aging state.

  • LRU and time-based aging can be combined in one system. If you use only LRU aging, the aging thread wakes up based on the cycle specified for the whole database. If you use only time-based aging, the aging thread wakes up based on an optimal frequency. This frequency is determined by the values specified in the CYCLE clause for the database. If you use both LRU and time-based aging, then the thread wakes up based on a combined consideration of both types.

  • Call the ttAgingScheduleNow procedure to schedule the aging process right away, regardless of whether the aging state is ON or OFF.

  • The following rules determine if a row is accessed or referenced for LRU aging:

    • Any rows used to build the result set of a SELECT statement.

    • Any rows used to build the result set of an INSERT...SELECT statement.

    • Any rows that are about to be updated or deleted.

  • Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.

  • For LRU aging, if a child row is not a candidate for aging, then neither the child row nor its parent row is deleted. ON DELETE CASCADE settings are ignored.

  • For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE (whether specified or not) is ignored.

  • Specify either the LRU aging or time-based aging policy on the root table. The policy applies to all tables in the cache group.

  • For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL column.

  • Restrictions on defining aging for a cache group:

    • LRU aging is not supported on a cache group defined with the autorefresh attribute, unless it is a dynamic cache group.

    • The aging policy cannot be added, altered, or dropped for read-only cache groups or cache groups with the AUTOREFRESH attribute while the cache agent is active. Stop the cache agent first.

    • You cannot drop the column that is used for time-based aging.

Examples: TimesTen Classic

These examples are specific to TimesTen Classic. For information and examples on using cache groups in TimesTen Scaleout, see "Using Cache Groups in TimesTen Scaleout" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

Create a read-only cache group:

CREATE READONLY CACHE GROUP customerorders
AUTOREFRESH INTERVAL 10 MINUTES
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       addr CHAR(100),
       zip INT,
       region CHAR(10),
       PRIMARY KEY(custid)),
ordertab (orderid INT NOT NULL,
       custid INT NOT NULL,
       PRIMARY KEY (orderid),
       FOREIGN KEY (custid) REFERENCES customer(custid));

Create an asynchronous writethrough cache group:

CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cstomers
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       addr CHAR(100),
       zip INT,
       PRIMARY KEY(custid));

Create a synchronous writethrough cache group:

CREATE SYNCHRONOUS WRITETHROUGH CACHE GROUP customers
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       addr CHAR(100),
       zip INT,
       PRIMARY KEY(custid));

Create a user managed cache group:

CREATE USERMANAGED CACHE GROUP updateanywherecustomers
AUTOREFRESH
       MODE INCREMENTAL
       INTERVAL 30 SECONDS
       STATE ON
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       addr CHAR(100),
       zip INT,
       PRIMARY KEY(custid),
       PROPAGATE);

Create a cache group with time-based aging. Specify agetimestamp as the column for aging. Specify LIFETIME 2 hours, CYCLE 30 minutes. Aging state is not specified, so the default setting (ON) is used.

CREATE READONLY CACHE GROUP agingcachegroup
AUTOREFRESH
       MODE INCREMENTAL
       INTERVAL 5 MINUTES
       STATE PAUSED
FROM
customer (customerid NUMBER NOT NULL,
       agetimestamp TIMESTAMP NOT NULL,
       PRIMARY KEY (customerid))
       AGING USE agetimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES;

Command> DESCRIBE customer;
Table USER.CUSTOMER:
  Columns:
   *CUSTOMERID                      NUMBER NOT NULL
    AGETIMESTAMP                    TIMESTAMP (6) NOT NULL
  AGING USE AgeTimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES ON
1 table found.
(primary key columns are indicated with *)

Use a synonym for a mixed case delimited identifier table name in the Oracle database so the mixed case table name can be cached in TimesTen. First attempt to cache the mixed case Oracle database table name. You see the error "Could not find 'NameofTable' in Oracle":

Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE TABLE "MixedCase" (col1 NUMBER PRIMARY KEY NOT NULL);
Command> INSERT INTO "MixedCase" VALUES (1);
1 row inserted.
Command> COMMIT;
Command> CREATE CACHE GROUP MixedCase1 from "MixedCase" 
        (col1 NUMBER PRIMARY KEY NOT NULL);
 5140: Could not find SAMPLEUSER.MIXEDCASE in Oracle.  May not have privileges.
The command failed.

Now, using the PassThrough attribute, create the synonym "MIXEDCASE" in the Oracle database and use that synonym as the table name.

Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE SYNONYM "MIXEDCASE" FOR "MixedCase";
Command> COMMIT;
Command> CREATE CACHE GROUP MixedCase2 FROM "MIXEDCASE" 
         (col1 NUMBER PRIMARY KEY NOT NULL);
Warning  5147: Cache group contains synonyms
Command> COMMIT;

Attempt to use a synonym name with a read-only cache group or a cache group with the AUTOREFRESH attribute. You see an error:

Command> AUTOCOMMIT 0;
Command> PASSTHROUGH 3;
Command> CREATE SYNONYM "MIXEDCASE_AUTO" FOR "MixedCase";
Command> COMMIT;
Command> CREATE READONLY CACHE GROUP MixedCase3 AUTOREFRESH MODE
         INCREMENTAL INTERVAL 10 MINUTES FROM "MIXEDCASE_AUTO" 
         (Col1 NUMBER PRIMARY KEY NOT NULL);
 5142: Autorefresh is not allowed on cache groups with Oracle synonyms
The command failed.
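
Create a read-only cache group that caches only a subset of the rows of the Oracle database table by using a WHERE clause. The column in the WHERE clause is fully qualified with the owner and table name (the sampleuser owner, the table and the predicate are illustrative):

CREATE READONLY CACHE GROUP westcustomers
AUTOREFRESH INTERVAL 10 MINUTES
FROM
customer (custid INT NOT NULL,
       name CHAR(100) NOT NULL,
       region CHAR(10),
       PRIMARY KEY(custid))
WHERE (sampleuser.customer.region = 'WEST');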

CREATE FUNCTION

The CREATE FUNCTION statement creates a standalone stored function.

Required privilege

CREATE PROCEDURE (if owner) or CREATE ANY PROCEDURE (if not owner).

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

CREATE [OR REPLACE] FUNCTION [Owner.]FunctionName 
     [(arguments [IN|OUT|IN OUT][NOCOPY] DataType [DEFAULT expr][,...])]
     RETURN DataType 
     [InvokerRightsClause][AccessibleByClause][DETERMINISTIC]
     {IS|AS} PlsqlFunctionBody

InvokerRightsClause::=
AUTHID {CURRENT_USER|DEFINER}

AccessibleByClause::=
ACCESSIBLE BY (accessor[,...])

accessor::= 
[UnitKind][Owner.]UnitName

You can specify InvokerRightsClause, AccessibleByClause, or DETERMINISTIC in any order.

Parameters

Parameter Description

OR REPLACE

Specify OR REPLACE to recreate the function if it already exists. Use this clause to change the definition of an existing function without dropping and recreating it. When you recreate a function, TimesTen recompiles it.

FunctionName

Name of function.

arguments

Name of argument or parameter. You can specify 0 or more parameters for the function. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type.

IN|OUT|IN OUT

Parameter modes.

IN is a read-only parameter. You can pass the parameter's value into the function but the function cannot pass the parameter's value out of the function and back to the calling PL/SQL block. The value of the parameter cannot be changed.

OUT is a write-only parameter. Use an OUT parameter to pass a value back from the function to the calling PL/SQL block. You can assign a value to the parameter.

IN OUT is a read/write parameter. You can pass values into the function and return a value back to the calling program (either the original, unchanged value or a new value set within the function).

IN is the default.

NOCOPY

Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. You can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY.

DEFAULT expr

Use this clause to specify a default value for the parameter. You can specify := in place of the keyword DEFAULT.

RETURN DataType

Required clause. A function must return a value. You must specify the data type of the return value of the function.

Do not specify a length, precision, or scale for the data type.

The data type is a PL/SQL data type.

InvokerRightsClause

Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:

  • Specify DEFINER so that SQL name resolution and privilege checking operate as though the owner of the procedure or function (the definer, in whose schema it resides) is running it. DEFINER is the default.

  • Specify CURRENT_USER so that SQL name resolution and privilege checking operate as though the current user (the invoker) is running it.

For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide.

AccessibleByClause

Use this clause to specify one or more accessors (PL/SQL units) that can invoke the function directly. The list of accessors that can access the function is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the function to only those objects on the white list.

AccessibleByClause can appear only once in the CREATE FUNCTION statement.

Syntax: ACCESSIBLE BY (accessor [,...])

accessor

Used in AccessibleByClause. An accessor is a PL/SQL unit that can invoke the function.

An accessor can appear more than once in the AccessibleByClause clause.

Syntax: [UnitKind][Owner.]UnitName

UnitKind

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the function.

UnitKind is optional. If specified, valid options are:

  • FUNCTION

  • PROCEDURE

  • PACKAGE

[Owner.]UnitName

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the function. If you specify UnitKind, then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind, then UnitName must be the name of a procedure. UnitName is required.

You can optionally specify Owner. If you specify Owner, then UnitName must reside in that owner's schema. If you do not specify Owner, UnitName must be in the schema that contains the function.

DETERMINISTIC

Specify DETERMINISTIC to indicate that the function returns the same result value whenever it is called with the same values for its parameters.

IS|AS

Specify either IS or AS to declare the body of the function.

PlsqlFunctionBody

Specifies the function body.

Description

  • AccessibleByClause:

    • The compiler checks the validity of the syntax of the ACCESSIBLE BY clause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.

    • When you invoke the function, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed, and the function has no ACCESSIBLE BY clause, the invocation succeeds. If the function has an ACCESSIBLE BY clause, the invocation succeeds only if the invoker is an accessor.

  • When you create or replace a function, the privileges granted on the function remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.

  • In a replication environment, the CREATE FUNCTION statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.

  • TimesTen does not support:

    • parallel_enable_clause

      You can specify this clause, but it has no effect.

    • call_spec clause

    • AS EXTERNAL clause

Examples

Using the AccessibleByClause

This example creates the ProtectedFunction function. The ACCESSIBLE BY clause is used to restrict the invocation of the function to the CallingProc1 and CallingProc2 procedures. Note that for CallingProc1, the type of PL/SQL unit is not specified and for CallingProc2, the type of PL/SQL unit is specified (PROCEDURE).

Command> CREATE OR REPLACE FUNCTION ProtectedFunction (a IN NUMBER)
           RETURN NUMBER
         ACCESSIBLE BY (CallingProc1, PROCEDURE CallingProc2)
         AS
         BEGIN
           RETURN a * 1;
         END;
         /
         
Function created.

Create the CallingProc1 and CallingProc2 procedures.

Command> CREATE OR REPLACE PROCEDURE CallingProc1 AS
           a NUMBER:=1;
         BEGIN
           a:=ProtectedFunction(a);
           DBMS_OUTPUT.PUT_LINE ('Calling Procedure: '|| a);
         END;
         /
         
Procedure created.

Command> CREATE OR REPLACE PROCEDURE CallingProc2
         AS
           a NUMBER:=2;
         BEGIN
           a:=ProtectedFunction(a);
           DBMS_OUTPUT.PUT_LINE ('Calling Procedure: '|| a);
         END;
         /
         
Procedure created.

Call the procedures. CallingProc1 and CallingProc2 are in the white list, resulting in successful execution.

Command> SET SERVEROUTPUT ON
Command> exec CallingProc1;
Calling Procedure: 1
 
PL/SQL procedure successfully completed.
 
Command> exec CallingProc2;
Calling Procedure: 2
 
PL/SQL procedure successfully completed.

Illustrating the syntax for creating a PL/SQL function

Create function get_sal with one input parameter. Return salary as type NUMBER.

Command> CREATE OR REPLACE FUNCTION get_sal
           (p_id employees.employee_id%TYPE) RETURN NUMBER IS
           v_sal employees.salary%TYPE := 0;
         BEGIN
           SELECT salary INTO v_sal FROM employees
             WHERE employee_id = p_id;
           RETURN v_sal;
         END get_sal;
         /
 
Function created.

CREATE INDEX

The CREATE INDEX statement creates an index on one or more columns of a table or a materialized view.

Required privilege

For a global index (supported in TimesTen Scaleout) and for a local index (supported in TimesTen Scaleout and in TimesTen Classic):
  • If you are the owner of the table, no privilege is required.
  • If you are not the owner, the CREATE ANY INDEX system privilege or the INDEX object privilege is required.

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout. You can create a global or a local index.

SQL syntax

The syntax to create a range index follows. Use the GLOBAL keyword, the optional INCLUDE clause, and the optional IndexDistributionClause clause to create a global index. A global index is supported only in TimesTen Scaleout:

CREATE [GLOBAL][UNIQUE] INDEX [Owner.]IndexName ON
[Owner.]TableName (ColumnName [ASC | DESC][,... ])
[INCLUDE (ColumnName[,…])] 
[IndexDistributionClause]

The syntax to create a hash index follows. Use the GLOBAL keyword to create a global index. The optional INCLUDE clause, and the optional IndexDistributionClause clause can only be used with a global index. A global index is supported only in TimesTen Scaleout:

CREATE [GLOBAL][UNIQUE] HASH INDEX [Owner.]IndexName ON
[Owner.]TableName (ColumnName [ASC | DESC][,... ] ) 
[INCLUDE (ColumnName [,…])]
[ PAGES = RowPages | CURRENT ]
[IndexDistributionClause]

The syntax for IndexDistributionClause can only be used for a global index (supported in TimesTen Scaleout):

IndexDistributionClause::=
DISTRIBUTE BY HASH [(ColumnName [,…])]

Parameters

Parameter Description

GLOBAL

The GLOBAL keyword is only supported in TimesTen Scaleout.

In TimesTen Scaleout:
  • Specify GLOBAL to create a global index. A global index maps all rows in the table to a hash distribution scheme. When you create a global index, TimesTen Scaleout creates an internal materialized view with a local index, distributed by hash on the index key columns.
  • If you do not specify GLOBAL, a local index is created in each database element. The index in each database element maps to the rows of the table stored in that element.

UNIQUE

You can specify UNIQUE for both range and hash indexes. If you specify UNIQUE, each possible combination of index key column values can occur in only one row of the table.

HASH

Specify HASH to create a hash index. Specify UNIQUE with HASH to create a unique hash index.

[Owner.] IndexName

Name to be assigned to the new index. A table cannot have two indexes with the same name. If the owner is specified, it must be the same as the owner of the table.

[Owner.] TableName

Designates the table or materialized view for which an index is to be created.

ColumnName

Name of a column to be used as an index key. You can specify up to 32 columns in order from major index key to minor index key.

[ASC|DESC]

Specifies the order of the index to be either ascending (the default) or descending. In TimesTen, this clause is currently ignored.

INCLUDE (ColumnName [,…])

The INCLUDE clause is only supported in TimesTen Scaleout and can only be used when defining a global index. Use the INCLUDE clause to add non-index columns to the index definition. These non-index columns can be used to satisfy some queries without accessing the base table. This is especially useful if such columns are accessed frequently in queries (both for equality and for range conditions).

PAGES = {RowPages | CURRENT}

The PAGES clause sizes the hash index to reflect the expected number of pages in the table. If you do not specify the PAGES clause when defining a hash index, the default is CURRENT.

If you specify CURRENT, the current number of rows in the table is used to calculate the page count value. If you specify RowPages, the number of pages is used. To determine the value for RowPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).

The value for RowPages must be a positive constant.

Do not specify PAGES=CURRENT if there are no rows in your table; the index is then sized for an empty table and does not perform optimally once rows are added.
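
For illustration, a minimal sketch of sizing a nonunique hash index, assuming a hypothetical bookings table expected to grow to roughly 512,000 rows:

CREATE TABLE bookings (booking_id TT_INTEGER NOT NULL PRIMARY KEY,
  cust_id TT_INTEGER NOT NULL);

-- Expected 512,000 rows / 256 rows per page = 2000 pages
CREATE HASH INDEX bookings_cust_hix ON bookings (cust_id) PAGES = 2000;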

IndexDistributionClause :: = DISTRIBUTE BY HASH [( ColumnName [,…])]

You can specify the DISTRIBUTE BY HASH clause only if you are defining a global index. This clause is optional.

If you do not specify this clause, the column(s) defined in the global index definition form the distribution key.

If you do specify this clause, you can optionally specify the ColumnName clause:
  • If specified, you must then specify one or more columns for the distribution key. These columns must include one or more of the columns specified in the global index definition.

  • If not specified, the column(s) defined in the global index definition form the distribution key.

    Note:

    The distribution key of the global index cannot be the same as the distribution key of the table.
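
For illustration, a sketch for TimesTen Scaleout (all names are hypothetical) in which the distribution key of the global index is drawn from its index key and differs from the distribution key of the table:

CREATE TABLE sales (a TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL,
  c TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a, b);

-- The index distribution key (b) is one of the index key columns (b, c)
-- and is not the same as the distribution key of the table (a, b).
CREATE GLOBAL INDEX sales_gix ON sales (b, c) DISTRIBUTE BY HASH (b);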

Indexes in TimesTen Scaleout

TimesTen Scaleout supports global and local indexes:
  • Global index: Maps all rows in the table to a hash distribution scheme. When you create a global index, TimesTen Scaleout creates an internal materialized view with a local index, distributed by hash on the index key columns. The local index that is created on the materialized view further optimizes query performance.
  • Local index: Is created in each database element. The index in each database element maps to the rows of the table stored in that element. Queries on index columns that do not include the distribution key columns of the table require communication with an element in every replica set.

See "Understanding indexes" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for information on using indexes in TimesTen Scaleout.

Choosing a global or a local index in TimesTen Scaleout

Consider the following when deciding whether to use a global or a local index:
  • Use a global index for:

    • Unique columns: A global unique index optimizes query execution by performing unique constraint checks more efficiently. TimesTen Scaleout uses the distribution key columns for uniqueness verification instead of accessing all replica sets. However, if the distribution key is a subset of the index key, create a local index.
    • Queries that have an equality predicate that do not include all of the columns in the distribution key of the table.
    • A group of columns that are frequently joined in queries with primary key columns.
    • Non-index columns that are frequently used in queries: Define a global index with the INCLUDE clause to include those non-index columns. In such a case, the table does not need to be accessed to satisfy the query.
    • An index where the index key is a prefix of the distribution key of the table.
  • Use a local index for:
    • Non-unique columns: If the index key consists of only non-unique columns, create a local non-unique index.
    • An index key that has the same columns as the distribution key for the table.
    • The situation where the distribution key of the table is a prefix of the index key.
    • Queries that have an equality predicate that includes all columns in the distribution key of the table.

See "Understanding indexes" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.

Description of global indexes

Global index usage:
  • You must specify the GLOBAL keyword to create a global index. An index is local by default.
  • Global indexes are distributed by hash; no other distribution scheme is supported for them. Local indexes are not distributed.
  • When you create a global index, TimesTen Scaleout internally creates its own materialized view and its own local index on that materialized view.
  • Global indexes result in more efficient query execution with joins. However, DML operations are slower because of the maintenance of the internal materialized view that is created when you define a global index.
  • When a new element is added in the grid, the schema is replicated on the new element. In addition, the rows are redistributed, and the indexes are rebuilt. This includes the global indexes. Similarly, when an element is removed from the grid, the rows are redistributed and the indexes are rebuilt.
Distribution scheme of table:
  • You can define a global index on a table distributed by hash and on a table distributed by reference. Global indexes on parent and child (first-level reference) tables are supported. However, you cannot define a global index on grandchild tables or any tables that are not first-level reference tables.
  • You cannot define a global index on a table distributed by duplicate.

Restrictions on global indexes:

  • The column list for the distribution key cannot contain the ROWID pseudocolumn or a column of type ROWID.
  • Not supported on a global temporary table.
  • Not supported on read-only cache groups.
  • Not supported on a materialized view.

Syntax and semantic rules for global indexes

You must specify the GLOBAL keyword to create a global index. If you do not specify the GLOBAL keyword, a local index is created. Global indexes are distributed by hash on index key columns.

If you specify the GLOBAL keyword, you can optionally specify these clauses that are specific to global indexes:
  • INCLUDE clause: Optional clause that enables you to include non-key columns in the index. If such columns are frequently accessed by queries that use the index, this may improve performance.
  • IndexDistributionClause: Optional clause that enables you to specify what columns to use for the hash distribution. If you do not specify this clause, then the index columns form the distribution key. The distribution key of the index cannot be the same as the distribution key of the table.

  • Examples:

    Global range index:

    Command> CREATE GLOBAL INDEX globalindex1 ON mytab (a) INCLUDE (b,c) DISTRIBUTE BY HASH (a);
    Command> indexes mytab;
    
    Indexes on table SAMPLEUSER.MYTAB:
      MYTAB: unique range index on columns:
        C
        B
      GLOBALINDEX1: global non-unique range index on columns:
        A
        Included columns:
          B
          C
      2 indexes found.
    
    2 indexes found on 1 table.
    Command> drop table mytab;

    Global hash index:

    Command> CREATE TABLE mytab (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
               a TT_INTEGER NOT NULL, PRIMARY KEY (c,b)) DISTRIBUTE BY HASH (a,b);
    Command> CREATE GLOBAL HASH INDEX globalhashindex1 ON mytab(a) INCLUDE (b,c) PAGES=200 DISTRIBUTE BY HASH (a);
    Command> indexes MYTAB;
    
    Indexes on table SAMPLEUSER.MYTAB:
      MYTAB: unique range index on columns:
        C
        B
      GLOBALHASHINDEX1: global non-unique hash index on columns:
        A
        Included columns:
          B
          C
      2 indexes found.
    
    2 indexes found on 1 table.
    

See "Examples: TimesTen Scaleout" for additional examples.

General description of indexes in TimesTen Scaleout

  • TimesTen creates a nonunique range index by default. Specify CREATE UNIQUE INDEX to create a unique range index.

  • To create a nonunique hash index, specify CREATE HASH INDEX. To create a unique hash index, specify CREATE UNIQUE HASH INDEX.

  • If UNIQUE is specified, all existing rows must have unique values in the indexed column(s).

  • The new index is maintained automatically until the index is deleted by a DROP INDEX statement or until the table associated with it is dropped.

  • Any prepared statements that reference the table with the new index are automatically prepared again the next time they are executed. Then the statements can take advantage, if possible, of the new index.

  • An index on a temporary table cannot be created by a connection if any other connection has a non-empty instance of the table.

  • If you are using linguistic comparisons, you can create a linguistic index. A linguistic index uses sort key values and storage is required for these values. Only one unique value for NLS_SORT is allowed for an index. For more information on linguistic indexes and linguistic comparisons, see "Using linguistic indexes" in Oracle TimesTen In-Memory Database Operations Guide.

  • If you create indexes that are redundant, TimesTen generates warnings or errors. Call ttRedundantIndexCheck to see the list of redundant indexes for your tables.

  • To change the size or type of a hash index, drop the hash index and create a new index.

  • A hash index is created with a fixed size that remains constant for the life of the table. To resize the hash index, drop and recreate the index. If the hash index has insufficient pages, hash collisions result, which slow down index lookups. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.

    To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages parameter of the PAGES clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).

  • The maximum number of columns that can be specified for an index is 32.

Using indexes in query processing

Proper indexes can improve query performance. Some queries benefit from the use of indexes and others do not. Additionally, the choice of indexes for your queries is important.

A range index is ideal for processing range searches and exact matches, especially if most of the values in the index columns are unique. For example, if a range index is defined on columns (C1,C2), the index can be used to process the following types of predicates. ConstantOrParam refers to a constant value or dynamic parameter and range refers to the operators >,<,>=, or <=:

  • C1 = ConstantOrParam AND C2 = ConstantOrParam

  • C1 = ConstantOrParam AND C2 range ConstantOrParam

  • C1 = ConstantOrParam

  • C1 range ConstantOrParam

A range index efficiently processes equality and range predicates and efficiently processes sort and group operations. Use range indexes on index columns with many unique values. The order of columns you specify in a range index is relevant. The order of expressions in the predicate of a query that uses the range index is not relevant. When your query is processed, only one range index is used for each scan of your table even if you have defined multiple range indexes on your table.

A hash index efficiently processes equality predicates. You must size your hash index correctly for optimal performance. Use the PAGES parameter to size your hash index. If you specify a PAGES value that is too small, a large number of hash collisions may result, leading to performance degradation for statements that access the hash index. The order of columns specified in the hash index is not relevant and the order of expressions in the predicate of the query that uses the hash index is not relevant. If either a hash index or a range index can be used to process a particular equality predicate, the hash index is chosen because a lookup in a hash index is faster than a scan of a range index.
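
For illustration, a sketch (hypothetical table and index names) of a range index that can serve the predicate forms listed above:

CREATE TABLE invoices (inv_id TT_INTEGER NOT NULL PRIMARY KEY,
  cust_id TT_INTEGER NOT NULL, amount NUMBER);

CREATE INDEX invoices_cust_amt_ix ON invoices (cust_id, amount);

-- C1 = ConstantOrParam AND C2 range ConstantOrParam
SELECT inv_id FROM invoices WHERE cust_id = 42 AND amount >= 100;

-- C1 = ConstantOrParam
SELECT inv_id FROM invoices WHERE cust_id = 42;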

You can influence the indexes used by the optimizer by setting statement level or transaction level optimizer hints. See "Statement level optimizer hints" for information on statement level optimizer hints. For more information on transaction level optimizer hints, see "ttOptSetFlag", "ttOptSetOrder", or "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference.

Examples: TimesTen Scaleout

These examples illustrate the syntax requirements for creating a global index. You must specify the GLOBAL keyword to create a global index.

Illustrate global index syntax

This example illustrates the supported syntax for a global index.

Create a table with three columns (c,b,a) and define a primary key on two of those columns (c,b). Distribute the table by hash on columns (a,b).

Create a global unique range index on columns (c,b).
Command> CREATE TABLE mytab1 (c TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, a TT_INTEGER NOT NULL, 
PRIMARY KEY (c,b)) DISTRIBUTE BY HASH (a,b);
Command> CREATE GLOBAL UNIQUE INDEX mygix1 ON mytab1 (c,b);
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  MYGIX1: global unique range index on columns:
    C
    B
  MYTAB1: unique range index on columns:
    C
    B
  2 indexes found.

2 indexes found on 1 table.

Create a second global range index and specify the INCLUDE clause.

Command> CREATE GLOBAL INDEX mygix2 ON MYTAB1(b) include (a);
Command> indexes mytab1

Indexes on table SAMPLEUSER.MYTAB1:
  MYGIX1: global unique range index on columns:
    C
    B
  MYTAB1: unique range index on columns:
    C
    B
  MYGIX2: global non-unique range index on columns:
    B
    Included columns:
      A
  3 indexes found.

3 indexes found on 1 table.

Drop the second index and recreate it. Distribute the index by hash on column b.

Command> DROP INDEX mygix2;
Command> CREATE GLOBAL INDEX mygix2 ON MYTAB1(b) INCLUDE (a) DISTRIBUTE BY HASH(b);
Command> INDEXES mytab1

Indexes on table SAMPLEUSER.MYTAB1:
  MYGIX1: global unique range index on columns:
    C
    B
  MYTAB1: unique range index on columns:
    C
    B
  MYGIX2: global non-unique range index on columns:
    B
    Included columns:
      A
  3 indexes found.

3 indexes found on 1 table.

Create a global hash index.

Command> CREATE GLOBAL HASH INDEX mygix3 ON mytab1(a) PAGES =200;
Command> indexes mytab1;

Indexes on table SAMPLEUSER.MYTAB1:
  MYGIX1: global unique range index on columns:
    C
    B
  MYTAB1: unique range index on columns:
    C
    B
  MYGIX3: global non-unique hash index on columns:
    A
  MYGIX2: global non-unique range index on columns:
    B
    Included columns:
      A
  4 indexes found.

4 indexes found on 1 table.

Distribution key of global index is same as distribution key of table

This example illustrates that you cannot create a global index whose distribution key is the same as the distribution key of the table. In this example, the mytab1 table is distributed by hash on columns (a,b). An attempt to create a global index, with columns (a,b) as the distribution key, results in an error.

Command> CREATE TABLE mytab1 (a TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL,
           c TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> CREATE GLOBAL INDEX gix1 ON mytab1(a,b,c) DISTRIBUTE BY HASH (a,b);
 2253: Distribution key for global index cannot be same as that of the table or
 other global index. Consider creating a local index.
The command failed.

Global index creates its own materialized view and its own local index

This example illustrates that when you create a global index, TimesTen Scaleout creates its own internal materialized view and its own local index. Create the mytab2 table distributed by hash on columns (a,b). Create a global non-unique range index distributed by hash on columns (b,a). Run the ttIsql indexes command to show the gix2 global index is created as well as an internal local index on the internal materialized view. Then, run the ttIsql views command to show an internal materialized view is also created as a result of creating the global index. Run the ttIsql describe command to show the internal materialized view. Note that you cannot explicitly drop the internal materialized view or the internal local index.

Command> CREATE TABLE mytab2 (a TT_INTEGER NOT NULL, b TT_INTEGER NOT NULL, 
           c TT_INTEGER NOT NULL) DISTRIBUTE BY HASH (a,b);
Command> CREATE GLOBAL INDEX gix2 ON mytab2(a,b,c) DISTRIBUTE BY HASH (b,a);
Command> indexes;

Indexes on materialized view SAMPLEUSER.$GV9B55D3955D52:
  $GVI9B55D3955D52: non-unique range index on columns:
    A
    B
    C
  1 index found.

Indexes on table SAMPLEUSER.MYTAB2:
  GIX2: global non-unique range index on columns:
    A
    B
    C
  1 index found.

2 indexes found on 2 tables.
Command> views;
  SAMPLEUSER.$GV9B55D3955D52
1 view found.
Command> describe SAMPLEUSER.$GV9B55D3955D52;

Materialized view SAMPLEUSER.$GV9B55D3955D52:
  Global index: GIX2 (table: MYTAB2)
  Columns:
    A                               TT_INTEGER NOT NULL
    B                               TT_INTEGER NOT NULL
    C                               TT_INTEGER NOT NULL
  DISTRIBUTE BY HASH (B, A)

1 view found.

Examples: TimesTen Classic

Create a table and then create a unique hash index on col2. Do not specify the PAGES clause. If PAGES is not specified, the current table page count is used for the size of the hash table. Use INDEXES to verify the index was created. Insert a row in the table, set SHOWPLAN to 1 and then verify the optimizer uses the hash index.

Command> CREATE TABLE tab (col1 NUMBER PRIMARY KEY NOT NULL, col2 VARCHAR2 (30));
Command> CREATE UNIQUE HASH INDEX hash1 ON tab (col2);
Command> INDEXES;
 
Indexes on table TESTUSER.TAB:
  HASH1: unique hash index on columns:
    COL2
  TAB: unique range index on columns:
    COL1
  2 indexes found.
 
2 indexes found on 1 table.
Command> INSERT INTO tab VALUES (10, 'ABC');
Command> SHOWPLAN 1;
Command> SELECT * FROM tab where col2 = 'ABC';
 
Query Optimizer Plan:
 
  STEP:                1
  LEVEL:               1
  OPERATION:           RowLkHashScan
  TBLNAME:             TAB
  IXNAME:              HASH1
  INDEXED CONDITION:   TAB.COL2 = 'ABC'
  NOT INDEXED:         <NULL>
 
< 10, ABC >
1 row found.

Create a table and create a nonunique hash index on col1. Use PAGES = CURRENT to use the current table page count to size the hash index. Use INDEXES to verify the nonunique hash index is created.

Command> CREATE TABLE tab2 (col1 NUMBER);
Command> CREATE HASH INDEX hash_index ON tab2 (col1) PAGES = CURRENT;
Command> INDEXES;
 
Indexes on table TESTUSER.TAB2:
  HASH_INDEX: non-unique hash index on columns:
    COL1
  1 index found.
 
1 index found on 1 table.

Create table and create unique hash index on col3. Use PAGES = 100 to specify a page count of 100 for the size of the hash table. Use INDEXES to verify the unique hash index is created.

Command> CREATE TABLE tab3 (col1 NUMBER, col2 NUMBER, col3 TT_INTEGER);
Command> CREATE UNIQUE HASH INDEX unique_hash1 on tab3 (col3) PAGES = 100;
Command> INDEXES;
 
Indexes on table TESTUSER.TAB3:
  UNIQUE_HASH1: unique hash index on columns:
    COL3
  1 index found.
 
1 index found on 1 table.

The definition of the regions table in the HR schema creates a unique index on region_id. Issue the ttIsql INDEXES command on the regions table. You see the unique range index REGIONS.

Command> INDEXES REGIONS;

Indexes on table SAMPLEUSER.REGIONS:
  REGIONS: unique range index on columns:
    REGION_ID
    (referenced by foreign key index COUNTR_REG_FK on table SAMPLEUSER.COUNTRIES)
  1 index found.

1 index found on 1 table.

Attempt to create a unique index i on table regions indexing on column region_id. You see a warning message.

Command> CREATE UNIQUE INDEX i ON regions (region_id);
Warning  2232: New index I is identical to existing index REGIONS; 
consider dropping index I

Call ttRedundantIndexCheck to see warning message for this index:

Command> CALL ttRedundantIndexCheck ('regions');
< Index SAMPLEUSER.REGIONS.I is identical to index SAMPLEUSER.REGIONS.REGIONS;
consider dropping index SAMPLEUSER.REGIONS.I >
1 row found.

Create table redundancy and define columns col1 and col2. Create two user indexes on col1 and col2. You see an error message when you attempt to create the second index r2. Index r1 is created. Index r2 is not created.

Command> CREATE TABLE redundancy (col1 CHAR (30), col2 VARCHAR2 (30));
Command> CREATE INDEX r1 ON redundancy (col1, col2);
Command> CREATE INDEX r2 ON redundancy (col1, col2);
 2231: New index R2 would be identical to existing index R1
The command failed.

Issue the ttIsql command INDEXES on table redundancy to show that only index r1 is created:

Command> INDEXES redundancy;

Indexes on table SAMPLEUSER.REDUNDANCY:
  R1: non-unique range index on columns:
    COL1
    COL2
  1 index found.

1 index found on 1 table.

This unique index ensures that all part numbers are unique.

CREATE UNIQUE INDEX purchasing.partnumindex
ON purchasing.parts (partnumber);

Create a linguistic index named german_index on table employees1. To have more than one linguistic sort, create a second linguistic index.

Command> CREATE TABLE employees1 (id CHARACTER (21),
id2 character (21));
Command> CREATE INDEX german_index ON employees1
           (NLSSORT(id, 'NLS_SORT=GERMAN'));
Command> CREATE INDEX german_index2 ON employees1
           (NLSSORT(id2, 'nls_sort=german_ci'));
Command> indexes employees1;
Indexes on table SAMPLEUSER.EMPLOYEES1:
  GERMAN_INDEX: non-unique range index on columns:
    NLSSORT(ID,'NLS_SORT=GERMAN')
  GERMAN_INDEX2: non-unique range index on columns:
    NLSSORT(ID2,'nls_sort=german_ci')
  2 indexes found.
1 table found.

See also

DROP INDEX

CREATE MATERIALIZED VIEW

The CREATE MATERIALIZED VIEW statement creates a materialized view of the table or tables specified in the SelectQuery clause. The original tables used to create the view are referred to as detail tables. The view is refreshed synchronously with regard to changes in the detail tables.

Required privileges

User executing the statement must have CREATE MATERIALIZED VIEW (if owner) or CREATE ANY MATERIALIZED VIEW (if not owner) privilege.

Owner of the materialized view must have:
  • SELECT privilege on the detail tables.
  • CREATE TABLE privilege.
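
For illustration, a hypothetical sequence of grants (the user and table names are illustrative) that satisfies these requirements when mvuser creates a materialized view over appuser.orders:

GRANT CREATE MATERIALIZED VIEW TO mvuser;
GRANT CREATE TABLE TO mvuser;
GRANT SELECT ON appuser.orders TO mvuser;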

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout. You must specify the DISTRIBUTE BY HASH clause and you must define a distribution key. The DISTRIBUTE BY REFERENCE and DUPLICATE clauses are not supported.

SQL syntax: TimesTen Scaleout

CREATE MATERIALIZED VIEW [Owner.]ViewName
DISTRIBUTE BY HASH (ColumnName [,...])
AS SelectQuery
[PRIMARY KEY (ColumnName [,...])] 
[UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages]

SQL syntax: TimesTen Classic

CREATE MATERIALIZED VIEW [Owner.]ViewName
AS SelectQuery
[PRIMARY KEY (ColumnName [,...])] 
[UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages]

Parameters

Parameter Description

[Owner.]ViewName

Name assigned to the new view.

DISTRIBUTE BY HASH (ColumnName [,...])

TimesTen Scaleout only. You must specify the DISTRIBUTE BY HASH clause and you must specify one or more columns for the distribution key (even if you have specified a primary key).

The detail table must be distributed by hash.

DISTRIBUTE BY REFERENCE or DUPLICATE clauses are not supported.

This clause must appear before the AS SelectQuery clause.

SelectQuery

The SELECT query that selects the columns from the detail tables to be used in the view.

ColumnName

Name of the column(s) that forms the primary key for the view to be created. Up to 32 columns can be specified for the primary key. Each result column name of a viewed table must be unique. The column name definition cannot contain the table or owner component.

UNIQUE HASH ON

Hash index for the table. Only unique hash indexes are created. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined.

HashColumnName

Column defined in the view that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key.

PAGES = PrimaryPages

Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages, divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).

The value for PrimaryPages must be a positive constant.

If your estimate for PrimaryPages is too small, performance may be degraded.

See "CREATE TABLE" for information on hash indexes.

Description and restrictions for CREATE MATERIALIZED VIEW: TimesTen Scaleout

Description and restrictions include:

  • The SQL optimizer may re-write a query against a base table to use an available materialized view if the use of the materialized view is expected to improve the execution time of the query.

  • You must specify the DISTRIBUTE BY HASH clause and you must specify it with a distribution key (even if you have specified a primary key and intend to use the primary key as the distribution key).

  • You must specify the DISTRIBUTE BY HASH clause before the AS SelectQuery clause.

  • You can only specify the DISTRIBUTE BY HASH clause. The DISTRIBUTE BY REFERENCE and DUPLICATE clauses are not supported.

  • The SelectQuery must be restricted to single-table SELECT statements.

  • You cannot specify the GROUP BY or the WHERE clause in the SelectQuery.

  • You cannot use SQL functions in the SelectQuery.

  • You cannot use an expression in the SelectQuery.

  • The detail table of the materialized view cannot have a foreign key with a cascade delete clause.

  • The distribution key columns must be in the project list of the SelectQuery.

  • There are no DDL rewrites. For example, if you create a unique index on the detail table, a corresponding index on the materialized view (which is distributed on the unique column) is not created.

Description: TimesTen Scaleout and TimesTen Classic

The restrictions and requirements on the defining query include:

  • Each expression in the select list must have a unique name.

  • Do not use non-materialized views to define a materialized view.

  • Do not define CLOB, BLOB, or NCLOB data types for columns in the select list of the materialized view query.

  • The detail tables cannot belong to a cache group and the detail tables cannot have compression.

  • Do not use SELECT FOR UPDATE.

  • Do not reference system tables or views.

  • Do not use nested definitions for a materialized view.

  • Do not use dynamic parameters.

  • Do not use ROWNUM.

  • Do not use analytic functions.

  • Do not use GROUPING SETS, ROLLUP, or CUBE.

  • Do not use the SYSDATE function.

  • Do not use the functions SYSTEM_USER, USER, CURRENT_USER, or SESSION_USER.

  • Do not use NEXTVAL or CURRVAL.

  • Outer joins are allowed but the select list must project at least one non-nullable column from each of the inner tables specified in the outer join.

  • Do not use the WITH subquery clause.

The restrictions (not on the defining query) include:

  • The hash-based primary key cannot contain any aggregate columns of the materialized view.

  • A materialized view cannot be replicated directly using TimesTen replication. You can replicate the detail tables. You must define the same materialized view on both sides of replication. TimesTen automatically updates the corresponding materialized views.

  • You cannot define a foreign key if the referencing or referenced table is a materialized view.

Additional restrictions and requirements on the defining query include:

  • The select list of the view definition must include all columns that appear in the GROUP BY list.

  • An aggregate view must include a COUNT (*) or COUNT (non-nullable column) in the select list.

  • Do not use derived tables or JOIN tables.

  • Do not use SELECT DISTINCT or an aggregate distinct function.

  • Do not use the set operators UNION, MINUS, or INTERSECT.

  • Do not use SUM of nullable expressions.

  • Use only simple columns as group by columns.

  • Group by columns cannot belong to self join tables.

  • Do not use these clauses:

    • HAVING

    • ORDER BY

    • DISTINCT

    • FIRST

    • JOIN

  • Do not use the TT_HASH function.

  • You can use SUM and COUNT but do not use expressions involving SUM and COUNT. Do not use AVG, which is treated as SUM/COUNT.

  • Do not specify MIN or MAX functions in the select list.

  • For joins:

    • Join predicates cannot have an OR.

    • Do not specify Cartesian product joins (joins with no join predicate).

    • For outer joins, outer join each inner table with at most one table.

Additional considerations include:

  • A materialized view is read-only and cannot be updated directly. A materialized view is updated only when changes are made to the associated detail tables. Therefore, a materialized view cannot be the target of a DELETE, UPDATE, or INSERT statement.

  • By default, a range index is created to enforce the primary key for a materialized view. Alternatively, use the UNIQUE HASH clause to specify a hash index for the primary key.

    • If your application performs range queries over a materialized view's primary key, then choose a range index for that view by omitting the UNIQUE HASH clause.

    • If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH clause. See "CREATE TABLE" for more information about the UNIQUE HASH clause.

  • You can use ALTER TABLE to change the representation of the primary key index or resize a hash index of a materialized view.

  • You cannot add or drop columns in the materialized view with the ALTER TABLE statement. To change the structure of the materialized view, drop and recreate the view.

  • You can create indexes on the materialized view with the CREATE INDEX SQL statement.

The owner of a materialized view must have the SELECT privilege on its detail tables. The SELECT privilege is implied by the SELECT ANY TABLE and ADMIN system privileges. When the SELECT privilege or a higher-level system privilege on the detail tables is revoked from the owner of the materialized view, the materialized view becomes invalid.

Selecting from an invalid materialized view fails with an error. Updates to the detail tables of an invalid materialized view do not update the materialized view.

You can identify invalid materialized views by using the ttIsql describe command and by inspecting the STATUS column of the SYS.DBA_OBJECTS, SYS.ALL_OBJECTS or SYS.USER_OBJECTS system tables. See Oracle TimesTen In-Memory Database System Tables and Views Reference.
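
For example, one way to list objects that are no longer valid is a query of this form (a sketch that assumes the standard OBJECT_NAME column alongside the STATUS column described above):

SELECT object_name, status FROM SYS.USER_OBJECTS WHERE status <> 'VALID';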

If the revoked privilege is restored, you can make an invalid materialized view valid again by dropping and recreating the materialized view.

For more information, see "Object privileges for materialized views" in Oracle TimesTen In-Memory Database Security Guide.

Examples for CREATE MATERIALIZED VIEW: TimesTen Scaleout

Syntax example:

Command> CREATE MATERIALIZED VIEW mv
             DISTRIBUTE BY HASH (phone)
             AS SELECT phone FROM accounts;
1010 rows materialized.

Examples: TimesTen Classic

Create a materialized view of columns from the customer and bookorder tables.

CREATE MATERIALIZED VIEW custorder AS
  SELECT custno, custname, ordno, book
  FROM customer, bookorder
  WHERE customer.custno=bookorder.custno;

Create a materialized view of columns x1 and y1 from the t1 table.

CREATE MATERIALIZED VIEW v1 AS SELECT x1, y1 FROM t1
  PRIMARY KEY (x1) UNIQUE HASH ON (x1) PAGES=100;

Create a materialized view from an outer join of columns x1 and y1 from the t1 and t2 tables.

CREATE MATERIALIZED VIEW v2 AS SELECT x1, y1 FROM t1, t2
  WHERE x1=x2(+);

The following example creates a materialized view empmatview2 based on the selected columns employee_id and email from the employees table. After the materialized view is created, create an index on the mvemp_id column of the materialized view empmatview2.

CREATE MATERIALIZED VIEW empmatview2
   AS SELECT employee_id mvemp_id, email mvemail 
         FROM employees;
107 rows materialized. 

CREATE INDEX empmvindex ON empmatview2 (mvemp_id);

CREATE PACKAGE

The CREATE PACKAGE statement creates the specification for a standalone package, which is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. The package specification declares these objects. The package body defines these objects.

Required privilege

CREATE PROCEDURE (if owner) or CREATE ANY PROCEDURE (if not owner).

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

CREATE [OR REPLACE] PACKAGE [Owner.]PackageName
      [InvokerRightsClause] [AccessibleByClause]
      {IS|AS}
      PlsqlPackageSpec

InvokerRightsClause::=
AUTHID {CURRENT_USER | DEFINER}

AccessibleByClause::=
ACCESSIBLE BY (accessor[,...])

accessor::= 
[UnitKind][Owner.]UnitName

You can specify InvokerRightsClause or AccessibleByClause in any order.

Parameters

Parameter Description

OR REPLACE

Specify OR REPLACE to recreate the package specification if it already exists. Use this clause to change the specification of an existing package without dropping and recreating the package. When you change a package specification, TimesTen recompiles it.

PackageName

Name of the package.

InvokerRightsClause

Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:

  • Specify DEFINER so that SQL name resolution and privilege checking operate as though the owner of the procedure or function (the definer, in whose schema it resides) is running it. DEFINER is the default.

  • Specify CURRENT_USER so that SQL name resolution and privilege checking operate as though the current user (the invoker) is running it.

For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide.
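
A minimal sketch of a package specification created with invoker's rights (the package and procedure names are illustrative):

CREATE OR REPLACE PACKAGE util_pkg
  AUTHID CURRENT_USER
AS
  -- SQL issued by this package runs with the privileges of the invoker.
  PROCEDURE show_greeting;
END;
/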

AccessibleByClause

Use this clause to specify one or more accessors (PL/SQL units) that can invoke the package directly. The list of accessors that can access the package is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the package to only those objects on the white list.

AccessibleByClause can appear only once in the CREATE PACKAGE statement.

Syntax: ACCESSIBLE BY (accessor [,...])

accessor

Used in AccessibleByClause. An accessor is a PL/SQL unit that can invoke the package.

An accessor can appear more than once in the AccessibleByClause clause.

Syntax: [UnitKind][Owner.]UnitName

UnitKind

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the package.

UnitKind is optional, but if specified, valid options are:

  • FUNCTION

  • PROCEDURE

  • PACKAGE

[Owner.]UnitName

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the package. If you specify UnitKind, then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind, then UnitName must be the name of a procedure. UnitName is required.

You can optionally specify Owner. If you specify Owner, then UnitName must reside in that owner's schema. If you do not specify Owner, UnitName must be in the schema that contains the package.

IS|AS

Specify either IS or AS to declare the package specification.

PlsqlPackageSpec

Specifies the package specification. Can include type definitions, cursor declarations, variable declarations, constant declarations, exception declarations and PL/SQL subprogram declarations.

Description

  • AccessibleByClause:

    • AccessibleByClause is valid at the top-level package definition. You cannot specify AccessibleByClause in the individual procedures or functions within the package. In addition, you cannot specify AccessibleByClause in the CREATE PACKAGE BODY statement.

    • You can use this clause to restrict access to helper packages. For example, assume your PL/SQL package defines an API for a given functionality and that functionality is implemented using a set of helper procedures and functions. You want to limit applications to only be able to call the API procedure or function that is defined in your package, and to not be able to call the helper procedures and functions directly. You can use the ACCESSIBLE BY clause to achieve this.

    • The compiler checks the validity of the syntax of the ACCESSIBLE BY clause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.

    • When you invoke the package, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed, and the package has no ACCESSIBLE BY clause, the invocation succeeds. If the package has an ACCESSIBLE BY clause, the invocation succeeds only if the invoker is an accessor.

  • When you create or replace a package, the privileges granted on the package remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.

  • In a replicated environment, the CREATE PACKAGE statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.

Examples

Illustrating the correct usage of the AccessibleByClause

This example illustrates the correct usage of the AccessibleByClause. The clause is specified at the top level of the CREATE PACKAGE statement. Note that the CallingProc procedure does not need to exist.

Command> CREATE OR REPLACE PACKAGE ProtectedPkg
           ACCESSIBLE BY (PROCEDURE CallingProc)
         AS
           PROCEDURE ProtectedProc;
         END;
         /
 
Package created.

Illustrating the incorrect usage of the AccessibleByClause

These examples show the incorrect use of the AccessibleByClause. The first example attempts to use AccessibleByClause in the packaged procedure, resulting in a compilation error. The second example attempts to use AccessibleByClause in the CREATE PACKAGE BODY statement, resulting in a compilation error.

This example uses the ACCESSIBLE BY clause in the packaged procedure.

Command> CREATE OR REPLACE PACKAGE ProtectedPkg1
         AS
           PROCEDURE ProtectedProc1 
           ACCESSIBLE BY (PROCEDURE CallingProc);
         END;
         /
         
Warning: Package created with compilation errors.
 
Command> SHOW ERRORS
Errors for PACKAGE PROTECTEDPKG1:
 
LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PLS-00157: Only schema-level programs allow ACCESSIBLE BY

This example uses the ACCESSIBLE BY clause in the CREATE PACKAGE BODY statement.

Command> CREATE OR REPLACE PACKAGE ProtectedPkg3
           ACCESSIBLE BY (PROCEDURE CallingProc3)
         AS
           PROCEDURE ProtectedProc3;
         END;
         /
        
Package created.

Command> CREATE OR REPLACE PACKAGE BODY ProtectedPkg3
           ACCESSIBLE BY (PROCEDURE CallingProc3)
         AS
           PROCEDURE ProtectedProc3 AS
           BEGIN
             NULL;
           END;
           ;
         /
 
Warning: Package body created with compilation errors.
 
Command> SHOW ERRORS
Errors for PACKAGE BODY PROTECTEDPKG3:
 
LINE/COL ERROR
-------- -----------------------------------------------------------------
2/1      PLS-00103: Encountered the symbol "ACCESSIBLE" when expecting one of the
following:
 
  is as compress compiled wrapped

Ensuring only the API can access the helper package

This example walks through a series of steps to illustrate the use of the AccessibleByClause. The example creates the SampleAPI package and the SampleHelper package. The ACCESSIBLE BY clause is specified on the SampleHelper package to ensure that only the SampleAPI package can access the SampleHelper package.

Steps:

  1. Create the SampleHelper package. Specify the ACCESSIBLE BY clause, giving the SampleAPI package access to the SampleHelper package. The SampleAPI package is in the white list.

    Command> CREATE OR REPLACE PACKAGE SampleHelper
               ACCESSIBLE BY (SampleAPI)
             AS
               PROCEDURE SampleH1;
               PROCEDURE SampleH2;
             END;
             /
     
    Package created.
    
  2. Create the SampleHelper package body.

    Command> CREATE OR REPLACE PACKAGE BODY SampleHelper
             AS
               PROCEDURE SampleH1 AS
               BEGIN
                 DBMS_OUTPUT.PUT_LINE('Sample helper procedure SampleH1');
               END;
               PROCEDURE SampleH2 AS
               BEGIN
                 DBMS_OUTPUT.PUT_LINE('Sample helper procedure SampleH2');
               END;
             END;
             /
     
    Package body created.
    
  3. Create the SampleAPI package.

    Command> CREATE OR REPLACE PACKAGE SampleAPI
             AS
               PROCEDURE p1;
               PROCEDURE p2;
             END;
             /
     
    Package created.
    
  4. Create the SampleAPI package body. The p1 procedure references the SampleHelper.SampleH1 procedure. The p2 procedure references the SampleHelper.SampleH2 procedure.

    Command> CREATE OR REPLACE PACKAGE BODY SampleAPI
             AS
               PROCEDURE p1 AS
               BEGIN
                 DBMS_OUTPUT.PUT_LINE('SampleAPI procedure p1');
                 SampleHelper.SampleH1;
               END;
               PROCEDURE p2 AS
               BEGIN
                 DBMS_OUTPUT.PUT_LINE('SampleAPI procedure p2');
                 SampleHelper.SampleH2;
               END;
             END;
             /
     
    Package body created.
    
  5. Call the SampleAPI.p1 and the SampleAPI.p2 procedures. The SampleAPI package is in the white list of the SampleHelper package, resulting in successful execution.

    Command> SET SERVEROUTPUT ON
    Command> BEGIN
               SampleAPI.p1;
               SampleAPI.p2;
             END;
             /
    SampleAPI procedure p1
    Sample helper procedure SampleH1
    SampleAPI procedure p2
    Sample helper procedure SampleH2
     
    PL/SQL procedure successfully completed.
    
  6. Call the SampleHelper.SampleH1 procedure directly. An error is returned due to insufficient access privileges.

    Command> BEGIN
               SampleHelper.SampleH1;
             END;
            /
     8503: ORA-06550: line 2, column 3:
    PLS-00904: insufficient privilege to access object SAMPLEHELPER
     8503: ORA-06550: line 2, column 3:
    PL/SQL: Statement ignored
    The command failed.

CREATE PACKAGE BODY

The CREATE PACKAGE BODY statement creates the body of a standalone package. A package is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. A package specification declares these objects. A package body defines these objects.

Required privilege

CREATE PROCEDURE (if owner) or CREATE ANY PROCEDURE (if not owner).

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

CREATE [OR REPLACE] PACKAGE BODY [Owner.]PackageBody 
      {IS|AS} plsql_package_body

Parameters

Parameter Description

OR REPLACE

Specify OR REPLACE to recreate the package body if it already exists. Use this clause to change the body of an existing package without dropping and recreating it. When you change a package body, TimesTen recompiles it.

PackageBody

Name of the package body.

IS|AS

Specify either IS or AS to declare the package body.

plsql_package_body

Specifies the package body which consists of PL/SQL subprograms.

Description

In a replicated environment, the CREATE PACKAGE BODY statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.

When you create or replace a package body, the privileges granted on the package body remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.

CREATE PROCEDURE

The CREATE PROCEDURE statement creates a standalone stored procedure.

Required privilege

CREATE PROCEDURE (if owner) or CREATE ANY PROCEDURE (if not owner).

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

CREATE [OR REPLACE] PROCEDURE [Owner.]ProcedureName 
     [(arguments [IN|OUT|IN OUT][NOCOPY] DataType [DEFAULT expr][,...])]
     [InvokerRightsClause][AccessibleByClause] [DETERMINISTIC]
     {IS|AS} plsql_procedure_body

InvokerRightsClause::=
AUTHID {CURRENT_USER|DEFINER}

AccessibleByClause::=
ACCESSIBLE BY(accessor[,...])

accessor::= 
[UnitKind][Owner.]UnitName

You can specify InvokerRightsClause, AccessibleByClause, or DETERMINISTIC in any order.

Parameters

Parameter Description

OR REPLACE

Specify OR REPLACE to recreate the procedure if it already exists. Use this clause to change the definition of an existing procedure without dropping and recreating it. When you recreate a procedure, TimesTen recompiles it.

ProcedureName

Name of procedure.

arguments

Name of argument/parameter. You can specify 0 or more parameters for the procedure. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type.

[IN|OUT|IN OUT]

Parameter modes.

IN is a read-only parameter. You can pass the parameter's value into the procedure, but the procedure cannot pass a value back out to the calling PL/SQL block. The value of the parameter cannot be changed within the procedure.

OUT is a write-only parameter. Use an OUT parameter to pass a value back from the procedure to the calling PL/SQL block. You can assign a value to the parameter.

IN OUT is a read/write parameter. You can pass values into the procedure and return a value back to the calling program (either the original, unchanged value or a new value set within the procedure).

IN is the default.

NOCOPY

Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. Can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY.

DEFAULT expr

Use this clause to specify a DEFAULT value for the parameter. You can specify := in place of the keyword DEFAULT.
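
A minimal sketch (procedure and parameter names are illustrative) combining the parameter modes, NOCOPY, and DEFAULT options described above:

CREATE OR REPLACE PROCEDURE add_bonus (
  bonus     IN            NUMBER DEFAULT 100,
  total_pay IN OUT NOCOPY NUMBER)
AS
BEGIN
  -- bonus falls back to 100 when the caller omits it;
  -- total_pay is read and updated in place.
  total_pay := total_pay + bonus;
END;
/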

InvokerRightsClause

Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:

  • Specify DEFINER so that SQL name resolution and privilege checking operate as though the owner of the procedure or function (the definer, in whose schema it resides) is running it. DEFINER is the default.

  • Specify CURRENT_USER so that SQL name resolution and privilege checking operate as though the current user (the invoker) is running it.

For more information, see "Definer's rights and invoker's rights (AUTHID clause)" in the Oracle TimesTen In-Memory Database Security Guide.

AccessibleByClause

Use this clause to specify one or more accessors (PL/SQL units) that can invoke the procedure directly. The list of accessors that can access the procedure is called a white list. A white list gives you the ability to add an extra layer of security to your PL/SQL objects. Specifically, you can restrict access to the procedure to only those objects on the white list.

The AccessibleByClause can appear only once in the CREATE PROCEDURE statement.

Syntax: ACCESSIBLE BY (accessor [,...])

accessor

Used in the AccessibleByClause. An accessor is a PL/SQL unit that can invoke the procedure.

An accessor can appear more than once in the AccessibleByClause.

Syntax: [UnitKind][Owner.]UnitName

UnitKind

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the kind of PL/SQL unit that can invoke the procedure.

UnitKind is optional, but if specified, valid options are:

  • FUNCTION

  • PROCEDURE

  • PACKAGE

[Owner.]UnitName

Used in the accessor clause (which is part of the AccessibleByClause clause). Specifies the name of the PL/SQL unit that can invoke the procedure. If you specify UnitKind, then UnitName must be a name of a unit of that kind. For example, if you specify PROCEDURE for UnitKind, then UnitName must be the name of a procedure. UnitName is required.

You can optionally specify Owner. If you specify Owner, then UnitName must reside in that owner's schema. If you do not specify Owner, UnitName must be in the schema that contains the procedure.

DETERMINISTIC

Specify DETERMINISTIC to indicate that the procedure produces the same results whenever it is called with the same values for its parameters.

IS|AS

Specify either IS or AS to declare the body of the procedure.

plsql_procedure_body

Specifies the procedure body.

Description

  • AccessibleByClause:

    • The compiler checks the validity of the syntax of the AccessibleByClause, but does not check that the accessor exists. Therefore, you can define an accessor that does not yet exist in the owner's schema.

    • When you invoke the procedure, the compiler first does the normal permission checks on the invocation. If any check fails, the invocation fails, even if the invoker is an accessor. If all normal permission checks on the invocation succeed, and the procedure has no AccessibleByClause, the invocation succeeds. If the procedure has an AccessibleByClause, the invocation succeeds only if the invoker is an accessor.

  • When you create or replace a procedure, the privileges granted on the procedure remain the same. If you drop and recreate the object, the object privileges that were granted on the original object are revoked.

  • The namespace for PL/SQL procedures is distinct from the TimesTen built-in procedures. You can create a PL/SQL procedure with the same name as a TimesTen built-in procedure.

  • TimesTen does not support:

    • call_spec clause

    • AS EXTERNAL clause

  • In a replicated environment, the CREATE PROCEDURE statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.

Examples

Using the AccessibleByClause

This example creates the ProtectedProc procedure and uses the ACCESSIBLE BY clause to restrict access to the CallingProc procedure. The CallingProc procedure does not yet exist. The example then creates the CallingProc procedure, which calls the ProtectedProc procedure. The CallingProc procedure is successfully created, as it is specified in the ACCESSIBLE BY clause. The example then attempts to call the ProtectedProc procedure directly, resulting in an error. It concludes by attempting to create the AnotherCallingProc procedure, which references the ProtectedProc procedure but is not in the white list, so a compilation error results.

Steps to illustrate the example:

  1. Create the ProtectedProc procedure, specifying the ACCESSIBLE BY clause. The CallingProc procedure is in the white list. It does not yet exist.

    Command> CREATE OR REPLACE PROCEDURE ProtectedProc
               ACCESSIBLE BY (CallingProc)
             AS
             BEGIN
               DBMS_OUTPUT.PUT_LINE ('ProtectedProc');
             END;
             /
              
    Procedure created.
    
  2. Create the CallingProc procedure, referencing the ProtectedProc procedure.

    Command> CREATE OR REPLACE PROCEDURE CallingProc
             AS
             BEGIN
               DBMS_OUTPUT.PUT_LINE ('CallingProc');
               ProtectedProc;
             END;
             /
             
    Procedure created.
    
  3. Call the CallingProc procedure. The procedure is successfully executed.

    Command> SET SERVEROUTPUT ON
    Command> exec CallingProc;
    CallingProc
    ProtectedProc
     
    PL/SQL procedure successfully completed.
    
  4. Attempt to call the ProtectedProc procedure directly. An error is thrown due to insufficient access privileges.

    Command> exec ProtectedProc;
     8503: ORA-06550: line 1, column 7:
    PLS-00904: insufficient privilege to access object PROTECTEDPROC
     8503: ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    The command failed.
    
  5. Create the AnotherCallingProc procedure that references the ProtectedProc procedure. Because the AnotherCallingProc procedure is not in the white list (it is not listed in the ACCESSIBLE BY clause of ProtectedProc), a compilation error results.

    Command> CREATE OR REPLACE PROCEDURE AnotherCallingProc
             AS
             BEGIN
               DBMS_OUTPUT.PUT_LINE ('AnotherCallingProc');
               ProtectedProc;
             END;
             /
     
    Warning: Procedure created with compilation errors.
     
    Command> SHOW ERRORS
    Errors for PROCEDURE ANOTHERCALLINGPROC:
     
    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    5/1      PL/SQL: Statement ignored
    5/1      PLS-00904: insufficient privilege to access object PROTECTEDPROC
    

Using the accessor clause

This example illustrates the uses of the accessor clause through a sequence of steps.

  1. Create the SampleUser1 and SampleUser2 users and grant ADMIN privileges to both users.

    Command> CREATE USER SampleUser1 IDENTIFIED BY SampleUser1PW;
     
    User created.
     
    Command> CREATE USER SampleUser2 IDENTIFIED BY SampleUser2PW;
     
    User created.
    Command> GRANT ADMIN TO SampleUser1, SampleUser2;
    
  2. Create the SampleUser1.ProtectedProc procedure, specifying the ACCESSIBLE BY clause. The CallingProc procedure is specified in the white list without an owner. An accessor specified without an owner is assumed to be in the same schema as the procedure that contains the ACCESSIBLE BY clause. Thus, CallingProc is assumed to be in the SampleUser1 schema.

    Command> CREATE OR REPLACE PROCEDURE SampleUser1.ProtectedProc
             ACCESSIBLE BY (CallingProc)
             AS
             BEGIN
               DBMS_OUTPUT.PUT_LINE ('SampleUser1 ProtectedProc');
             END;
             /
     
    Procedure created.
    
  3. Connect as SampleUser1. Create the CallingProc procedure, referencing the SampleUser1.ProtectedProc procedure.

    Command> connect adding "uid=SampleUser1;pwd=SampleUser1PW" as SampleUser1;
    Connection successful:
    DSN=database1;UID=SampleUser1;DataStore=/scratch/sampleuser1/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;
    PermSize=128;
    (Default setting AutoCommit=1)
    
    sampleuser1: Command> CREATE OR REPLACE PROCEDURE CallingProc
                          AS
                          BEGIN
                            DBMS_OUTPUT.PUT_LINE ('SampleUser1 CallingProc');
                            ProtectedProc;
                          END;
                          /
     
    Procedure created.
    
  4. From the SampleUser1 connection, call the CallingProc procedure. The call succeeds.

    sampleuser1: Command> SET SERVEROUTPUT ON
    sampleuser1: Command> exec CallingProc;
    SampleUser1 CallingProc
    SampleUser1 ProtectedProc
     
    PL/SQL procedure successfully completed.
    
  5. Connect as SampleUser2. Create the CallingProc procedure, referencing the SampleUser1.ProtectedProc procedure. A compilation error results.

    sampleuser1: Command> connect adding "uid=SampleUser2;pwd=SampleUser2PW"
                          as SampleUser2;
    Connection successful:
    DSN=database1;UID=SampleUser2;DataStore=/scratch/sampleuser2/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;
    PermSize=128;
    (Default setting AutoCommit=1)
    
    sampleuser2: Command> CREATE OR REPLACE PROCEDURE CallingProc
                          AS
                          BEGIN
                            DBMS_OUTPUT.PUT_LINE ('SampleUser2 CallingProc');
                            SampleUser1.ProtectedProc;
                          END;
                          /
     
    Warning: Procedure created with compilation errors.
     
    sampleuser2: Command> SHOW ERRORS
    Errors for PROCEDURE CALLINGPROC:
     
    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    5/1      PL/SQL: Statement ignored
    5/1      PLS-00904: insufficient privilege to access object PROTECTEDPROC
    
  6. Switch to the SampleUser1 connection. Recreate the ProtectedProc procedure, this time adding SampleUser2.CallingProc to the white list.

    sampleuser2: Command> use SampleUser1
    sampleuser1: Command> CREATE OR REPLACE PROCEDURE ProtectedProc
                          ACCESSIBLE BY (CallingProc, SampleUser2.CallingProc)
                          AS
                          BEGIN
                            DBMS_OUTPUT.PUT_LINE ('SampleUser1 ProtectedProc');
                          END;
                          /
     
    Procedure created.
    
  7. From the SampleUser2 connection, call the CallingProc procedure. The SampleUser2.CallingProc procedure is now in the white list of the SampleUser1.ProtectedProc procedure, so the call succeeds.

    sampleuser1: Command> use SampleUser2;
    sampleuser2: Command> SET SERVEROUTPUT ON
    sampleuser2: Command> exec CallingProc
    SampleUser2 CallingProc
    SampleUser1 ProtectedProc
     
    PL/SQL procedure successfully completed.

Using the CREATE PROCEDURE statement to retrieve information

Create a query_emp procedure to retrieve information about an employee. The procedure accepts an employee_id value (such as 171) and returns that employee's last_name and salary through two OUT parameters.

Command> CREATE OR REPLACE PROCEDURE query_emp
                   (p_id IN employees.employee_id%TYPE,
                    p_name  OUT employees.last_name%TYPE,
                    p_salary OUT employees.salary%TYPE) IS
                 BEGIN
                   SELECT last_name, salary INTO p_name, p_salary
                   FROM employees
                   WHERE employee_id = p_id;
                 END query_emp;
                 /
 
Procedure created.
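The following shows one way to invoke query_emp and display the OUT parameter values, using an anonymous PL/SQL block. This is a minimal sketch: it assumes the employees table shown above contains an employee with employee_id 171, and the local variable names are illustrative.

Command> SET SERVEROUTPUT ON
Command> DECLARE
           v_name   employees.last_name%TYPE;
           v_salary employees.salary%TYPE;
         BEGIN
           -- Pass employee_id 171; the last_name and salary values are
           -- returned through the two OUT parameters.
           query_emp(171, v_name, v_salary);
           DBMS_OUTPUT.PUT_LINE(v_name || ': ' || v_salary);
         END;
         /

The output depends on the data in the employees table.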

CREATE PROFILE

The CREATE PROFILE statement creates a profile, which is a set of limits on the database resources. If you assign a profile to a user, that user cannot exceed the limits specified in the profile.

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is supported with TimesTen Scaleout.

SQL syntax

CREATE PROFILE profile LIMIT password_parameters

password_parameters::=
[FAILED_LOGIN_ATTEMPTS password_parameter_options]
[PASSWORD_LIFE_TIME password_parameter_options]
[PASSWORD_REUSE_TIME password_parameter_options]
[PASSWORD_REUSE_MAX password_parameter_options]
[PASSWORD_LOCK_TIME password_parameter_options]
[PASSWORD_GRACE_TIME password_parameter_options]
[{PASSWORD_COMPLEXITY_CHECKER|PASSWORD_VERIFY_FUNCTION} password_checker_options]

password_parameter_options::=
UNLIMITED|DEFAULT|constant

password_checker_options::=
function|NULL|DEFAULT

function::=
TT_VERIFY_FUNCTION|TT_STRONG_VERIFY_FUNCTION|TT_STIG_VERIFY_FUNCTION

Parameters

Parameter Description

profile

Name of the profile.

LIMIT password_parameters

The LIMIT clause sets the limits for the password parameters. The LIMIT keyword is required.

The password parameters consist of the name of the password parameter and the value (or limit) for that parameter. This includes the password complexity checker functions. All the parameters (with the exception of FAILED_LOGIN_ATTEMPTS and PASSWORD_REUSE_MAX) set lengths of time and are interpreted as a number of days. You can use a decimal value (for example, you can use .0833 to denote approximately two hours). The minimum value is 1 second. The maximum value is 106,751,991 days. The constant value must be expressed in days. For example, to set a value of 5 minutes, specify the constant value of 0.0034722222222222 (5/1440 days). For FAILED_LOGIN_ATTEMPTS and PASSWORD_REUSE_MAX, you must specify an integer.

If you do not specify a password parameter after the LIMIT clause, the limit for that password parameter is based on the limit defined in the DEFAULT profile. In addition, if you only specify the LIMIT keyword with no additional parameters, the limits for the profile are based on the limits of the DEFAULT profile.

FAILED_LOGIN_ATTEMPTS

Specifies the number of consecutive failed attempts to connect to the database by a user before that user's account is locked.

PASSWORD_LIFE_TIME

Specifies the number of days that a user can use the same password for authentication. If you also set a value for PASSWORD_GRACE_TIME, then the password expires if it is not changed within the grace period. In such a situation, future connections to the database are rejected.

PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX

These two parameters must be used together.

  • PASSWORD_REUSE_TIME specifies the number of days that must pass before a user can reuse a password. For example, if you specify a value of 30, then after 30 days the user can reuse a previous password.

  • PASSWORD_REUSE_MAX specifies the number of password changes that are required before the current password can be reused.

You must specify a value for both parameters for them to have any effect. Specifically:

  • If you specify a value for both parameters: A user cannot reuse a password until the password has been changed the number of times specified for PASSWORD_REUSE_MAX during the number of days specified for PASSWORD_REUSE_TIME. For example, if you specify a value of 30 for PASSWORD_REUSE_TIME and a value of 10 for PASSWORD_REUSE_MAX, then the user can reuse the password after 30 days if the password has been changed 10 times.

  • If you specify a value for one parameter and specify a value of UNLIMITED for the second parameter, then the user can never reuse a password.

  • If you specify a value of UNLIMITED for both parameters, then TimesTen ignores both values and the password can be reused at any time.

PASSWORD_LOCK_TIME

Specifies the number of days the user account is locked after the specified number of consecutive failed connection attempts.

PASSWORD_GRACE_TIME

Specifies the number of days after the grace period begins during which TimesTen issues a warning, but allows the connection to the database. If the password is not changed during the grace period, the password expires. This parameter is associated with the PASSWORD_LIFE_TIME parameter.

UNLIMITED

Indicates that there is no limit for the password parameter. If you specify UNLIMITED, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS UNLIMITED.

DEFAULT

Indicates that you want to omit a limit for the password parameter in this profile. A user that is assigned this profile is subject to the limit defined in the DEFAULT profile for this password parameter.

If you specify DEFAULT, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS DEFAULT.

constant

Indicates the value of the password parameter if you do not specify UNLIMITED or DEFAULT. If specified, it must follow the password parameter. For example, FAILED_LOGIN_ATTEMPTS 3.

{PASSWORD_COMPLEXITY_CHECKER|PASSWORD_VERIFY_FUNCTION} {function|NULL|DEFAULT}

Indicates if password verification is done on passwords and, if so, the function used for verification. You can specify either the PASSWORD_COMPLEXITY_CHECKER or the PASSWORD_VERIFY_FUNCTION password parameter. They are synonymous.

function refers to one of the three supported password complexity checker functions. Specify one of these functions to direct TimesTen to perform password verification. Valid values:
  • TT_VERIFY_FUNCTION
  • TT_STRONG_VERIFY_FUNCTION
  • TT_STIG_VERIFY_FUNCTION

NULL indicates that there is not a password verification function assigned for the profile.

DEFAULT indicates that the user is subject to the limits defined by the DEFAULT profile. The DEFAULT profile initially has a value of NULL.

If you do not specify the PASSWORD_COMPLEXITY_CHECKER password parameter, the value defaults to the limits defined for the DEFAULT profile.

Description: PROFILE statement

  • Use the CREATE PROFILE statement to create a profile for the password resources, which is a set of limits for the password parameters. If you assign the profile to a user, the user cannot exceed the limits specified for the profile. If you do not assign a profile to a user, TimesTen assigns the DEFAULT profile. See "Password management" in the Oracle TimesTen In-Memory Database Security Guide for more information on password management and profiles.

  • To specify the password parameter limits for a user, do the following:

    • Use the CREATE PROFILE statement to create a profile that defines the password parameter limits.

    • Use the CREATE USER or ALTER USER statement to assign the profile to the user (see the sketch after this list).

  • There is a DEFAULT profile that defines a limit for each of the password parameters. This profile initially defines UNLIMITED for these parameters (which indicates that no limit has been set for the parameter). The exceptions are:

    • FAILED_LOGIN_ATTEMPTS: Set to 10.

    • PASSWORD_LOCK_TIME: Set to 0.0034722222222222 days (equal to 5 minutes, 5/1440 days).

    • PASSWORD_COMPLEXITY_CHECKER: Set to NULL.

    You can change these limits by using the ALTER PROFILE statement and specifying "DEFAULT" for the profile name. (Note that DEFAULT must be enclosed in double quotation marks.) See "ALTER PROFILE" for information.

  • If a user is not assigned a profile, the user is subject to the limits defined in the DEFAULT profile. If a user is assigned a profile and that profile omits a limit on the password parameter or specifies DEFAULT for the password parameter, then the user is subject to the limits on those password parameters as defined by the DEFAULT profile.

  • The instance administrator is assigned the SYSTEM profile. You cannot alter or drop the profile of the instance administrator.
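A minimal sketch of the profile assignment workflow follows. The profile, user, and password names (limits_profile, new_user, existing_user, newuserpwd1) are illustrative and the FAILED_LOGIN_ATTEMPTS limit is arbitrary; see "ALTER USER" for the authoritative syntax for assigning a profile to an existing user, and see the examples later in this section for complete scenarios.

Command> CREATE PROFILE limits_profile LIMIT FAILED_LOGIN_ATTEMPTS 3;
 
Profile created.
 
Command> CREATE USER new_user IDENTIFIED BY newuserpwd1 PROFILE limits_profile;
 
User created.
 
Command> ALTER USER existing_user PROFILE limits_profile;
 
User altered.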

About password complexity checker verification

Password complexity checker verification ensures the password for a user is complex enough to deter intruders who try to guess passwords. When you specify a password complexity checker function in the CREATE PROFILE statement, and then assign this profile to a user, the user must create a password that meets the requirements defined for the password complexity checker function. These requirements depend on the specific password complexity checker function that you specify.

TimesTen provides the TT_VERIFY_FUNCTION, the TT_STRONG_VERIFY_FUNCTION, and the TT_STIG_VERIFY_FUNCTION password complexity checker functions to manage the complexity of the passwords. These functions are stored in the SYS schema.

The characters of interest for the password complexity checker functions are letters, digits, and special characters:
  • letter: Uppercase and lowercase letters
  • digit: 0-9 numbers
  • special: A character that is neither a letter nor a digit.
    `~!@#$%^&*()_-+={}[]\/<>,.?':|(space)

    Note:

    • If you use one or more of the special characters, the entire password must be enclosed in double quotation marks ("). The exceptions are the # and the @ special characters. (A password that contains the # or the @ does not need to be enclosed in double quotation marks.)

    • The password cannot contain a semicolon (;) or a double quotation mark (").

    • The password must begin with a letter unless you enclose the entire password in double quotation marks.

You cannot define your own function for password complexity checker verification. The complexity of the password is checked when you use the IDENTIFIED BY clause in the CREATE USER or ALTER USER statements.
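For example, because the following password contains the ! special character, it must be enclosed in double quotation marks (the user name and password here are illustrative):

Command> CREATE USER pwdemo IDENTIFIED BY "aB1!xYz9";
 
User created.

A password such as secret#1, by contrast, does not need quotation marks, because # and @ are the exceptions noted above.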

Here are the password complexity checker functions:

TT_VERIFY_FUNCTION

The TT_VERIFY_FUNCTION does the following password complexity checker verification:
  • A password:
    • Must have at least 8 characters.
    • Of these 8 characters, must contain at least one letter, at least one digit, and at least one special character.
  • A password cannot contain:
    • Username or the username reversed
    • Database name
    • Oracle or TimesTen

    Note:

    The comparisons are case insensitive.

TT_STRONG_VERIFY_FUNCTION

The TT_STRONG_VERIFY_FUNCTION performs the following password complexity checker verification:
  • Must have at least 9 characters.
  • Of these 9 characters, must contain at least two uppercase letters, at least two lowercase letters, at least two digits, and at least two special characters.

TT_STIG_VERIFY_FUNCTION

The TT_STIG_VERIFY_FUNCTION performs the following password complexity checker verification:
  • Must have at least 15 characters.
  • Of these 15 characters, must contain at least one uppercase letter, at least one lowercase letter, at least one digit, and at least one special character.

Description: Password complexity checker verification

  • EXECUTE privilege on TT_VERIFY_FUNCTION, TT_STRONG_VERIFY_FUNCTION, and TT_STIG_VERIFY_FUNCTION is required. TimesTen grants the EXECUTE privilege on these functions to PUBLIC by default.
  • The SYSTEM and the DEFAULT profiles are assigned a value of NULL by default. A NULL value indicates that there is no password complexity checker function for these profiles, and as such, no password complexity checker verification is done.
  • You cannot modify the SYSTEM profile to specify a password complexity checker function. Passwords for system users do not undergo password complexity checker verification.
  • You can use the ALTER PROFILE statement to modify the DEFAULT profile to specify a password complexity checker function. You specify such a function in the PASSWORD_COMPLEXITY_CHECKER (or PASSWORD_VERIFY_FUNCTION) clause.
  • The TT_VERIFY_FUNCTION, TT_STRONG_VERIFY_FUNCTION, and TT_STIG_VERIFY_FUNCTION functions in TimesTen are equivalent to the ORA12C_VERIFY_FUNCTION, ORA12C_STRONG_VERIFY_FUNCTION, and ORA12C_STIG_VERIFY_FUNCTION functions in Oracle Database.
  • If you use the ttMigrate utility to downgrade to an earlier major release (such as the 18.1 release), the PASSWORD_COMPLEXITY_CHECKER value is set to NULL for each profile in the database.

Restrictions on the password complexity checker functions

There are restrictions on the password complexity checker functions:
  • You cannot specify the SYS schema for the password complexity checker function. For example:
    Command> CREATE PROFILE my_profile LIMIT 
               PASSWORD_COMPLEXITY_CHECKER SYS.TT_VERIFY_FUNCTION;
    15187: Cannot specify schema name for password complexity checker function
    The command failed.
    
  • You cannot define your own password complexity checker function. Use only the TT_VERIFY_FUNCTION, the TT_STRONG_VERIFY_FUNCTION, or the TT_STIG_VERIFY_FUNCTION password complexity checker functions.
  • The password complexity checker verification is only done on a newly created password. You specify this new password by using the IDENTIFIED BY clause of the CREATE USER or the ALTER USER statement. TimesTen does not verify differences between an old and a new password.
  • Multi-byte characters are not supported when specifying passwords. TimesTen does not validate passwords with multi-byte characters.

Specify TT_VERIFY_FUNCTION for PASSWORD_COMPLEXITY_CHECKER

This example first creates a profile and specifies the TT_VERIFY_FUNCTION function. The example then queries the dba_profiles system view to verify the TT_VERIFY_FUNCTION has been assigned to this profile. A user who is assigned this profile must specify a password that meets the password verification requirements for this function.

Command> CREATE PROFILE myprofile_pw1 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_VERIFY_FUNCTION;

Profile created.

Command> SELECT * FROM dba_profiles WHERE profile = 'MYPROFILE_PW1';
< MYPROFILE_PW1, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT >
< MYPROFILE_PW1, PASSWORD_LIFE_TIME, PASSWORD, DEFAULT >
< MYPROFILE_PW1, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< MYPROFILE_PW1, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT >
< MYPROFILE_PW1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, TT_VERIFY_FUNCTION >
< MYPROFILE_PW1, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT >
< MYPROFILE_PW1, PASSWORD_GRACE_TIME, PASSWORD, DEFAULT >
< MYPROFILE_PW1, TEMP_SPACE_PER_SESSION_MAX, MEMORY, DEFAULT >
8 rows found.

Create the sampleuser_pw1 user and assign the myprofile_pw1 profile to this user. Specify a password that meets the requirements of the TT_VERIFY_FUNCTION. See "TT_VERIFY_FUNCTION" for information on the TT_VERIFY_FUNCTION function.

Command> CREATE USER sampleuser_pw1 
           IDENTIFIED BY "A1!XXcg3" PROFILE myprofile_pw1;

User created.

Attempt to create the sampleuser_pw2 user and assign the myprofile_pw1 profile to the user, specifying a password that contains the username reversed in uppercase. The CREATE USER statement fails because the password cannot contain the username reversed; note that the comparison is case insensitive.

Command> CREATE USER sampleuser_pw2 
           IDENTIFIED BY "2WP_RESUELPMAS" PROFILE myprofile_pw1;
15186: Password complexity check for the specified password failed
15188: TT-20002: Password contains the username reversed
The command failed.

Specify TT_STRONG_VERIFY_FUNCTION for PASSWORD_COMPLEXITY_CHECKER

This example creates the myprofile_pw2 profile and specifies TT_STRONG_VERIFY_FUNCTION for the PASSWORD_COMPLEXITY_CHECKER password parameter. The example then creates the sampleuser_pw2 user and assigns the myprofile_pw2 profile to the user. The password meets the requirements of the TT_STRONG_VERIFY_FUNCTION function. See "TT_STRONG_VERIFY_FUNCTION" for more information on the TT_STRONG_VERIFY_FUNCTION function.

Command> CREATE PROFILE myprofile_pw2 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STRONG_VERIFY_FUNCTION;

Profile created.

Create the sampleuser_pw2 user and assign the myprofile_pw2 profile to the user. The password meets the requirements of the TT_STRONG_VERIFY_FUNCTION function. The user is successfully created.

Command> CREATE USER sampleuser_pw2 
           IDENTIFIED BY "!ddFF6C2?" PROFILE myprofile_pw2;

User created.

Specify TT_STIG_VERIFY_FUNCTION for PASSWORD_COMPLEXITY_CHECKER

This example creates the myprofile_pw3 profile and specifies TT_STIG_VERIFY_FUNCTION for the PASSWORD_COMPLEXITY_CHECKER password parameter. The example then creates the sampleuser_pw3 user and assigns the myprofile_pw3 profile to the user. The password meets the requirements of the TT_STIG_VERIFY_FUNCTION function. See "TT_STIG_VERIFY_FUNCTION" for more information on the TT_STIG_VERIFY_FUNCTION function.

Command> CREATE PROFILE myprofile_pw3 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STIG_VERIFY_FUNCTION;

Profile created.

Create the sampleuser_pw3 user and assign the myprofile_pw3 profile to the user. The password meets the requirements of the TT_STIG_VERIFY_FUNCTION function. The user is successfully created.

Command> CREATE USER sampleuser_pw3 
           IDENTIFIED BY "!ddBBKKUYT165>m" PROFILE myprofile_pw3;

User created.

Modify PASSWORD_COMPLEXITY_CHECKER value for SYSTEM and DEFAULT

This example queries the dba_profiles system view to check the value of the PASSWORD_COMPLEXITY_CHECKER password parameter for the SYSTEM and the DEFAULT profiles. The value is NULL by default.

Command> SELECT * FROM dba_profiles WHERE 
           resource_name='PASSWORD_COMPLEXITY_CHECKER' AND 
           profile IN ('DEFAULT','SYSTEM');
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< SYSTEM, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
2 rows found.

Attempt to modify the PASSWORD_COMPLEXITY_CHECKER password parameter for the SYSTEM profile. An error results as this password parameter cannot be modified.

Command> ALTER PROFILE SYSTEM LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STRONG_VERIFY_FUNCTION;
15176: Profile SYSTEM cannot be altered
The command failed.

Attempt to modify the PASSWORD_COMPLEXITY_CHECKER password parameter for the DEFAULT profile. The modification is successful. A user who is assigned the DEFAULT profile, or is not assigned a profile, must specify a password that meets the password verification requirements for the TT_STRONG_VERIFY_FUNCTION function.

Command> ALTER PROFILE "DEFAULT" LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_STRONG_VERIFY_FUNCTION;

Profile altered.

Query the dba_profiles view to verify the TT_STRONG_VERIFY_FUNCTION has been assigned to the DEFAULT profile.

Command> SELECT * FROM dba_profiles WHERE 
           resource_name='PASSWORD_COMPLEXITY_CHECKER' AND 
           profile = 'DEFAULT';
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, TT_STRONG_VERIFY_FUNCTION >
1 row found.

Create a profile and attempt to specify an invalid password complexity checker function

This example specifies an invalid password complexity checker function for the PASSWORD_COMPLEXITY_CHECKER clause. Even though this function resides in the SYS schema, an error results, as you can only specify one of the three supported password complexity checker functions.

Command> CREATE PROFILE myprofile1 LIMIT 
           PASSWORD_COMPLEXITY_CHECKER TT_COMPLEXITY_CHECK;
 8529: Invalid password complexity checker function TT_COMPLEXITY_CHECK
The command failed.

Create a profile and set limits on the password parameters

This example creates the profile1 profile and sets various limits on the password parameters. It then queries the dba_profiles system view to verify the limits.

Command> CREATE PROFILE profile1 LIMIT 
           FAILED_LOGIN_ATTEMPTS 5 
           PASSWORD_LIFE_TIME 60 
           PASSWORD_REUSE_TIME 60 
           PASSWORD_REUSE_MAX 5 
           PASSWORD_LOCK_TIME 1 
           PASSWORD_GRACE_TIME 10;
 
Profile created.

Query the dba_profiles system view to verify the limits. Note that since the PASSWORD_COMPLEXITY_CHECKER password parameter was not specified in the CREATE PROFILE statement, the value of PASSWORD_COMPLEXITY_CHECKER is DEFAULT (the value comes from the value that is in the DEFAULT profile).

Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE1' AND 
           resource_type='PASSWORD';
< PROFILE1, FAILED_LOGIN_ATTEMPTS, PASSWORD, 5 >
< PROFILE1, PASSWORD_LIFE_TIME, PASSWORD, 60 >
< PROFILE1, PASSWORD_REUSE_TIME, PASSWORD, 60 >
< PROFILE1, PASSWORD_REUSE_MAX, PASSWORD, 5 >
< PROFILE1, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE1, PASSWORD_LOCK_TIME, PASSWORD, 1 >
< PROFILE1, PASSWORD_GRACE_TIME, PASSWORD, 10 >
7 rows found.

Create a profile and specify FAILED_LOGIN_ATTEMPTS

This example creates the profile2 profile and specifies a value of 1 for FAILED_LOGIN_ATTEMPTS. The example then creates the user2 user and assigns user2 the profile2 profile. The user2 user attempts to connect to the database but specifies an invalid password, so the connection fails and the account is locked. A second attempt with the correct password, made before the lock time elapses, also fails. After five minutes, the user2 user reconnects successfully, because the PASSWORD_LOCK_TIME value of 0.0034722222222222 days (equal to 5 minutes), specified in the DEFAULT profile, has elapsed.

Command> CREATE PROFILE profile2 LIMIT FAILED_LOGIN_ATTEMPTS 1;
 
Profile created.
 
Command> CREATE USER user2 IDENTIFIED BY user2 PROFILE profile2;
 
User created.

Grant the ADMIN privilege to user2.

Command> GRANT ADMIN TO user2;

Attempt to connect to the database. The connection fails due to an invalid password specified in the connection string.

Command> connect adding "UID=user2;PWD=user3" as user2;
 7001: User authentication failed
The command failed.

Attempt to connect again specifying the correct password in the connection string. The connection fails due to:

  • One previous failed connection attempt

  • An attempt to connect to the database before the five-minute password lock time has elapsed.

none: Command> use database1
database1: Command> connect adding "UID=user2;PWD=user2" as user2;
15179: the account is locked
The command failed.

After five minutes, attempt to connect to the database again. The connection succeeds.

none: Command> use database1
database1: Command> connect adding "UID=user2;PWD=user2" as user2;
Connection successful: DSN=database1;UID=user2;DataStore=/scratch/database1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Determine the password parameter values in the DEFAULT profile

This example queries the dba_profiles system view to determine the password parameter values for the DEFAULT profile.

Command> SELECT * FROM dba_profiles WHERE profile = 'DEFAULT' AND 
           resource_type='PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.

Specify PASSWORD_LIFE_TIME and PASSWORD_GRACE_TIME

This example creates the profile4 profile and specifies a value of 0.0034722222222222 (equal to 5 minutes) for the PASSWORD_LIFE_TIME password parameter and a value of 0.01041667 (equal to 15 minutes) for the PASSWORD_GRACE_TIME password parameter. It then creates the user4 user and assigns the profile4 profile to user4. The example continues with attempts to connect to the database as user4.

Command> CREATE PROFILE profile4 LIMIT 
           PASSWORD_LIFE_TIME 0.0034722222222222 
           PASSWORD_GRACE_TIME 0.01041667;
 
Profile created.

Query the dba_profiles system view to verify the values for the password parameters.

Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE4' AND 
           resource_type='PASSWORD';
< PROFILE4, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT >
< PROFILE4, PASSWORD_LIFE_TIME, PASSWORD, .0034 >
< PROFILE4, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< PROFILE4, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT >
< PROFILE4, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE4, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT >
< PROFILE4, PASSWORD_GRACE_TIME, PASSWORD, .0104 >
7 rows found.

Create the user4 user and assign user4 the profile4 profile. Grant the CONNECT privilege to user4.

Command> CREATE USER user4 IDENTIFIED BY user4 PROFILE profile4;
 
User created.
 
Command> GRANT CONNECT TO user4;

Connect to the database as user4. The connection succeeds.

Command> connect adding "UID=user4;PWD=user4" as user4;
Connection successful: DSN=database1;UID=user4;DataStore=/scratch/database1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Disconnect from the database. After 5 minutes, reconnect to the database as user4. The connection succeeds but a warning is issued. The password lifetime is 5 minutes and the password grace time is 15 minutes.

user4: Command> disconnect user4;
Disconnecting from user4...
none: Command> use database1
database1: Command> connect adding "UID=user4;PWD=user4" as user4;
 
Warning 15182: Password will expire within 0.010417 days
 
Connection successful: DSN=database1;UID=user4;DataStore=/scratch/database1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;PermSize=128;
(Default setting AutoCommit=1)

Disconnect from the database. After 15 minutes, reconnect to the database as user4. The connection fails as the password grace time of 15 minutes has ended.

user4: Command> disconnect user4;
Disconnecting from user4...
none: Command> use database1
database1: Command> connect adding "UID=user4;PWD=user4" as user4;
15180: the password has expired
The command failed.

Create a profile specifying only the LIMIT keyword

This example creates the profile5 profile, specifying just the LIMIT keyword. The example then queries the dba_profiles system view to show that the password parameter limits for the profile5 profile are all set to DEFAULT.

Command> CREATE PROFILE profile5 LIMIT;
 
Profile created.
Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE5' AND 
           resource_type='PASSWORD';
< PROFILE5, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_LIFE_TIME, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_REUSE_MAX, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT >
< PROFILE5, PASSWORD_GRACE_TIME, PASSWORD, DEFAULT >
7 rows found.

Specify UNLIMITED for PASSWORD_REUSE_TIME

This example creates the profile6 profile, specifying a value of 2 for PASSWORD_REUSE_MAX and UNLIMITED for PASSWORD_REUSE_TIME. Because one parameter has a value and the other is set to UNLIMITED, the password can never be reused.

Command> CREATE PROFILE profile6 LIMIT 
           PASSWORD_REUSE_MAX 2 
           PASSWORD_REUSE_TIME UNLIMITED;
 
Profile created.

Create the user6 user and assign user6 the profile6 profile. Change the user6 password two times. Attempt to reuse the user6 password. The attempt fails due to the PASSWORD_REUSE_TIME value of UNLIMITED.

Command> CREATE USER user6 IDENTIFIED BY user6 PROFILE profile6;
 
User created.
 
Command> ALTER USER user6 IDENTIFIED BY user6_test1;
 
User altered.
 
Command> ALTER USER user6 IDENTIFIED BY user6_test2;
 
User altered.
 
Command> ALTER USER user6 IDENTIFIED BY user6;
15183: Password cannot be reused
The command failed.

Specify DEFAULT for PASSWORD_REUSE_TIME

This example creates the profile7 profile, specifying the value of DEFAULT for the PASSWORD_REUSE_TIME password parameter and the value of 3 for the PASSWORD_REUSE_MAX password parameter. TimesTen uses the value in the DEFAULT profile for the PASSWORD_REUSE_TIME password parameter.

Command> CREATE PROFILE profile7 LIMIT 
           PASSWORD_REUSE_TIME DEFAULT 
           PASSWORD_REUSE_MAX 3;
 
Profile created.

Query the dba_profiles system view to verify the password parameter values for the profile7 profile. Note the value of DEFAULT for PASSWORD_REUSE_TIME and the value of 3 for PASSWORD_REUSE_MAX.

Command> SELECT * FROM dba_profiles WHERE profile = 'PROFILE7' AND
           resource_type = 'PASSWORD';
< PROFILE7, FAILED_LOGIN_ATTEMPTS, PASSWORD, DEFAULT >
< PROFILE7, PASSWORD_LIFE_TIME, PASSWORD, DEFAULT >
< PROFILE7, PASSWORD_REUSE_TIME, PASSWORD, DEFAULT >
< PROFILE7, PASSWORD_REUSE_MAX, PASSWORD, 3 >
< PROFILE7, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, DEFAULT >
< PROFILE7, PASSWORD_LOCK_TIME, PASSWORD, DEFAULT >
< PROFILE7, PASSWORD_GRACE_TIME, PASSWORD, DEFAULT >
7 rows found.

Query the dba_profiles system view to verify the password parameter values for the DEFAULT profile. Note the value of UNLIMITED for PASSWORD_REUSE_TIME.

Command> SELECT * FROM dba_profiles WHERE profile = 'DEFAULT' AND
           resource_type = 'PASSWORD';
< DEFAULT, FAILED_LOGIN_ATTEMPTS, PASSWORD, 10 >
< DEFAULT, PASSWORD_LIFE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_TIME, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_REUSE_MAX, PASSWORD, UNLIMITED >
< DEFAULT, PASSWORD_COMPLEXITY_CHECKER, PASSWORD, NULL >
< DEFAULT, PASSWORD_LOCK_TIME, PASSWORD, .0034 >
< DEFAULT, PASSWORD_GRACE_TIME, PASSWORD, UNLIMITED >
7 rows found.

Create the user7 user and assign the profile7 profile to user7. Change the user7 password three times. The user7 password cannot be reused because the PASSWORD_REUSE_TIME parameter is UNLIMITED (inherited from the DEFAULT profile).

Command> CREATE USER user7 IDENTIFIED BY user7 PROFILE profile7;
 
User created.
 
Command> ALTER USER user7 IDENTIFIED BY user7_test1;
 
User altered.
 
Command> ALTER USER user7 IDENTIFIED BY user7_test2;
 
User altered.
 
Command> ALTER USER user7 IDENTIFIED BY user_test3;
 
User altered.
 
Command> ALTER USER user7 IDENTIFIED BY user7;
15183: Password cannot be reused
The command failed.

Specify PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX

This example creates the profile8 profile, specifying a value of 0.0020833 (equal to approximately three minutes) for the PASSWORD_REUSE_TIME password parameter and a value of 2 for the PASSWORD_REUSE_MAX password parameter. The example then creates the user8 user and assigns user8 the profile8 profile. The user8 password is changed two times within three minutes. Then, still within the three minutes, the original user8 password (user8_pwd) is reused. The ALTER USER operation fails. Even though the password has been changed two times, the original password can only be reused after 0.0020833 days (approximately three minutes). After three minutes, the original user8 password (user8_pwd) is reused again. The ALTER USER operation succeeds because the password was changed two times and more than three minutes had passed.

Command> CREATE PROFILE profile8 LIMIT 
           PASSWORD_REUSE_TIME 0.00208333 
           PASSWORD_REUSE_MAX 2;
 
Profile created.

Create the user8 user and assign user8 the profile8 profile.

Command> CREATE USER user8 IDENTIFIED BY user8_pwd PROFILE profile8;
 
User created.
 

Immediately alter the user, changing the password two times.

Command> ALTER USER user8 IDENTIFIED BY user8_test1;
 
User altered.
 
Command> ALTER USER user8 IDENTIFIED BY user8_test2;
 
User altered.

Within three minutes, attempt to reuse the original user8_pwd password. The ALTER USER operation fails because the original password can only be reused after three minutes have passed.

Command> ALTER USER user8 IDENTIFIED BY user8_pwd;
15183: Password cannot be reused
The command failed.

After three minutes, attempt to reuse the original user8_pwd password. The ALTER USER operation succeeds. The original password can be reused because the password was changed two times and three minutes had passed.

Command> ALTER USER user8 IDENTIFIED BY user8_pwd;
 
User altered.

CREATE REPLICATION

This statement is not supported in TimesTen Scaleout.

In TimesTen Classic:

The CREATE REPLICATION statement:

  • Defines a classic replication scheme on a participating database.

  • Installs the specified configuration in the executing database's replication system tables.

  • Typically consists of one or more replication element specifications and zero or more STORE specifications.

Because replication is configured with SQL statements, a classic replication scheme can also be configured programmatically: the statements can be embedded in C, C++, or Java code, and replication can be configured locally or from remote systems using client/server connections.

In addition, use the ttRepAdmin utility for operations not covered by the supported SQL statements, such as changing the replication state, duplicating databases, listing the replication configuration, and viewing replication status.
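For orientation, a minimal classic replication scheme with a single table element might look like the following sketch. The scheme, table, database, and host names here are hypothetical; see the syntax diagrams and the examples later in this section for the complete form.

Command> CREATE REPLICATION repscheme
           ELEMENT e1 TABLE ttuser.tab
             MASTER masterds ON "host1"
             SUBSCRIBER subscriberds ON "host2";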

Required privilege

ADMIN

Usage with TimesTen Scaleout

This statement is not supported with TimesTen Scaleout.