
Preface

This document describes how to create and administer an Oracle Database.

This preface contains these topics:

Audience

This document is intended for database administrators who perform the following tasks:

To use this document, you need to be familiar with relational database concepts. You should also be familiar with the operating system environment under which you are running the Oracle Database.

Documentation Accessibility

Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Accessibility standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For more information, visit the Oracle Accessibility Program Web site at

http://www.oracle.com/accessibility/

Accessibility of Code Examples in Documentation

Screen readers may not always correctly read the code examples in this document. The conventions for writing code require that closing braces appear on an otherwise empty line; however, some screen readers may not read a line of text that consists solely of a bracket or brace.
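To illustrate the layout convention in question, the following short Python sketch (the parameter names are purely hypothetical, not actual Oracle settings) ends a brace-delimited literal with a line containing only the closing brace; it is that kind of line some screen readers may skip:

```python
# Hypothetical initialization-parameter map, laid out per the documented
# convention: the closing brace appears alone on an otherwise empty line.
params = {
    "db_name": "orcl",
    "open_cursors": 300,
}

print(len(params))  # 2
```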

Accessibility of Links to External Web Sites in Documentation

This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites.

TTY Access to Oracle Support Services

Oracle provides dedicated Text Telephone (TTY) access to Oracle Support Services within the United States of America 24 hours a day, seven days a week. For TTY support, call 800.446.2398.

Structure

This document contains:

Part I, "Basic Database Administration"

This part contains information about creating a database, starting and shutting down a database, and managing Oracle processes.

Chapter 1, "Overview of Administering an Oracle Database"

This chapter serves as an introduction to typical tasks performed by database administrators, such as installing software and planning a database.

Chapter 2, "Creating an Oracle Database"

This chapter describes how to create a database. Consult this chapter when you are planning a database.

Chapter 3, "Starting Up and Shutting Down"

This chapter describes how to start a database, alter its availability, and shut it down. It also describes the parameter files related to starting up and shutting down.

Chapter 4, "Managing Oracle Database Processes"

This chapter describes how to identify different Oracle Database processes, such as dedicated server processes and shared server processes. Consult this chapter when configuring, modifying, tracking, and managing processes.

Part II, "Oracle Database Structure and Storage"

This part describes the structure and management of the Oracle Database and its storage.

Chapter 5, "Managing Control Files"

This chapter describes how to manage control files, including the following tasks: naming, creating, troubleshooting, and dropping control files.

Chapter 6, "Managing the Redo Log"

This chapter describes how to manage the online redo log, including the following tasks: planning, creating, renaming, dropping, or clearing redo log files.

Chapter 7, "Managing Archived Redo Logs"

This chapter describes archiving.

Chapter 8, "Managing Tablespaces"

This chapter provides guidelines for managing tablespaces. It describes how to create, manage, alter, and drop tablespaces and how to move data between tablespaces.

Chapter 9, "Managing Datafiles and Tempfiles"

This chapter provides guidelines for managing datafiles. It describes how to create, alter, and rename datafiles and how to view information about datafiles.

Chapter 10, "Managing the Undo Tablespace"

This chapter describes how to manage undo space using an undo tablespace.

Part III, "Automated File and Storage Management"

This part describes how to use Oracle-Managed Files and Automatic Storage Management.

Chapter 11, "Using Oracle-Managed Files"

This chapter describes how to use the Oracle Database server to create and manage database files.

Chapter 12, "Using Automatic Storage Management"

This chapter describes how to use Automatic Storage Management.

Part IV, "Schema Objects"

This part describes how to manage schema objects, including tables, indexes, clusters, hash clusters, views, sequences, and synonyms.

Chapter 13, "Managing Schema Objects"

This chapter describes management of schema objects. It contains information about analyzing objects, truncation of tables and clusters, database triggers, integrity constraints, and object dependencies.

Chapter 14, "Managing Space for Schema Objects"

This chapter describes common tasks such as setting storage parameters, deallocating space, and managing space.

Chapter 15, "Managing Tables"

This chapter contains table management guidelines, as well as information about creating, altering, maintaining and dropping tables.

Chapter 16, "Managing Indexes"

This chapter contains guidelines about indexes, including creating, altering, monitoring and dropping indexes.

Chapter 17, "Managing Partitioned Tables and Indexes"

This chapter describes partitioned tables and indexes and how to create and manage them.

Chapter 18, "Managing Clusters"

This chapter contains guidelines for creating, altering, or dropping clusters.

Chapter 19, "Managing Hash Clusters"

This chapter contains guidelines for creating, altering, or dropping hash clusters.

Chapter 20, "Managing Views, Sequences, and Synonyms"

This chapter describes how to manage views, sequences, and synonyms.

Chapter 21, "Using DBMS_REPAIR to Repair Data Block Corruption"

This chapter describes methods for detecting and repairing data block corruption.

Part V, "Database Security"

This part discusses the importance of establishing a security policy for your database and users.

Chapter 22, "Managing Users and Securing the Database"

This chapter discusses the importance of establishing a security policy for your database and users.

Part VI, "Database Resource Management and Task Scheduling"

This part describes database resource management and task scheduling.

Chapter 23, "Managing Automatic System Tasks Using the Maintenance Window"

This chapter describes how to use automatic system tasks.

Chapter 24, "Using the Database Resource Manager"

This chapter describes how to use the Database Resource Manager to allocate resources.

Chapter 25, "Moving from DBMS_JOB to DBMS_SCHEDULER"

This chapter describes how to take statements created with DBMS_JOB and rewrite them using DBMS_SCHEDULER.

Chapter 26, "Scheduler Concepts"

Oracle Database provides advanced scheduling capabilities through the database Scheduler. This chapter introduces you to its concepts.

Chapter 27, "Using the Scheduler"

This chapter describes how to use the Scheduler.

Chapter 28, "Administering the Scheduler"

This chapter describes the tasks a database administrator needs to perform so end users can schedule jobs using the Scheduler.

Part VII, "Distributed Database Management"

This part describes distributed database management.

Chapter 29, "Distributed Database Concepts"

This chapter describes the basic concepts and terminology of Oracle's distributed database architecture.

Chapter 30, "Managing a Distributed Database"

This chapter describes how to manage and maintain a distributed database system.

Chapter 31, "Developing Applications for a Distributed Database System"

This chapter describes the considerations for developing an application to run in a distributed database system.

Chapter 32, "Distributed Transactions Concepts"

This chapter describes what distributed transactions are and how Oracle Database maintains their integrity.

Chapter 33, "Managing Distributed Transactions"

This chapter describes how to manage and troubleshoot distributed transactions.

Related Documents

For more information, see these Oracle resources:

Many of the examples in this book use the sample schemas, which are installed by default when you select the Basic Installation option with an Oracle Database installation. Refer to Oracle Database Sample Schemas for information on how these schemas were created and how you can use them yourself.

Printed documentation is available for sale in the Oracle Store at

http://oraclestore.oracle.com/

To download free release notes, installation documentation, white papers, or other collateral, please visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at

http://www.oracle.com/technology/membership/

If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site at

http://www.oracle.com/technology/documentation/

Conventions

This section describes the conventions used in the text and code examples of this documentation set. It describes:

Conventions in Text

We use various conventions in text to help you more quickly identify special terms. The following table describes those conventions and provides examples of their use.

Convention: Bold
Meaning: Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both.
Example: When you specify this clause, you create an index-organized table.

Convention: Italics
Meaning: Italic typeface indicates book titles or emphasis.
Examples:
Oracle Database Concepts
Ensure that the recovery catalog and target database do not reside on the same disk.

Convention: UPPERCASE monospace (fixed-width) font
Meaning: Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, Recovery Manager keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles.
Examples:
You can specify this clause only for a NUMBER column.
You can back up the database by using the BACKUP command.
Query the TABLE_NAME column in the USER_TABLES data dictionary view.
Use the DBMS_STATS.GENERATE_STATS procedure.

Convention: lowercase monospace (fixed-width) font
Meaning: Lowercase monospace typeface indicates executable programs, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names and connect identifiers, user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values.
Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples:
Enter sqlplus to start SQL*Plus.
The password is specified in the orapwd file.
Back up the datafiles and control files in the /disk1/oracle/dbs directory.
The department_id, department_name, and location_id columns are in the hr.departments table.
Set the QUERY_REWRITE_ENABLED initialization parameter to true.
Connect as the oe user.
The JRepUtil class implements these methods.

Convention: lowercase italic monospace (fixed-width) font
Meaning: Lowercase italic monospace font represents placeholders or variables.
Examples:
You can specify the parallel_clause.
Run old_release.SQL, where old_release refers to the release you installed prior to upgrading.


Conventions in Code Examples

Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements. They are displayed in a monospace (fixed-width) font and separated from normal text as shown in this example:

SELECT username FROM dba_users WHERE username = 'MIGRATE';

The following table describes typographic conventions used in code examples and provides examples of their use.

Convention: [ ]
Meaning: Brackets enclose one or more optional items. Do not enter the brackets.
Example:
DECIMAL (digits [ , precision ])

Convention: { }
Meaning: Braces enclose two or more items, one of which is required. Do not enter the braces.
Example:
{ENABLE | DISABLE}

Convention: |
Meaning: A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar.
Examples:
{ENABLE | DISABLE}
[COMPRESS | NOCOMPRESS]

Convention: ... (horizontal ellipsis)
Meaning: Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code.
Examples:
CREATE TABLE ... AS subquery;
SELECT col1, col2, ... , coln FROM employees;

Convention: . . . (vertical ellipsis)
Meaning: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.
Example:
SQL> SELECT NAME FROM V$DATAFILE;
NAME
------------------------------------
/fs1/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
.
.
.
/fs1/dbs/tbs_09.dbf
9 rows selected.

Convention: Other notation
Meaning: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.
Examples:
acctbal NUMBER(11,2);
acct    CONSTANT NUMBER(4) := 3;

Convention: Italics
Meaning: Italicized text indicates placeholders or variables for which you must supply particular values.
Examples:
CONNECT SYSTEM/system_password
DB_NAME = database_name

Convention: UPPERCASE
Meaning: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.
Examples:
SELECT last_name, employee_id FROM employees;
SELECT * FROM USER_TABLES;
DROP TABLE hr.employees;

Convention: lowercase
Meaning: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files.
Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples:
SELECT last_name, employee_id FROM employees;
sqlplus hr/hr
CREATE USER mjones IDENTIFIED BY ty3MU9;
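Putting the uppercase and lowercase conventions together, the following runnable sketch uses Python's sqlite3 module purely as a stand-in for an Oracle session (the employees row shown here is invented, echoing the sample-schema style): system-supplied SQL keywords appear in UPPERCASE, while the user-supplied table and column names appear in lowercase.

```python
import sqlite3

# SQL keywords in UPPERCASE (system-supplied); table and column names
# in lowercase (user-supplied), per the conventions above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, employee_id INTEGER)")
conn.execute("INSERT INTO employees VALUES ('King', 100)")
rows = conn.execute("SELECT last_name, employee_id FROM employees").fetchall()
print(rows)  # [('King', 100)]
```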

Conventions for Windows Operating Systems

The following table describes conventions for Windows operating systems and provides examples of their use.

Convention: Choose Start > menu item
Meaning: How to start a program. The '>' character indicates a hierarchical menu (submenu).
Example: To start the Database Configuration Assistant, choose Start > Programs > Oracle - HOME_NAME > Configuration and Migration Tools > Database Configuration Assistant.

Convention: File and directory names
Meaning: File and directory names are not case sensitive. The following special characters are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the filename begins with \\, then Windows assumes it uses the Universal Naming Convention.
Example: c:\winnt"\"system32 is the same as C:\WINNT\SYSTEM32

Convention: C:\>
Meaning: Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this manual.
Example:
C:\oracle\oradata>

Convention: Special characters
Meaning: The backslash (\) special character is sometimes required as an escape character for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters.
Example:
C:\> exp HR/HR TABLES=emp QUERY=\"WHERE job='REP'\"

Convention: HOME_NAME
Meaning: Represents the Oracle home name. The home name can be up to 16 alphanumeric characters. The only special character allowed in the home name is the underscore.
Example:
C:\> net start OracleHOME_NAMETNSListener

Convention: ORACLE_HOME and ORACLE_BASE
Meaning: In releases prior to Oracle8i release 8.1.3, when you installed Oracle components, all subdirectories were located under a top-level ORACLE_HOME directory. The default for Windows NT was C:\orant.

This release complies with Optimal Flexible Architecture (OFA) guidelines, so subdirectories are no longer all located under a top-level ORACLE_HOME directory. Instead, there is a top-level directory called ORACLE_BASE, which by default is C:\oracle\product\10.1.0. If you install the latest Oracle release on a computer with no other Oracle software installed, then the default setting for the first Oracle home directory is C:\oracle\product\10.1.0\db_n, where n is the latest Oracle home number. The Oracle home directory is located directly under ORACLE_BASE.

All directory path examples in this guide follow OFA conventions.

Refer to Oracle Database Installation Guide for Microsoft Windows (32-Bit) for additional information about OFA compliance and for information about installing Oracle products in non-OFA-compliant directories.

Example:
Go to the ORACLE_BASE\ORACLE_HOME\rdbms\admin directory.


Part II

Oracle Database Structure and Storage

This part describes database structure in terms of its storage components and how to create and manage those components. It contains the following chapters:


Index

A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W  X 

A

abort response, 32.3.1.1.3
two-phase commit, 32.3.1.1.3
accounts
DBA operating system account, 1.5.1
users SYS and SYSTEM, 1.5.2
ADD LOGFILE clause
ALTER DATABASE statement, 6.3.1
ADD LOGFILE MEMBER clause
ALTER DATABASE statement, 6.3.2
ADD PARTITION clause, 17.4.2.1
ADD SUBPARTITION clause, 17.4.2.4.2, 17.4.2.5.2
adding
templates to a disk group, 12.4.10.1
ADMIN_TABLES procedure
creating admin table, 21.3.1.1
DBMS_REPAIR package, 21.2.1
example, 21.4.1.1, 21.4.1.2
ADMINISTER_RESOURCE_MANAGER system privilege, 24.2
administering
disk groups, 12.4
the Scheduler, 28
administration
distributed databases, 30
AFTER SUSPEND system event, 14.4.4.1
AFTER SUSPEND trigger, 14.4.4.1
example of registering, 14.4.6
agent
Heterogeneous Services, definition of, 29.1.2
aggregate functions
statement transparency in distributed databases, 30.7
alert log
about, 4.7.2
location of, 4.7.2.2
size of, 4.7.2.3
using, 4.7.2
when written, 4.7.2.4
alert thresholds
setting for locally managed tablespaces, 14.1.1
alerts
server-generated, 4.7.1
threshold-based, 4.7.1
viewing, 14.1.2
aliases
dropping from a disk group, 12.4.9
aliases, managing Automatic Storage Management, 12.4.8
ALL_DB_LINKS view, 30.5.1
allocation
extents, 15.6.4
ALTER CLUSTER statement
ALLOCATE EXTENT clause, 18.4
using for hash clusters, 19.4
using for index clusters, 18.4
ALTER DATABASE ADD LOGFILE statement
using Oracle-managed files, 11.3.6.1
ALTER DATABASE statement
ADD LOGFILE clause, 6.3.1
ADD LOGFILE MEMBER clause, 6.3.2
ARCHIVELOG clause, 7.3.2
CLEAR LOGFILE clause, 6.8
CLEAR UNARCHIVED LOGFILE clause, 6.2.1.1
database partially available to users, 3.2.1
DATAFILE...OFFLINE DROP clause, 9.4.2
datafiles online or offline, 9.4.3
default temporary tablespace, specifying, 2.3.6
DROP LOGFILE clause, 6.5.1
DROP LOGFILE MEMBER clause, 6.5.2
MOUNT clause, 3.2.1
NOARCHIVELOG clause, 7.3.2
OPEN clause, 3.2.2
READ ONLY clause, 3.2.3
RENAME FILE clause, 9.5.2
tempfiles online or offline, 9.4.3
UNRECOVERABLE DATAFILE clause, 6.8
ALTER DISKGROUP command, 12.4.3
ALTER FUNCTION statement
COMPILE clause, 13.7.3
ALTER INDEX statement
COALESCE clause, 16.2.10
for maintaining partitioned indexes, 17.4
MONITORING USAGE clause, 16.4.3
ALTER PACKAGE statement
COMPILE clause, 13.7.4
ALTER PROCEDURE statement
COMPILE clause, 13.7.3
ALTER SEQUENCE statement, 20.2.3
ALTER SESSION
Enabling resumable space allocation, 14.4.2.2
ALTER SESSION statement
ADVISE clause, 33.4.3.3
CLOSE DATABASE LINK clause, 31.2
SET SQL_TRACE initialization parameter, 4.7.2.4
setting time zone, 2.3.9.1
ALTER SYSTEM statement
ARCHIVE LOG ALL clause, 7.3.3
DISABLE DISTRIBUTED RECOVERY clause, 33.9.2
ENABLE DISTRIBUTED RECOVERY clause, 33.9.2
ENABLE RESTRICTED SESSION clause, 3.2.4
enabling Database Resource Manager, 24.6
QUIESCE RESTRICTED, 3.4.1
RESUME clause, 3.5
SCOPE clause for SET, 2.7.5.1
SET RESOURCE_MANAGER_PLAN, 24.6
SET SHARED_SERVERS initialization parameter, 4.2.2.2
setting initialization parameters, 2.7.5
SUSPEND clause, 3.5
SWITCH LOGFILE clause, 6.6
UNQUIESCE, 3.4.2
ALTER TABLE
MODIFY DEFAULT ATTRIBUTES FOR PARTITION clause, 17.4.7.2, 17.4.7.3
ALTER TABLE statement
ADD (column) clause, 15.6.6
ALLOCATE EXTENT clause, 15.6.4
DEALLOCATE UNUSED clause, 15.6.4
DISABLE ALL TRIGGERS clause, 13.4.2
DISABLE integrity constraint clause, 13.5.3.1
DROP COLUMN clause, 15.6.8.1
DROP integrity constraint clause, 13.5.3.3
DROP UNUSED COLUMNS clause, 15.6.8.2
ENABLE ALL TRIGGERS clause, 13.4.1
ENABLE integrity constraint clause, 13.5.3.1
external tables, 15.13.2
for maintaining partitions, 17.4
MODIFY (column) clause, 15.6.5
MODIFY DEFAULT ATTRIBUTES clause, 17.4.7.1
modifying index-organized table attributes, 15.12.3.1
MOVE clause, 15.6.3, 15.12.3.2
reasons for use, 15.6.1
RENAME COLUMN clause, 15.6.7
SET UNUSED clause, 15.6.8.2
ALTER TABLESPACE statement
adding an Oracle-managed datafile, example, 11.3.3.3
adding an Oracle-managed tempfile, example, 11.3.4.2
ONLINE clause, example, 8.5.2
READ ONLY clause, 8.6.1
READ WRITE clause, 8.6.2
RENAME DATAFILE clause, 9.5.1.1
RENAME TO clause, 8.7
taking datafiles/tempfiles online/offline, 9.4.3
ALTER TRIGGER statement
DISABLE clause, 13.4.2
ENABLE clause, 13.4.1
ALTER VIEW statement
COMPILE clause, 13.7.2
altering
(Scheduler) windows, 27.6.3
chain steps, 27.9.12
event schedule, 27.8.2.4
event-based job, 27.8.2.2
job classes, 27.5.3
jobs, 27.2.4
programs, 27.3.3
schedules, 27.4.3
altering indexes, 16.4, 16.4.2
ANALYZE statement
CASCADE clause, 13.2.2
corruption reporting, 21.3.1.3
listing chained rows, 13.2.3
remote tables, 31.4.2.2.2
validating structure, 13.2.2, 21.3.1
analyzing schema objects, 13.2
analyzing tables
distributed processing, 31.4.2.2.2
application development
distributed databases, 29.5, 31, 31.5
application development for distributed databases, 31
analyzing execution plan, 31.4.4
database links, controlling connections, 31.2
handling errors, 31.3, 31.5
handling remote procedure errors, 31.5
managing distribution of data, 31.1
managing referential integrity constraints, 31.3
terminating remote connections, 31.2
tuning distributed queries, 31.4
tuning using collocated inline views, 31.4.1
using cost-based optimization, 31.4.2
using hints to tune queries, 31.4.3
application services
configuring, 2.8.2
defining, 2.8
deploying, 2.8.1
using, 2.8.3
using, client side, 2.8.3.1
using, server side, 2.8.3.2
archive log files
creating in Automatic Storage Management, 12.5.9
ARCHIVE_LAG_TARGET initialization parameter, 6.2.5.1
archived redo logs
archiving modes, 7.3.2
destination availability state, controlling, 7.4.2
destination status, 7.4.2
destinations, specifying, 7.4
failed destinations and, 7.6
mandatory destinations, 7.6.1.1
minimum number of destinations, 7.6.1
multiplexing, 7.4.1
normal transmission of, 7.5
re-archiving to failed destination, 7.6.2
sample destination scenarios, 7.6.1.2
standby transmission of, 7.5
status information, 7.8.1
transmitting, 7.5
ARCHIVELOG mode, 7.2.2
advantages, 7.2.2
archiving, 7.2
automatic archiving in, 7.2.2
definition of, 7.2.2
distributed databases, 7.2.2
enabling, 7.3.2
manual archiving in, 7.2.2
running in, 7.2.2
switching to, 7.3.2
taking datafiles offline and online in, 9.4.1
archiver process
trace output (controlling), 7.7
archiver process (ARCn), 4.3
archiving
changing archiving mode, 7.3.2
controlling number of processes, 7.3.4
destination availability state, controlling, 7.4.2
destination failure, 7.6
destination status, 7.4.2
manual, 7.3.3
NOARCHIVELOG vs. ARCHIVELOG mode, 7.2
setting initial mode, 7.3.1
to failed destinations, 7.6.2
trace output, controlling, 7.7
viewing information on, 7.8.1
ASM
see Automatic Storage Management
ASM_DISKGROUPS, 12.3.3.1
ASM_DISKSTRING, 12.3.3.1
ASM_POWER_LIMIT, 12.3.3.1
ASMLib, 12.3.1
auditing
database links, 29.3.3
authentication
database links, 29.3.2.1
operating system, 1.6.3.2
selecting a method, 1.6.2
using password file, 1.6.4.1
AUTO_TASK_CONSUMER_GROUP
of Resource Manager, 23.3
AUTOEXTEND clause
for bigfile tablespaces, 8.2.2.2
automatic segment space management, 8.2.1.2
Automatic Storage Management
accessing files with the XML DB virtual folder, 12.7
administering, 12.3
aliases, 12.4.8
authentication, 12.3.2
creating a database in, 12.5.5
creating archive log files in, 12.5.9
creating control file in, 12.5.8
creating files in the database, 12.5.4
creating redo logs in, 12.5.7
creating tablespaces in, 12.5.6
disk discovery, 12.2, 12.3.4.3
disk failures in, 12.4.1.4
filenames, 12.5.2
initialization files and, 3.1.2
initialization parameters, 12.3.3, 12.3.3.1
installation tips, 12.3.1
installing, 12.3.1
migrating a database to, 12.6
operating system authentication for, 12.3.2.1
overview, 12.1
overview of components, 12.2
password file authentication for, 12.3.2.2
shutting down, 12.3.5
starting up, 12.3.4
using in database, 12.5
views, 12.8
XML DB virtual folder, 12.7
automatic undo management, 2.3.4, 10.2

B

background processes, 4.3
FMON, 9.9.2.1.1
BACKGROUND_DUMP_DEST initialization parameter, 4.7.2.2
backups
after creating new databases, 2.2.2.11
effects of archiving on, 7.2.1
batch jobs, authenticating users in, 2.9.3
bigfile tablespaces
creating, 8.2.2.1
creating temporary, 8.2.3.2
description, 8.2.2
setting database default, 2.3.8.1
BLANK_TRIMMING initialization parameter, 15.6.5
BLOCKSIZE clause
of CREATE TABLESPACE, 8.3
BUFFER_POOL parameter
description, 14.3.1
buffers
buffer cache in SGA, 2.4.5.4.1

C

CACHE option
CREATE SEQUENCE statement, 20.2.4.2.2
caches
sequence numbers, 20.2.4.2
calendaring expressions, 27.4.5.1
calls
remote procedure, 29.5.2
capacity planning
space management
capacity planning, 14.8
capacity, managing in disk groups, 12.4.1.6
CASCADE clause
when dropping unique or primary keys, 13.5.3.1
CATBLOCK.SQL script, 4.7.3
centralized user management
distributed systems, 29.3.2.4
chain rules, 27.9.4
chain steps
defining, 27.9.3
chained rows
eliminating from table, procedure, 13.2.3.2
CHAINED_ROWS table
used by ANALYZE statement, 13.2.3.1
chains
creating, 27.9.2
creating jobs for, 27.9.6
disabling, 27.9.10
dropping, 27.9.7
dropping rules from, 27.9.9
enabling, 27.9.5
monitoring, 28.2.13
overview, 26.2.5
running, 27.9.8
stalled, 27.9.13
using, 27.9
change vectors, 6.1.2
CHAR datatype
increasing column length, 15.6.5
character set
choosing, 2.2.1.1
CHECK_OBJECT procedure
DBMS_REPAIR package, 21.2.1
example, 21.4.2
finding extent of corruption, 21.3.2
checkpoint process (CKPT), 4.3
checksums
for data blocks, 9.7
redo log blocks, 6.7
CLEAR LOGFILE clause
ALTER DATABASE statement, 6.8
clearing redo log files, 6.2.1.1, 6.8
client/server architectures
distributed databases, 29.1.3
globalization support, 29.6.1
cloning
a database, 1.2.11
an Oracle home, 1.2.11
CLOSE DATABASE LINK clause
ALTER SESSION statement, 31.2
closing database links, 30.4.1
closing windows, 27.6.5
clusters
about, 18.1
allocating extents, 18.4
altering, 18.4
analyzing, 13.2
cluster indexes, 18.5
cluster keys, 18.1, 18.2.2, 18.2.3
clustered tables, 18.1, 18.2.1, 18.3.1, 18.4.1, 18.5.1
columns for cluster key, 18.2.2
creating, 18.3
deallocating extents, 18.4
dropping, 18.5
estimating space, 18.2.3, 18.2.5
guidelines for managing, 18.2, 18.2.5
hash clusters, 19
location, 18.2.4
privileges, 18.3, 18.4, 18.5.1
selecting tables, 18.2.1
single-table hash clusters, 19.3.2
sorted hash, 19.3.1
truncating, 13.3
validating structure, 13.2.2
COALESCE PARTITION clause, 17.4.3.1
coalescing indexes
costs, 16.2.10
collocated inline views
tuning distributed queries, 31.4.1
column encryption, 2.9.2
columns
adding, 15.6.6
displaying information about, 15.14
dropping, 15.6.8, 15.6.8.3
increasing length, 15.6.5
modifying definition, 15.6.5
renaming, 15.6.7
COMMENT statement, 15.14
COMMIT COMMENT statement
used with distributed transactions, 33.2, 33.4.3.2
commit phase, 32.3.1, 32.5.4
in two-phase commit, 32.3.2, 32.3.2.2
commit point site, 32.2.5
commit point strength, 32.2.5.2, 33.1
determining, 32.2.5.2
distributed transactions, 32.2.5, 32.2.5.2
how the database determines, 32.2.5.2
commit point strength
definition, 32.2.5.2
specifying, 33.1
COMMIT statement
FORCE clause, 33.5, 33.5.1.1, 33.5.2
forcing, 33.4.2
two-phase commit and, 29.4.6
COMMIT_POINT_STRENGTH initialization parameter, 32.2.5.2, 33.1
committing transactions
commit point site for distributed transactions, 32.2.5
composite partitioning
default partition, 17.2.5
range-hash, 17.2.4, 17.3.4
range-list, 17.2.5, 17.3.5
subpartition template, modifying, 17.4.11
CONNECT command
starting an instance, 3.1.3
CONNECT INTERNAL
desupported, 1.6.2
connected user database links, 30.2.3.2
advantages and disadvantages, 29.2.7.1
definition, 29.2.7
example, 29.2.8
REMOTE_OS_AUTHENT initialization parameter, 29.2.7.1
connection qualifiers
database links and, 30.2.4
connections
terminating remote, 31.2
constraints
See also integrity constraints
disabling at table creation, 13.5.2.1
distributed system application development issues, 31.3
dropping integrity constraints, 13.5.3.3
enable novalidate state, 13.5.1.3
enabling example, 13.5.2.2
enabling when violations exist, 13.5.1.3
exceptions, 13.5.1.2, 13.5.5
exceptions to integrity constraints, 13.5.5
integrity constraint states, 13.5.1
keeping index when disabling, 13.5.3
keeping index when dropping, 13.5.3
ORA-02055 constraint violation, 31.3
renaming, 13.5.3.2
setting at table creation, 13.5.2
when to disable, 13.5.1.1
control file
creating in Automatic Storage Management, 12.5.8
control files
adding, 5.3.2
changing size, 5.3.1
conflicts with data dictionary, 5.4.1
creating, 5.1, 5.3, 5.3.3.2
creating as Oracle-managed files, 11.3.5
creating as Oracle-managed files, examples, 11.5.1
default name, 2.4.3, 5.3.1
dropping, 5.7
errors during creation, 5.4.2
guidelines for, 5.2
importance of multiplexed, 5.2.2
initial creation, 5.3.1
location of, 5.2.2
log sequence numbers, 6.1.3.2
mirroring, 2.4.3, 5.2.2
moving, 5.3.2
multiplexed, 5.2.2
names, 5.2.1
number of, 5.2.2
overwriting existing, 2.4.3
relocating, 5.3.2
renaming, 5.3.2
requirement of one, 5.1
size of, 5.2.4
specifying names before database creation, 2.4.3
troubleshooting, 5.4
unavailable during startup, 3.1.4
CONTROL_FILES initialization parameter
overwriting existing control files, 2.4.3
specifying file names, 5.2.1
warning about setting, 2.4.3
when creating a database, 2.4.3, 5.3.1
copying jobs, 27.2.3
coraenv and oraenv, 1.3
corruption
repairing data block, 21.1
cost-based optimization, 31.4.2
distributed databases, 29.5.3
hints, 31.4.3
using for distributed queries, 31.4.2
CPU_COUNT initialization parameter, 24.9.2
CREATE BIGFILE TABLESPACE statement, 8.2.2.1
CREATE BIGFILE TEMPORARY TABLESPACE statement, 8.2.3.2
CREATE CLUSTER statement
creating clusters, 18.3
example, 18.3
for hash clusters, 19.3
HASH IS clause, 19.3, 19.3.3.2
HASHKEYS clause, 19.3, 19.3.3.4
SIZE clause, 19.3.3.3
CREATE CONTROLFILE statement
about, 5.3.3.2
checking for inconsistencies, 5.4.1
creating as Oracle-managed files, examples, 11.3.5, 11.5.1
NORESETLOGS clause, 5.3.3.3
Oracle-managed files, using, 11.3.5
RESETLOGS clause, 5.3.3.3
CREATE DATABASE LINK statement, 30.2.2.1
CREATE DATABASE statement
CONTROLFILE REUSE clause, 5.3.1
DEFAULT TEMPORARY TABLESPACE clause, 2.2.2.7, 2.3.6
example of database creation, 2.2.2.7
EXTENT MANAGEMENT LOCAL clause, 2.3.2
MAXLOGFILES parameter, 6.2.4
MAXLOGMEMBERS parameter, 6.2.4
password for SYS, 2.3.1
password for SYSTEM, 2.3.1
setting time zone, 2.3.9.1
specifying FORCE LOGGING, 2.3.10
SYSAUX DATAFILE clause, 2.2.2.7
UNDO TABLESPACE clause, 2.2.2.7, 2.3.4
used to create an undo tablespace, 10.5.1.1
using Oracle-managed files, 11.3.2
using Oracle-managed files, examples, 11.3.2.6, 11.5.1, 11.5.2
CREATE INDEX statement
NOLOGGING, 16.2.9
ON CLUSTER clause, 18.3.2
partitioned indexes, 17.3.1.2
using, 16.3.1
with a constraint, 16.3.3.1
CREATE SCHEMA statement
multiple tables and views, 13.1
CREATE SEQUENCE statement, 20.2.2
CACHE option, 20.2.4.2.2
examples, 20.2.4.2.2
NOCACHE option, 20.2.4.2.2
CREATE SPFILE statement, 2.7.3
CREATE SYNONYM statement, 20.3.2
CREATE TABLE statement
AS SELECT clause, 15.2.4, 15.3.3
AS SELECT vs. direct-path INSERT, 15.4.2.1
CLUSTER clause, 18.3.1
COMPRESS clause, 15.12.2.7
creating partitioned tables, 17.3.1
creating temporary table, 15.3.2
INCLUDING clause, 15.12.2.5
index-organized tables, 15.12.2
MONITORING clause, 15.5
NOLOGGING clause, 15.2.5
ORGANIZATION EXTERNAL clause, 15.13.1
OVERFLOW clause, 15.12.2.3
parallelizing, 15.3.3
PCTTHRESHOLD clause, 15.12.2.4
TABLESPACE clause, specifying, 15.2.3
use of, 15.3.1
CREATE TABLESPACE statement
BLOCKSIZE clause, using, 8.3
FORCE LOGGING clause, using, 8.4
using Oracle-managed files, 11.3.3
using Oracle-managed files, examples, 11.3.3.1
CREATE TEMPORARY TABLESPACE statement, 8.2.3.1
using Oracle-managed files, 11.3.4
using Oracle-managed files, example, 11.3.4.1
CREATE UNDO TABLESPACE statement
using Oracle-managed files, 11.3.3
using Oracle-managed files, example, 11.3.3.2
using to create an undo tablespace, 10.5.1.2
CREATE UNIQUE INDEX statement
using, 16.3.2
CREATE VIEW statement
about, 20.1.2
OR REPLACE clause, 20.1.3
WITH CHECK OPTION, 20.1.2, 20.1.4
CREATE_SIMPLE_PLAN procedure
Database Resource Manager, 24.3
creating
chains, 27.9.2
control files, 5.3
database using Automatic Storage Management, 12.5.5
disk group, 12.4.2
event schedule, 27.8.2.3
event-based job, 27.8.2.1
job classes, 27.5.2
jobs, 27.2.2
programs, 27.3.2
Scheduler windows, 27.6.2
schedules, 27.4.2
sequences, 20.2.4.2.2
window groups, 27.7.2
creating database links, 30.2
connected user, 30.2.3.2.1
connected user scenarios, 30.8.3
current user, 30.2.3.2.2
current user scenario, 30.8.5
examples, 29.2.8
fixed user, 30.2.3.1
fixed user scenario, 30.8.1, 30.8.2
obtaining necessary privileges, 30.2.1
private, 30.2.2.1
public, 30.2.2.2
service names within link names, 30.2.4
shared, 30.3
shared connected user scenario, 30.8.4
specifying types, 30.2.2
creating databases, 2
backing up the new database, 2.2.2.11
default temporary tablespace, specifying, 2.3.6
example, 2.2.2.7
manually from a script, 2.1
overriding default tablespace type, 2.3.8.2
planning, 2.2.1
preparing to, 2.2.1
prerequisites for, 2.2.1.2
problems encountered while, 2.5
setting default tablespace type, 2.3.8.1
specifying bigfile tablespaces, 2.3.8, 2.3.8.2
UNDO TABLESPACE clause, 2.3.4
upgrading to a new release, 2.1
using Database Configuration Assistant, 2.1
using Oracle-managed files, 2.3.7, 11.3.2
with locally managed tablespaces, 2.3.2
creating datafiles, 9.2
creating indexes
after inserting table data, 16.2.1
associated with integrity constraints, 16.3.3
NOLOGGING, 16.2.9
USING INDEX clause, 16.3.3.1
creating sequences, 20.2.2
creating synonyms, 20.3.2
creating views, 20.1.2
current user database links
advantages and disadvantages, 29.2.7.3
cannot access in shared schema, 29.3.2.4.2
definition, 29.2.7
example, 29.2.8
schema independence, 29.3.2.4.2
CURRVAL pseudo-column, 20.2.4.1
restrictions, 20.2.4.1.3
cursors
and closing database links, 31.2

D

data
loading using external tables, 15.13.1
data block corruption
repairing, 21.1
data blocks
altering size of, 2.4.4.1
managing space in, 14.2
nonstandard block size, 2.4.4.2
shared in clusters, 18.1
specifying size of, 2.4.4
standard block size, 2.4.4
transaction entry settings, 14.2.1
verifying, 9.7
data dictionary
conflicts with control files, 5.4.1
purging pending rows from, 33.6, 33.6.2
schema object views, 13.10, 14.7
data encryption
distributed systems, 29.3.2.5
data manipulation language
statements allowed in distributed transactions, 29.4.1
database
cloning, 1.2.11
monitoring, 4.7
starting up, 3.1
database administrators
DBA role, 1.5.2.3
operating system account, 1.5.1
password files for, 1.6.2.1
responsibilities of, 1.1.1
security and privileges of, 1.5
security officer versus, 22.1
SYS and SYSTEM accounts, 1.5.2
task definitions, 1.2
utilities for, 1.8.2
Database Configuration Assistant, 2.1
shared server configuration, 4.2.3
database links
advantages, 29.2.3
auditing, 29.3.3
authentication, 29.3.2.1
authentication without passwords, 29.3.2.2
closing, 30.4.1, 31.2
connected user, 29.2.7, 29.2.7.1, 30.2.3.2, 30.8.3
connections, determining open, 30.5.2
controlling connections, 31.2
creating, 30.2, 30.8.1, 30.8.3, 30.8.4, 30.8.5
creating shared, 30.3.2
creating, examples, 29.2.8
creating, scenarios, 30.8
current user, 29.2.7, 29.2.7.3, 30.2.3.2
data dictionary USER views, 30.5.1
definition, 29.2.1
distributed queries, 29.4.2
distributed transactions, 29.4.5
dropping, 30.4.2
enforcing global naming, 30.1.2
enterprise users and, 29.3.2.4.2
fixed user, 29.2.7, 29.2.7.2, 30.8.1
global, 29.2.6
global names, 29.2.4
global object names, 29.4.7
handling errors, 31.3
limiting number of connections, 30.4.3
listing, 30.5.1, 33.3.1, 33.3.2
managing, 30.4
minimizing network connections, 30.3
name resolution, 29.4.7
names for, 29.2.5
private, 29.2.6
public, 29.2.6
referential integrity in, 31.3
remote transactions, 29.4.1, 29.4.4
resolution, 29.4.7
restrictions, 29.2.10
roles on remote database, 29.2.10
schema objects and, 29.2.9
service names used within link names, 30.2.4
shared, 29.2.2, 30.3.1, 30.3.3, 30.3.3.1, 30.3.3.2
shared SQL, 29.4.3
synonyms for schema objects, 29.2.9.3
tuning distributed queries, 31.4
tuning queries with hints, 31.4.3
tuning using collocated inline views, 31.4.1
types of links, 29.2.6
types of users, 29.2.7
users, specifying, 30.2.3
using cost-based optimization, 31.4.2
viewing, 30.5, 30.5.1
database objects
obtaining growth trends for, 14.8.3
Database Resource Manager
active session pool with queuing, 24.1.4.4.2
administering system privilege, 24.2
and operating system control, 24.9
automatic consumer group switching, 24.1.4.4.4
CREATE_SIMPLE_PLAN procedure, 24.3
description, 24.1
enabling, 24.6
execution time limit, 24.1.4.4.6
pending area, 24.4.1
resource allocation methods, 24.1.3, 24.4.2, 24.4.3
resource consumer groups, 24.1.3, 24.4.3, 24.5
resource plan directives, 24.1.3, 24.4.1.2, 24.4.4
resource plans, 24.1.3, 24.1.4.1, 24.1.4.2, 24.1.4.4.1, 24.3, 24.4.3, 24.6, 24.7, 24.7.3, 24.10
specifying a parallel degree limit, 24.1.4.4.3
undo pool, 24.1.4.4.7
used for quiescing a database, 3.4
validating plan schema changes, 24.4.1.2
views, 24.10
database writer process
calculating checksums for data blocks, 9.7
database writer process (DBWn), 4.3
DATABASE_PROPERTIES view
name of default temporary tablespace, 2.3.6
rename of default temporary tablespace, 8.7
databases
administering, 1
administration of distributed, 30
altering availability, 3.2
backing up, 2.2.2.11
control files of, 5.2
creating manually, 2.2
default temporary tablespace, specifying, 2.3.6
dropping, 2.6
global database names in distributed systems, 2.4.1.2
mounting a database, 3.1.4.3
mounting to an instance, 3.2.1
names, about, 2.4.1.1
names, conflicts in, 2.4.1.1
opening a closed database, 3.2.2
planning, 1.2.3
planning creation, 2.2.1
quiescing, 3.4
read-only, opening, 3.2.3
recovery, 3.1.4.6
renaming, 5.3.3.1, 5.3.3.2, 5.3.3.3
restricting access, 3.2.4
resuming, 3.5
shutting down, 3.3
specifying control files, 2.4.3
starting up, 3.1.2
suspending, 3.5
troubleshooting creation problems, 2.5
undo management, 2.3.4
upgrading, 2.1
with locally managed tablespaces, 2.3.2
datafile headers
when renaming tablespaces, 8.7
datafiles
adding to a tablespace, 9.2
bringing online and offline, 9.4
checking associated tablespaces, 8.13.2
copying using database, 9.8
creating, 9.2
creating Oracle-managed files, 11.3, 11.3.6.2
database administrators access, 1.5.1
default directory, 9.2
definition, 9.1
deleting, 8.8
dropping, 9.4.2, 9.6
dropping Oracle-managed files, 11.4.1
file numbers, 9.1
fully specifying filenames, 9.2
guidelines for managing, 9.1
headers when renaming tablespaces, 8.7
identifying OS filenames, 9.5.1.2
location, 9.1.3
mapping files to physical devices, 9.9
minimum number of, 9.1.1
MISSING, 5.4.1
monitoring using views, 9.10
online, 9.4.2
Oracle-managed, 11
relocating, 9.5
renaming, 9.5
reusing, 9.2
size of, 9.1.2
statements to create, 9.2
storing separately from redo log files, 9.1.4
unavailable when database is opened, 3.1.4
verifying data blocks, 9.7
DB_BLOCK_CHECKING initialization parameter, 21.3.1, 21.3.1.4
DB_BLOCK_CHECKSUM initialization parameter, 9.7
enabling redo block checking with, 6.7
DB_BLOCK_SIZE initialization parameter
and nonstandard block sizes, 8.3
setting, 2.4.4
DB_CACHE_SIZE initialization parameter
setting, 2.4.5.4.1
specifying multiple block sizes, 8.3
DB_DOMAIN initialization parameter
setting for database creation, 2.4.1, 2.4.1.2
DB_FILES initialization parameter
determining value for, 9.1.1.1
DB_NAME initialization parameter
setting before database creation, 2.4.1
DB_nK_CACHE_SIZE initialization parameter
setting, 2.4.5.4.1
specifying multiple block sizes, 8.3
using with transportable tablespaces, 8.12.5.5
DBA role, 1.5.2.3
DBA. See database administrators.
DBA_2PC_NEIGHBORS view, 33.3.2
using to trace session tree, 33.3.2
DBA_2PC_PENDING view, 33.3.1, 33.6, 33.7.6
using to list in-doubt transactions, 33.3.1
DBA_DB_LINKS view, 30.5.1
DBA_RESUMABLE view, 14.4.4.1
DBA_UNDO_EXTENTS view
undo tablespace extents, 10.7
DBCA. See Database Configuration Assistant.
DBMS_FILE_TRANSFER package
copying datafiles, 9.7
DBMS_METADATA package
GET_DDL function, 13.10.1
using for object definition, 13.10.1
DBMS_REDEFINITION package
performing online redefinition with, 15.7.2
required privileges, 15.7.9
DBMS_REPAIR
logical corruptions, 21.3.2
DBMS_REPAIR package
examples, 21.4
limitations, 21.2.2
procedures, 21.2.1
using, 21.3, 21.4.5
DBMS_RESOURCE_MANAGER package, 24.1.4, 24.2, 24.5, 24.5.2
procedures (table of), 24.2
DBMS_RESOURCE_MANAGER_PRIVS package, 24.2, 24.5
procedures (table of), 24.2
DBMS_RESUMABLE package, 14.4.4.3
DBMS_SERVER_ALERT package
setting alert thresholds, 14.1
DBMS_SESSION package, 24.5.2.3
DBMS_SPACE package, 14.5.4
example for unused space, 14.7.1
FREE_BLOCKS procedure, 14.7.1
SPACE_USAGE procedure, 14.7.1
UNUSED_SPACE procedure, 14.7.1
DBMS_STATS package, 13.2.1
MONITORING clause of CREATE TABLE, 15.5
DBMS_STORAGE_MAP package
invoking for file mapping, 9.9.3.1
views detailing mapping information, 9.9.3.3
DBMS_TRANSACTION package
PURGE_LOST_DB_ENTRY procedure, 33.6.1
DBVERIFY utility, 21.3.1, 21.3.1.2
DEALLOCATE UNUSED clause, 14.5.4
deallocating unused space, 14.5
DBMS_SPACE package, 14.5.4
DEALLOCATE UNUSED clause, 14.5.4
declarative referential integrity constraints, 31.3
dedicated server processes, 4.1.1
trace files for, 4.7.2
DEFAULT keyword
list partitioning, 17.3.3
default partitions, 17.2.3
default subpartition, 17.2.5
default temporary tablespace
renaming, 8.7
default temporary tablespaces
specifying at database creation, 2.2.2.7, 2.3.6
specifying bigfile tempfile, 2.3.8.2
DEFAULT_CONSUMER_GROUP for Database Resource Manager, 24.4.3, 24.4.3.3, 24.5.3.2
defining
chain steps, 27.9.3
dependencies
between schema objects, 13.7
displaying, 13.10.2.2
dictionary-managed tablespaces
migrating SYSTEM to locally managed, 8.11
Digital POLYCENTER Manager on NetView, 29.3.4.3
directories
managing disk group, 12.4.7
direct-path INSERT
benefits, 15.4.2.1
how it works, 15.4.2.3
index maintenance, 15.4.2.5.1
locking considerations, 15.4.2.5.3
logging mode, 15.4.2.4
parallel INSERT, 15.4.2.2
parallel load compared with parallel INSERT, 15.4.2.1
serial INSERT, 15.4.2.2
space considerations, 15.4.2.5.2
DISABLE ROW MOVEMENT clause, 17.3
disabling
chains, 27.9.10
jobs, 27.2.8
programs, 27.3.5
window groups, 27.7.7
windows, 27.6.7
disabling recoverer process, 33.9.2
disk discovery
in Automatic Storage Management, 12.2, 12.3.4.3
disk failure
in Automatic Storage Management, 12.4.1.4
disk group
adding templates to, 12.4.10.1
altering membership of, 12.4.3
creating, 12.4.2
dropping, 12.4.6
dropping disks from, 12.4.3.2
managing capacity in, 12.4.1.6
manually rebalancing, 12.4.3.5
mounting and dismounting, 12.4.4
resizing disks in, 12.4.3.3
undropping disks in, 12.4.3.4
disk groups, administering, 12.4
dispatcher process (Dnnn), 4.3
dispatcher processes, 4.2.3.3, 4.2.4
DISPATCHERS initialization parameter
setting attributes of, 4.2.3.1
setting initially, 4.2.3.3
distributed applications
distributing data, 31.1
distributed databases
administration overview, 29.3
application development, 29.5, 31, 31.5
client/server architectures, 29.1.3
commit point strength, 32.2.5.2
cost-based optimization, 29.5.3
direct and indirect connections, 29.1.3
distributed processing, 29.1.1.1
distributed queries, 29.4.2
distributed updates, 29.4.2, 29.4.2
forming global database names, 30.1.1
global object names, 29.2.9.4, 30.1
globalization support, 29.6
location transparency, 29.5.1.1, 30.6
management tools, 29.3.4
managing read consistency, 33.10
nodes of, 29.1.3
overview, 29.1.1
remote object security, 30.6.1
remote queries and updates, 29.4.1
replicated databases and, 29.1.1.2
resumable space allocation, 14.4.1.4
running in ARCHIVELOG mode, 7.2.2
running in NOARCHIVELOG mode, 7.2.2
scenarios, 30.8
schema object name resolution, 29.4.8
schema-dependent global users, 29.3.2.4.1
schema-independent global users, 29.3.2.4.2
security, 29.3.2
site autonomy of, 29.3.1
SQL transparency, 29.5.1.2
starting a remote instance, 3.1.4.8
transaction processing, 29.4
transparency, 29.5.1
distributed processing
distributed databases, 29.1.1.1
distributed queries, 29.4.2
analyzing tables, 31.4.2.2.2
application development issues, 31.4
cost-based optimization, 31.4.2
optimizing, 29.5.3
distributed systems
data encryption, 29.3.2.5
distributed transactions, 29.4.5
case study, 32.5
commit point site, 32.2.5
commit point strength, 32.2.5.2, 33.1
committing, 32.2.5.1
database server role, 32.2.2
defined, 32.1
DML and DDL, 32.1.1
failure during, 33.8.1
global coordinator, 32.2.4
local coordinator, 32.2.3
lock timeout interval, 33.8
locked resources, 33.8
locks for in-doubt, 33.8.2
manually overriding in-doubt, 33.4.2
naming, 33.2, 33.4.3.2
session trees, 32.2, 32.2.2, 32.2.3, 32.2.4, 32.2.5, 33.3.2
setting advice, 33.4.3.3
transaction control statements, 32.1.2
transaction timeouts, 33.8.1
two-phase commit, 32.5, 33.4.1
viewing database links, 33.3.1
distributed updates, 29.4.2
DML error logging, inserting data with, 15.4.1
DML. See data manipulation language.
DRIVING_SITE hint, 31.4.3.2
DROP CLUSTER statement
CASCADE CONSTRAINTS clause, 18.5
dropping cluster, 18.5
dropping cluster index, 18.5
dropping hash cluster, 19.5
INCLUDING TABLES clause, 18.5
DROP DATABASE statement, 2.6
DROP LOGFILE clause
ALTER DATABASE statement, 6.5.1
DROP LOGFILE MEMBER clause
ALTER DATABASE statement, 6.5.2
DROP PARTITION clause, 17.4.4.1
DROP SYNONYM statement, 20.3.4
DROP TABLE statement
about, 15.10
CASCADE CONSTRAINTS clause, 15.10
for clustered tables, 18.5.1
DROP TABLESPACE statement, 8.8
dropping
aliases from a disk group, 12.4.9
Automatic Storage Management template, 12.4.10.3
chain steps, 27.9.11
chains, 27.9.7
datafiles, 9.6
disk groups, 12.4.6
disks from a disk group, 12.4.3.2
files from a disk group, 12.4.9
job classes, 27.5.4
jobs, 27.2.7, 28.2.8
programs, 27.3.4
rules from chains, 27.9.9
running jobs, 28.2.9
schedules, 27.4.4
tempfiles, 9.6
window groups, 27.7.3
windows, 27.6.6
dropping columns from tables, 15.6.8.1
marking unused, 15.6.8.2
removing unused columns, 15.6.8.2
dropping database links, 30.4.2
dropping datafiles
Oracle-managed, 11.4.1
dropping partitioned tables, 17.5
dropping tables
CASCADE clause, 15.10
consequences of, 15.10
dropping tempfiles
Oracle-managed, 11.4.1
DUMP_ORPHAN_KEYS procedure, 21.3.2
checking index-to-table synchronization, 21.3.2
DBMS_REPAIR package, 21.2.1
example, 21.4.4
recovering data, 21.3.4.1

E

EMPHASIS resource allocation method, 24.4.2
ENABLE ROW MOVEMENT clause, 17.3, 17.3.1.1
enabling
chains, 27.9.5
jobs, 27.2.9
programs, 27.3.6
window groups, 27.7.6
windows, 27.6.8
enabling recoverer process
distributed transactions, 33.9.2
encryption, transparent data, 2.9.2
enterprise users
definition, 29.3.2.4.2
environment variables
selecting an instance with, 1.3
error logging, DML
inserting data with, 15.4.1
errors
alert log and, 4.7.2
assigning names with PRAGMA_EXCEPTION_INIT, 31.5
exception handler, 31.5
integrity constraint violation, 31.3
ORA-00028, 4.6.2
ORA-01090, 3.3
ORA-01173, 5.4.2
ORA-01176, 5.4.2
ORA-01177, 5.4.2
ORA-01578, 9.7
ORA-01591, 33.8.2
ORA-02049, 33.8.1
ORA-02050, 33.4.1
ORA-02051, 33.4.1
ORA-02054, 33.4.1
ORA-1215, 5.4.2
ORA-1216, 5.4.2
RAISE_APPLICATION_ERROR() procedure, 31.5
remote procedure, 31.5
rollback required, 31.3
trace files and, 4.7.2
when creating a database, 2.5
when creating control file, 5.4.2
while starting a database, 3.1.4.5
while starting an instance, 3.1.4.5
event message
passing to event-based job, 27.8.2.5
event schedule
altering, 27.8.2.4
creating, 27.8.2.3
event-based job
altering, 27.8.2.2
creating, 27.8.2.1
passing event messages to, 27.8.2.5
events (Scheduler)
overview, 26.2.4
using, 27.8
exception handler, 31.5
EXCEPTION keyword, 31.5
exceptions
assigning names with PRAGMA_EXCEPTION_INIT, 31.5
integrity constraints, 13.5.5
user-defined, 31.5
EXCHANGE PARTITION clause, 17.4.5.1, 17.4.5.3, 17.4.5.4, 17.4.5.5, 17.4.6
execution plans
analyzing for distributed queries, 31.4.4
export operations
restricted mode and, 3.1.4.4
export utilities
about, 1.8.2.2
expressions, calendaring, 27.4.5.1
EXTENT MANAGEMENT LOCAL clause
CREATE DATABASE, 2.3.2
extents
allocating cluster extents, 18.4
allocating for tables, 15.6.4
data dictionary views for, 14.7.2
deallocating cluster extents, 18.4
displaying free extents, 14.7.2.3
external jobs
running, 27.2.5.4
external procedures
managing processes for, 4.5
external tables
altering, 15.13.2
creating, 15.13.1
defined, 15.13
dropping, 15.13.3
privileges required, 15.13.4
uploading data example, 15.13.1

F

failure groups, 12.2, 12.4.1.5
features
new, Preface
file mapping
examples, 9.9.4
how it works, 9.9.2
how to use, 9.9.3
overview, 9.9.1
structures, 9.9.2.2
views, 9.9.3.3
file system
used for Oracle-managed files, 11.1.1.2
FILE_MAPPING initialization parameter, 9.9.3.1
filenames
Automatic Storage Management, 12.5.2
Oracle-managed files, 11.3.1
files
creating Oracle-managed files, 11.3, 11.3.6.2
FIX_CORRUPT_BLOCKS procedure
DBMS_REPAIR, 21.2.1
example, 21.4.3
marking blocks corrupt, 21.3.3.1
fixed user database links
advantages and disadvantages, 29.2.7.2
creating, 30.2.3.1
definition, 29.2.7
example, 29.2.8
flash recovery area
initialization parameters to specify, 2.4.2
Flashback Drop
about, 15.11
purging recycle bin, 15.11.4
querying recycle bin, 15.11.3
recycle bin, 15.11.1
restoring objects, 15.11.5
Flashback Table
overview, 15.9
Flashback Transaction Query, 15.8
FMON background process, 9.9.2.1.1
FMPUTL external process
used for file mapping, 9.9.2.1.2
FOR PARTITION clause, 17.4.8.1
FORCE clause
COMMIT statement, 33.5
ROLLBACK statement, 33.5
FORCE LOGGING clause
CREATE CONTROLFILE, 2.3.10.1
CREATE DATABASE, 2.3.10
CREATE TABLESPACE, 8.4
performance considerations, 2.3.10.2
FORCE LOGGING mode, 15.4.2.4
forcing
COMMIT or ROLLBACK, 33.3.1, 33.4.2
forcing a log switch, 6.6
using ARCHIVE_LAG_TARGET, 6.2.5
with the ALTER SYSTEM statement, 6.6
forget phase
in two-phase commit, 32.3.3
free space
listing free extents, 14.7.2.3
tablespaces and, 8.13.3
function-based indexes, 16.3.7
functions
recompiling, 13.7.3

G

generic connectivity
definition, 29.1.2.3
global cache service (LMS), 4.3
global coordinators, 32.2.4
distributed transactions, 32.2.4
global database consistency
distributed databases and, 32.3.2.2
global database links, 29.2.6
creating, 30.2.2.3
global database names
changing the domain, 30.1.4
database links, 29.2.4
enforcing for database links, 29.2.5
enforcing global naming, 30.1.2
forming distributed database names, 30.1.1
impact of changing, 29.4.9.1
querying, 30.1.3
global object names
database links, 29.4.7
distributed databases, 30.1
global users, 30.8.5
schema-dependent in distributed systems, 29.3.2.4.1
schema-independent in distributed systems, 29.3.2.4.2
GLOBAL_NAME view
using to determine global database name, 30.1.3
GLOBAL_NAMES initialization parameter
database links, 29.2.5
globalization support
client/server architectures, 29.6.1
distributed databases, 29.6
GRANT statement
SYSOPER/SYSDBA privileges, 1.7.3.1
granting privileges and roles
SYSOPER/SYSDBA privileges, 1.7.3.1
growth trends
of database objects, 14.8.3
GV$DBLINK view, 30.5.2

H

hash clusters
advantages and disadvantages, 19.1
altering, 19.4
choosing key, 19.3.3.1
contrasted with index clusters, 19.1
controlling space use of, 19.3.3
creating, 19.3
dropping, 19.5
estimating storage, 19.3.4
examples, 19.3.3.5.1
hash function, 19.1, 19.2.2, 19.3, 19.3.3.1, 19.3.3.2, 19.3.3.3
HASH IS clause, 19.3, 19.3.3.2
HASHKEYS clause, 19.3, 19.3.3.4
single-table, 19.3.2
SIZE clause, 19.3.3.3
sorted, 19.3.1
hash functions
for hash cluster, 19.1
hash partitioning
creating tables using, 17.3.2
index-organized tables, 17.3.10.2, 17.3.10.3
multicolumn partitioning keys, 17.3.7
heterogeneous distributed systems
definition, 29.1.2
Heterogeneous Services
overview, 29.1.2
hints, 31.4.3
DRIVING_SITE, 31.4.3.2
NO_MERGE, 31.4.3.1
using to tune distributed queries, 31.4.3
historical tables
moving time window, 17.6
HP OpenView, 29.3.4.3

I

IBM NetView/6000, 29.3.4.3
import operations
restricted mode and, 3.1.4.4
import utilities
about, 1.8.2.2
index clusters. See clusters.
indexes
altering, 16.4
analyzing, 13.2
choosing columns to index, 16.2.2
cluster indexes, 18.3.2, 18.4.1, 18.5
coalescing, 16.2.10, 16.4.2
column order for performance, 16.2.3
creating, 16.3
disabling and dropping constraints cost, 16.2.11
dropping, 16.2.5, 16.6
estimating size, 16.2.6
estimating space use, 14.8.2
explicitly creating a unique index, 16.3.2
function-based, 16.3.7
guidelines for managing, 16.1
keeping when disabling constraint, 13.5.3
keeping when dropping constraint, 13.5.3
key compression, 16.3.8
limiting for a table, 16.2.4
monitoring space use of, 16.5
monitoring usage, 16.4.3
parallelizing index creation, 16.2.8
partitioned, 17.1
rebuilding, 16.2.10, 16.4.2
rebuilt after direct-path INSERT, 15.4.2.5.1
setting storage parameters for, 16.2.6
shrinking, 14.5.3
space used by, 16.5
statement for creating, 16.3.1
tablespace for, 16.2.7
temporary segments and, 16.2.1
updating global indexes, 17.4.1
validating structure, 13.2.2
when to create, 16.2.2
index-organized tables
analyzing, 15.12.5
AS subquery, 15.12.2.6
converting to heap, 15.12.7
creating, 15.12.2
described, 15.12.1
INCLUDING clause, 15.12.2.5
key compression, 15.12.2.7
maintaining, 15.12.3
ORDER BY clause, using, 15.12.6
overflow clause, 15.12.2.3
parallel creation, 15.12.2.6
partitioning, 17.3, 17.3.10
partitioning secondary indexes, 17.3.10.1
rebuilding with MOVE clause, 15.12.3.2
storing nested tables, 15.12.2.2
storing object types, 15.12.2.2
threshold value, 15.12.2.4
in-doubt transactions, 32.4
after a system failure, 33.4.1
automatic resolution, 32.4.1, 32.4.1.1
deciding how to handle, 33.4
deciding whether to perform manual override, 33.4.2
defined, 32.3.1.2
manual resolution, 32.4.2
manually committing, 33.5.1
manually committing, example, 33.7
manually overriding, 33.4.2, 33.5
manually overriding, scenario, 33.7
manually rolling back, 33.5.2
overview, 32.4
pending transactions table, 33.7.6
purging rows from data dictionary, 33.6, 33.6.2
recoverer process and, 33.9.2
rolling back, 33.5, 33.5.1.1, 33.5.2
SCNs and, 32.4.3
simulating, 33.9
tracing session tree, 33.3.2
viewing database links, 33.3.1
INITIAL parameter
cannot alter, 14.3.7, 15.6.2
description, 14.3.1
initialization parameter file
and Automatic Storage Management, 3.1.2
creating, 2.2.2.3
creating for database creation, 2.2.2.3
editing before database creation, 2.4
individual parameter names, 2.4.1
server parameter file, 2.7
understanding, 3.1.2
initialization parameters
ARCHIVE_LAG_TARGET, 6.2.5.1
BACKGROUND_DUMP_DEST, 4.7.2.2
COMMIT_POINT_STRENGTH, 32.2.5.2, 33.1
CONTROL_FILES, 2.4.3, 5.2.1, 5.3.1
DB_BLOCK_CHECKING, 21.3.1.4
DB_BLOCK_CHECKSUM, 6.7, 9.7
DB_BLOCK_SIZE, 2.4.4, 8.3
DB_CACHE_SIZE, 2.4.5.4.1, 8.3
DB_DOMAIN, 2.4.1, 2.4.1.2
DB_FILES, 9.1.1.1
DB_NAME, 2.4.1
DB_nK_CACHE_SIZE, 2.4.5.4.1, 8.3, 8.12.5.5
DISPATCHERS, 4.2.3.3
FILE_MAPPING, 9.9.3.1
for Automatic Storage Management, 12.3.3
for Automatic Storage Management instance, 12.3.3.1
for buffer cache, 2.4.5.4.1
GLOBAL_NAMES, 29.2.5
LOG_ARCHIVE_DEST, 7.4.1
LOG_ARCHIVE_DEST_n, 7.4.1, 7.6.2
LOG_ARCHIVE_DEST_STATE_n, 7.4.2
LOG_ARCHIVE_MAX_PROCESSES, 7.3.4
LOG_ARCHIVE_MIN_SUCCEED_DEST, 7.6.1
LOG_ARCHIVE_TRACE, 7.7
MAX_DUMP_FILE_SIZE, 4.7.2.3
OPEN_LINKS, 30.4.3
PROCESSES, 2.4.6
REMOTE_LOGIN_PASSWORDFILE, 1.7.2
REMOTE_OS_AUTHENT, 29.2.7.1
RESOURCE_MANAGER_PLAN, 24.6
server parameter file and, 2.7, 2.7.9
SET SQL_TRACE, 4.7.2.4
SGA_MAX_SIZE, 2.4.5
shared server and, 4.2.1
SHARED_SERVERS, 4.2.2.2
SORT_AREA_SIZE, 16.2.1
SPFILE, 2.7.4, 3.1.2
SQL_TRACE, 4.7.2
STATISTICS_LEVEL, 15.5
UNDO_MANAGEMENT, 2.3.4, 10.2.1
UNDO_TABLESPACE, 2.4.7.2, 10.2.1
USER_DUMP_DEST, 4.7.2.2
INITRANS parameter
altering, 15.6.2
guidelines for setting, 14.2.1
INSERT statement
with DML error logging, 15.4.1
installing
patches, 1.2.10
installing Automatic Storage Management, 12.3.1
instance
selecting with environment variables, 1.3
INSTANCE_TYPE initialization parameter, 12.3.3.1
instances
aborting, 3.3.4
shutting down immediately, 3.3.2
shutting down normally, 3.3.1
transactional shutdown, 3.3.3
integrity constraints
See also constraints
cost of disabling, 16.2.11
cost of dropping, 16.2.11
creating indexes associated with, 16.3.3
dropping tablespaces and, 8.8
ORA-02055 constraint violation, 31.3
INTERNAL username
connecting for shutdown, 3.3
IOT. See index-organized tables.

J

job classes
altering, 27.5.3
creating, 27.5.2
dropping, 27.5.4
overview, 26.3.1
using, 27.5
job coordinator, 26.4.2, 28.2.5
job recovery (Scheduler), 28.2.11
jobs
altering, 27.2.4
copying, 27.2.3
creating, 27.2.2
creating for chains, 27.9.6
disabling, 27.2.8
dropping, 27.2.7, 28.2.8
dropping running, 28.2.9
enabling, 27.2.9
overview, 26.2.3
priorities, 28.2.12
running, 27.2.5
stopping, 27.2.6
using, 27.2
viewing information on running, 28.2.4
join views
definition, 20.1.2.1
DELETE statements, 20.1.5.2.2
key-preserved tables in, 20.1.5.1
modifying, 20.1.5
rules for modifying, 20.1.5.2
updating, 20.1.5
joins
statement transparency in distributed databases, 30.7

K

key compression, 15.12.2.7
indexes, 16.3.8
key-preserved tables
in join views, 20.1.5.1
in outer joins, 20.1.5.3
keys
cluster, 18.1, 18.2.3

L

links
See database links
LIST CHAINED ROWS clause
of ANALYZE statement, 13.2.3.1
list partitioning
adding values to value list, 17.4.9
creating tables using, 17.3.3
DEFAULT keyword, 17.3.3
dropping values from value-list, 17.4.10
when to use, 17.2.3
listing database links, 30.5.1, 33.3.1, 33.3.2
loading data
using external tables, 15.13.1
LOBs
storage parameters for, 14.3.6
local coordinators, 32.2.3
distributed transactions, 32.2.3
locally managed tablespaces, 8.2.1
automatic segment space management in, 8.2.1.2
DBMS_SPACE_ADMIN package, 8.10
detecting and repairing defects, 8.10
migrating SYSTEM from dictionary-managed, 8.11
tempfiles, 8.2.3.1
temporary, creating, 8.2.3.1
location transparency in distributed databases
creating using synonyms, 30.6.2
creating using views, 30.6.1
restrictions, 30.7
using procedures, 30.6.3.3
lock timeout interval
distributed transactions, 33.8
locks
in-doubt distributed transactions, 33.8, 33.8.2
monitoring, 4.7.3
log
window (Scheduler), 27.6
log sequence number
control files, 6.1.3.2
log switches
description, 6.1.3.2
forcing, 6.6
log sequence numbers, 6.1.3.2
multiplexed redo log files and, 6.2.1.1
privileges, 6.6
using ARCHIVE_LAG_TARGET, 6.2.5
waiting for archiving to complete, 6.2.1.1
log writer process (LGWR), 4.3
multiplexed redo log files and, 6.2.1.1
online redo logs available for use, 6.1.3
trace file monitoring, 4.7.2.1
trace files and, 6.2.1.1
writing to online redo log files, 6.1.3
LOG_ARCHIVE_DEST initialization parameter
specifying destinations using, 7.4.1
LOG_ARCHIVE_DEST_n initialization parameter, 7.4.1
REOPEN attribute, 7.6.2
LOG_ARCHIVE_DEST_STATE_n initialization parameter, 7.4.2
LOG_ARCHIVE_DUPLEX_DEST initialization parameter
specifying destinations using, 7.4.1
LOG_ARCHIVE_MAX_PROCESSES initialization parameter, 7.3.4
LOG_ARCHIVE_MIN_SUCCEED_DEST initialization parameter, 7.6.1
LOG_ARCHIVE_TRACE initialization parameter, 7.7
LOGGING clause
CREATE TABLESPACE, 8.4
logging mode
direct-path INSERT, 15.4.2.4
NOARCHIVELOG mode and, 15.4.2.4.1
logical corruptions from DBMS_REPAIR, 21.3.2
logical volume managers
mapping files to physical devices, 9.9, 9.9.4.3
used for Oracle-managed files, 11.1.1.1
LOGON trigger
setting resumable mode, 14.4.3
logs
job, 28.2.6
window (Scheduler), 27.6, 28.2.6
LONG columns, 30.7
LONG RAW columns, 30.7
LOW_GROUP for Database Resource Manager, 24.4.3, 24.7.3

M

maintenance windows
Scheduler, 23.1
managing
Automatic Storage Management templates, 12.4.10
undo tablespace, 10
managing capacity in disk groups, 12.4.1.6
managing datafiles, 9
managing sequences, 20.2.1
managing synonyms, 20.3.1
managing tables, 15
managing views, 20.1
manual archiving
in ARCHIVELOG mode, 7.3.3
manual overrides
in-doubt transactions, 33.5
MAX_DUMP_FILE_SIZE initialization parameter, 4.7.2.3
MAXDATAFILES parameter
changing, 5.3.3.2
MAXINSTANCES, 5.3.3.2
MAXLOGFILES parameter
changing, 5.3.3.2
CREATE DATABASE statement, 6.2.4
MAXLOGHISTORY parameter
changing, 5.3.3.2
MAXLOGMEMBERS parameter
changing, 5.3.3.2
CREATE DATABASE statement, 6.2.4
MAXTRANS parameter
altering, 15.6.2
media recovery
effects of archiving on, 7.2.1
migrated rows
eliminating from table, procedure, 13.2.3.2
migrating a database to Automatic Storage Management, 12.6
MINEXTENTS parameter
cannot alter, 14.3.7, 15.6.2
description, 14.3.1
mirrored files
control files, 2.4.3, 5.2.2
online redo log, 6.2.1.1
online redo log location, 6.2.2
online redo log size, 6.2.3
MISSING datafiles, 5.4.1
MODIFY DEFAULT ATTRIBUTES clause, 17.4.8.1
using for partitioned tables, 17.4.7.1
MODIFY DEFAULT ATTRIBUTES FOR PARTITION clause
of ALTER TABLE, 17.4.7.2, 17.4.7.3
MODIFY PARTITION clause, 17.4.8.1, 17.4.8.2, 17.4.12, 17.4.14.2.2
MODIFY SUBPARTITION clause, 17.4.8.3
monitoring
chains, 28.2.13
MONITORING clause
CREATE TABLE, 15.5
monitoring datafiles, 9.10
MONITORING USAGE clause
of ALTER INDEX statement, 16.4.3
mounting a database, 3.1.4.3
mounting and dismounting disk groups, 12.4.4
MOVE PARTITION clause, 17.4.8, 17.4.12
MOVE SUBPARTITION clause, 17.4.8, 17.4.12.2
moving control files, 5.3.2
multiple temporary tablespaces, 8.2.4, 8.2.4.3
multiplexed control files
importance of, 5.2.2
multiplexing
archived redo logs, 7.4.1
control files, 5.2.2
redo log file groups, 6.2.1
redo log files, 6.2.1

N

name resolution in distributed databases
database links, 29.4.7
impact of global name changes, 29.4.9.1
procedures, 29.4.9
schema objects, 29.2.9.4, 29.4.8
synonyms, 29.4.9
views, 29.4.9
when global database name is complete, 29.4.7.1
when global database name is partial, 29.4.7.2
when no global database name is specified, 29.4.7.3
named user limits
setting initially, 2.4.9
nested tables
storage parameters for, 14.3.6
networks
connections, minimizing, 30.3
distributed databases use of, 29.1.1
NEXT parameter
altering, 14.3.7, 15.6.2
NEXTVAL pseudo-column, 20.2.4.1, 20.2.4.1.1
restrictions, 20.2.4.1.3
NO_DATA_FOUND keyword, 31.5
NO_MERGE hint, 31.4.3.1
NOARCHIVELOG mode
archiving, 7.2
definition, 7.2.1
dropping datafiles, 9.4.2
LOGGING mode and, 15.4.2.4.1
media failure, 7.2.1
no hot backups, 7.2.1
running in, 7.2.1
switching to, 7.3.2
taking datafiles offline in, 9.4.2
NOCACHE option
CREATE SEQUENCE statement, 20.2.4.2.2
NOLOGGING clause
CREATE TABLESPACE, 8.4
NOLOGGING mode
direct-path INSERT, 15.4.2.4
NOMOUNT clause
STARTUP command, 3.1.4.2
normal transmission mode
definition, 7.5.1
Novell NetWare Management System, 29.3.4.3
NOWAIT keyword
in REBALANCE clause, 12.4.3

O

object privileges
for external tables, 15.13.4
objects
See also schema objects
offline tablespaces
priorities, 8.5.1
taking offline, 8.5.1
online redefinition of tables, 15.7
abort and cleanup, 15.7.5
examples, 15.7.8
features of, 15.7.1
intermediate synchronization, 15.7.4
redefining a single partition, 15.7.7
rules for, 15.7.7.1
restrictions, 15.7.6
with DBMS_REDEFINITION, 15.7.2
online redo log files
See also online redo logs
online redo logs
See also redo log files
creating groups, 6.3
creating members, 6.3.2
dropping groups, 6.5
dropping members, 6.5
forcing a log switch, 6.6
guidelines for configuring, 6.2
INVALID members, 6.5.2
location of, 6.2.2
managing, 6
moving files, 6.4
number of files in the, 6.2.4
optimum configuration for the, 6.2.4
renaming files, 6.4
renaming members, 6.4
specifying ARCHIVE_LAG_TARGET, 6.2.5
STALE members, 6.5.2
viewing information about, 6.9
online segment shrink, 14.5.3
OPEN_LINKS initialization parameter, 30.4.3
opening windows, 27.6.4
operating system authentication, 1.6.3.2
for Automatic Storage Management, 12.3.2.1
operating systems
database administrators requirements for, 1.5.1
renaming and relocating files, 9.5
ORA_TZFILE environment variable
specifying time zone file for database, 2.3.9.2
ORA-01555 error
snapshot too old, 10.2.2
ORA-02055 error
integrity constraint violation, 31.3
ORA-02067 error
rollback required, 31.3
ORA-04068 error
existing state of package has been discarded, 13.7.1
Oracle Call Interface. See OCI
Oracle Database
release numbers, 1.4.1
Oracle Database users
types of, 1.1
Oracle Enterprise Manager, 3.1.1.3
Oracle home
cloning, 1.2.11
Oracle Managed Files feature
See also Oracle-managed files
Oracle Net
service names in, 7.5.2
transmitting archived logs via, 7.5.2
Oracle Universal Installer, 2.1
Oracle-managed files
adding to an existing database, 11.5.3
behavior, 11.4
benefits, 11.1.2
CREATE DATABASE statement, 11.3.2
creating, 11.3
creating control files, 11.3.5
creating datafiles, 11.3.3
creating online redo log files, 11.3.6
creating tempfiles, 11.3.4
described, 11.1
dropping datafile, 11.4.1
dropping online redo log files, 11.4.2
dropping tempfile, 11.4.1
initialization parameters, 11.2
introduction, 2.3.7
naming, 11.3.1
renaming, 11.4.3
scenarios for using, 11.5
Oracle-managed files feature
See also Oracle-managed files
oraenv and coraenv, 1.3
ORAPWD utility, 1.7.1
ORGANIZATION EXTERNAL clause
of CREATE TABLE, 15.13.1
orphan key table
example of building, 21.4.1.2
OSDBA group, 1.6.3.1
OSOPER group, 1.6.3.1
OTHER_GROUPS for Database Resource Manager, 24.1.4.2, 24.4.1.2, 24.4.3, 24.4.4.1, 24.7.3
outer joins, 20.1.5.3
key-preserved tables in, 20.1.5.3
overlapping windows, 27.6.9

P

package state discarded error, 13.7.1
packages
DBMS_FILE_TRANSFER, 9.7
DBMS_METADATA, 13.10.1
DBMS_REDEFINITION, 15.7.2, 15.7.9
DBMS_REPAIR, 21.2
DBMS_RESOURCE_MANAGER, 24.1.4, 24.2, 24.5, 24.5.2
DBMS_RESOURCE_MANAGER_PRIVS, 24.2, 24.5
DBMS_RESUMABLE, 14.4.4.3
DBMS_SESSION, 24.5.2.3
DBMS_SPACE, 14.5.4, 14.7.1
DBMS_STATS, 13.2.1, 15.5
DBMS_STORAGE_MAP, 9.9.3.2, 9.9.3.3
privileges for recompiling, 13.7.4
recompiling, 13.7.4
parallel execution
managing, 4.4
parallel hints, 4.4
parallelizing index creation, 16.2.8
resumable space allocation, 14.4.1.5
parallel hints, 4.4
PARALLEL_DEGREE_LIMIT_ABSOLUTE resource allocation method, 24.4.2
parallelizing table creation, 15.2.4, 15.3.3
parameter files
See also initialization parameter file
PARTITION BY HASH clause, 17.3.2
PARTITION BY LIST clause, 17.3.3
PARTITION BY RANGE clause, 17.3.1
for composite-partitioned tables, 17.3.4, 17.3.5
PARTITION clause
for composite-partitioned tables, 17.3.4, 17.3.5
for hash partitions, 17.3.2
for list partitions, 17.3.3
for range partitions, 17.3.1
partitioned indexes, 17
adding partitions, 17.4.2.6
creating local index on composite partitioned table, 17.3.4
creating local index on hash partitioned table, 17.3.2.1
creating range partitions, 17.3.1.2
description, 17.1
dropping partitions, 17.4.4.2
global, 17.2
local, 17.2
maintenance operations, 17.4
maintenance operations, table of, 17.4
modifying partition default attributes, 17.4.7.3
modifying real attributes of partitions, 17.4.8.4
moving partitions, 17.4.12.3
rebuilding index partitions, 17.4.14
renaming index partitions/subpartitions, 17.4.15.3
secondary indexes on index-organized tables, 17.3.10.1
splitting partitions, 17.4.16.5
partitioned tables, 17
adding partitions, 17.4.2
adding subpartitions, 17.4.2.4.2, 17.4.2.5.2
coalescing partitions, 17.4.3
creating hash partitions, 17.3.2
creating list partitions, 17.3.3
creating range partitions, 17.3.1, 17.3.1.2
creating range-hash partitions, 17.3.4
creating range-list partitions, 17.3.5
description, 17.1
DISABLE ROW MOVEMENT, 17.3
dropping, 17.5
dropping partitions, 17.4.4
ENABLE ROW MOVEMENT, 17.3
exchanging partitions, 17.4.5
exchanging subpartitions, 17.4.5.3, 17.4.5.5
global indexes on, 17.2
index-organized tables, 17.3, 17.3.10.1, 17.3.10.2, 17.3.10.3
local indexes on, 17.2
maintenance operations, 17.4
maintenance operations, table of, 17.4
marking indexes UNUSABLE, 17.4.2.2, 17.4.3, 17.4.4.1, 17.4.4.2, 17.4.5, 17.4.6, 17.4.8.1, 17.4.8.2, 17.4.12, 17.4.16, 17.4.17
merging partitions, 17.4.6
modifying default attributes, 17.4.7
modifying real attributes of partitions, 17.4.8
modifying real attributes of subpartitions, 17.4.8.3
moving partitions, 17.4.12
moving subpartitions, 17.4.12.2
multicolumn partitioning keys, 17.3.7
rebuilding index partitions, 17.4.14
redefining partitions online, 15.7.7, 17.4.13
rules for, 15.7.7.1
renaming partitions, 17.4.15
renaming subpartitions, 17.4.15.2
splitting partitions, 17.4.16
truncating partitions, 17.4.17
truncating subpartitions, 17.4.17.2
updating global indexes automatically, 17.4.1
partitioning
See also partitioned tables
creating partitions, 17.3
default partition, 17.2.3
default subpartition, 17.2.5
indexes, 17.1
index-organized tables, 17.3, 17.3.10.1, 17.3.10.2, 17.3.10.3
list, 17.2.3, 17.4.9, 17.4.10
maintaining partitions, 17.4
methods, 17.2
range-hash, 17.2.4, 17.3.4
range-list, 17.2.5, 17.3.5
subpartition templates, 17.3.6
tables, 17.1
partitions
See also partitioned tables
See also partitioned indexes
PARTITIONS clause
for hash partitions, 17.3.2
password file
adding users, 1.7.3
creating, 1.7.1
ORAPWD utility, 1.7.1
removing, 1.7.4.2
setting REMOTE_LOGIN_PASSWORD, 1.7.2
viewing members, 1.7.3.2
password file authentication, 1.6.4.1
for Automatic Storage Management, 12.3.2.2
passwords
default for SYS and SYSTEM, 1.5.2
password file, 1.7.3
setting REMOTE_LOGIN_PASSWORD parameter, 1.7.2
patches, installing, 1.2.10
PCTINCREASE parameter, 15.6.2
altering, 14.3.7
pending area for Database Resource Manager plans, 24.4.1, 24.4.1.4
validating plan schema changes, 24.4.1.2
pending transaction tables, 33.7.6
performance
index column order, 16.2.3
location of datafiles and, 9.1.3
plan schemas for Database Resource Manager, 24.1.4.2, 24.1.4.4.1, 24.4.1, 24.4.2.3, 24.6, 24.10
examples, 24.7
validating plan changes, 24.4.1.2
PL/SQL
replaced views and program units, 20.1.3
PRAGMA_EXCEPTION_INIT procedure
assigning exception names, 31.5
prepare phase
abort response, 32.3.1.1.3
in two-phase commit, 32.3.1
prepared response, 32.3.1.1.1
read-only response, 32.3.1.1.2
recognizing read-only nodes, 32.3.1.1.2
steps, 32.3.1.2
prepare/commit phases
effects of failure, 33.8.1
failures during, 33.4.1
locked resources, 33.8
pending transaction table, 33.7.6
prepared response
two-phase commit, 32.3.1.1.1
prerequisites
for creating a database, 2.2.1.2
PRIMARY KEY constraints
associated indexes, 16.3.3.1
dropping associated indexes, 16.6
enabling on creation, 16.3.3
foreign key references when dropped, 13.5.3.1
indexes associated with, 16.3.3
priorities
job, 28.2.12
private database links, 29.2.6
private synonyms, 20.3.1
privileges
adding redo log groups, 6.3
altering indexes, 16.4
altering tables, 15.6
closing a database link, 31.2
creating database links, 30.2.1
creating tables, 15.3
creating tablespaces, 8.2
database administrator, 1.5
dropping indexes, 16.6
dropping online redo log members, 6.5.2
dropping redo log groups, 6.5.1
dropping tables, 15.10
enabling and disabling triggers, 13.4
for external tables, 15.13.4
forcing a log switch, 6.6
managing with procedures, 30.6.3.4
managing with synonyms, 30.6.2.2
managing with views, 30.6.1
manually archiving, 7.3.3
recompiling packages, 13.7.4
recompiling procedures, 13.7.3
recompiling views, 13.7.2
renaming objects, 13.6
renaming redo log members, 6.4
RESTRICTED SESSION system privilege, 3.1.4.4
Scheduler, 28.2.7
sequences, 20.2.2, 20.2.5
synonyms, 20.3.2, 20.3.4
taking tablespaces offline, 8.5.1
truncating, 13.3.3
using a view, 20.1.4
using sequences, 20.2.4
views, 20.1.2, 20.1.3, 20.1.7
procedures
external, 4.5
location transparency in distributed databases, 30.6.3
name resolution in distributed databases, 29.4.9
recompiling, 13.7.3
remote calls, 29.5.2
process monitor (PMON), 4.3
processes
See also server processes
PROCESSES initialization parameter
setting before database creation, 2.4.6
PRODUCT_COMPONENT_VERSION view, 1.4.2
programs
altering, 27.3.3
creating, 27.3.2
disabling, 27.3.5
dropping, 27.3.4
enabling, 27.3.6
overview, 26.2.1
using, 27.3
public database links, 29.2.6
connected user, 30.8.3
fixed user, 30.8.1
public fixed user database links, 30.8.1
public synonyms, 20.3.1
PURGE_LOST_DB_ENTRY procedure
DBMS_TRANSACTION package, 33.6.1

Q

queries
distributed, 29.4.2
distributed application development issues, 31.4
location transparency and, 29.5.1.2
remote, 29.4.1
quiescing a database, 3.4
quotas
tablespace, 8.1.2

R

RAISE_APPLICATION_ERROR() procedure, 31.5
range partitioning
creating tables using, 17.3.1
index-organized tables, 17.3.10.1
multicolumn partitioning keys, 17.3.7
range-hash partitioning
creating tables using, 17.3.4
subpartitioning template, 17.3.6.1
when to use, 17.2.4
range-list partitioning
creating tables using, 17.3.5
subpartitioning template, 17.3.6.2
when to use, 17.2.5
read consistency
managing in distributed databases, 33.10
read-only database
opening, 3.2.3
read-only response
two-phase commit, 32.3.1.1.2
read-only tablespaces
datafile headers when renamed, 8.7
delaying opening of datafiles, 8.6.4
making read-only, 8.6.1
making writable, 8.6.2
WORM devices, 8.6.3
Real Application Clusters
allocating extents for cluster, 18.4
sequence numbers and, 20.2.2
threads of online redo log, 6.1.1
rebalance
tuning, 12.3.3.2
REBALANCE NOWAIT clause, 12.4.3
REBALANCE WAIT clause, 12.4.3
rebalancing a disk group, 12.4.3.5
REBUILD PARTITION clause, 17.4.12.3, 17.4.14.2.1
REBUILD UNUSABLE LOCAL INDEXES clause, 17.4.14.2.2
rebuilding indexes, 16.4.2
costs, 16.2.10
online, 16.4.2
reclaiming unused space, 14.5
RECOVER clause
STARTUP command, 3.1.4.6
recoverer process
disabling, 33.9.2
distributed transaction recovery, 33.9.2
enabling, 33.9.2
pending transaction table, 33.9.2
recoverer process (RECO), 4.3
recovering
Scheduler jobs, 28.2.11
recovery
creating new control files, 5.3.3.2
Recovery Manager
starting a database, 3.1.1.2
starting an instance, 3.1.1.2
recycle bin
about, 15.11.1
purging, 15.11.4
renamed objects, 15.11.1
restoring objects from, 15.11.5
viewing, 15.11.3
redefining tables online
See online redefinition of tables
redo log files
See also online redo logs
active (current), 6.1.3.1
archiving, 7.2
available for use, 6.1.3
circular use of, 6.1.3
clearing, 6.2.1.1, 6.8
contents of, 6.1.2
creating as Oracle-managed files, 11.3.6
creating as Oracle-managed files, example, 11.5.1
creating groups, 6.3
creating members, 6.3, 6.3.2
distributed transaction information in, 6.1.3
dropping groups, 6.5
dropping members, 6.5
group members, 6.2.1
groups, defined, 6.2.1
how many in redo log, 6.2.4
inactive, 6.1.3.1
instance recovery use of, 6.1
legal and illegal configurations, 6.2.1.2
LGWR and the, 6.1.3
log switches, 6.1.3.2
maximum number of members, 6.2.4
members, 6.2.1
mirrored, log switches and, 6.2.1.1
multiplexed, 6.2.1, 6.2.1.1
online, defined, 6.1
planning the, 6.2, 6.2.4
redo entries, 6.1.2
requirements, 6.2.1.2
storing separately from datafiles, 9.1.4
threads, 6.1.1
unavailable when database is opened, 3.1.4
verifying blocks, 6.7
redo logs
See also online redo log
creating in Automatic Storage Management, 12.5.7
redo records, 6.1.2
LOGGING and NOLOGGING, 8.4
REDUNDANCY_LOWERED column
in V$ASM_FILE, 12.4.1.6
referential integrity
distributed database application development, 31.3
release number format, 1.4.1
releases, 1.4.1
checking the Oracle Database release number, 1.4.2
relocating control files, 5.3.2
remote connections
connecting as SYSOPER/SYSDBA, 1.6.1
password files, 1.7.2
remote data
querying, 30.7
updating, 30.7
remote procedure calls, 29.5.2
distributed databases and, 29.5.2
remote queries
distributed databases and, 29.4.1
remote transactions, 29.4.4
defined, 29.4.4
REMOTE_LOGIN_PASSWORDFILE initialization parameter, 1.7.2
REMOTE_OS_AUTHENT initialization parameter
connected user database links, 29.2.7.1
RENAME PARTITION clause, 17.4.15.1, 17.4.15.2, 17.4.15.3.1
RENAME statement, 13.6
renaming control files, 5.3.2
renaming files
Oracle-managed files, 11.4.3
REOPEN attribute
LOG_ARCHIVE_DEST_n initialization parameter, 7.6.2
repair table
example of building, 21.4.1.1
repairing data block corruption
DBMS_REPAIR, 21.1
repeat interval, schedule, 27.4.5
RESIZE clause
for single-file tablespace, 8.2.2.2
resizing disks in disk groups, 12.4.3.3
resource allocation methods, 24.1.3
active session pool, 24.4.2
ACTIVE_SESS_POOL_MTH, 24.4.2
CPU resource, 24.4.2
EMPHASIS, 24.4.2
limit on degree of parallelism, 24.4.2
PARALLEL_DEGREE_LIMIT_ABSOLUTE, 24.4.2
PARALLEL_DEGREE_LIMIT_MTH, 24.4.2
QUEUEING_MTH, 24.4.2
queuing resource allocation method, 24.4.2
ROUND-ROBIN, 24.4.3
resource consumer groups, 24.1.3
changing, 24.5.2
creating, 24.4.3
DEFAULT_CONSUMER_GROUP, 24.4.3, 24.4.3.3, 24.5.3.2
deleting, 24.4.3.3
granting the switch privilege, 24.5.3
LOW_GROUP, 24.4.3, 24.7.3
managing, 24.5, 24.5.2.3
OTHER_GROUPS, 24.1.4.2, 24.4.1.2, 24.4.3, 24.4.4.1, 24.7.3
parameters, 24.4.3
revoking the switch privilege, 24.5.3.2
setting initial, 24.5.1
switching a session, 24.5.2.1
switching sessions for a user, 24.5.2.2
SYS_GROUP, 24.4.3, 24.7.3
updating, 24.4.3.2
Resource Manager
AUTO_TASK_CONSUMER_GROUP consumer group, 23.3
resource plan directives, 24.1.3, 24.4.1.2
deleting, 24.4.4.3
specifying, 24.4.4
updating, 24.4.4.2
resource plans, 24.1.3, 24.1.4.2
creating, 24.3
DELETE_PLAN_CASCADE, 24.4.2.3
deleting, 24.4.2.3
examples, 24.1.4.1, 24.7
parameters, 24.4.2
plan schemas, 24.1.4.2, 24.1.4.4.1, 24.4.1, 24.4.2.3, 24.6, 24.10
subplans, 24.1.4.2, 24.4.2.3
SYSTEM_PLAN, 24.4.2, 24.4.3, 24.7.3
top plan, 24.4.1.2, 24.6
updating, 24.4.2.2
validating, 24.4.1.2
RESOURCE_MANAGER_PLAN initialization parameter, 24.6
RESTRICTED SESSION system privilege
restricted mode and, 3.1.4.4
resumable space allocation
correctable errors, 14.4.1.3
detecting suspended statements, 14.4.4
disabling, 14.4.2
distributed databases, 14.4.1.4
enabling, 14.4.2
example, 14.4.6
how resumable statements work, 14.4.1.1
naming statements, 14.4.2.2.2
parallel execution and, 14.4.1.5
resumable operations, 14.4.1.2
setting as default for session, 14.4.3
timeout interval, 14.4.2.2.1, 14.4.4.1
RESUMABLE_TIMEOUT initialization parameter, 14.4.1.1
setting, 14.4.2.1
retention guarantee (for undo), 10.2.2.1
RMAN. See Recovery Manager.
roles
DBA role, 1.5.2.3
obtained through database links, 29.2.10
ROLLBACK statement
FORCE clause, 33.5, 33.5.1.1, 33.5.2
forcing, 33.4.2
rollbacks
ORA-02, 31.3
ROUND-ROBIN resource allocation method, 24.4.3
row movement clause for partitioned tables, 17.3
rows
listing chained or migrated, 13.2.3
rules
adding to a chain, 27.9.4
dropping from chains, 27.9.9
running
chains, 27.9.8
jobs, 27.2.5

S

Sample Schemas
description, 2.9.4
savepoints
in-doubt transactions, 33.5, 33.5.2
Scheduler
administering, 28
architecture, 26.4
configuring, 28.1
examples of using, 28.4
GATHER_STATS_JOB job, 23.2.1
GATHER_STATS_PROG program, 23.2.1
import and export, 28.3
maintenance windows, 23.1
monitoring and managing, 28.2
overview, 26.1
privileges, 28.2.3, 28.2.7
security, 28.2.17
statistics collection, 23.2.1
using, 27
using in RAC, 26.4.5
views, 28.2.1
Scheduler objects, naming, 27.1
schedules
altering, 27.4.3
creating, 27.4.2
dropping, 27.4.4
overview, 26.2.2
using, 27.4
schema objects
analyzing, 13.2
creating multiple objects, 13.1
defining using DBMS_METADATA package, 13.10.1
dependencies between, 13.7
distributed database naming conventions for, 29.2.9.4
global names, 29.2.9.4
listing by type, 13.10.2.1
name resolution in distributed databases, 29.2.9.4, 29.4.8
name resolution in SQL statements, 13.8
privileges to rename, 13.6
referencing with synonyms, 30.6.2.1
renaming, 13.6
validating structure, 13.2.2
viewing information, 13.10, 14.7
SCN. See system change number.
SCOPE clause
ALTER SYSTEM SET, 2.7.5.1
scripts, authenticating users in, 2.9.3
security
accessing a database, 22.1
administrator of, 22.1
centralized user management in distributed databases, 29.3.2.4
database security, 22.1
distributed databases, 29.3.2
establishing policies, 22
privileges, 22.1
remote objects, 30.6.1
Scheduler, 28.2.17
using synonyms, 30.6.2.2
Segment Advisor, 14.5.2
configuring Scheduler job, 14.5.2.4
invoking with Enterprise Manager, 14.5.2.2.1
invoking with PL/SQL, 14.5.2.2.2
running manually, 14.5.2.2
viewing results, 14.5.2.3
views, 14.5.2.5
SEGMENT_FIX_STATUS procedure
DBMS_REPAIR, 21.2.1
segments
available space, 14.7.1
data dictionary views for, 14.7.2
deallocating unused space, 14.5
displaying information on, 14.7.2.1
shrinking, 14.5.3
storage parameters for temporary, 14.3.8
SELECT statement
FOR UPDATE clause and location transparency, 30.7
selecting an instance with environment variables, 1.3
SEQUENCE_CACHE_ENTRIES parameter, 20.2.4.2.2
sequences
accessing, 20.2.4
altering, 20.2.3
caching sequence numbers, 20.2.4.2
creating, 20.2.2, 20.2.4.2.2
CURRVAL, 20.2.4.1.2
dropping, 20.2.5
managing, 20.2.1
NEXTVAL, 20.2.4.1.1
Oracle Real Application Clusters and, 20.2.2
SERVER parameter
net service name, 30.3.3.1
server parameter file
and Automatic Storage Management, 3.1.2
creating, 2.7.3
defined, 2.7.1
error recovery, 2.7.8
exporting, 2.7.6
migrating to, 2.7.2
RMAN backup, 2.7.7
setting initialization parameter values, 2.7.5
SPFILE initialization parameter, 2.7.4
STARTUP command behavior, 2.7.1
viewing parameter settings, 2.7.9
server processes
archiver (ARCn), 4.3
background, 4.3
checkpoint (CKPT), 4.3
database writer (DBWn), 4.3
dedicated, 4.1.1
dispatcher (Dnnn), 4.3
dispatchers, 4.2.3.3
global cache service (LMS), 4.3
log writer (LGWR), 4.3
monitoring, 4.7
monitoring locks, 4.7.3
process monitor (PMON), 4.3
recoverer (RECO), 4.3
shared server, 4.1.2
system monitor (SMON), 4.3
trace files for, 4.7.2
server-generated alerts, 4.7.1
servers
role in two-phase commit, 32.2.2
service names
database links and, 30.2.4
services
application, 2.8
application, configuring, 2.8.2
application, deploying, 2.8.1
application, using, 2.8.3
session trees for distributed transactions
clients, 32.2.1
commit point site, 32.2.5, 32.2.5.2
database servers, 32.2.2
definition, 32.2
global coordinators, 32.2.4
local coordinators, 32.2.3
tracing transactions, 33.3.2
sessions
active, 4.6.2
inactive, 4.6.3
setting advice for transactions, 33.4.3.3
terminating, 4.6
SET TIME_ZONE clause
ALTER SESSION, 2.3.9.1
CREATE DATABASE, 2.3.9.1
SET TRANSACTION statement
naming transactions, 33.2
SGA. See system global area.
SGA_MAX_SIZE initialization parameter, 2.4.5
setting size, 2.4.5.2
shared database links
configuring, 30.3.3
creating, 30.3.2
dedicated servers, creating links to, 30.3.3.1
determining whether to use, 30.3.1
example, 29.2.8
shared servers, creating links to, 30.3.3.2
SHARED keyword
CREATE DATABASE LINK statement, 30.3.2
shared server, 4.1.2
configuring dispatchers, 4.2.3
disabling, 4.2.2.2, 4.2.3.6
initialization parameters, 4.2.1
interpreting trace output, 4.7.2.5
setting minimum number of servers, 4.2.2.2
trace files for processes, 4.7.2
views, 4.2.4
shared SQL
for remote and distributed statements, 29.4.3
shrinking segments online, 14.5.3
SHUTDOWN command
ABORT clause, 3.3.4
IMMEDIATE clause, 3.3.2
NORMAL clause, 3.3.1
TRANSACTIONAL clause, 3.3.3
Simple Network Management Protocol (SNMP) support
database management, 29.3.4.3
single-file tablespaces
description, 8.2.2
single-table hash clusters, 19.3.2
site autonomy
distributed databases, 29.3.1
SKIP_CORRUPT_BLOCKS procedure, 21.3.3.1
DBMS_REPAIR, 21.2.1
example, 21.4.5
snapshot too old error, 10.2.2
SORT_AREA_SIZE initialization parameter
index creation and, 16.2.1
space
deallocating unused, 14.5.4
reclaiming unused, 14.5
space allocation
resumable, 14.4
space management
data blocks, 14.2
datatypes, space requirements, 14.6
deallocating unused space, 14.5
Segment Advisor, 14.5
setting storage parameters, 14.3.1, 14.3.7
shrink segment, 14.5
SPACE_ERROR_INFO procedure, 14.4.4.1
SPFILE initialization parameter, 2.7.4
specifying from client machine, 3.1.2
SPLIT PARTITION clause, 17.4.2.1, 17.4.16
SQL statements
distributed databases and, 29.4.1
SQL*Loader
about, 1.8.2.1
SQL*Plus
starting, 3.1.3
starting a database, 3.1.1.1
starting an instance, 3.1.1.1
SQL_TRACE initialization parameter
trace files and, 4.7.2
STALE status
of redo log members, 6.5.2
stalled chain (Scheduler), 27.9.13
standby transmission mode
definition of, 7.5.2
Oracle Net and, 7.5.2
RFS processes and, 7.5.2
starting
Automatic Storage Management instance, 12.5.3
starting a database, 3.1
forcing, 3.1.4.5
Oracle Enterprise Manager, 3.1.1.3
recovery and, 3.1.4.6
Recovery Manager, 3.1.1.2
restricted mode, 3.1.4.4
SQL*Plus, 3.1.1.1
when control files unavailable, 3.1.4
when redo logs unavailable, 3.1.4
starting an instance
automatically at system startup, 3.1.4.7
database closed and mounted, 3.1.4.3
database name conflicts and, 2.4.1.1
forcing, 3.1.4.5
mounting and opening the database, 3.1.4.1
normally, 3.1.4.1
Oracle Enterprise Manager, 3.1.1.3
recovery and, 3.1.4.6
Recovery Manager, 3.1.1.2
remote instance startup, 3.1.4.8
restricted mode, 3.1.4.4
SQL*Plus, 3.1.1.1
when control files unavailable, 3.1.4
when redo logs unavailable, 3.1.4
without mounting a database, 3.1.4.2
STARTUP command
default behavior, 2.7.1
NOMOUNT clause, 2.2.2.6, 3.1.4.2
RECOVER clause, 3.1.4.6
starting a database, 3.1.1.1, 3.1.4
statement transparency in distributed database
managing, 30.7
statistics
automatically collecting for tables, 15.5
statistics collection
using Scheduler, 23.2.1
STATISTICS_LEVEL initialization parameter
automatic statistics collection, 15.5
steps, chain
altering, 27.9.12
dropping, 27.9.11
stopping
jobs, 27.2.6
STORAGE clause
See also storage parameters
storage parameters
applicable objects, 14.3
BUFFER POOL, 14.3.1
INITIAL, 14.3.1, 15.6.2
INITRANS, altering, 15.6.2
MAXTRANS, altering, 15.6.2
MINEXTENTS, 14.3.1, 15.6.2
NEXT, 15.6.2
PCTINCREASE, 15.6.2
precedence of, 14.3.8
setting, 14.3.1
temporary segments, 14.3.8
storage subsystems
mapping files to physical devices, 9.9, 9.9.4.3
STORE IN clause, 17.3.4
stored procedures
managing privileges, 30.6.3.4
privileges for recompiling, 13.7.3
remote object security, 30.6.3.4
SUBPARTITION BY HASH clause
for composite-partitioned tables, 17.3.4
SUBPARTITION BY LIST clause
for composite-partitioned tables, 17.3.5
SUBPARTITION clause, 17.4.2.4.1, 17.4.2.5.1, 17.4.16.3
for composite-partitioned tables, 17.3.4, 17.3.5
subpartition templates, 17.3.6
modifying, 17.4.11
subpartitions, 17.1
SUBPARTITIONS clause, 17.4.2.4.1, 17.4.16.3
for composite-partitioned tables, 17.3.4
subqueries
in remote updates, 29.4.1
statement transparency in distributed databases, 30.7
SunSoft SunNet Manager, 29.3.4.3
SWITCH LOGFILE clause
ALTER SYSTEM statement, 6.6
synonyms, 20.3.3
creating, 20.3.2, 30.6.2.1
definition and creation, 30.6.2.1
displaying dependencies of, 13.10.2.2
dropping, 20.3.4
examples, 30.6.2.1
location transparency in distributed databases, 30.6.2
managing, 20.3.1, 20.3.4
managing privileges in remote database, 30.6.2.2
name resolution in distributed databases, 29.4.9
private, 20.3.1
public, 20.3.1
remote object security, 30.6.2.2
SYS account
default password, 1.5.2
objects owned, 1.5.2.1
privileges, 1.5.2.1
specifying password for CREATE DATABASE statement, 2.3.1
SYS_GROUP for Database Resource Manager, 24.4.3, 24.7.3
SYSAUX tablespace, 8.2
about, 2.3.3
cannot rename, 8.7
creating at database creation, 2.2.2.7, 2.3.3
DATAFILE clause, 2.3.3
monitoring occupants, 8.9.1
moving occupants, 8.9.2
SYSDBA system privilege
adding users to the password file, 1.7.3
connecting to database, 1.6.1.1
determining who has privileges, 1.7.3.2
granting and revoking, 1.7.3.1
SYSOPER system privilege
adding users to the password file, 1.7.3
connecting to database, 1.6.1.1
determining who has privileges, 1.7.3.2
granting and revoking, 1.7.3.1
SYSTEM account
default password, 1.5.2
objects owned, 1.5.2.2
specifying password for CREATE DATABASE, 2.3.1
system change numbers
coordination in a distributed database system, 32.3.2.2
in-doubt transactions, 33.5.1.2
using V$DATAFILE to view information about, 9.10
when assigned, 6.1.2
system global area
holds sequence number cache
initialization parameters affecting size, 2.4.5
specifying buffer cache sizes, 2.4.5.4.1
system monitor process (SMON), 4.3
system privileges
ADMINISTER_RESOURCE_MANAGER, 24.2
for external tables, 15.13.4
SYSTEM tablespace
cannot rename, 8.7
creating at database creation, 2.2.2.7
creating locally managed, 2.2.2.7, 2.3.2
restrictions on taking offline, 9.4
when created, 8.2
SYSTEM_PLAN for Database Resource Manager, 24.4.2, 24.4.3, 24.7.3

T

tables
about, 15.1
adding columns, 15.6.6
allocating extents, 15.6.4
altering, 15.6.1
altering physical attributes, 15.6.2
analyzing, 13.2
clustered (hash). See hash clusters
creating, 15.3
designing before creating, 15.2.1
dropping, 15.10
dropping columns, 15.6.8
estimating size, 15.2.7
estimating space use, 14.8.1
external, 15.13
Flashback Drop, 15.11
Flashback Table, 15.9
Flashback Transaction Query, 15.8
guidelines for managing, 15.2
hash clustered. See hash clusters
increasing column length, 15.6.5
index-organized, 15.12
index-organized, partitioning, 17.3.10
key-preserved, 20.1.5.1
limiting indexes on, 16.2.4
managing, 15
modifying column definition, 15.6.5
moving, 15.6.3
moving time windows in historical, 17.6
parallelizing creation, 15.2.4, 15.3.3
partitioned, 17.1
redefining online, 15.7
renaming columns, 15.6.7
restrictions when creating, 15.2.8
setting storage parameters, 15.2.7
shrinking, 14.5.3
specifying location, 15.2.3
statistics collection, automatic, 15.5
temporary, 15.3.2
truncating, 13.3
unrecoverable (NOLOGGING), 15.2.5
validating structure, 13.2.2
views, 15.14
tablespace set, 8.12.5.2
tablespaces
adding datafiles, 9.2
assigning user quotas, 8.1.2
automatic segment space management, 8.2.1.2
bigfile, 2.3.8, 8.2.2
checking default storage parameters, 8.13.1
containing XMLTypes, 8.12.3
creating in Automatic Storage Management, 12.5.6
creating undo tablespace at database creation, 2.3.4, 2.3.8.2
DBMS_SPACE_ADMIN package, 8.10
default temporary tablespace, creating, 2.3.6, 2.3.8.2
detecting and repairing defects, 8.10
diagnosing and repairing problems in locally managed, 8.10
dictionary managed, 8.2.2.3
dropping, 8.8
guidelines for managing, 8.1
listing files of, 8.13.2
listing free space in, 8.13.3
locally managed, 8.2.1
locally managed SYSTEM, 2.3.2
locally managed temporary, 8.2.3.1
location, 9.1.3
migrating SYSTEM to locally managed, 8.11
multiple block sizes, 8.12.5.5
on a WORM device, 8.6.3
Oracle-managed files, managing, 11.5.1, 11.5.2
overriding default type, 2.3.8.2
quotas, assigning, 8.1.2
read-only, 8.6
renaming, 8.7
setting default type, 2.3.8.1
single-file, 2.3.8, 2.3.8.2, 8.2.2, 8.2.2.2
specifying nonstandard block sizes, 8.3
SYSAUX, 8.2, 8.7
SYSAUX creation, 2.3.3
SYSAUX, managing, 8.9
SYSTEM, 8.2, 8.2.1, 8.6.1, 8.11
taking offline normal, 8.5.1
taking offline temporarily, 8.5.1
tempfiles in locally managed, 8.2.3.1
temporary, 8.2.3, 8.2.4.3
temporary bigfile, 8.2.3.2
temporary for creating large indexes, 16.3.5
transportable
See transportable tablespaces
undo, 10.1
using multiple, 8.1.1
using Oracle-managed files, 11.3.3
tempfiles, 8.2.3.1
creating as Oracle-managed, 11.3.4
dropping, 9.6
dropping Oracle-managed tempfiles, 11.4.1
template
dropping an Automatic Storage Management, 12.4.10.3
managing Automatic Storage Management, 12.4.10
modifying an Automatic Storage Management, 12.4.10.2
temporary segments
index creation and, 16.2.1
temporary tables
creating, 15.3.2
temporary tablespaces
altering, 8.2.3.3
bigfile, 8.2.3.2
creating, 8.2.3.1
groups, 8.2.4
renaming default, 8.7
terminating user sessions
active sessions, 4.6.2
identifying sessions, 4.6.1
inactive session, example, 4.6.3
inactive sessions, 4.6.3
threads
online redo log, 6.1.1
threshold-based alerts
managing with Oracle Enterprise Manager, 4.7.1
server-generated, 4.7.1
thresholds
setting alert, 14.1.1
time zone
files, 2.3.9.2
setting for database, 2.3.9.1
TNSNAMES.ORA file, 7.4.1.1
trace files
location of, 4.7.2.2
log writer process and, 6.2.1.1
size of, 4.7.2.3
using, 4.7.2, 4.7.2.1
when written, 4.7.2.4
tracing
archivelog process, 7.7
transaction control statements
distributed transactions and, 32.1.2
transaction failures
simulating, 33.9
transaction management
overview, 32.3
transaction processing
distributed systems, 29.4
transactions
closing database links, 31.2
distributed and two-phase commit, 29.4.6
in-doubt, 32.3.1.2, 32.4, 32.4.3, 33.4
naming distributed, 33.2, 33.4.3.2
remote, 29.4.4
transmitting archived redo logs, 7.5
transparent data encryption, 2.9.2
transportable set
See transportable tablespace set
transportable tablespace set
defined, 8.12.5
transportable tablespaces, 8.12
compatibility considerations, 8.12.4
from backup, 8.12.1
introduction, 8.12.1
limitations, 8.12.3
multiple block sizes, 8.12.5.5
procedure, 8.12.5
when to use, 8.12.6
wizard in Enterprise Manager, 8.12.1
XMLTypes in, 8.12.3
transporting tablespaces between databases
See transportable tablespaces
triggers
disabling, 13.4.2
enabling, 13.4.1
TRUNCATE PARTITION clause, 17.4.17, 17.4.17.1, 17.4.17.1.1
TRUNCATE statement, 13.3.3
DROP STORAGE clause, 13.3.3
REUSE STORAGE clause, 13.3.3
vs. dropping table, 15.10
TRUNCATE SUBPARTITION clause, 17.4.17.2
tuning
analyzing tables, 31.4.2.2.2
cost-based optimization, 31.4.2
two-phase commit
case study, 32.5
commit phase, 32.3.2, 32.5.4
described, 29.4.6
discovering problems with, 33.4.1
distributed transactions, 32.3
example, 32.5
forget phase, 32.3.3
in-doubt transactions, 32.4, 32.4.3
phases, 32.3
prepare phase, 32.3.1, 32.3.1.2
recognizing read-only nodes, 32.3.1.1.2
specifying commit point strength, 33.1
steps in commit phase, 32.3.2.1
tracing session tree in distributed transactions, 33.3.2
viewing database links, 33.3.1

U

undo retention, 10.2.2
automatic tuning of, 10.2.2.2
guaranteeing, 10.2.2.1
setting, 10.3
undo segments
in-doubt distributed transactions, 33.4.2
undo space management
automatic undo management mode, 10.2
described, 10.1
undo tablespace
initialization parameters for, 10.2.1
managing, 10
undo tablespaces
altering, 10.5.2
creating, 10.5.1
dropping, 10.5.3
monitoring, 10.7
PENDING OFFLINE status, 10.5.4
renaming, 8.7
specifying at database creation, 2.2.2.7, 2.3.4, 2.3.8.2
starting an instance using, 10.2.1
statistics for, 10.7
switching, 10.5.4
user quotas, 10.5.5
viewing information about, 10.7
UNDO_MANAGEMENT initialization parameter, 2.3.4
starting instance as AUTO, 10.2.1
UNDO_TABLESPACE initialization parameter
for undo tablespaces, 2.4.7.2
starting an instance using, 10.2.1
undropping disks in disk groups, 12.4.3.4
UNIQUE key constraints
associated indexes, 16.3.3.1
dropping associated indexes, 16.6
enabling on creation, 16.3.3
foreign key references when dropped, 13.5.3.1
indexes associated with, 16.3.3
UNRECOVERABLE DATAFILE clause
ALTER DATABASE statement, 6.8
UPDATE GLOBAL INDEX clause
of ALTER TABLE, 17.4.1
updates
location transparency and, 29.5.1.2
upgrading a database, 2.1
USER_DB_LINKS view, 30.5.1
USER_DUMP_DEST initialization parameter, 4.7.2.2
USER_RESUMABLE view, 14.4.4.1
usernames
SYS and SYSTEM, 1.5.2
users
assigning tablespace quotas, 8.1.2
in a newly created database, 2.9.1
limiting number of, 2.4.9
session, terminating, 4.6.3
utilities
export, 1.8.2.2
for the database administrator, 1.8.2
import, 1.8.2.2
SQL*Loader, 1.8.2.1
UTLCHAIN.SQL script
listing chained rows, 13.2.3.1
UTLCHN1.SQL script
listing chained rows, 13.2.3.1
UTLLOCKT.SQL script, 4.7.3

V

V$ARCHIVE view, 7.8
V$ARCHIVE_DEST view
obtaining destination status, 7.4.2
V$BLOCKING_QUIESCE view, 3.4.1, 24.10
V$DATABASE view, 7.8.1
V$DBLINK view, 30.5.2
V$DISPATCHER view
monitoring shared server dispatchers, 4.2.3.4
V$DISPATCHER_RATE view
monitoring shared server dispatchers, 4.2.3.4
V$INSTANCE view
for database quiesce state, 3.4.3
V$LOG view, 7.8
displaying archiving status, 7.8
online redo log, 6.9
viewing redo data with, 6.9
V$LOG_HISTORY view
viewing redo data, 6.9
V$LOGFILE view
log file status, 6.5.2
viewing redo data, 6.9
V$OBJECT_USAGE view
for monitoring index usage, 16.4.3
V$PWFILE_USERS view, 1.7.3.2
V$QUEUE view
monitoring shared server dispatchers, 4.2.3.4
V$ROLLSTAT view
undo segments, 10.7
V$SESSION view, 4.6.3
V$SYSAUX_OCCUPANTS view
occupants of SYSAUX tablespace, 8.9.2
V$THREAD view, 6.9
V$TIMEZONE_NAMES view
time zone table information, 2.3.9.2
V$TRANSACTION view
undo tablespaces information, 10.7
V$UNDOSTAT view
statistics for undo tablespaces, 10.7
V$VERSION view, 1.4.2
VALIDATE STRUCTURE clause
of ANALYZE statement, 13.2.2
VALIDATE STRUCTURE ONLINE clause
of ANALYZE statement, 13.2.2
varrays
storage parameters for, 14.3.6
verifying blocks
redo log files, 6.7
viewing
alerts, 14.1.2
views, 6.9
creating, 20.1.2
creating with errors, 20.1.2.3
Database Resource Manager, 24.10
DATABASE_PROPERTIES, 2.3.6
DBA_2PC_NEIGHBORS, 33.3.2
DBA_2PC_PENDING, 33.3.1
DBA_DB_LINKS, 30.5.1
DBA_RESUMABLE, 14.4.4.1
displaying dependencies of, 13.10.2.2
dropping, 20.1.7
file mapping views, 9.9.3.3
for monitoring datafiles, 9.10
FOR UPDATE clause and, 20.1.2
invalid, 20.1.4
join. See join views.
location transparency in distributed databases, 30.6.1
managing, 20.1, 20.1.3
managing privileges with, 30.6.1
name resolution in distributed databases, 29.4.9
ORDER BY clause and, 20.1.2
remote object security, 30.6.1
restrictions, 20.1.4
tables, 15.14
tablespace information, 8.13
USER_RESUMABLE, 14.4.4.1
using, 20.1.4
V$ARCHIVE, 7.8
V$ARCHIVE_DEST, 7.4.2
V$DATABASE, 7.8.1
V$LOG, 6.9, 7.8
V$LOG_HISTORY, 6.9
V$LOGFILE, 6.5.2, 6.9
V$OBJECT_USAGE, 16.4.3
wildcards in, 20.1.2.2
WITH CHECK OPTION, 20.1.2

W

WAIT keyword, in REBALANCE clause, 12.4.3
wildcards
in views, 20.1.2.2
window groups
creating, 27.7.2
disabling, 27.7.7
dropping, 27.7.3
dropping a member from, 27.7.5
enabling, 27.7.6
overview, 26.3.3
using, 27.7
window logs, 27.6
windows (Scheduler)
altering, 27.6.3
closing, 27.6.5
creating, 27.6.2
disabling, 27.6.7
dropping, 27.6.6
enabling, 27.6.8
opening, 27.6.4
overlapping, 27.6.9
overview, 26.3.2
using, 27.6
WORM devices
and read-only tablespaces, 8.6.3
WRH$_UNDOSTAT view, 10.7

X

XML DB
virtual folder for Automatic Storage Management, 12.7
XMLTypes
in transportable tablespaces, 8.12.3

Part V

Database Security

Part V addresses issues of user and privilege management affecting the security of the database. It includes the following chapters:


10 Managing the Undo Tablespace

This chapter describes how to manage the undo tablespace, which stores information used to roll back changes to the Oracle Database. It contains the following topics:


See Also:

Part III, "Automated File and Storage Management" for information about creating an undo tablespace whose datafiles are both created and managed by the Oracle Database server.

What Is Undo?

Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to:

When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.

Introduction to Automatic Undo Management

This section introduces the concepts of Automatic Undo Management and discusses the following topics:

Overview of Automatic Undo Management

Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing undo information and space. In this management mode, you create an undo tablespace, and the server automatically manages undo segments and space among the various active sessions.

You set the UNDO_MANAGEMENT initialization parameter to AUTO to enable automatic undo management. A default undo tablespace is then created at database creation. An undo tablespace can also be created explicitly. The methods of creating an undo tablespace are explained in "Creating an Undo Tablespace".

When the instance starts, the database automatically selects the first available undo tablespace. If no undo tablespace is available, then the instance starts without an undo tablespace and stores undo records in the SYSTEM tablespace. This is not recommended in normal circumstances, and an alert message is written to the alert log file to warn that the system is running without an undo tablespace.

If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter, as shown in this example:

UNDO_TABLESPACE = undotbs_01

In this case, if you have not already created the undo tablespace (in this example, undotbs_01), the STARTUP command fails. The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance in an Oracle Real Application Clusters environment.

The following is a summary of the initialization parameters for automatic undo management:

Initialization Parameter Description
UNDO_MANAGEMENT If AUTO, use automatic undo management. The default is MANUAL.
UNDO_TABLESPACE An optional dynamic parameter specifying the name of an undo tablespace. This parameter should be used only when the database has multiple undo tablespaces and you want to direct the database instance to use a particular undo tablespace.

When automatic undo management is enabled, if the initialization parameter file contains parameters relating to manual undo management, they are ignored.
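Taken together, a minimal initialization parameter file entry for automatic undo management might look like the following sketch (the tablespace name is illustrative):

```sql
UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = undotbs_01
```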


See Also:

Oracle Database Reference for complete descriptions of initialization parameters used in automatic undo management

Undo Retention

After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.

When automatic undo management is enabled, there is always a current undo retention period, which is the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Old (committed) undo information that is older than the current undo retention period is said to be expired. Old undo information with an age that is less than the current undo retention period is said to be unexpired.

Oracle Database automatically tunes the undo retention period based on undo tablespace size and system activity. You can specify a minimum undo retention period (in seconds) by setting the UNDO_RETENTION initialization parameter. The database makes its best effort to honor the specified minimum undo retention period, provided that the undo tablespace has space available for new transactions. When available space for new transactions becomes short, the database begins to overwrite expired undo. If the undo tablespace has no space for new transactions after all expired undo is overwritten, the database may begin overwriting unexpired undo information. If any of this overwritten undo information is required for consistent read in a current long-running query, the query could fail with the snapshot too old error message.

The following points explain the exact impact of the UNDO_RETENTION parameter on undo retention:

  • The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database may overwrite unexpired undo information when tablespace space becomes low.

  • For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the maximum size is reached, the database may begin to overwrite unexpired undo information.

Retention Guarantee

To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.


WARNING:

Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.


You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER TABLESPACE statement. You disable retention guarantee with the RETENTION NOGUARANTEE clause.

You can use the DBA_TABLESPACES view to determine the retention guarantee setting for the undo tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo tablespace).
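For example, assuming an existing undo tablespace named undotbs_01 (the name used in earlier examples), the guarantee can be enabled, verified, and disabled as follows:

```sql
-- Guarantee the minimum undo retention for this tablespace
ALTER TABLESPACE undotbs_01 RETENTION GUARANTEE;

-- Verify the setting
SELECT tablespace_name, retention
  FROM dba_tablespaces
 WHERE tablespace_name = 'UNDOTBS_01';

-- Return to the default (best-effort) behavior
ALTER TABLESPACE undotbs_01 RETENTION NOGUARANTEE;
```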

Automatic Tuning of Undo Retention

Oracle Database automatically tunes the undo retention period based on how the undo tablespace is configured.

  • If the undo tablespace is fixed size, the database tunes the retention period for the best possible undo retention for that tablespace size and the current system load. This tuned retention period can be significantly greater than the specified minimum retention period.

  • If the undo tablespace is configured with the AUTOEXTEND option, the database tunes the undo retention period to be somewhat longer than the longest-running query on the system at that time. Again, this tuned retention period can be greater than the specified minimum retention period.


Note:

Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low, unexpired LOB undo information may be overwritten.

You can determine the current retention period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4 days, the data is available in the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds.

select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
from v$undostat order by end_time;

BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
--------------- --------------- -------------------
04-FEB-05 00:01 04-FEB-05 00:11               12100
      ...                                          
07-FEB-05 23:21 07-FEB-05 23:31               86700
07-FEB-05 23:31 07-FEB-05 23:41               86700
07-FEB-05 23:41 07-FEB-05 23:51               86700
07-FEB-05 23:51 07-FEB-05 23:52               86700

576 rows selected.

See Oracle Database Reference for more information about V$UNDOSTAT.

Undo Retention Tuning and Alert Thresholds

For a fixed-size undo tablespace, the database calculates the maximum undo retention period based on database statistics and on the size of the undo tablespace. For optimal undo management, rather than tuning based on 100% of the tablespace size, the database tunes the undo retention period based on 85% of the tablespace size, or on the warning alert threshold percentage for space used, whichever is lower. (The warning alert threshold defaults to 85%, but can be changed.) Therefore, if you set the warning alert threshold of the undo tablespace below 85%, this may reduce the tuned length of the undo retention period. For more information on tablespace alert thresholds, see "Managing Tablespace Alerts".

Setting the Undo Retention Period

You set the undo retention period by setting the UNDO_RETENTION initialization parameter. This parameter specifies the desired minimum undo retention period in seconds. As described in "Undo Retention", the current undo retention period may be automatically tuned to be greater than UNDO_RETENTION, or, unless retention guarantee is enabled, less than UNDO_RETENTION if space is low.

To set the undo retention period:

The effect of an UNDO_RETENTION parameter change is immediate, but it can only be honored if the current undo tablespace has enough space.
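For example, because UNDO_RETENTION is dynamic, you can change it at runtime with ALTER SYSTEM; the value shown here (2400 seconds, or 40 minutes) is illustrative:

```sql
ALTER SYSTEM SET UNDO_RETENTION = 2400;
```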

Sizing the Undo Tablespace

You can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by using the Undo Advisor for a fixed-size tablespace.

Using Auto-Extensible Tablespaces

Oracle Database supports automatic extension of the undo tablespace to facilitate capacity planning of the undo tablespace in the production environment. When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension of the undo tablespace so that it automatically increases in size when more space is needed. You do so by including the AUTOEXTEND keyword when you create the undo tablespace.

Sizing Fixed-Size Undo Tablespaces

If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate needed capacity. You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR PL/SQL package. Enterprise Manager is the preferred method of accessing the advisor. For more information on using the Undo Advisor through Enterprise Manager, please refer to Oracle Database 2 Day DBA.

The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). It is therefore important that the AWR have adequate workload statistics available so that the Undo Advisor can make accurate recommendations. For newly created databases, adequate statistics may not be available immediately. In such cases, an auto-extensible undo tablespace can be used.

An adjustment to the collection interval and retention period for AWR statistics can affect the precision and the type of recommendations that the advisor produces. See "Automatic Workload Repository" for more information.

To use the Undo Advisor, you first estimate these two values:

  • The length of your expected longest running query

    After the database has been up for a while, you can view the Longest Running Query field on the Undo Management page of Enterprise Manager.

  • The longest interval that you will require for flashback operations

    For example, if you expect to run Flashback Queries for up to 48 hours in the past, your flashback requirement is 48 hours.

You then take the maximum of these two undo retention values and use that value to look up the required undo tablespace size on the Undo Advisor graph.

The Undo Advisor PL/SQL Interface

You can activate the Undo Advisor by creating an undo advisor task through the advisor framework. The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting the START_SNAPSHOT and END_SNAPSHOT parameters. In the following example, START_SNAPSHOT is 1 and END_SNAPSHOT is 2.

DECLARE
   tid    NUMBER;
   tname  VARCHAR2(30);
   oid    NUMBER;
BEGIN
   DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
   DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
   DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/

After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager. This information is also available in the DBA_ADVISOR_* data dictionary views.


See Also:


Managing Undo Tablespaces

This section describes the various steps involved in undo tablespace management and contains the following sections:

Creating an Undo Tablespace

There are two methods of creating an undo tablespace. The first method creates the undo tablespace when the CREATE DATABASE statement is issued. This occurs when you are creating a new database, and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO). The second method is used with an existing database. It uses the CREATE UNDO TABLESPACE statement.

You cannot create database objects in an undo tablespace. It is reserved for system-managed undo data.

Oracle Database enables you to create a single-file undo tablespace. Single-file, or bigfile, tablespaces are discussed in "Bigfile Tablespaces".

Using CREATE DATABASE to Create an Undo Tablespace

You can create a specific undo tablespace using the UNDO TABLESPACE clause of the CREATE DATABASE statement.

The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE statement. The undo tablespace is named undotbs_01 and one datafile, /u01/oracle/rbdb1/undo0101.dbf, is allocated for it.

CREATE DATABASE rbdb1
     CONTROLFILE REUSE
     .
     .
     .
     UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';

If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire CREATE DATABASE operation fails. You must clean up the database files, correct the error and retry the CREATE DATABASE operation.

The CREATE DATABASE statement also lets you create a single-file undo tablespace at database creation. This is discussed in "Supporting Bigfile Tablespaces During Database Creation".


See Also:

Oracle Database SQL Reference for the syntax for using the CREATE DATABASE statement to create an undo tablespace

Using the CREATE UNDO TABLESPACE Statement

The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace, but you can specify the DATAFILE clause.

This example creates the undotbs_02 undo tablespace with the AUTOEXTEND option:

CREATE UNDO TABLESPACE undotbs_02
     DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON;

You can create more than one undo tablespace, but only one of them can be active at any one time.


See Also:

Oracle Database SQL Reference for the syntax for using the CREATE UNDO TABLESPACE statement to create an undo tablespace

Altering an Undo Tablespace

Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of undo tablespaces are system managed, you need only be concerned with the following actions:

  • Adding a datafile

  • Renaming a datafile

  • Bringing a datafile online or taking it offline

  • Beginning or ending an open backup on a datafile

  • Enabling and disabling undo retention guarantee

These are also the only attributes you are permitted to alter.

If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files to it or resize existing datafiles.

The following example adds another datafile to undo tablespace undotbs_01:

ALTER TABLESPACE undotbs_01
     ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND ON NEXT 1M 
         MAXSIZE UNLIMITED;

You can use the ALTER DATABASE...DATAFILE statement to resize or extend a datafile.
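For instance, to resize the datafile added in the preceding example (the target size here is illustrative):

```sql
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' RESIZE 100M;
```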


See Also:


Dropping an Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace undotbs_01:

DROP TABLESPACE undotbs_01;

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within retention period), you must be careful not to drop an undo tablespace if undo information is needed by some existing queries.

DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE...INCLUDING CONTENTS. All contents of the undo tablespace are removed.


See Also:

Oracle Database SQL Reference for DROP TABLESPACE syntax

Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.

The following statement switches to a new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.

If any of the following conditions exist for the tablespace being switched to, an error is reported and no switching occurs:

The database is online while the switch operation is performed, and user transactions can be executed while this command is being executed. When the switch operation completes successfully, all transactions started after the switch operation began are assigned to transaction tables in the new undo tablespace.

The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions in the old undo tablespace, the old undo tablespace enters a PENDING OFFLINE status. In this mode, existing transactions can continue to execute, but undo records for new user transactions cannot be stored in this undo tablespace.

An undo tablespace can remain in this PENDING OFFLINE mode even after the switch operation completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace is available for other instances (in an Oracle Real Application Clusters environment).

If the UNDO_TABLESPACE parameter value is set to '' (two single quotes), then the current undo tablespace is switched out and the next available undo tablespace is switched in. Use this statement with care because there may be no undo tablespace available.

The following example unassigns the current undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = '';

Establishing User Quotas for Undo Space

The Oracle Database Resource Manager can be used to establish user quotas for undo space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space consumed by a group of users (resource consumer group).

You can specify an undo pool for each consumer group. An undo pool controls the amount of total undo that can be generated by a consumer group. When the total undo generated by a consumer group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other members of the consumer group can perform further updates until undo space is freed from the pool.

When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.
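The following sketch shows how an undo pool might be assigned to a consumer group with the DBMS_RESOURCE_MANAGER package. The plan and consumer group names are hypothetical and assumed to already exist; the UNDO_POOL quota is specified in kilobytes:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',   -- hypothetical resource plan
    group_or_subplan => 'BATCH_GROUP',    -- hypothetical consumer group
    comment          => 'Limit undo generated by batch users',
    undo_pool        => 10240);           -- undo quota (KB)
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```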

Migrating to Automatic Undo Management

If you are currently using rollback segments to manage undo space, Oracle strongly recommends that you migrate your database to automatic undo management. Oracle Database provides a function that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. DBA privileges are required to execute this function:

DECLARE
   utbsiz_in_MB NUMBER;
BEGIN
   utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
END;
/

The function returns the sizing information directly.

Viewing Information About Undo

This section lists views that are useful for viewing information about undo space in the automatic undo management mode and provides some examples. In addition to views listed here, you can obtain information from the views available for viewing tablespace and datafile information. Please refer to "Viewing Datafile Information" for information on getting information about those views.

Oracle Database also provides proactive help in managing tablespace disk space use by alerting you when tablespaces run low on available space. Please refer to "Managing Tablespace Alerts" for information on how to set alert thresholds for the undo tablespace.

In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo Advisor Page of Enterprise Manager to get more information about the undo tablespace.

The following dynamic performance views are useful for obtaining space information about the undo tablespace:

View Description
V$UNDOSTAT Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. The database also uses this information to help tune undo usage in the system. This view is meaningful only in automatic undo management mode.
V$ROLLSTAT For automatic undo management mode, information reflects behavior of the undo segments in the undo tablespace
V$TRANSACTION Contains undo segment information
DBA_UNDO_EXTENTS Shows the status and size of each extent in the undo tablespace.
DBA_HIST_UNDOSTAT Contains statistical snapshots of V$UNDOSTAT information. Please refer to Oracle Database 2 Day DBA for more information.
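As one example of using these views, a quick query against DBA_UNDO_EXTENTS can show how much undo space is currently active, unexpired, or expired (a sketch):

```sql
SELECT status, COUNT(*) AS extents, SUM(bytes)/1024/1024 AS mb
  FROM dba_undo_extents
 GROUP BY status;
```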


See Also:

Oracle Database Reference for complete descriptions of the views used in automatic undo management mode

The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for undo space consumption, transaction concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in the instance.

Each row in the view contains statistics collected in the instance for a ten-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle.

The following example shows the results of a query on the V$UNDOSTAT view.

  SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
         TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
         UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
         FROM v$UNDOSTAT WHERE rownum <= 144;
  
  BEGIN_TIME          END_TIME               UNDOTSN   UNDOBLKS   TXNCOUNT     MAXCON
  ------------------- ------------------- ---------- ---------- ---------- ----------
  10/28/2004 14:25:12 10/28/2004 14:32:17          8         74   12071108          3
  10/28/2004 14:15:12 10/28/2004 14:25:12          8         49   12070698          2
  10/28/2004 14:05:12 10/28/2004 14:15:12          8        125   12070220          1
  10/28/2004 13:55:12 10/28/2004 14:05:12          8         99   12066511          3
  ...
  10/27/2004 14:45:12 10/27/2004 14:55:12          8         15   11831676          1
  10/27/2004 14:35:12 10/27/2004 14:45:12          8        154   11831165          2
 
  144 rows selected.

The preceding example shows how undo space is consumed in the system for the previous 24 hours from the time 14:35:12 on 10/27/2004.


6 Managing the Redo Log

This chapter explains how to manage the online redo log. The current redo log is always online, unlike archived copies of a redo log. Therefore, the online redo log is usually referred to as simply the redo log.

This chapter contains the following topics:

What Is the Redo Log?

The most crucial structure for recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.

Redo Threads

In the context of multiple database instances, the redo log for each database instance is also referred to as a redo thread. In typical configurations, only one database instance accesses an Oracle Database, so only one thread is present. In an Oracle Real Application Clusters environment, however, two or more instances concurrently access a single database and each instance has its own thread of redo. A separate redo thread for each instance avoids contention for a single set of redo log files, thereby eliminating a potential performance bottleneck.

This chapter describes how to configure and manage the redo log on a standard single-instance Oracle Database. The thread number can be assumed to be 1 in all discussions and examples of statements. For information about redo log groups in a Real Application Clusters environment, please refer to Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

Redo Log Contents

Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the undo segment data block, and the transaction table of the undo segments.

Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.

Redo records are buffered in a circular fashion in the redo log buffer of the SGA (see "How Oracle Database Writes to the Redo Log") and are written to one of the redo log files by the Log Writer (LGWR) database background process. Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

Redo records can also be written to a redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to a redo log file, even though some redo records may not be committed. If necessary, the database can roll back these changes.

How Oracle Database Writes to the Redo Log

The redo log of a database consists of two or more redo log files. The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if the database is in ARCHIVELOG mode). See "Managing Archived Redo Logs" for more information.

LGWR writes to redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again. Figure 6-1 illustrates the circular writing of the redo log file. The numbers next to each line indicate the sequence in which LGWR writes to each redo log file.

Filled redo log files are available to LGWR for reuse depending on whether archiving is enabled.

  • If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log file is available after the changes recorded in it have been written to the datafiles.

  • If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file is available to LGWR after the changes recorded in it have been written to the datafiles and the file has been archived.

Figure 6-1 Reuse of Redo Log Files by LGWR

Description of Figure 6-1  follows
Description of "Figure 6-1 Reuse of Redo Log Files by LGWR"

Active (Current) and Inactive Redo Log Files

Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.

Redo log files that are required for instance recovery are called active redo log files. Redo log files that are no longer required for instance recovery are called inactive redo log files.

If you have enabled archiving (the database is in ARCHIVELOG mode), then the database cannot reuse or overwrite an active online log file until one of the archiver background processes (ARCn) has archived its contents. If archiving is disabled (the database is in NOARCHIVELOG mode), then when the last redo log file is full, LGWR continues by overwriting the first available active file.

Log Switches and Log Sequence Numbers

A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. However, you can configure log switches to occur at regular intervals, regardless of whether the current redo log file is completely filled. You can also force log switches manually.

Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it. When the database archives redo log files, the archived log retains its log sequence number. A redo log file that is cycled back for use is given the next available log sequence number.
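As a quick check (a sketch, assuming a privileged SQL*Plus session), you can watch log sequence numbers advance by querying V$LOG before and after a manual switch:

```sql
-- Observe the sequence number currently assigned to each group.
SELECT GROUP#, THREAD#, SEQUENCE#, STATUS
  FROM V$LOG
 ORDER BY SEQUENCE#;

-- Force a log switch, then rerun the query: the group that becomes
-- CURRENT is assigned the next available log sequence number.
ALTER SYSTEM SWITCH LOGFILE;
```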

Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, the database properly applies redo log files in ascending order by using the log sequence number of the necessary archived and redo log files.

Planning the Redo Log

This section provides guidelines you should consider when configuring a database instance redo log and contains the following topics:

Multiplexing Redo Log Files

To protect against a failure involving the redo log itself, Oracle Database allows a multiplexed redo log, meaning that two or more identical copies of the redo log can be automatically maintained in separate locations. For the most benefit, these locations should be on separate disks. Even if all copies of the redo log are on the same disk, however, the redundancy can help protect against I/O errors, file corruption, and so on. When redo log files are multiplexed, LGWR concurrently writes the same redo log information to multiple identical redo log files, thereby eliminating a single point of redo log failure.

Multiplexing is implemented by creating groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of the group. Each redo log group is defined by a number, such as group 1, group 2, and so on.

Figure 6-2 Multiplexed Redo Log Files

Description of Figure 6-2  follows
Description of "Figure 6-2 Multiplexed Redo Log Files"

In Figure 6-2, A_LOG1 and B_LOG1 are both members of Group 1, A_LOG2 and B_LOG2 are both members of Group 2, and so forth. Each member in a group must be exactly the same size.

Each member of a log file group is concurrently active—that is, concurrently written to by LGWR—as indicated by the identical log sequence numbers assigned by LGWR. In Figure 6-2, first LGWR writes concurrently to both A_LOG1 and B_LOG1. Then it writes concurrently to both A_LOG2 and B_LOG2, and so on. LGWR never writes concurrently to members of different groups (for example, to A_LOG1 and B_LOG2).


Note:

Oracle recommends that you multiplex your redo log files. The loss of the log file data can be catastrophic if recovery is required. Note that when you multiplex the redo log, the database must increase the amount of I/O that it performs. Depending on your configuration, this may impact overall database performance.

Responding to Redo Log Failure

Whenever LGWR cannot write to a member of a group, the database marks that member as INVALID and writes an error message to the LGWR trace file and to the database alert log to indicate the problem with the inaccessible files. The specific reaction of LGWR when a redo log member is unavailable depends on the reason for the lack of availability, as summarized in the table that follows.

Condition and LGWR action:

  • LGWR can successfully write to at least one member in a group: Writing proceeds as normal. LGWR writes to the available members of a group and ignores the unavailable members.

  • LGWR cannot access the next group at a log switch because the group needs to be archived: Database operation temporarily halts until the group becomes available or until the group is archived.

  • All members of the next group are inaccessible to LGWR at a log switch because of media failure: Oracle Database returns an error, and the database instance shuts down. In this case, you may need to perform media recovery on the database from the loss of a redo log file.

    If the database checkpoint has moved beyond the lost redo log, media recovery is not necessary, because the database has saved the data recorded in the redo log to the datafiles. You need only drop the inaccessible redo log group. If the database did not archive the bad log, use ALTER DATABASE CLEAR UNARCHIVED LOGFILE to disable archiving before the log can be dropped.

  • All members of a group suddenly become inaccessible to LGWR while it is writing to them: Oracle Database returns an error and the database instance immediately shuts down. In this case, you may need to perform media recovery. If the media containing the log is not actually lost (for example, if the drive for the log was inadvertently turned off), media recovery may not be needed. In this case, you need only turn the drive back on and let the database perform automatic instance recovery.

Legal and Illegal Configurations

In most cases, a multiplexed redo log should be symmetrical: all groups of the redo log should have the same number of members. However, the database does not require that a multiplexed redo log be symmetrical. For example, one group can have only one member, and other groups can have two members. This configuration protects against disk failures that temporarily affect some redo log members but leave others intact.

The only requirement for an instance redo log is that it have at least two groups. Figure 6-3 shows legal and illegal multiplexed redo log configurations. The second configuration is illegal because it has only one group.

Figure 6-3 Legal and Illegal Multiplexed Redo Log Configuration

Description of Figure 6-3  follows
Description of "Figure 6-3 Legal and Illegal Multiplexed Redo Log Configuration"

Placing Redo Log Members on Different Disks

When setting up a multiplexed redo log, place members of a group on different physical disks. If a single disk fails, then only one member of a group becomes unavailable to LGWR and other members remain accessible to LGWR, so the instance can continue to function.

If you archive the redo log, spread redo log members across disks to eliminate contention between the LGWR and ARCn background processes. For example, if you have two groups of multiplexed redo log members (a duplexed redo log), place each member on a different disk and set your archiving destination to a fifth disk. Doing so will avoid contention between LGWR (writing to the members) and ARCn (reading the members).
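As a sketch of this layout (the disk paths /diska and /diskb, the group numbers, and the sizes are hypothetical), a duplexed redo log with the members of each group spread across two disks might be created as follows:

```sql
-- Each group gets one member on each of two physically separate disks,
-- so the loss of either disk leaves every group with a usable member.
ALTER DATABASE
  ADD LOGFILE GROUP 1 ('/diska/logs/log1a.rdo', '/diskb/logs/log1b.rdo')
      SIZE 50M;
ALTER DATABASE
  ADD LOGFILE GROUP 2 ('/diska/logs/log2a.rdo', '/diskb/logs/log2b.rdo')
      SIZE 50M;
```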

Datafiles should also be placed on different disks from redo log files to reduce contention in writing data blocks and redo records.

Setting the Size of Redo Log Members

When setting the size of redo log files, consider whether you will be archiving the redo log. Redo log files should be sized so that a filled group can be archived to a single unit of offline storage media (such as a tape or disk), with the least amount of space on the medium left unused. For example, suppose only one filled redo log group can fit on a tape and 49% of the tape storage capacity remains unused. In this case, it is better to decrease the size of the redo log files slightly, so that two log groups could be archived on each tape.

All members of the same multiplexed redo log group must be the same size. Members of different groups can have different sizes. However, there is no advantage in varying file size between groups. If checkpoints are not set to occur between log switches, make all groups the same size to guarantee that checkpoints occur at regular intervals.

The minimum size permitted for a redo log file is 4 MB.


See Also:

Your operating system–specific Oracle documentation. The default size of redo log files is operating system dependent.

Choosing the Number of Redo Log Files

The best way to determine the appropriate number of redo log files for a database instance is to test different configurations. The optimum configuration has the fewest groups possible without hampering LGWR from writing redo log information.

In some cases, a database instance may require only two groups. In other situations, a database instance may require additional groups to guarantee that a recycled group is always available to LGWR. During testing, the easiest way to determine whether the current redo log configuration is satisfactory is to examine the contents of the LGWR trace file and the database alert log. If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.
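One way to check for such waits from SQL (a sketch; the wait event names shown are the standard "log file switch" events) is to query V$SYSTEM_EVENT:

```sql
-- Waits in this family (for example, 'log file switch (checkpoint
-- incomplete)' and 'log file switch (archiving needed)') indicate that
-- LGWR had to wait for a recycled group to become available.
SELECT EVENT, TOTAL_WAITS, TIME_WAITED
  FROM V$SYSTEM_EVENT
 WHERE EVENT LIKE 'log file switch%';
```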

Consider the parameters that can limit the number of redo log files before setting up or altering the configuration of an instance redo log. The following parameters limit the number of redo log files that you can add to a database:

  • The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of redo log files for each database. Group values can range from 1 to MAXLOGFILES. When the compatibility level is set earlier than 10.2.0, the only way to override this upper limit is to re-create the database or its control file. Therefore, it is important to consider this limit before creating a database. When compatibility is set to 10.2.0 or later, you can exceed the MAXLOGFILES limit, and the control files expand as needed. If MAXLOGFILES is not specified for the CREATE DATABASE statement, then the database uses an operating system specific default value.

  • The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group. As with MAXLOGFILES, the only way to override this upper limit is to re-create the database or control file. Therefore, it is important to consider this limit before creating a database. If no MAXLOGMEMBERS parameter is specified for the CREATE DATABASE statement, then the database uses an operating system default value.
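The fragment below sketches where these limits are declared (the database name, filenames, and values are illustrative only):

```sql
-- Illustrative CREATE DATABASE fragment: two initial duplexed groups,
-- with room to grow to 16 groups of up to 3 members each.
CREATE DATABASE mydb
   LOGFILE GROUP 1 ('/diska/logs/log1a.rdo', '/diskb/logs/log1b.rdo') SIZE 50M,
           GROUP 2 ('/diska/logs/log2a.rdo', '/diskb/logs/log2b.rdo') SIZE 50M
   MAXLOGFILES 16      -- maximum number of redo log groups
   MAXLOGMEMBERS 3;    -- maximum number of members for each group
```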




Controlling Archive Lag

You can force all enabled redo log threads to switch their current logs at regular time intervals. In a primary/standby database configuration, changes are made available to the standby database by archiving redo logs at the primary site and then shipping them to the standby database. The changes that are being applied by the standby database can lag behind the changes that are occurring on the primary database, because the standby database must wait for the changes in the primary database redo log to be archived (into the archived redo log) and then shipped to it. To limit this lag, you can set the ARCHIVE_LAG_TARGET initialization parameter. Setting this parameter lets you specify in seconds how long that lag can be.

Setting the ARCHIVE_LAG_TARGET Initialization Parameter

When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause the database to examine the current redo log of the instance periodically. If the following conditions are met, then the instance will switch the log:

  • The current log was created prior to n seconds ago, and the estimated archival time for the current log is m seconds (proportional to the number of redo blocks used in the current log), where n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter.

  • The current log contains redo records.

In an Oracle Real Application Clusters environment, the instance also causes other threads to switch and archive their logs if they are falling behind. This can be particularly useful when one instance in the cluster is more idle than the other instances (as when you are running a 2-node primary/secondary configuration of Oracle Real Application Clusters).

The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many seconds of redo the standby could lose in the event of a primary shutdown or failure if the Oracle Data Guard environment is not configured in a no-data-loss mode. It also provides an upper limit of how long (in seconds) the current log of the primary database can span. Because the estimated archival time is also considered, this is not the exact log switch time.

The following initialization parameter setting sets the log switch interval to 30 minutes (a typical value).

ARCHIVE_LAG_TARGET = 1800

A value of 0 disables this time-based log switching functionality. This is the default setting.

You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived.

ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER SYSTEM SET statement.
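For example, a dynamic change (the values shown are illustrative) might look like this:

```sql
-- Set a 30-minute upper bound on the time span of the current log,
-- without restarting the instance.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;

-- Disable time-based log switching (the default behavior).
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 0;
```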


Caution:

The ARCHIVE_LAG_TARGET parameter must be set to the same value in all instances of an Oracle Real Application Clusters environment. Failing to do so results in unpredictable behavior.

Factors Affecting the Setting of ARCHIVE_LAG_TARGET

Consider the following factors when determining if you want to set the ARCHIVE_LAG_TARGET parameter and in determining the value for this parameter.

  • Overhead of switching (as well as archiving) logs

  • How frequently normal log switches occur as a result of log full conditions

  • How much redo loss is tolerated in the standby database

Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already occur more frequently than the interval specified. However, in the case of irregularities of redo generation speed, the interval does provide an upper limit for the time range each current log covers.

If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there can be a negative impact on performance. This can force frequent log switches. Set the parameter to a reasonable value so as not to degrade the performance of the primary database.

Creating Redo Log Groups and Members

Plan the redo log of a database and create all required groups and members of redo log files during database creation. However, there are situations where you might want to create additional groups or members. For example, adding groups to a redo log can correct redo log group availability problems.

To create new redo log groups and members, you must have the ALTER DATABASE system privilege. A database can have up to MAXLOGFILES groups.


See Also:

Oracle Database SQL Reference for a complete description of the ALTER DATABASE statement

Creating Redo Log Groups

To create a new group of redo log files, use the SQL statement ALTER DATABASE with the ADD LOGFILE clause.

The following statement adds a new group of redo logs to the database:

ALTER DATABASE
  ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;

Note:

Fully specify the filenames of new log members to indicate where the operating system files should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system.

You can also specify the number that identifies the group using the GROUP clause:

ALTER DATABASE 
  ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
      SIZE 500K;

Using group numbers can make administering redo log groups easier. However, the group number must be between 1 and MAXLOGFILES. Do not skip redo log file group numbers (that is, do not number your groups 10, 20, 30, and so on), or you will consume unnecessary space in the control files of the database.

Creating Redo Log Members

In some cases, it might not be necessary to create a complete group of redo log files. A group could already exist, but not be complete because one or more members of the group were dropped (for example, because of a disk failure). In this case, you can add new members to an existing group.

To create new redo log members for an existing group, use the SQL statement ALTER DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a new redo log member to redo log group number 2:

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

Notice that filenames must be specified, but sizes need not be. The size of the new members is determined from the size of the existing members of the group.

When using the ALTER DATABASE statement, you can alternatively identify the target group by specifying all of the other members of the group in the TO clause, as shown in the following example:

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
    TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo'); 

Note:

Fully specify the filenames of new log members to indicate where the operating system files should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system. You may also note that the status of the new log member is shown as INVALID. This is normal and it will change to active (blank) when it is first used.

Relocating and Renaming Redo Log Members

You can use operating system commands to relocate redo logs, then use the ALTER DATABASE statement to make their new names (locations) known to the database. This procedure is necessary, for example, if the disk currently used for some redo log files is going to be removed, or if datafiles and a number of redo log files are stored on the same disk and should be separated to reduce contention.

To rename redo log members, you must have the ALTER DATABASE system privilege. Additionally, you might also need operating system privileges to copy files to the desired location and privileges to open and back up the database.

Before relocating your redo logs, or making any other structural changes to the database, completely back up the database in case you experience problems while performing the operation. As a precaution, after renaming or relocating a set of redo log files, immediately back up the database control file.

Use the following steps for relocating redo logs. The example used to illustrate these steps assumes that the log members /diska/logs/log1a.rdo and /diska/logs/log2a.rdo are being moved to /diskc/logs and renamed log1c.rdo and log2c.rdo.

Steps for Renaming Redo Log Members 

  1. Shut down the database.

    SHUTDOWN
    
    
  2. Copy the redo log files to the new location.

    Operating system files, such as redo log members, must be copied using the appropriate operating system commands. See your operating system specific documentation for more information about copying files.


    Note:

    You can execute an operating system command to copy a file (or perform other operating system commands) without exiting SQL*Plus by using the HOST command. Some operating systems allow you to use a character in place of the word HOST. For example, you can use an exclamation point (!) in UNIX.

    The following example uses operating system commands (UNIX) to move the redo log members to a new location:

    mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
    mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
    
    
  3. Start up the database and mount it, but do not open it.

    CONNECT / as SYSDBA
    STARTUP MOUNT
    
    
  4. Rename the redo log members.

    Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database redo log files.

    ALTER DATABASE 
      RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo' 
               TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
    
    
  5. Open the database for normal operation.

    The redo log alterations take effect when the database is opened.

    ALTER DATABASE OPEN; 
    

Dropping Redo Log Groups and Members

In some cases, you may want to drop an entire group of redo log members. For example, you want to reduce the number of groups in an instance redo log. In a different case, you may want to drop one or more specific redo log members. For example, if a disk failure occurs, you may need to drop all the redo log files on the failed disk so that the database does not try to write to the inaccessible files. In other situations, particular redo log files become unnecessary. For example, a file might be stored in an inappropriate location.

Dropping Log Groups

To drop a redo log group, you must have the ALTER DATABASE system privilege. Before dropping a redo log group, consider the following restrictions and precautions:

  • An instance requires at least two groups of redo log files, regardless of the number of members in the groups. (A group comprises one or more members.)

  • You can drop a redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.

  • Make sure a redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.

    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    
       GROUP# ARC STATUS
    --------- --- ----------------
            1 YES ACTIVE
            2 NO  CURRENT
            3 YES INACTIVE
            4 YES INACTIVE
    
    

Drop a redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.

The following statement drops redo log group number 3:

ALTER DATABASE DROP LOGFILE GROUP 3;

When a redo log group is dropped from the database, and you are not using the Oracle-managed files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping a redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log files.

When using Oracle-managed files, the cleanup of operating system files is done automatically for you.

Dropping Redo Log Members

To drop a redo log member, you must have the ALTER DATABASE system privilege. Consider the following restrictions and precautions before dropping individual redo log members:

  • It is permissible to drop redo log files so that a multiplexed redo log becomes temporarily asymmetric. For example, if you use duplexed groups of redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately so that all groups have at least two members, and thereby eliminate the single point of failure possible for the redo log.

  • An instance always requires at least two valid groups of redo log files, regardless of the number of members in the groups. (A group comprises one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the member until the other members become valid. To see a redo log file status, use the V$LOGFILE view. A redo log file becomes INVALID if the database cannot access it. It becomes STALE if the database suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group.

  • You can drop a redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur.

  • Make sure the group to which a redo log member belongs is archived (if archiving is enabled) before dropping the member. To see whether this has happened, use the V$LOG view.

To drop specific inactive redo log members, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause.

The following statement drops the redo log /oracle/dbs/log3c.rdo:

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file.

To drop a member of an active group, you must first force a log switch.

Forcing Log Switches

A log switch occurs when LGWR stops writing to one redo log group and starts writing to another. By default, a log switch occurs automatically when the current redo log file group fills.

You can force a log switch to make the currently active group inactive and available for redo log maintenance operations. For example, you want to drop the currently active group, but are not able to do so until the group is inactive. You may also wish to force a log switch if the currently active group needs to be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large redo log files that take a long time to fill.

To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM statement with the SWITCH LOGFILE clause.

The following statement forces a log switch:

ALTER SYSTEM SWITCH LOGFILE;

Verifying Blocks in Redo Log Files

You can configure the database to use checksums to verify blocks in the redo log files. If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the database computes a checksum for each database block when it is written to disk, including each redo log block as it is being written to the current log. The checksum is stored in the header of the block.

Oracle Database uses the checksum to detect corruption in a redo log block. The database verifies the redo log block when the block is read from an archived log during recovery and when it writes the block to an archive log file. An error is raised and written to the alert log if corruption is detected.

If corruption is detected in a redo log block while trying to archive it, the system attempts to read the block from another member in the group. If the block is corrupted in all members of the redo log group, then archiving cannot proceed.

The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be changed dynamically using the ALTER SYSTEM statement.
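For example (a sketch, assuming a privileged SQL*Plus session):

```sql
-- Check the current setting (SHOW PARAMETER is a SQL*Plus command).
SHOW PARAMETER DB_BLOCK_CHECKSUM

-- Enable or disable block checksums dynamically, instance-wide.
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;
```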


Note:

There is a slight overhead and decrease in database performance with DB_BLOCK_CHECKSUM enabled. Monitor your database performance to decide if the benefit of using data block checksums to detect corruption outweighs the performance impact.


See Also:

Oracle Database Reference for a description of the DB_BLOCK_CHECKSUM initialization parameter

Clearing a Redo Log File

A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database.

The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible: when there are only two log groups, and when the corrupt redo log file belongs to the current group.

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover.


Note:

If you clear an unarchived redo log file, you should make another backup of the database.

If you want to clear an unarchived redo log that is needed to bring an offline tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR LOGFILE statement.
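For example (the group number is illustrative):

```sql
-- Clear an unarchived group even though one of its logs is needed to
-- bring an offline tablespace online.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3 UNRECOVERABLE DATAFILE;
```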

If you clear a redo log needed to bring an offline tablespace online, you will not be able to bring the tablespace online again. You will have to drop the tablespace or perform an incomplete recovery. Note that tablespaces taken offline with the NORMAL option do not require recovery.

Viewing Redo Log Information

The following views provide information on redo logs.

View Description
V$LOG Displays the redo log file information from the control file
V$LOGFILE Identifies redo log groups and members and member status
V$LOG_HISTORY Contains log history information

The following query returns the control file information about the redo log for a database.

SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES  MEMBERS  ARC STATUS     FIRST_CHANGE# FIRST_TIM
------ ------- ----- -------  -------  --- ---------  ------------- ---------
     1       1 10605 1048576        1  YES ACTIVE          11515628 16-APR-00
     2       1 10606 1048576        1  NO  CURRENT         11517595 16-APR-00
     3       1 10603 1048576        1  YES INACTIVE        11511666 16-APR-00
     4       1 10604 1048576        1  YES INACTIVE        11513647 16-APR-00

To see the names of all of the members of a group, use a query similar to the following:

SELECT * FROM V$LOGFILE;

GROUP#   STATUS  MEMBER
------  -------  ----------------------------------
     1           D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2           D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3           D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4           D:\ORANT\ORADATA\IDDB2\REDO01.LOG

If STATUS is blank for a member, then the file is in use.
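The V$LOG_HISTORY view can be queried in the same manner; a sketch that lists log switch history:

```sql
-- Show the log sequence history recorded in the control file
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, FIRST_TIME
    FROM V$LOG_HISTORY;
```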


See Also:

Oracle Database Reference for detailed information about these views


20 Managing Views, Sequences, and Synonyms

This chapter describes the management of views, sequences, and synonyms and contains the following topics:

Managing Views

This section describes aspects of managing views, and contains the following topics:

About Views

A view is a logical representation of another table or combination of tables. A view derives its data from the tables on which it is based. These tables are called base tables. Base tables might in turn be actual tables or might be views themselves. All operations performed on a view actually affect the base table of the view. You can use views in almost the same way as tables. You can query, update, insert into, and delete from views, just as you can standard tables.

Views can provide a different representation (such as subsets or supersets) of the data that resides within other tables and views. Views are very powerful because they allow you to tailor the presentation of data to different types of users.


See Also:

Oracle Database Concepts for a more complete description of views

Creating Views

To create a view, you must meet the following requirements:

  • To create a view in your schema, you must have the CREATE VIEW privilege. To create a view in another user's schema, you must have the CREATE ANY VIEW system privilege. You can acquire these privileges explicitly or through a role.

  • The owner of the view (whether it is you or another user) must have been explicitly granted privileges to access all objects referenced in the view definition. The owner cannot have obtained these privileges through roles. Also, the functionality of the view depends on the privileges of the view owner. For example, if the owner of the view has only the INSERT privilege for Scott's emp table, then the view can be used only to insert new rows into the emp table, not to SELECT, UPDATE, or DELETE rows.

  • If the owner of the view intends to grant access to the view to other users, the owner must have received the object privileges to the base objects with the GRANT OPTION or the system privileges with the ADMIN OPTION.

You can create views using the CREATE VIEW statement. Each view is defined by a query that references tables, materialized views, or other views. As with all subqueries, the query that defines a view cannot contain the FOR UPDATE clause.

The following statement creates a view on a subset of data in the emp table:

CREATE VIEW sales_staff AS
      SELECT empno, ename, deptno
      FROM emp
      WHERE deptno = 10
    WITH CHECK OPTION CONSTRAINT sales_staff_cnst;
      

The query that defines the sales_staff view references only rows in department 10. Furthermore, the CHECK OPTION creates the view with the constraint (named sales_staff_cnst) that INSERT and UPDATE statements issued against the view cannot result in rows that the view cannot select. For example, the following INSERT statement successfully inserts a row into the emp table by means of the sales_staff view, which contains all rows with department number 10:

INSERT INTO sales_staff VALUES (7584, 'OSTER', 10);

However, the following INSERT statement returns an error because it attempts to insert a row for department number 30, which cannot be selected using the sales_staff view:

INSERT INTO sales_staff VALUES (7591, 'WILLIAMS', 30);

The view could have been constructed specifying the WITH READ ONLY clause, which prevents any updates, inserts, or deletes from being done to the base table through the view. If no WITH clause is specified, the view, with some restrictions, is inherently updatable.
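For example, a read-only variant of the view (the name sales_staff_ro is illustrative) could be created as follows:

```sql
-- No DML is permitted through this view
CREATE VIEW sales_staff_ro AS
      SELECT empno, ename, deptno
      FROM emp
      WHERE deptno = 10
    WITH READ ONLY;
```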


See Also:

Oracle Database SQL Reference for syntax and semantics of the CREATE VIEW statement

Join Views

You can also create views that specify more than one base table or view in the FROM clause. These are called join views. The following statement creates the division1_staff view that joins data from the emp and dept tables:

CREATE VIEW division1_staff AS
      SELECT ename, empno, job, dname
      FROM emp, dept
      WHERE emp.deptno IN (10, 30)
         AND emp.deptno = dept.deptno;

An updatable join view is a join view where UPDATE, INSERT, and DELETE operations are allowed. See "Updating a Join View" for further discussion.

Expansion of Defining Queries at View Creation Time

When a view is created, Oracle Database expands any wildcard (*) in a top-level view query into a column list. The resulting query is stored in the data dictionary; any subqueries are left intact. The column names in an expanded column list are enclosed in quote marks to account for the possibility that the columns of the base object were originally entered with quotes and require them for the query to be syntactically correct.

As an example, assume that the dept view is created as follows:

CREATE VIEW dept AS SELECT * FROM scott.dept;

The database stores the defining query of the dept view as:

SELECT "DEPTNO", "DNAME", "LOC" FROM scott.dept;

Views created with errors do not have wildcards expanded. However, if the view is eventually compiled without errors, wildcards in the defining query are expanded.

Creating Views with Errors

If there are no syntax errors in a CREATE VIEW statement, the database can create the view even if the defining query of the view cannot be executed. In this case, the view is considered "created with errors." For example, when a view is created that refers to a nonexistent table or an invalid column of an existing table, or when the view owner does not have the required privileges, the view can be created anyway and entered into the data dictionary. However, the view is not yet usable.

To create a view with errors, you must include the FORCE clause of the CREATE VIEW statement.

CREATE FORCE VIEW AS ...;

By default, views with errors are created as INVALID. When you try to create such a view, the database returns a message indicating the view was created with errors. If conditions later change so that the query of an invalid view can be executed, the view can be recompiled and become valid (usable). For information about changing conditions and their impact on views, see "Managing Object Dependencies".
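As a sketch (the view and table names are hypothetical), a view can be created before its base table exists and recompiled once the table is in place:

```sql
-- Created with errors: the base table does not exist yet
CREATE FORCE VIEW pending_orders_v AS
    SELECT order_id, status FROM orders_future;

-- After orders_future is created, make the view valid
ALTER VIEW pending_orders_v COMPILE;
```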

Replacing Views

To replace a view, you must have all of the privileges required to drop and create a view. If the definition of a view must change, the view must be replaced; you cannot use an ALTER VIEW statement to change the definition of a view. You can replace views in the following ways:

  • You can drop and re-create the view.


    Caution:

    When a view is dropped, all grants of corresponding object privileges are revoked from roles and users. After the view is re-created, privileges must be regranted.

  • You can redefine the view with a CREATE VIEW statement that contains the OR REPLACE clause. The OR REPLACE clause replaces the current definition of a view and preserves the current security authorizations. For example, assume that you created the sales_staff view as shown earlier, and, in addition, you granted several object privileges to roles and other users. However, now you need to redefine the sales_staff view to change the department number specified in the WHERE clause. You can replace the current version of the sales_staff view with the following statement:

    CREATE OR REPLACE VIEW sales_staff AS
         SELECT empno, ename, deptno
         FROM emp
         WHERE deptno = 30
         WITH CHECK OPTION CONSTRAINT sales_staff_cnst;
    
    

Before replacing a view, consider the following effects:

  • Replacing a view replaces the view definition in the data dictionary. The underlying objects referenced by the view are not affected.

  • If a constraint in the CHECK OPTION was previously defined but not included in the new view definition, the constraint is dropped.

  • All views dependent on a replaced view become invalid (not usable). In addition, dependent PL/SQL program units may become invalid, depending on what was changed in the new version of the view. For example, if only the WHERE clause of the view changes, dependent PL/SQL program units remain valid. However, if any changes are made to the number of view columns or to the view column names or data types, dependent PL/SQL program units are invalidated. See "Managing Object Dependencies" for more information on how the database manages such dependencies.

Using Views in Queries

To issue a query or an INSERT, UPDATE, or DELETE statement against a view, you must have the SELECT, INSERT, UPDATE, or DELETE object privilege for the view, respectively, either explicitly or through a role.

Views can be queried in the same manner as tables. For example, to query the Division1_staff view, enter a valid SELECT statement that references the view:

SELECT * FROM Division1_staff;

ENAME        EMPNO       JOB             DNAME
------------------------------------------------------
CLARK         7782       MANAGER         ACCOUNTING
KING          7839       PRESIDENT       ACCOUNTING
MILLER        7934       CLERK           ACCOUNTING
ALLEN         7499       SALESMAN        SALES
WARD          7521       SALESMAN        SALES
JAMES         7900       CLERK           SALES
TURNER        7844       SALESMAN        SALES
MARTIN        7654       SALESMAN        SALES
BLAKE         7698       MANAGER         SALES

With some restrictions, rows can be inserted into, updated in, or deleted from a base table using a view. The following statement inserts a new row into the emp table using the sales_staff view:

INSERT INTO sales_staff
    VALUES (7954, 'OSTER', 30);

Restrictions on DML operations for views use the following criteria in the order listed:

  1. If a view is defined by a query that contains SET or DISTINCT operators, a GROUP BY clause, or a group function, then rows cannot be inserted into, updated in, or deleted from the base tables using the view.

  2. If a view is defined with WITH CHECK OPTION, a row cannot be inserted into, or updated in, the base table (using the view), if the view cannot select the row from the base table.

  3. If a NOT NULL column that does not have a DEFAULT clause is omitted from the view, then a row cannot be inserted into the base table using the view.

  4. If the view was created by using an expression, such as DECODE(deptno, 10, 'SALES', ...), then rows cannot be inserted into or updated in the base table using the view.

The constraint created by WITH CHECK OPTION of the sales_staff view only allows rows that have a department number of 30 to be inserted into, or updated in, the emp table. Alternatively, assume that the sales_staff view is defined by the following statement (that is, excluding the deptno column):

CREATE VIEW sales_staff AS
    SELECT empno, ename
    FROM emp
    WHERE deptno = 10
    WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

Considering this view definition, you can update the empno or ename fields of existing records, but you cannot insert rows into the emp table through the sales_staff view because the view does not let you alter the deptno field. If you had defined a DEFAULT value of 10 on the deptno field, then you could perform inserts.
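Such a default could be added with a statement along these lines (a sketch):

```sql
-- Give deptno a default so inserts through the view can succeed
ALTER TABLE emp MODIFY (deptno DEFAULT 10);
```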

When a user attempts to reference an invalid view, the database returns an error message to the user:

ORA-04063: view 'view_name' has errors

This error message is returned when a view exists but is unusable due to errors in its query (whether it had errors when originally created or it was created successfully but became unusable later because underlying objects were altered or dropped).
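To see which views in your schema are currently unusable, you can query the USER_OBJECTS data dictionary view; a sketch:

```sql
-- List views in the current schema that are invalid
SELECT object_name, status
    FROM USER_OBJECTS
    WHERE object_type = 'VIEW' AND status = 'INVALID';
```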

Updating a Join View

An updatable join view (also referred to as a modifiable join view) is a view that contains more than one table in the top-level FROM clause of the SELECT statement, and is not restricted by the WITH READ ONLY clause.

The rules for updatable join views are shown in the following table. Views that meet these criteria are said to be inherently updatable.

Rule Description
General Rule Any INSERT, UPDATE, or DELETE operation on a join view can modify only one underlying base table at a time.
UPDATE Rule All updatable columns of a join view must map to columns of a key-preserved table. See "Key-Preserved Tables" for a discussion of key-preserved tables. If the view is defined with the WITH CHECK OPTION clause, then all join columns and all columns of repeated tables are not updatable.
DELETE Rule Rows from a join view can be deleted as long as there is exactly one key-preserved table in the join. The key preserved table can be repeated in the FROM clause. If the view is defined with the WITH CHECK OPTION clause and the key preserved table is repeated, then the rows cannot be deleted from the view.
INSERT Rule An INSERT statement must not explicitly or implicitly refer to the columns of a non-key-preserved table. If the join view is defined with the WITH CHECK OPTION clause, INSERT statements are not permitted.

There are data dictionary views that indicate whether the columns in a join view are inherently updatable. See "Using the UPDATABLE_COLUMNS Views" for descriptions of these views.


Note:

There are some additional restrictions and conditions that can affect whether a join view is inherently updatable. Specifics are listed in the description of the CREATE VIEW statement in the Oracle Database SQL Reference.

If a view is not inherently updatable, it can be made updatable by creating an INSTEAD OF trigger on it. This is described in Oracle Database Application Developer's Guide - Fundamentals.
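A minimal sketch of such an INSTEAD OF trigger, assuming the emp_dept join view used in this section (the trigger name is illustrative):

```sql
CREATE OR REPLACE TRIGGER emp_dept_ins_trig
    INSTEAD OF INSERT ON emp_dept
    FOR EACH ROW
BEGIN
    -- Route the inserted row to the appropriate base table
    INSERT INTO emp (empno, ename, deptno)
        VALUES (:NEW.empno, :NEW.ename, :NEW.deptno);
END;
/
```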

Additionally, if a view is a join on other nested views, then the other nested views must be mergeable into the top level view. For a discussion of mergeable and unmergeable views, and more generally, how the optimizer optimizes statements that reference views, see the Oracle Database Performance Tuning Guide.


Examples illustrating the rules for inherently updatable join views, and a discussion of key-preserved tables, are presented in the following sections. The examples in these sections work only if you explicitly define the primary and foreign keys in the tables, or define unique indexes. The following statements create the appropriately constrained table definitions for emp and dept.

CREATE TABLE dept (
      deptno        NUMBER(4) PRIMARY KEY,
      dname         VARCHAR2(14),
      loc           VARCHAR2(13));
 
CREATE TABLE emp (
      empno        NUMBER(4) PRIMARY KEY,
      ename        VARCHAR2(10),
      job          VARCHAR2(9),
      mgr          NUMBER(4),
      sal          NUMBER(7,2),
      comm         NUMBER(7,2),
      deptno       NUMBER(2),
      FOREIGN KEY (DEPTNO) REFERENCES DEPT(DEPTNO));

You could also omit the primary and foreign key constraints listed in the preceding example, and create a UNIQUE INDEX on dept (deptno) to make the following examples work.
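For example (the index name is illustrative):

```sql
-- Unique index as an alternative to the primary key constraint
CREATE UNIQUE INDEX dept_deptno_uix ON dept (deptno);
```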

The following statement creates the emp_dept join view, which is referenced in the examples:

CREATE VIEW emp_dept AS
      SELECT emp.empno, emp.ename, emp.deptno, emp.sal, dept.dname, dept.loc
      FROM emp, dept
      WHERE emp.deptno = dept.deptno
         AND dept.loc IN ('DALLAS', 'NEW YORK', 'BOSTON');

Key-Preserved Tables

The concept of a key-preserved table is fundamental to understanding the restrictions on modifying join views. A table is key-preserved if every key of the table can also be a key of the result of the join. So, a key-preserved table has its keys preserved through a join.


Note:

It is not necessary that the key or keys of a table be selected for it to be key preserved. It is sufficient that if the key or keys were selected, then they would also be keys of the result of the join.

The key-preserving property of a table does not depend on the actual data in the table. It is, rather, a property of its schema. For example, if in the emp table there was at most one employee in each department, then deptno would be unique in the result of a join of emp and dept, but dept would still not be a key-preserved table.

If you select all rows from emp_dept, the results are:

EMPNO      ENAME      DEPTNO  DNAME          LOC 
---------- ---------- ------- -------------- -----------
      7782 CLARK           10 ACCOUNTING     NEW YORK
      7839 KING            10 ACCOUNTING     NEW YORK
      7934 MILLER          10 ACCOUNTING     NEW YORK
      7369 SMITH           20 RESEARCH       DALLAS
      7876 ADAMS           20 RESEARCH       DALLAS
      7902 FORD            20 RESEARCH       DALLAS
      7788 SCOTT           20 RESEARCH       DALLAS
      7566 JONES           20 RESEARCH       DALLAS
8 rows selected.

In this view, emp is a key-preserved table, because empno is a key of the emp table, and also a key of the result of the join. dept is not a key-preserved table, because although deptno is a key of the dept table, it is not a key of the join.

DML Statements and Join Views

The general rule is that any UPDATE, DELETE, or INSERT statement on a join view can modify only one underlying base table. The following examples illustrate rules specific to UPDATE, DELETE, and INSERT statements.

UPDATE Statements

The following example shows an UPDATE statement that successfully modifies the emp_dept view:

UPDATE emp_dept
     SET sal = sal * 1.10 
     WHERE deptno = 10;

The following UPDATE statement would be disallowed on the emp_dept view:

UPDATE emp_dept
     SET loc = 'BOSTON'
     WHERE ename = 'SMITH';

This statement fails with an error (ORA-01779 cannot modify a column which maps to a non key-preserved table), because it attempts to modify the base dept table, and the dept table is not key-preserved in the emp_dept view.

In general, all updatable columns of a join view must map to columns of a key-preserved table. If the view is defined using the WITH CHECK OPTION clause, then all join columns and all columns taken from tables that are referenced more than once in the view are not modifiable.

So, for example, if the emp_dept view were defined using WITH CHECK OPTION, the following UPDATE statement would fail:

UPDATE emp_dept
     SET deptno = 10
     WHERE ename = 'SMITH';

The statement fails because it is trying to update a join column.


See Also:

Oracle Database SQL Reference for syntax and additional information about the UPDATE statement

DELETE Statements

You can delete from a join view provided there is one and only one key-preserved table in the join. The key-preserved table can be repeated in the FROM clause.

The following DELETE statement works on the emp_dept view:

DELETE FROM emp_dept
     WHERE ename = 'SMITH';

This DELETE statement on the emp_dept view is legal because it can be translated to a DELETE operation on the base emp table, and because the emp table is the only key-preserved table in the join.

In the following view, a DELETE operation is permitted, because although there are two key-preserved tables, they are the same table. That is, the key-preserved table is repeated. In this case, the DELETE statement operates on the first table in the FROM list (e1, in this example):

CREATE VIEW emp_emp AS
     SELECT e1.ename, e2.empno, e2.deptno
     FROM emp e1, emp e2
     WHERE e1.empno = e2.empno;

If a view is defined using the WITH CHECK OPTION clause and the key-preserved table is repeated, rows cannot be deleted from such a view.

CREATE VIEW emp_mgr AS
     SELECT e1.ename, e2.ename mname
     FROM emp e1, emp e2
     WHERE e1.mgr = e2.empno
     WITH CHECK OPTION;
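With emp_mgr defined this way, a DELETE such as the following would be rejected by the database:

```sql
-- Fails: the view combines WITH CHECK OPTION with a repeated
-- key-preserved table, so rows cannot be deleted through it
DELETE FROM emp_mgr
     WHERE ename = 'SMITH';
```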

See Also:

Oracle Database SQL Reference for syntax and additional information about the DELETE statement

INSERT Statements

The following INSERT statement on the emp_dept view succeeds:

INSERT INTO emp_dept (ename, empno, deptno)
   VALUES ('KURODA', 9010, 40);

This statement works because only one key-preserved base table is being modified (emp), and 40 is a valid deptno in the dept table (thus satisfying the FOREIGN KEY integrity constraint on the emp table).

An INSERT statement, such as the following, would fail for the same reason that such an UPDATE on the base emp table would fail: the FOREIGN KEY integrity constraint on the emp table is violated (because there is no deptno 77).

INSERT INTO emp_dept (ename, empno, deptno)
   VALUES ('KURODA', 9010, 77);

The following INSERT statement would fail with an error (ORA-01776 cannot modify more than one base table through a join view):

INSERT INTO emp_dept (empno, ename, loc)
   VALUES (9010, 'KURODA', 'BOSTON');

An INSERT cannot implicitly or explicitly refer to columns of a non-key-preserved table. If the join view is defined using the WITH CHECK OPTION clause, then you cannot perform an INSERT to it.


See Also:

Oracle Database SQL Reference for syntax and additional information about the INSERT statement

Updating Views That Involve Outer Joins

Views that involve outer joins are modifiable in some cases. For example:

CREATE VIEW emp_dept_oj1 AS
    SELECT empno, ename, e.deptno, dname, loc
    FROM emp e, dept d
    WHERE e.deptno = d.deptno (+);

The statement:

SELECT * FROM emp_dept_oj1;

Results in:

EMPNO   ENAME      DEPTNO  DNAME          LOC         
------- ---------- ------- -------------- -------------
7369    SMITH      40      OPERATIONS     BOSTON       
7499    ALLEN      30      SALES          CHICAGO      
7566    JONES      20      RESEARCH       DALLAS       
7654    MARTIN     30      SALES          CHICAGO      
7698    BLAKE      30      SALES          CHICAGO      
7782    CLARK      10      ACCOUNTING     NEW YORK     
7788    SCOTT      20      RESEARCH       DALLAS       
7839    KING       10      ACCOUNTING     NEW YORK     
7844    TURNER     30      SALES          CHICAGO      
7876    ADAMS      20      RESEARCH       DALLAS       
7900    JAMES      30      SALES          CHICAGO      
7902    FORD       20      RESEARCH       DALLAS       
7934    MILLER     10      ACCOUNTING     NEW YORK     
7521    WARD       30      SALES          CHICAGO      
14 rows selected.

Columns in the base emp table of emp_dept_oj1 are modifiable through the view, because emp is a key-preserved table in the join.

The following view also contains an outer join:

CREATE VIEW emp_dept_oj2 AS
SELECT e.empno, e.ename, e.deptno, d.dname, d.loc
FROM emp e, dept d
WHERE e.deptno (+) = d.deptno;

The following statement:

SELECT * FROM emp_dept_oj2;

Results in:

EMPNO      ENAME      DEPTNO    DNAME          LOC
---------- ---------- --------- -------------- ----
7782       CLARK      10        ACCOUNTING     NEW YORK
7839       KING       10        ACCOUNTING     NEW YORK
7934       MILLER     10        ACCOUNTING     NEW YORK
7369       SMITH      20        RESEARCH       DALLAS
7876       ADAMS      20        RESEARCH       DALLAS
7902       FORD       20        RESEARCH       DALLAS
7788       SCOTT      20        RESEARCH       DALLAS 
7566       JONES      20        RESEARCH       DALLAS
7499       ALLEN      30        SALES          CHICAGO
7698       BLAKE      30        SALES          CHICAGO
7654       MARTIN     30        SALES          CHICAGO
7900       JAMES      30        SALES          CHICAGO
7844       TURNER     30        SALES          CHICAGO
7521       WARD       30        SALES          CHICAGO
                                OPERATIONS     BOSTON
15 rows selected.

In this view, emp is no longer a key-preserved table, because the empno column in the result of the join can have nulls (the last row in the preceding SELECT statement). So, UPDATE, DELETE, and INSERT operations cannot be performed on this view.

In the case of views containing an outer join on other nested views, a table is key preserved if the view or views containing the table are merged into their outer views, all the way to the top. A view which is being outer-joined is currently merged only if it is "simple." For example:

SELECT col1, col2, ... FROM T;

The select list of the view has no expressions, and there is no WHERE clause.

Consider the following set of views:

CREATE VIEW emp_v AS
    SELECT empno, ename, deptno
        FROM emp;
CREATE VIEW emp_dept_oj1 AS
    SELECT e.*, Loc, d.dname
        FROM emp_v e, dept d
            WHERE e.deptno = d.deptno (+);

In these examples, emp_v is merged into emp_dept_oj1 because emp_v is a simple view, and so emp is a key-preserved table. But if emp_v is changed as follows:

CREATE VIEW emp_v_2 AS
    SELECT empno, ename, deptno
        FROM emp
            WHERE sal > 1000;

Then, because of the presence of the WHERE clause, emp_v_2 cannot be merged into emp_dept_oj1, and hence emp is no longer a key-preserved table.

If you are in doubt whether a view is modifiable, then you can select from the USER_UPDATABLE_COLUMNS view to see if it is. For example:

SELECT owner, table_name, column_name, updatable FROM USER_UPDATABLE_COLUMNS 
     WHERE TABLE_NAME = 'EMP_DEPT_VIEW';

This returns output similar to the following:

OWNER       TABLE_NAME      COLUMN_NAM      UPD
----------  ----------      ----------      ---
SCOTT       EMP_DEPT_V      EMPNO           NO
SCOTT       EMP_DEPT_V      ENAME           NO
SCOTT       EMP_DEPT_V      DEPTNO          NO
SCOTT       EMP_DEPT_V      DNAME           NO
SCOTT       EMP_DEPT_V      LOC             NO
5 rows selected.

Using the UPDATABLE_COLUMNS Views

The views described in the following table can help you identify inherently updatable join views.

View Description
DBA_UPDATABLE_COLUMNS Shows all columns in all tables and views that are modifiable.
ALL_UPDATABLE_COLUMNS Shows all columns in all tables and views accessible to the user that are modifiable.
USER_UPDATABLE_COLUMNS Shows all columns in all tables and views in the user's schema that are modifiable.

The updatable columns in view emp_dept are shown below.

SELECT COLUMN_NAME, UPDATABLE
      FROM USER_UPDATABLE_COLUMNS
      WHERE TABLE_NAME = 'EMP_DEPT';

COLUMN_NAME                    UPD
------------------------------ ---
EMPNO                          YES
ENAME                          YES
DEPTNO                         YES
SAL                            YES
DNAME                          NO
LOC                            NO

6 rows selected.

See Also:

Oracle Database Reference for complete descriptions of the updatable column views

Altering Views

You use the ALTER VIEW statement only to explicitly recompile a view that is invalid. If you want to change the definition of a view, see "Replacing Views".

The ALTER VIEW statement lets you locate recompilation errors before run time. To ensure that the alteration does not affect the view or other objects that depend on it, you can explicitly recompile a view after altering one of its base tables.

To use the ALTER VIEW statement, the view must be in your schema, or you must have the ALTER ANY TABLE system privilege.


See Also:

Oracle Database SQL Reference for syntax and additional information about the ALTER VIEW statement

Dropping Views

You can drop any view contained in your schema. To drop a view in another user's schema, you must have the DROP ANY VIEW system privilege. Drop a view using the DROP VIEW statement. For example, the following statement drops the emp_dept view:

DROP VIEW emp_dept;

See Also:

Oracle Database SQL Reference for syntax and additional information about the DROP VIEW statement

Managing Sequences

This section describes aspects of managing sequences, and contains the following topics:

About Sequences

Sequences are database objects from which multiple users can generate unique integers. The sequence generator generates sequential numbers, which can help to generate unique primary keys automatically, and to coordinate keys across multiple rows or tables.

Without sequences, sequential values can only be produced programmatically. A new primary key value can be obtained by selecting the most recently produced value and incrementing it. This method requires a lock during the transaction and causes multiple users to wait for the next value of the primary key; this waiting is known as serialization. If developers have such constructs in applications, then you should encourage the developers to replace them with access to sequences. Sequences eliminate serialization and improve the concurrency of an application.


See Also:

Oracle Database Concepts for a more complete description of sequences

Creating Sequences

To create a sequence in your schema, you must have the CREATE SEQUENCE system privilege. To create a sequence in another user's schema, you must have the CREATE ANY SEQUENCE privilege.

Create a sequence using the CREATE SEQUENCE statement. For example, the following statement creates a sequence used to generate employee numbers for the empno column of the emp table:

CREATE SEQUENCE emp_sequence
      INCREMENT BY 1
      START WITH 1
      NOMAXVALUE
      NOCYCLE
      CACHE 10;

Notice that several parameters can be specified to control the function of sequences. You can use these parameters to indicate whether the sequence is ascending or descending, the starting point of the sequence, the minimum and maximum values, and the interval between sequence values. The NOCYCLE option indicates that the sequence cannot generate more values after reaching its maximum or minimum value.

The CACHE clause preallocates a set of sequence numbers and keeps them in memory so that sequence numbers can be accessed faster. When the last of the sequence numbers in the cache has been used, the database reads another set of numbers into the cache.

The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.


See Also:

Oracle Database SQL Reference for syntax and additional information about the CREATE SEQUENCE statement
Altering Sequences

To alter a sequence, your schema must contain the sequence, or you must have the ALTER ANY SEQUENCE system privilege. You can alter a sequence to change any of the parameters that define how it generates sequence numbers except the sequence starting number. To change the starting point of a sequence, drop the sequence and then re-create it.
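For example, to restart emp_sequence at a higher starting number, it must be dropped and re-created (the START WITH value here is illustrative):

```sql
DROP SEQUENCE emp_sequence;

-- Re-create with a new starting point
CREATE SEQUENCE emp_sequence
      INCREMENT BY 1
      START WITH 5000
      NOMAXVALUE
      NOCYCLE
      CACHE 10;
```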

Alter a sequence using the ALTER SEQUENCE statement. For example, the following statement alters the emp_sequence:

ALTER SEQUENCE emp_sequence
    INCREMENT BY 10
    MAXVALUE 10000
    CYCLE
    CACHE 20;

See Also:

Oracle Database SQL Reference for syntax and additional information about the ALTER SEQUENCE statement

Using Sequences

To use a sequence, your schema must contain the sequence or you must have been granted the SELECT object privilege for another user's sequence. Once a sequence is defined, it can be accessed and incremented by multiple users (who have the SELECT object privilege for the sequence) with no waiting. The database does not wait for a transaction that has incremented a sequence to complete before that sequence can be incremented again.

The examples outlined in the following sections show how sequences can be used in master/detail table relationships. Assume an order entry system is partially comprised of two tables, orders_tab (master table) and line_items_tab (detail table), that hold information about customer orders. A sequence named order_seq is defined by the following statement:

CREATE SEQUENCE Order_seq
    START WITH 1
    INCREMENT BY 1
    NOMAXVALUE
    NOCYCLE
    CACHE 20;

Referencing a Sequence

A sequence is referenced in SQL statements with the NEXTVAL and CURRVAL pseudocolumns. Each new sequence number is generated by a reference to the pseudocolumn NEXTVAL, while the current sequence number can be repeatedly referenced using the pseudocolumn CURRVAL.

NEXTVAL and CURRVAL are not reserved words or keywords and can be used as pseudocolumn names in SQL statements such as SELECT, INSERT, or UPDATE.

Generating Sequence Numbers with NEXTVAL

To generate and use a sequence number, reference seq_name.NEXTVAL. For example, assume a customer places an order. The sequence number can be referenced in a values list. For example:

INSERT INTO Orders_tab (Orderno, Custno)
    VALUES (Order_seq.NEXTVAL, 1032);

Or, the sequence number can be referenced in the SET clause of an UPDATE statement. For example:

UPDATE Orders_tab
    SET Orderno = Order_seq.NEXTVAL
    WHERE Orderno = 10112;

The sequence number can also be referenced in the outermost SELECT of a query or subquery. For example:

SELECT Order_seq.NEXTVAL FROM dual;

As defined, the first reference to order_seq.NEXTVAL returns the value 1. Each subsequent statement that references order_seq.NEXTVAL generates the next sequence number (2, 3, 4, ...). The pseudocolumn NEXTVAL can be used to generate as many new sequence numbers as necessary. However, only a single sequence number can be generated for each row. In other words, if NEXTVAL is referenced more than once in a single statement, then the first reference generates the next number, and all subsequent references in the statement return the same number.
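This single-number-per-row behavior can be observed directly with a query that references NEXTVAL twice (a sketch; per the rule above, both columns return the same sequence number):

```sql
-- Both references resolve to the same sequence number for the row.
SELECT Order_seq.NEXTVAL AS first_ref,
       Order_seq.NEXTVAL AS second_ref
FROM dual;
```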

Once a sequence number is generated, the sequence number is available only to the session that generated the number. Independent of transactions committing or rolling back, other users referencing order_seq.NEXTVAL obtain unique values. If two users are accessing the same sequence concurrently, then the sequence numbers each user receives might have gaps because sequence numbers are also being generated by the other user.

Using Sequence Numbers with CURRVAL

To use or refer to the current sequence value of your session, reference seq_name.CURRVAL. CURRVAL can only be used if seq_name.NEXTVAL has been referenced in the current user session (in the current or a previous transaction). CURRVAL can be referenced as many times as necessary, including multiple times within the same statement. The next sequence number is not generated until NEXTVAL is referenced. Continuing with the previous example, you would finish placing the customer's order by inserting the line items for the order:

INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
    VALUES (Order_seq.CURRVAL, 20321, 3);

INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
    VALUES (Order_seq.CURRVAL, 29374, 1);

Assuming the INSERT statement given in the previous section generated a new sequence number of 347, both rows inserted by the statements in this section have the order number 347.

Uses and Restrictions of NEXTVAL and CURRVAL

CURRVAL and NEXTVAL can be used in the following places:

  • VALUES clause of INSERT statements

  • The SELECT list of a SELECT statement

  • The SET clause of an UPDATE statement

CURRVAL and NEXTVAL cannot be used in these places:

  • A subquery

  • A view query or materialized view query

  • A SELECT statement with the DISTINCT operator

  • A SELECT statement with a GROUP BY or ORDER BY clause

  • A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator

  • The WHERE clause of a SELECT statement

  • DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement

  • The condition of a CHECK constraint
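Where one of these restrictions applies, such as the WHERE clause of a SELECT, a common workaround is to fetch the sequence value into a PL/SQL variable first and then use the variable. A sketch (the variable name is illustrative; tables reuse the earlier order entry example):

```sql
DECLARE
    new_order_no NUMBER;
BEGIN
    -- Generate the sequence number outside the restricted context.
    SELECT Order_seq.NEXTVAL INTO new_order_no FROM dual;

    -- The variable can now appear where the pseudocolumn cannot,
    -- for example in a WHERE clause.
    UPDATE Orders_tab
       SET Custno = 1032
     WHERE Orderno = new_order_no;
END;
/
```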

Caching Sequence Numbers

Sequence numbers can be kept in the sequence cache in the System Global Area (SGA). Sequence numbers can be accessed more quickly in the sequence cache than they can be read from disk.

The sequence cache consists of entries. Each entry can hold many sequence numbers for a single sequence.

Follow these guidelines for fast access to all sequence numbers:

  • Be sure the sequence cache can hold all the sequences used concurrently by your applications.

  • Increase the number of values for each sequence held in the sequence cache.

The Number of Entries in the Sequence Cache

When an application accesses a sequence in the sequence cache, the sequence numbers are read quickly. However, if an application accesses a sequence that is not in the cache, then the sequence must be read from disk to the cache before the sequence numbers are used.

If your applications use many sequences concurrently, then your sequence cache might not be large enough to hold all the sequences. In this case, access to sequence numbers might often require disk reads. For fast access to all sequences, be sure your cache has enough entries to hold all the sequences used concurrently by your applications.

The Number of Values in Each Sequence Cache Entry

When a sequence is read into the sequence cache, sequence values are generated and stored in a cache entry. These values can then be accessed quickly. The number of sequence values stored in the cache is determined by the CACHE parameter in the CREATE SEQUENCE statement. The default value for this parameter is 20.

This CREATE SEQUENCE statement creates the seq2 sequence so that 50 values of the sequence are stored in the sequence cache:

CREATE SEQUENCE seq2
    CACHE 50;

The first 50 values of seq2 can then be read from the cache. When the 51st value is accessed, the next 50 values will be read from disk.

Choosing a high value for CACHE lets you access more successive sequence numbers with fewer reads from disk to the sequence cache. However, if there is an instance failure, then all sequence values in the cache are lost. Cached sequence numbers also could be skipped after an export and import if transactions continue to access the sequence numbers while the export is running.

If you use the NOCACHE option in the CREATE SEQUENCE statement, then the values of the sequence are not stored in the sequence cache. In this case, every access to the sequence requires a disk read. Such disk reads slow access to the sequence. This CREATE SEQUENCE statement creates the seq3 sequence so that its values are never stored in the cache:

CREATE SEQUENCE seq3
    NOCACHE;

Dropping Sequences

You can drop any sequence in your schema. To drop a sequence in another schema, you must have the DROP ANY SEQUENCE system privilege. If a sequence is no longer required, you can drop the sequence using the DROP SEQUENCE statement. For example, the following statement drops the order_seq sequence:

DROP SEQUENCE order_seq;

When a sequence is dropped, its definition is removed from the data dictionary. Any synonyms for the sequence remain, but return an error when referenced.
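A brief sketch of this behavior (the synonym name order_syn is illustrative; the error raised is typically ORA-00980):

```sql
-- A synonym created for the sequence...
CREATE SYNONYM order_syn FOR order_seq;

-- ...survives the drop of the sequence itself:
DROP SEQUENCE order_seq;

-- Referencing the orphaned synonym now returns an error,
-- typically ORA-00980: synonym translation is no longer valid.
SELECT order_syn.NEXTVAL FROM dual;
```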


See Also:

Oracle Database SQL Reference for syntax and additional information about the DROP SEQUENCE statement

Managing Synonyms

This section describes aspects of managing synonyms, and contains the following topics:

About Synonyms

A synonym is an alias for a schema object. Synonyms can provide a level of security by masking the name and owner of an object and by providing location transparency for remote objects of a distributed database. Also, they are convenient to use and reduce the complexity of SQL statements for database users.

Synonyms allow underlying objects to be renamed or moved: only the synonym needs to be redefined, and applications based on the synonym continue to function without modification.

You can create both public and private synonyms. A public synonym is owned by the special user group named PUBLIC and is accessible to every user in a database. A private synonym is contained in the schema of a specific user and available only to the user and the user's grantees.


See Also:

Oracle Database Concepts for a more complete description of synonyms

Creating Synonyms

To create a private synonym in your own schema, you must have the CREATE SYNONYM privilege. To create a private synonym in another user's schema, you must have the CREATE ANY SYNONYM privilege. To create a public synonym, you must have the CREATE PUBLIC SYNONYM system privilege.

Create a synonym using the CREATE SYNONYM statement. The underlying schema object need not exist, nor do you need privileges to access the object. The following statement creates a public synonym named public_emp on the emp table contained in the schema of jward:

CREATE PUBLIC SYNONYM public_emp FOR jward.emp;

When you create a synonym for a remote procedure or function, you must qualify the remote object with its schema name. Alternatively, you can create a local public synonym on the database where the remote object resides, in which case the database link must be included in all subsequent calls to the procedure or function.
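For example, a synonym for a remote procedure might be created as follows (the database link name sales_link, the schema, and the procedure name are illustrative; note the required schema qualification):

```sql
-- The remote object must be qualified with its schema name,
-- followed by the database link.
CREATE SYNONYM fire_emp_remote
    FOR jward.fire_emp@sales_link;
```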


See Also:

Oracle Database SQL Reference for syntax and additional information about the CREATE SYNONYM statement

Using Synonyms in DML Statements

You can successfully use any private synonym contained in your schema or any public synonym, assuming that you have the necessary privileges to access the underlying object, either explicitly, from an enabled role, or from PUBLIC. You can also reference any private synonym contained in another schema if you have been granted the necessary object privileges for the private synonym.

You can only reference another user's synonym using the object privileges that you have been granted. For example, if you have the SELECT privilege for the jward.emp synonym, then you can query the jward.emp synonym, but you cannot insert rows using the jward.emp synonym.
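As a sketch (user names are illustrative), a grant on a private synonym controls what its grantee can do through it:

```sql
-- jward grants only SELECT on his private synonym to swilliams.
GRANT SELECT ON jward.emp TO swilliams;

-- swilliams can query through the synonym:
SELECT * FROM jward.emp;

-- ...but an INSERT through the synonym fails, because the
-- INSERT object privilege was not granted.
```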

A synonym can be referenced in a DML statement the same way that the underlying object of the synonym can be referenced. For example, if a synonym named emp refers to a table or view, then the following statement is valid:

INSERT INTO emp (empno, ename, job)
    VALUES (emp_sequence.NEXTVAL, 'SMITH', 'CLERK');

If the synonym named fire_emp refers to a standalone procedure or package procedure, then you could execute it with the command

EXECUTE Fire_emp(7344);

Dropping Synonyms

You can drop any private synonym in your own schema. To drop a private synonym in another user's schema, you must have the DROP ANY SYNONYM system privilege. To drop a public synonym, you must have the DROP PUBLIC SYNONYM system privilege.

Drop a synonym that is no longer required using the DROP SYNONYM statement. To drop a private synonym, omit the PUBLIC keyword. To drop a public synonym, include the PUBLIC keyword.

For example, the following statement drops the private synonym named emp:

DROP SYNONYM emp;

The following statement drops the public synonym named public_emp:

DROP PUBLIC SYNONYM public_emp;

When you drop a synonym, its definition is removed from the data dictionary. All objects that reference a dropped synonym remain. However, they become invalid (not usable). For more information about how dropping synonyms can affect other schema objects, see "Managing Object Dependencies".


See Also:

Oracle Database SQL Reference for syntax and additional information about the DROP SYNONYM statement

Viewing Information About Views, Synonyms, and Sequences

The following views display information about views, synonyms, and sequences:

  • DBA_VIEWS, ALL_VIEWS, USER_VIEWS: These views describe views. The DBA view describes all views in the database, the ALL view is restricted to views accessible to the current user, and the USER view is restricted to views owned by the current user.

  • DBA_SYNONYMS, ALL_SYNONYMS, USER_SYNONYMS: These views describe synonyms.

  • DBA_SEQUENCES, ALL_SEQUENCES, USER_SEQUENCES: These views describe sequences.

  • DBA_UPDATABLE_COLUMNS, ALL_UPDATABLE_COLUMNS, USER_UPDATABLE_COLUMNS: These views describe all columns in join views that are updatable.


See Also:

Oracle Database Reference for complete descriptions of these views
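For example, a quick way to inspect the settings of your own sequences is to query one of these views (the columns shown are standard USER_SEQUENCES columns):

```sql
SELECT sequence_name, min_value, max_value,
       increment_by, cache_size, last_number
  FROM user_sequences
 ORDER BY sequence_name;
```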


3 Starting Up and Shutting Down

This chapter describes the procedures for starting up and shutting down an Oracle Database instance and contains the following topics:

Starting Up a Database

When you start up a database, you create an instance of that database and you determine the state of the database. Normally, you start up an instance by mounting and opening the database. Doing so makes the database available for any valid user to connect to and perform typical data access operations. Other options exist, and these are also discussed in this section.

This section contains the following topics relating to starting up an instance of a database:

Options for Starting Up a Database

You can start up a database instance with SQL*Plus, Recovery Manager, or Enterprise Manager.

Starting Up a Database Using SQL*Plus

You can start a SQL*Plus session, connect to Oracle Database with administrator privileges, and then issue the STARTUP command. Using SQL*Plus in this way is the only method described in detail in this book.

Starting Up a Database Using Recovery Manager

You can also use Recovery Manager (RMAN) to execute STARTUP and SHUTDOWN commands. You may prefer to do this if you are within the RMAN environment and do not want to invoke SQL*Plus.


See Also:

Oracle Database Backup and Recovery Basics for information on starting up the database using RMAN

Starting Up a Database Using Oracle Enterprise Manager

You can use Oracle Enterprise Manager (EM) to administer your database, including starting it up and shutting it down. EM combines a GUI console, agents, common services, and tools to provide an integrated and comprehensive systems management platform for managing Oracle products. EM Database Control, which is the portion of EM that is dedicated to administering an Oracle database, enables you to perform the functions discussed in this book using a graphical interface rather than command line operations.

The remainder of this section describes using SQL*Plus to start up a database instance.

Understanding Initialization Parameter Files

To start an instance, the database must read instance configuration parameters (the initialization parameters) from either a server parameter file (SPFILE) or a text initialization parameter file.

When you issue the SQL*Plus STARTUP command, the database attempts to read the initialization parameters from an SPFILE in a platform-specific default location. If it finds no SPFILE, it searches for a text initialization parameter file.


Note:

For UNIX or Linux, the platform-specific default location (directory) for the SPFILE and text initialization parameter file is:
$ORACLE_HOME/dbs 

For Windows NT and Windows 2000 the location is:

%ORACLE_HOME%\database

In the platform-specific default location, Oracle Database locates your initialization parameter file by examining filenames in the following order:

  1. spfile$ORACLE_SID.ora

  2. spfile.ora

  3. init$ORACLE_SID.ora

The first two filenames represent SPFILEs and the third represents a text initialization parameter file.


Note:

The spfile.ora file is included in this search path because in a Real Application Clusters environment one server parameter file is used to store the initialization parameter settings for all instances. There is no instance-specific location for storing a server parameter file.

For more information about the server parameter file for a Real Application Clusters environment, see Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.
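If only a text initialization parameter file exists, an SPFILE can be created from it so that the default search finds an SPFILE on the next startup. A sketch (the path is illustrative):

```sql
-- Run while connected as SYSDBA; with no target file name specified,
-- this creates spfile$ORACLE_SID.ora in the platform default location.
CREATE SPFILE FROM PFILE = '/u01/oracle/dbs/init.ora';
```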


If you (or the Database Configuration Assistant) created a server parameter file, but you want to override it with a text initialization parameter file, you can specify the PFILE clause of the STARTUP command to identify the initialization parameter file.

STARTUP PFILE = /u01/oracle/dbs/init.ora

Starting Up with a Non-Default Server Parameter File

A non-default server parameter file (SPFILE) is an SPFILE that is in a location other than the default location. It is not usually necessary to start an instance with a non-default SPFILE. However, should such a need arise, you can use the PFILE clause to start an instance with a non-default server parameter file as follows:

  1. Create a one-line text initialization parameter file that contains only the SPFILE parameter. The value of the parameter is the non-default server parameter file location.

    For example, create a text initialization parameter file /u01/oracle/dbs/spf_init.ora that contains only the following parameter:

    SPFILE = /u01/oracle/dbs/test_spfile.ora
    

    Note:

    You cannot use the IFILE initialization parameter within a text initialization parameter file to point to a server parameter file. In this context, you must use the SPFILE initialization parameter.

  2. Start up the instance pointing to this initialization parameter file.

    STARTUP PFILE = /u01/oracle/dbs/spf_init.ora
    
    

The SPFILE must reside on the machine running the database server. Therefore, the preceding method also provides a means for a client machine to start a database that uses an SPFILE. It also eliminates the need for a client machine to maintain a client-side initialization parameter file. When the client machine reads the initialization parameter file containing the SPFILE parameter, it passes the value to the server where the specified SPFILE is read.

Initialization Files and Automatic Storage Management

A database that uses Automatic Storage Management (ASM) usually has a non-default SPFILE. If you use the Database Configuration Assistant (DBCA) to configure a database to use ASM, DBCA creates an SPFILE for the database instance in an ASM disk group, and then creates a text initialization parameter file in the default location in the local file system to point to the SPFILE.


See Also:

Chapter 2, "Creating an Oracle Database", for more information about initialization parameters, initialization parameter files, and server parameter files

Preparing to Start Up an Instance

You must perform some preliminary steps before attempting to start an instance of your database using SQL*Plus.

  1. Ensure that environment variables are set so that you connect to the desired Oracle instance. For details, see "Selecting an Instance with Environment Variables".

  2. Start SQL*Plus without connecting to the database:

    SQLPLUS /NOLOG
    
    
  3. Connect to Oracle Database as SYSDBA:

    CONNECT username/password AS SYSDBA
    
    

Now you are connected to the database and ready to start up an instance of your database.
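The steps above, followed by a startup, can be sketched as a single SQL*Plus session (the user name and password are placeholders):

```sql
-- From the operating system prompt, start SQL*Plus without connecting:
--   sqlplus /nolog
-- Then, at the SQL*Plus prompt, connect with the SYSDBA privilege:
CONNECT sys/password AS SYSDBA

-- Start the instance and mount and open the database, reading
-- initialization parameters from the default location:
STARTUP
```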


See Also:

SQL*Plus User's Guide and Reference for descriptions and syntax for the CONNECT, STARTUP, and SHUTDOWN commands.

Starting Up an Instance

You use the SQL*Plus STARTUP command to start up an Oracle Database instance. You can start an instance in various modes:

  • Start the instance without mounting a database. This does not allow access to the database and usually would be done only for database creation or the re-creation of control files.

  • Start the instance and mount the database, but leave it closed. This state allows for certain DBA activities, but does not allow general access to the database.

  • Start the instance, and mount and open the database. This can be done in unrestricted mode, allowing access to all users, or in restricted mode, allowing access for database administrators only.

  • Force the instance to start after a startup or shutdown problem, or start the instance and have complete media recovery begin immediately.


Note:

You cannot start a database instance if you are connected to the database through a shared server process.

The following scenarios describe and illustrate the various states in which you can start up an instance. Some restrictions apply when combining clauses of the STARTUP command.


Note:

It is possible to encounter problems starting up an instance if control files, database files, or redo log files are not available. If one or more of the files specified by the CONTROL_FILES initialization parameter does not exist or cannot be opened when you attempt to mount a database, Oracle Database returns a warning message and does not mount the database. If one or more of the datafiles or redo log files is not available or cannot be opened when attempting to open a database, the database returns a warning message and does not open the database.


See Also:

SQL*Plus User's Guide and Reference for information about the restrictions that apply when combining clauses of the STARTUP command

Starting an Instance, and Mounting and Opening a Database

Normal database operation means that an instance is started and the database is mounted and open. This mode allows any valid user to connect to the database and perform data access operations.

The following command starts an instance, reads the initialization parameters from the default location, and then mounts and opens the database. (You can optionally specify a PFILE clause.)

STARTUP

Starting an Instance Without Mounting a Database

You can start an instance without mounting a database. Typically, you do so only during database creation. Use the STARTUP command with the NOMOUNT clause:

STARTUP NOMOUNT 

Starting an Instance and Mounting a Database

You can start an instance and mount a database without opening it, allowing you to perform specific maintenance operations. For example, the database must be mounted but not open during tasks such as renaming datafiles; adding, dropping, or renaming redo log files; enabling and disabling redo log archiving options; and performing full database recovery.

The following command starts an instance and mounts the database, but leaves the database closed:

STARTUP MOUNT

Restricting Access to an Instance at Startup

You can start an instance, and optionally mount and open a database, in restricted mode so that the instance is available only to administrative personnel (not general database users). Use this mode of instance startup when you need to accomplish one of the following tasks:

  • Perform an export or import of data

  • Perform a data load (with SQL*Loader)

  • Temporarily prevent typical users from using data

  • Perform certain migration or upgrade operations

Typically, all users with the CREATE SESSION system privilege can connect to an open database. Opening a database in restricted mode allows database access only to users with both the CREATE SESSION and RESTRICTED SESSION system privileges. Only database administrators should have the RESTRICTED SESSION system privilege. Further, when the instance is in restricted mode, a database administrator cannot access the instance remotely through an Oracle Net listener, but can only access the instance locally from the machine that the instance is running on.

The following command starts an instance (and mounts and opens the database) in restricted mode:

STARTUP RESTRICT

You can use the RESTRICT clause in combination with the MOUNT, NOMOUNT, and OPEN clauses.

Later, use the ALTER SYSTEM statement to disable the RESTRICTED SESSION feature:

ALTER SYSTEM DISABLE RESTRICTED SESSION;

If you open the database in nonrestricted mode and later find that you need to restrict access, you can use the ALTER SYSTEM statement to do so, as described in "Restricting Access to an Open Database".


See Also:

Oracle Database SQL Reference for more information on the ALTER SYSTEM statement

Forcing an Instance to Start

In unusual circumstances, you might experience problems when attempting to start a database instance. You should not force a database to start unless you are faced with the following:

  • You cannot shut down the current instance with the SHUTDOWN NORMAL, SHUTDOWN IMMEDIATE, or SHUTDOWN TRANSACTIONAL commands.

  • You experience problems when starting an instance.

If one of these situations arises, you can usually solve the problem by starting a new instance (and optionally mounting and opening the database) using the STARTUP command with the FORCE clause:

STARTUP FORCE

If an instance is running, STARTUP FORCE shuts it down with mode ABORT before restarting it. In this case, beginning with Oracle Database 10g Release 2, the alert log shows the message "Shutting down instance (abort)" followed by "Starting ORACLE instance (normal)." (Earlier versions of the database showed only "Starting ORACLE instance (force)" in the alert log.)


See Also:

"Shutting Down with the ABORT Clause" to understand the side effects of aborting the current instance

Starting an Instance, Mounting a Database, and Starting Complete Media Recovery

If you know that media recovery is required, you can start an instance, mount a database to the instance, and have the recovery process automatically start by using the STARTUP command with the RECOVER clause:

STARTUP OPEN RECOVER

If you attempt to perform recovery when no recovery is required, Oracle Database issues an error message.

Automatic Database Startup at Operating System Start

Many sites use procedures to enable automatic startup of one or more Oracle Database instances and databases immediately following a system start. The procedures for performing this task are specific to each operating system. For information about automatic startup, see your operating system specific Oracle documentation.

Starting Remote Instances

If your local Oracle Database server is part of a distributed database, you might want to start a remote instance and database. Procedures for starting and stopping remote instances vary widely depending on communication protocol and operating system.

Altering Database Availability

You can alter the availability of a database. You may want to do this in order to restrict access for maintenance reasons or to make the database read only. The following sections explain how to alter the availability of a database:

Mounting a Database to an Instance

When you need to perform specific administrative operations, the database must be started and mounted to an instance, but closed. You can achieve this scenario by starting the instance and mounting the database.

To mount a database to a previously started, but not opened instance, use the SQL statement ALTER DATABASE with the MOUNT clause as follows:

ALTER DATABASE MOUNT;

See Also:

"Starting an Instance and Mounting a Database" for a list of operations that require the database to be mounted and closed (and procedures to start an instance and mount a database in one step)

Opening a Closed Database

You can make a mounted but closed database available for general use by opening the database. To open a mounted database, use the ALTER DATABASE statement with the OPEN clause:

ALTER DATABASE OPEN;

After executing this statement, any valid Oracle Database user with the CREATE SESSION system privilege can connect to the database.

Opening a Database in Read-Only Mode

Opening a database in read-only mode enables you to query an open database while eliminating any potential for online data content changes. While opening a database in read-only mode guarantees that datafile and redo log files are not written to, it does not restrict database recovery or operations that change the state of the database without generating redo. For example, you can take datafiles offline or bring them online since these operations do not affect data content.

If a query against a database in read-only mode uses temporary tablespace, for example to do disk sorts, then the issuer of the query must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the query will fail. This is explained in "Creating a Locally Managed Temporary Tablespace".

Ideally, you open a database in read-only mode when you alternate a standby database between read-only and recovery mode. Be aware that these are mutually exclusive modes.

The following statement opens a database in read-only mode:

ALTER DATABASE OPEN READ ONLY;

You can also open a database in read/write mode as follows:

ALTER DATABASE OPEN READ WRITE;

However, read/write is the default mode.


Note:

You cannot use the RESETLOGS clause with a READ ONLY clause.


See Also:

Oracle Database SQL Reference for more information about the ALTER DATABASE statement

Restricting Access to an Open Database

To place an instance in restricted mode, where only users with administrative privileges can access it, use the SQL statement ALTER SYSTEM with the ENABLE RESTRICTED SESSION clause. After placing an instance in restricted mode, you should consider killing all current user sessions before performing any administrative tasks.

To lift an instance from restricted mode, use ALTER SYSTEM with the DISABLE RESTRICTED SESSION clause.
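The two statements, run by a user with the ALTER SYSTEM privilege, can be sketched together:

```sql
-- Place the instance in restricted mode; only users with the
-- RESTRICTED SESSION privilege can then start new sessions.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Later, lift restricted mode so that typical users can connect again.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```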



Shutting Down a Database

To initiate database shutdown, use the SQL*Plus SHUTDOWN command. Control is not returned to the session that initiates a database shutdown until shutdown is complete. Users who attempt connections while a shutdown is in progress receive a message like the following:

ORA-01090: shutdown in progress - connection is not permitted

Note:

You cannot shut down a database if you are connected to the database through a shared server process.

To shut down a database and instance, you must first connect as SYSOPER or SYSDBA. There are several modes for shutting down a database. These are discussed in the following sections:

Some shutdown modes wait for certain events to occur (such as transactions completing or users disconnecting) before actually bringing down the database. There is a one-hour timeout period for these events. This timeout behavior is discussed in this additional section:

Shutting Down with the NORMAL Clause

To shut down a database in normal situations, use the SHUTDOWN command with the NORMAL clause:

SHUTDOWN NORMAL

The NORMAL clause is optional, because this is the default shutdown method if no clause is provided.

Normal database shutdown proceeds with the following conditions:

  • No new connections are allowed after the statement is issued.

  • Before the database is shut down, the database waits for all currently connected users to disconnect from the database.

The next startup of the database will not require any instance recovery procedures.

Shutting Down with the IMMEDIATE Clause

Use immediate database shutdown only in the following situations:

  • To initiate an automated and unattended backup

  • When a power shutdown is going to occur soon

  • When the database or one of its applications is functioning irregularly and you cannot contact users to ask them to log off or they are unable to log off

To shut down a database immediately, use the SHUTDOWN command with the IMMEDIATE clause:

SHUTDOWN IMMEDIATE

Immediate database shutdown proceeds with the following conditions:

  • No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued.

  • Any uncommitted transactions are rolled back. (If long uncommitted transactions exist, this method of shutdown might not complete quickly, despite its name.)

  • Oracle Database does not wait for users currently connected to the database to disconnect. The database implicitly rolls back active transactions and disconnects all connected users.

The next startup of the database will not require any instance recovery procedures.

Shutting Down with the TRANSACTIONAL Clause

When you want to perform a planned shutdown of an instance while allowing active transactions to complete first, use the SHUTDOWN command with the TRANSACTIONAL clause:

SHUTDOWN TRANSACTIONAL

Transactional database shutdown proceeds with the following conditions:

  • No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued.

  • After all transactions have completed, any client still connected to the instance is disconnected.

  • At this point, the instance shuts down just as it would when a SHUTDOWN IMMEDIATE statement is submitted.

The next startup of the database will not require any instance recovery procedures.

A transactional shutdown prevents clients from losing work, and at the same time, does not require all users to log off.

Shutting Down with the ABORT Clause

You can shut down a database instantaneously by aborting the database instance. If possible, perform this type of shutdown only in the following situations:

  • The database or one of its applications is functioning irregularly and none of the other types of shutdown works.

  • You need to shut down the database instantaneously (for example, if you know a power shutdown is going to occur in one minute).

  • You experience problems when starting a database instance.

When you must do a database shutdown by aborting transactions and user connections, issue the SHUTDOWN command with the ABORT clause:

SHUTDOWN ABORT

An aborted database shutdown proceeds with the following conditions:

  • No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued.

  • Current client SQL statements being processed by Oracle Database are immediately terminated.

  • Uncommitted transactions are not rolled back.

  • Oracle Database does not wait for users currently connected to the database to disconnect. The database implicitly disconnects all connected users.

The next startup of the database will require instance recovery procedures.

Shutdown Timeout

Shutdown modes that wait for users to disconnect or for transactions to complete have a limit on the amount of time that they wait. If all events blocking the shutdown do not occur within one hour, the shutdown command cancels with the following message:

ORA-01013: user requested cancel of current operation

Quiescing a Database

Occasionally you might want to put a database in a state that allows only DBA transactions, queries, fetches, or PL/SQL statements. Such a state is referred to as a quiesced state, in the sense that no ongoing non-DBA transactions, queries, fetches, or PL/SQL statements are running in the system.


Note:

In this discussion of quiesce database, a DBA is defined as user SYS or SYSTEM. Other users, including those with the DBA role, are not allowed to issue the ALTER SYSTEM QUIESCE DATABASE statement or proceed after the database is quiesced.

The quiesced state lets administrators perform actions that cannot safely be done otherwise. These actions include:

Without the ability to quiesce the database, you would need to shut down the database and reopen it in restricted mode. This is a serious restriction, especially for systems requiring 24 x 7 availability. Quiescing a database is a much smaller restriction, because it eliminates the disruption to users and the downtime associated with shutting down and restarting the database.

When the database is in the quiesced state, it is the Database Resource Manager that prevents non-DBA sessions from becoming active. Therefore, while this statement is in effect, any attempt to change the current resource plan is queued until the system is unquiesced. See Chapter 24, "Using the Database Resource Manager" for more information about the Database Resource Manager.

Placing a Database into a Quiesced State

To place a database into a quiesced state, issue the following statement:

ALTER SYSTEM QUIESCE RESTRICTED;

Non-DBA active sessions will continue until they become inactive. An active session is one that is currently inside a transaction, a query, a fetch, or a PL/SQL statement, or one that is currently holding any shared resources (for example, enqueues). No inactive sessions are allowed to become active. For example, if a user issues a SQL query in an attempt to force an inactive session to become active, the query will appear to be hung. When the database is later unquiesced, the session is resumed, and the blocked action is processed.

Once all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement completes, and the database is in a quiesced state. In an Oracle Real Application Clusters environment, this statement affects all instances, not just the one that issues the statement.

The ALTER SYSTEM QUIESCE RESTRICTED statement may wait a long time for active sessions to become inactive. You can determine the sessions that are blocking the quiesce operation by querying the V$BLOCKING_QUIESCE view. This view returns only a single column: SID (Session ID). You can join it with V$SESSION to get more information about the session, as shown in the following example:

SELECT bl.sid, se.username, se.osuser, se.type, se.program
  FROM v$blocking_quiesce bl, v$session se
  WHERE bl.sid = se.sid;

See Oracle Database Reference for details on these views.
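If the query identifies a blocking session that you are willing to terminate (for example, an abandoned batch job), one option is to end that session so the quiesce can complete. The SID and serial number below are placeholders; terminating a session rolls back its transaction, so use this with care:

```sql
-- Look up the serial number for the blocking session (SID 123 is a placeholder)
SELECT sid, serial# FROM v$session WHERE sid = 123;

-- Terminate the session; its active transaction is rolled back
ALTER SYSTEM KILL SESSION '123,4567';
```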

If you interrupt the request to quiesce the database, or if your session terminates abnormally before all active sessions are quiesced, then Oracle Database automatically reverses any partial effects of the statement.

For queries that are carried out by successive multiple Oracle Call Interface (OCI) fetches, the ALTER SYSTEM QUIESCE RESTRICTED statement does not wait for all fetches to finish. It only waits for the current fetch to finish.

For both dedicated and shared server connections, all non-DBA logins after this statement is issued are queued by the Database Resource Manager, and are not allowed to proceed. To the user, it appears as if the login is hung. The login will resume when the database is unquiesced.

The database remains in the quiesced state even if the session that issued the statement exits. A DBA must log in to the database to issue the statement that specifically unquiesces the database.


Note:

You cannot perform a cold backup when the database is in the quiesced state, because Oracle Database background processes may still perform updates for internal purposes even while the database is quiesced. In addition, the file headers of online datafiles continue to appear to be accessible. They do not look the same as if a clean shutdown had been performed. However, you can still take online backups while the database is in a quiesced state.

Restoring the System to Normal Operation

The following statement restores the database to normal operation:

ALTER SYSTEM UNQUIESCE;

All non-DBA activity is allowed to proceed. In an Oracle Real Application Clusters environment, this statement is not required to be issued from the same session, or even the same instance, as that which quiesced the database. If the session issuing the ALTER SYSTEM UNQUIESCE statement terminates abnormally, then the Oracle Database server ensures that the unquiesce operation completes.

Viewing the Quiesce State of an Instance

You can query the ACTIVE_STATE column of the V$INSTANCE view to see the current state of an instance. The column has one of these values:

  • NORMAL: Normal unquiesced state.

  • QUIESCING: Being quiesced, but some non-DBA sessions are still active.

  • QUIESCED: Quiesced; no non-DBA sessions are active or allowed.
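For example, while a quiesce is in progress you can poll the instance state from a DBA session:

```sql
SELECT active_state FROM v$instance;
-- Returns NORMAL, QUIESCING, or QUIESCED
```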

Suspending and Resuming a Database

The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets you back up a database without I/O interference. When the database is suspended, all preexisting I/O operations are allowed to complete, and any new database accesses are placed in a queued state.

The suspend command is not specific to an instance. In an Oracle Real Application Clusters environment, when you issue the suspend command on one system, internal locking mechanisms propagate the halt request across instances, thereby quiescing all active instances in a given cluster. However, if someone starts a new instance while another instance is being suspended, the new instance is not suspended.

Use the ALTER SYSTEM RESUME statement to resume normal database operations. The SUSPEND and RESUME commands can be issued from different instances. For example, if instances 1, 2, and 3 are running, and you issue an ALTER SYSTEM SUSPEND statement from instance 1, then you can issue a RESUME statement from instance 1, 2, or 3 with the same effect.

The suspend/resume feature is useful in systems that allow you to mirror a disk or file and then split the mirror, providing an alternative backup and restore solution. If you use a system that is unable to split a mirrored disk from an existing database while writes are occurring, then you can use the suspend/resume feature to facilitate the split.

The suspend/resume feature is not a suitable substitute for normal shutdown operations, because copies of a suspended database can contain uncommitted updates.


Caution:

Do not use the ALTER SYSTEM SUSPEND statement as a substitute for placing a tablespace in hot backup mode. Precede any database suspend operation by an ALTER TABLESPACE BEGIN BACKUP statement.
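Putting these pieces together, a split-mirror backup might be sketched as follows. The tablespace name USERS is only an example, and the mirror-split step depends entirely on your storage vendor's tools:

```sql
-- Put the tablespace in hot backup mode before suspending
ALTER TABLESPACE users BEGIN BACKUP;
ALTER SYSTEM SUSPEND;

-- (split the mirror here using your storage vendor's utilities)

ALTER SYSTEM RESUME;
ALTER TABLESPACE users END BACKUP;
```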

The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The V$INSTANCE view is queried to confirm database status.

SQL> ALTER SYSTEM SUSPEND;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
---------
SUSPENDED

SQL> ALTER SYSTEM RESUME;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
---------
ACTIVE

See Also:

Oracle Database Backup and Recovery Advanced User's Guide for details about backing up a database using the database suspend/resume feature


12 Using Automatic Storage Management

This chapter discusses some of the concepts behind Automatic Storage Management and describes how to use it. It contains the following topics:

What Is Automatic Storage Management?

Automatic Storage Management (ASM) is an integrated file system and volume manager expressly built for Oracle database files. ASM provides the performance of raw I/O with the easy management of a file system. It simplifies database administration by eliminating the need for you to directly manage potentially thousands of Oracle database files. It does this by enabling you to divide all available storage into disk groups. You manage a small set of disk groups and ASM automates the placement of the database files within those disk groups.

In the SQL statements that you use for creating database structures such as tablespaces, control files, and redo and archive log files, you specify file location in terms of disk groups. ASM then creates and manages the associated underlying files for you.
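For example (the disk group name DATA and the tablespace name are hypothetical), creating a tablespace on ASM storage requires only the disk group name; ASM generates and manages the underlying datafile:

```sql
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 100M;
```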

Mirroring and Striping

ASM extends the power of Oracle-managed files. With Oracle-managed files, files are created and managed automatically for you, but with ASM you get the additional benefits of features such as mirroring and striping.

ASM divides files into 1 MB extents and spreads each file's extents evenly across all disks in the disk group. This optimizes performance and disk utilization, making manual I/O performance tuning unnecessary.

ASM mirroring is more flexible than operating system mirrored disks because ASM mirroring enables the redundancy level to be specified on a per-file basis. Thus two files can share the same disk group with one file being mirrored while the other is not. Mirroring takes place at the extent level. If a file is mirrored, depending upon redundancy level set for the file, each extent has one or more mirrored copies, and mirrored copies are always kept on different disks in the disk group. Table 12-1 describes the three mirroring options that ASM supports on a per-file basis.

Table 12-1 ASM Mirroring Options

Mirroring Option Description

2-way mirroring

Each extent has 1 mirrored copy.

3-way mirroring

Each extent has 2 mirrored copies.

Unprotected

ASM provides no mirroring. Used when mirroring is provided by the disk subsystem itself.


Dynamic Storage Configuration

ASM enables you to change the storage configuration without having to take the database offline. It automatically rebalances—redistributes file data evenly across all the disks of the disk group—after you add disks to or drop disks from a disk group.

Should a disk failure occur, ASM automatically rebalances to restore full redundancy for files that had extents on the failed disk. When you replace the failed disk with a new disk, ASM rebalances the disk group to spread data evenly across all disks, including the replacement disk.
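For example (the disk group and device names are hypothetical), adding or dropping a disk each trigger an automatic rebalance:

```sql
-- Adding a disk: existing file extents are spread onto the new disk
ALTER DISKGROUP data ADD DISK '/devices/diskd1';

-- Dropping a disk (by its ASM disk name): its extents are relocated first
ALTER DISKGROUP data DROP DISK diska2;
```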

Interoperability with Existing Databases

ASM does not eliminate any existing database functionality. Existing databases using file systems or with storage on raw devices can operate as they always have. New files can be created as ASM files while existing ones are administered as before. Databases can have a mixture of ASM files and non-ASM files.

ASM Instance

ASM is implemented as a special kind of Oracle instance, with its own System Global Area (SGA) and background processes. The ASM instance typically has a much smaller SGA than a database instance.

Single Instance and Clustered Environments

Each database server that has database files managed by ASM needs to be running an ASM instance. A single ASM instance can service one or more single-instance databases on a stand-alone server. Each ASM disk group can be shared among all the databases on the server.

In a clustered environment, each node runs an ASM instance, and the ASM instances communicate with each other on a peer-to-peer basis. This is true both for Real Application Clusters (RAC) environments and for non-RAC clustered environments in which multiple single-instance databases across multiple nodes share a clustered pool of storage managed by ASM. If a node is part of a RAC system, the peer-to-peer communications service is already installed on that server. If the node is part of a cluster where RAC is not installed, the Oracle Clusterware, Cluster Ready Services (CRS), must be installed on that node.

An ASM instance on a node in a cluster can manage storage simultaneously for one or more RAC database instances and one or more single instance databases.


See Also:

Oracle Database Concepts for an overview of Automatic Storage Management

Overview of the Components of Automatic Storage Management

The components of Automatic Storage Management (ASM) are disk groups, disks, failure groups, files, and templates.

Disk Groups

The primary component of ASM is the disk group. A disk group consists of a grouping of disks that are managed together as a unit. You configure ASM by creating disk groups to store database files. Oracle provides SQL statements that create and manage disk groups, their contents, and their metadata.

The disk group type determines the levels of mirroring that files in the disk group can be created with. You specify disk group type when you create the disk group. Table 12-2 lists ASM disk group types, their supported mirroring levels, and their default mirroring levels. The default mirroring level indicates the mirroring level that a file is created with unless you designate a different mirroring level. (See Templates, later in this section.)

Table 12-2 Mirroring Options for Each Disk Group Type

Disk Group Type Supported Mirroring Levels Default Mirroring Level

Normal redundancy


2-way
3-way
Unprotected (none)

2-way

High redundancy

3-way

3-way

External redundancy

Unprotected (none)

Unprotected


If you do not specify a disk group type (redundancy level) when you create a disk group, the disk group defaults to normal redundancy.

As Table 12-2 indicates, files in a high redundancy disk group are always 3-way mirrored, files in an external redundancy disk group have no ASM mirroring, and files in a normal redundancy disk group can be 2-way or 3-way mirrored or unprotected, with 2-way mirroring as the default. Mirroring level for each file is set by templates, which are described later in this section.

Disks

The disks in a disk group are referred to as ASM disks. On Windows operating systems, an ASM disk is always a partition. On all other platforms, an ASM disk can be:


Note:

Although you can also present a volume (a logical collection of disks) for management by ASM, running ASM on top of another host-based volume manager is not recommended.

When an ASM instance starts, it automatically discovers all available ASM disks. Discovery is the process of determining every disk device to which the ASM instance has been given I/O permissions (by some operating system mechanism), and of examining the contents of the first block of such disks to see if they are recognized as belonging to a disk group. ASM discovers disks in the paths that are listed in an initialization parameter, or if the parameter is NULL, in an operating system–dependent default path.

Failure Groups

Failure groups define ASM disks that share a common potential failure mechanism. An example of a failure group is a set of SCSI disks sharing the same SCSI controller. Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if two-way mirroring is specified for a file, ASM automatically stores redundant copies of file extents in separate failure groups. Failure groups apply only to normal and high redundancy disk groups. You define the failure groups in a disk group when you create or alter the disk group.
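A sketch of a disk group created with explicit failure groups follows; the failure group names and device paths are hypothetical. With 2-way mirroring, ASM places the two copies of each extent in different failure groups, so the loss of one controller does not lose both copies:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/devices/diska1', '/devices/diska2'
  FAILGROUP controller2 DISK '/devices/diskb1', '/devices/diskb2';
```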



Files

Files written on ASM disks are ASM files, whose names are automatically generated by ASM. You can specify user-friendly alias names (or just aliases) for ASM files, and you can create a hierarchical directory structure for these aliases. Each ASM file is completely contained within a single disk group, and is evenly spaced over all of the ASM disks in the disk group.

Templates

Templates are collections of file attribute values, and are used to set mirroring and striping attributes of each type of database file (datafile, control file, redo log file, and so on) created in ASM disk groups. Each disk group has a default template associated with each file type. See "Managing Disk Group Templates" for more information.

You can also create your own templates to meet unique requirements. You can then include a template name when creating a file, thereby assigning desired attributes on an individual file basis rather than on the basis of file type. See "About ASM Filenames" for more information.

Administering an Automatic Storage Management Instance

Administering an Automatic Storage Management (ASM) instance is similar to managing a database instance, but with fewer tasks. An ASM instance does not require a database instance to be running for you to administer it.

You can perform all ASM administration tasks with SQL*Plus.


Note:

To administer ASM with SQL*Plus, you must set the ORACLE_SID environment variable to the ASM SID before you start SQL*Plus. If ASM and the database have different Oracle homes, you must also set the ORACLE_HOME environment variable to point to the ASM Oracle home. Depending on platform, you may have to change other environment variables as well. See "Selecting an Instance with Environment Variables" for more information.

The default ASM SID for a single instance database is +ASM, and the default SID for ASM on Real Application Clusters nodes is +ASMnode#.
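For example, on a UNIX system the environment might be prepared as follows before starting SQL*Plus; the Oracle home path is only an example:

```shell
# Point the session at the ASM instance (default single-instance SID is +ASM)
export ORACLE_SID=+ASM
# Hypothetical ASM Oracle home; substitute your actual path
export ORACLE_HOME=/u01/app/oracle/product/asm
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_SID is $ORACLE_SID"
# Then connect: sqlplus / as sysdba
```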


You can also use Oracle Enterprise Manager (EM) or the Database Configuration Assistant (DBCA) for configuring and altering disk groups. DBCA eases the configuration and creation of your database while EM provides an integrated graphical approach for managing both your ASM instance and database instance. See Appendix A of Oracle Database 2 Day DBA for instructions on administering an ASM instance with EM.

This section contains the following topics:


See Also:

Oracle Database 2 Day DBA for information on administering ASM with Enterprise Manager.

Installing ASM

Because ASM is integrated into the database server, you use the Oracle Universal Installer (OUI) and the Database Configuration Assistant (DBCA) to install and initially configure it. OUI has options to either install and configure a database that uses ASM for storage management, or to install and configure an ASM instance by itself, without creating a database instance. Refer to the Oracle Database Installation Guide for your operating system for details on installing ASM.

ASM Installation Tips

Keep the following in mind when installing ASM:

  • When running more than one database instance on a single server or node, it is recommended that you install ASM in its own Oracle home on that server or node. This is advisable even if you are running only one database instance but plan to add one or more database instances to the server or node in the future.

    With separate Oracle homes, you can upgrade and patch ASM and databases independently, and you can deinstall database software without impacting the ASM instance.

    If an ASM instance does not already exist and you select the OUI option to install and configure ASM only, OUI installs ASM in its own Oracle home.

  • If you are running a single database instance on a server or node and have no plans to add one or more database instances to this server or node, ASM and the database can share a single Oracle home.

    If an ASM instance does not already exist and you select the OUI option to create a database that uses ASM for storage, OUI creates this single-home configuration.

  • When you install ASM in a single-instance configuration, DBCA creates a separate server parameter file (SPFILE) and password file for the ASM instance.

  • When installing ASM in a clustered environment where the ASM Oracle home is shared among all nodes, DBCA creates an SPFILE for ASM. In a clustered environment without a shared ASM Oracle home, DBCA installs a text initialization parameter file (PFILE) for ASM on each node.

  • Before installing ASM, you may want to install the ASM support library, ASMLib. ASMLib is an application program interface (API) developed by Oracle to simplify the operating system–to-database interface, and to exploit the capabilities and strengths of vendors' storage arrays. The purpose of ASMLib, which is an optional add-on to ASM, is to provide an alternative interface for the ASM-enabled kernel to discover and access block devices. It provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.

    See the ASM page of the Oracle Technology Network web site at http://www.oracle.com/technology/products/database/asm for more information on ASMLib. To download ASMLib for Linux, go to http://www.oracle.com/technology/tech/linux/asmlib.

Authentication for Accessing an ASM Instance

ASM security considerations derive from the fact that a particular ASM instance is tightly bound to one or more database instances operating on the same server. In effect, the ASM instance is a logical extension of these database instances. Both the ASM instance and the database instances must have equivalent operating system access rights (read/write) to the disk group member disks. For UNIX this is typically provided through shared UNIX group membership. See the Oracle Database Installation Guide for your operating system for information on how to ensure that the ASM and database instances have the proper access to member disks.

ASM instances do not have a data dictionary, so the only way to connect to one is as an administrator. This means that you use operating system authentication and connect as SYSDBA, or when connecting remotely through Oracle Net Services, you use a password file.

Operating System Authentication for ASM

Using operating system authentication, the authorization to connect locally with the SYSDBA privilege is granted through the use of a special operating system user group, generically referred to as OSDBA. (On UNIX, OSDBA is typically the dba group.) See "Using Operating System Authentication" for more information about OSDBA.

By default, members of the OSDBA group are authorized to connect with the SYSDBA privilege on all instances on the node, including the ASM instance. Users who connect to the ASM instance with SYSDBA privilege have complete administrative access to all disk groups that are managed by that ASM instance.


Note:

The user that is the software owner for the database Oracle home (typically, the user named oracle) must be a member of the OSDBA group defined for the ASM Oracle home. This is automatically the case when ASM and a single instance of Oracle Database share the same Oracle home. If you install the ASM and database instances in separate Oracle homes, you must ensure that you configure the proper group memberships, otherwise the database instance will not be able to connect to the ASM instance.

Password File Authentication for ASM

To enable remote administration of ASM (through Oracle Net Services), a password file must be created for ASM. A password file is also required for Enterprise Manager to connect to ASM.

The Oracle Database Configuration Assistant (DBCA) creates a password file for ASM when it initially configures ASM disk groups. Like a database password file, the only user added to the password file upon creation is SYS. If you want to add other users to the password file, you must share the password file with a database instance and add the users with the database.

If you configure an ASM instance without using DBCA, you must create a password file yourself. See "Creating and Maintaining a Password File" for more information.

Setting Initialization Parameters for an ASM Instance

The ASM instance has its own initialization parameter file, which, like that of the database instance, can be a server parameter file (SPFILE) or a text file.


Note:

For ASM installations in clustered environments, server parameter files (SPFILEs) are not used unless there is a shared ASM Oracle home. Without a shared ASM Oracle home, each ASM instance gets its own text initialization parameter file (PFILE).

The ASM parameter file name is distinguished from the database file name by the SID embedded in the name. (The SID for ASM defaults to +ASM for a single-instance database and +ASMnode# for Real Application Clusters nodes.) The same rules for file name, default location, and search order that apply to the database initialization parameter file apply to the ASM initialization parameter file. Thus, on single-instance Unix platforms, the server parameter file for ASM would have the following path:

$ORACLE_HOME/dbs/spfile+ASM.ora

For more information about initialization parameter files, see "Understanding Initialization Parameters".

Some initialization parameters are specifically relevant to an ASM instance. Of those initialization parameters intended for a database instance, only a few are relevant to an ASM instance. You can set those parameters at database creation time using Database Configuration Assistant or later using Enterprise Manager. The remainder of this section describes setting the parameters manually by editing the initialization parameter file.

Initialization Parameters for ASM Instances

The following initialization parameters relate to an ASM instance. Parameters that start with ASM_ cannot be set in database instances.

Name Description
INSTANCE_TYPE Must be set to ASM

Note: This is the only required parameter. All other parameters take suitable defaults for most environments.

ASM_POWER_LIMIT The default power for disk rebalancing.

Default: 1, Range: 0 – 11

See Also: "Tuning Rebalance Operations"

ASM_DISKSTRING A comma-separated list of strings that limits the set of disks that ASM discovers. May include wildcard characters. Only disks that match one of the strings are discovered. String format depends on the ASM library in use and on the operating system. The standard system library for ASM supports glob pattern matching.

For example, on a Solaris server that does not use ASMLib, to limit discovery to disks that are in the /dev/rdsk/ directory, ASM_DISKSTRING would be set to:

/dev/rdsk/*

Note that the asterisk cannot be omitted. To limit discovery to disks in this directory that have a name that ends in s3 or s4, ASM_DISKSTRING would be set to:

/dev/rdsk/*s3,/dev/rdsk/*s4

This could be simplified to:

/dev/rdsk/*s[34]

The ? character, when used as the first character of a path, expands to the Oracle home directory. Depending on operating system, when the ? character is used elsewhere in the path, it is a wildcard for a single character.

Default: NULL. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read/write access. The default search path is platform-specific. See the Oracle Database Administrator's Reference for UNIX Systems on OTN for a list of default search paths for Unix platforms. For the Windows platform, the default search path is \\.\ORCLDISK*. See the Oracle Database Installation Guide for Windows or the Real Application Clusters Quick Installation Guide for Oracle Database Standard Edition for Windows for more information.

See Also: "Improving Disk Discovery Time"

ASM_DISKGROUPS A list of the names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT statement is used.

Default: NULL (If this parameter is not specified, then no disk groups are mounted.)

This parameter is dynamic, and if you are using a server parameter file (SPFILE), you should not need to manually alter this value. ASM automatically adds a disk group to this parameter when the disk group is successfully created or mounted, and automatically removes a disk group from this parameter when the disk group is dropped or dismounted. However, when using a text initialization parameter file (PFILE), you must edit the initialization parameter file to add the name of any disk group that you want automatically mounted at instance startup, and remove the name of any disk group that you no longer want automatically mounted.

Note: Issuing the ALTER DISKGROUP...ALL MOUNT or ALTER DISKGROUP...ALL DISMOUNT command does not affect the value of this parameter.
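Putting these parameters together, a minimal text initialization parameter file for an ASM instance might look like the following; the disk string and disk group names are examples only:

```
INSTANCE_TYPE   = ASM
ASM_POWER_LIMIT = 1
ASM_DISKSTRING  = '/dev/rdsk/*'
ASM_DISKGROUPS  = DATA, RECOVERY
```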


Tuning Rebalance Operations

If the POWER clause is not specified in an ALTER DISKGROUP command, or when rebalance is implicitly invoked by adding or dropping a disk, the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can adjust this parameter dynamically. The higher the limit, the faster a rebalance operation may complete. Lower values cause rebalancing to take longer, but consume fewer processing and I/O resources. This leaves these resources available for other applications, such as the database. The default value of 1 minimizes disruption to other applications. The appropriate value is dependent upon your hardware configuration as well as performance and availability requirements.

If a rebalance is in progress because a disk is manually or automatically dropped, increasing the power of the rebalance shortens the window during which redundant copies of that data on the dropped disk are reconstructed on other disks.

The V$ASM_OPERATION view provides information that can be used for adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations. The V$ASM_OPERATION view also gives an estimate in the EST_MINUTES column of the amount of time remaining for the rebalance operation to complete. You can see the effect of changing the rebalance power by observing the change in the time estimate.
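For example, you might raise the default rebalance power dynamically and then watch the time estimate change (a sketch; the power value is illustrative):

```sql
-- Raise the default power for rebalance operations
ALTER SYSTEM SET ASM_POWER_LIMIT = 5;

-- Check progress and the updated time estimate
SELECT operation, state, power, sofar, est_work, est_minutes
  FROM v$asm_operation;
```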


See Also:

"Manually Rebalancing a Disk Group" for more information.

Improving Disk Discovery Time

The value for the ASM_DISKSTRING initialization parameter is an operating system–dependent value used by ASM to limit the set of paths that the discovery process uses to search for disks. When a new disk is added to a disk group, each ASM instance that has the disk group mounted must be able to discover the new disk using its ASM_DISKSTRING.

In many cases, the default value (NULL) is sufficient. Using a more restrictive value may reduce the time required for ASM to perform discovery, and thus improve disk group mount time or the time for adding a disk to a disk group. It may be necessary to dynamically change the ASM_DISKSTRING before adding a disk so that the new disk will be discovered through this parameter.
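For example, you might restrict discovery to a specific set of device paths before adding a disk (a sketch; the path pattern is illustrative and operating system-dependent):

```sql
ALTER SYSTEM SET ASM_DISKSTRING = '/devices/disk*';
```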

Note that the default value of ASM_DISKSTRING may not find all disks in all situations. If your site is using a third-party vendor ASMLib, that vendor may have discovery string conventions you must use for ASM_DISKSTRING. In addition, if your installation uses multipathing software, the software may place pseudo-devices in a path that is different from the operating system default. Consult the multipathing vendor documentation for details.

Behavior of Database Initialization Parameters in an ASM Instance

If you specify a database instance initialization parameter in an ASM initialization parameter file, it can have one of these effects:

  • If the parameter is not valid in the ASM instance, it produces an ORA-15021 error.

  • If the database parameter is valid in an ASM instance, for example parameters relating to dump destinations and some buffer cache parameters, ASM accepts the parameter. In general, ASM selects appropriate defaults for any database parameters that are relevant to the ASM instance.

Behavior of ASM Initialization Parameters in a Database Instance

If you specify any of the ASM-specific parameters (names starting with ASM_) in a database instance parameter file, you receive an ORA-15021 error.

Starting Up an ASM Instance

ASM instances are started similarly to Oracle database instances with some minor differences:

  • To connect to the ASM instance with SQL*Plus, you must set the ORACLE_SID environment variable to the ASM SID. (The default ASM SID for a single instance database is +ASM, and the default SID for ASM on Real Application Clusters is +ASMnode#.) Depending on your operating system and whether or not you installed ASM in its own Oracle home, you may have to change other environment variables. For more information, see "Selecting an Instance with Environment Variables".

  • The initialization parameter file, which can be a server parameter file, must contain:

    INSTANCE_TYPE = ASM
    
    

    This parameter signals the Oracle executable that an ASM instance is starting and not a database instance.

  • The STARTUP command, rather than trying to mount and open a database, tries to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If ASM_DISKGROUPS is blank, the ASM instance starts and warns that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command.

The SQL*Plus STARTUP command parameters are interpreted by ASM as follows:

STARTUP Parameter Description
FORCE Issues a SHUTDOWN ABORT to the ASM instance before restarting it
MOUNT, OPEN Mounts the disk groups specified in the ASM_DISKGROUPS initialization parameter. This is the default if no command parameter is specified.
NOMOUNT Starts up the ASM instance without mounting any disk groups

The following is a sample SQL*Plus session in which an ASM instance is started.

% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.

SQL> STARTUP
ASM instance started

Total System Global Area   71303168 bytes
Fixed Size                  1069292 bytes
Variable Size              45068052 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

ASM Instance Memory Requirements

ASM instances are smaller than database instances. A 64 MB SGA should be sufficient for all but the largest ASM installations. Total memory footprint for a typical ASM instance is approximately 100 MB.

CSS Requirement

The Cluster Synchronization Services (CSS) daemon is required to enable synchronization between ASM and its client database instances. The CSS daemon is normally started (and configured to start upon reboot) when you use Database Configuration Assistant (DBCA) to create your database. If you did not use DBCA to create the database, you must ensure that the CSS daemon is running before you start the ASM instance.

CSS Daemon on the Linux and Unix Platforms

To determine if the CSS daemon is running, issue the command crsctl check cssd. If you receive the message CSS appears healthy, the CSS daemon is running.

To start the CSS daemon and configure the host to always start the daemon upon reboot, do the following:

  1. Log in to the host as root.

  2. Ensure that $ORACLE_HOME/bin is in your PATH environment variable.

  3. Enter the following command:

    localconfig add
    
CSS Daemon on the Windows Platform

You can also use the crsctl and localconfig commands on the Windows platform to check the status of the CSS daemon or to start it. If you prefer to use Windows GUI tools, you can do the following:

To determine if the CSS daemon is properly configured and running, double-click the Services icon in the Windows Control Panel, and look for the OracleCSService service. Its status should be Started and its startup type should be Automatic.

Refer to the Windows documentation for information on how to start a Windows service and how to configure it for Automatic startup.

Disk Discovery

When an ASM instance initializes, ASM discovers and examines the contents of all of the disks that are in the paths designated in the ASM_DISKSTRING initialization parameter. For purposes of this discussion, a "disk" is an ASM disk as defined in "Overview of the Components of Automatic Storage Management". Disk discovery also takes place when you:

  • Run the ALTER DISKGROUP...ADD DISK and ALTER DISKGROUP...RESIZE DISK commands.

  • Query the V$ASM_DISKGROUP and V$ASM_DISK views.

After a disk is successfully discovered, it appears in the V$ASM_DISK view. Disks that belong to a disk group—that is, that have a disk group name in the disk header—have a header status of MEMBER. Disks that were discovered but that have not yet been assigned to a disk group have a header status of either CANDIDATE or PROVISIONED.


Note:

The PROVISIONED header status implies that an additional platform-specific action has been taken by an administrator to make the disk available for ASM. For example, on Windows, the administrator used asmtool or asmtoolg to stamp the disk with a header, or on Linux, the administrator used ASMLib to prepare the disk for ASM.

The following query shows six MEMBER disks and one CANDIDATE disk.

SQL> select name, header_status, path from v$asm_disk;
 
NAME         HEADER_STATUS PATH
------------ ------------- -------------------------
             CANDIDATE     /dev/rdsk/disk07
DISK06       MEMBER        /dev/rdsk/disk06
DISK05       MEMBER        /dev/rdsk/disk05
DISK04       MEMBER        /dev/rdsk/disk04
DISK03       MEMBER        /dev/rdsk/disk03
DISK02       MEMBER        /dev/rdsk/disk02
DISK01       MEMBER        /dev/rdsk/disk01
 
7 rows selected.

Discovery Rules

Rules for discovery are as follows:

  • ASM discovers no more than 10,000 disks. That is, if more than 10,000 disks match the ASM_DISKSTRING initialization parameter, only the first 10,000 are discovered.

  • ASM does not discover a disk that contains an operating system partition table, even if the disk is in an ASM_DISKSTRING search path and ASM has read/write permission on the disk.

  • If ASM recognizes a disk header as that of an Oracle object, such as the header of an Oracle datafile, the disk is discovered, but can only be added to a disk group with the FORCE keyword. Such a disk appears in V$ASM_DISK with a header status of FOREIGN.

In addition, ASM identifies the following configuration errors during discovery:

  • Multiple paths to the same disk

    In this case, if the disk is part of a disk group, disk group mount fails. If the disk is being added to a disk group with the ADD DISK or CREATE DISKGROUP command, the command fails. To correct the error, restrict ASM_DISKSTRING so that it does not include multiple paths to the same disk, or if you are using multipathing software, ensure that you include only the pseudo-device in ASM_DISKSTRING.

  • Multiple ASM disks with the same disk header

    This can be caused by a bit copy of one disk onto another. In this case, disk group mount fails.

Disk Group Recovery

As with any other file system or volume manager, if an ASM instance fails, then all Oracle database instances on the same node that use a disk group managed by that ASM instance also fail. In a single ASM instance configuration, if the ASM instance fails while ASM metadata is open for update, then after the ASM instance reinitializes, it reads the disk group log and recovers all transient changes.

With multiple ASM instances sharing disk groups, if one ASM instance should fail, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. The failure of an Oracle database instance is not significant here because only ASM instances update ASM metadata.

Shutting Down an ASM Instance

Automatic Storage Management shutdown is initiated by issuing the SHUTDOWN command in SQL*Plus.

You must first ensure that the ORACLE_SID environment variable is set to the ASM SID to connect to the ASM instance. Depending on your operating system and whether or not you installed ASM in its own Oracle home, you may have to change other environment variables before starting SQL*Plus. For more information, see "Selecting an Instance with Environment Variables".

% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected.
SQL> SHUTDOWN NORMAL

The table that follows lists the SHUTDOWN modes and describes the behavior of the ASM instance in each mode.

SHUTDOWN Mode Action Taken By Automatic Storage Management
NORMAL, IMMEDIATE, or TRANSACTIONAL ASM waits for any in-progress SQL to complete before doing an orderly dismount of all disk groups and shutting down the ASM instance. If any database instances are connected to the ASM instance, the SHUTDOWN command returns an error and leaves the ASM instance running.
ABORT The ASM instance immediately shuts down without the orderly dismount of disk groups. This causes recovery upon the next startup of ASM. If any database instance is connected to the ASM instance, the database instance aborts.

It is strongly recommended that you shut down all database instances that use the ASM instance before shutting down the ASM instance.

Administering Automatic Storage Management Disk Groups

This section explains how to create and manage your Automatic Storage Management (ASM) disk groups. If you have one or more database instances that use ASM, you can keep them open and running while you administer disk groups.

You can administer ASM disk groups with Database Configuration Assistant (DBCA), Enterprise Manager (EM), or SQL*Plus. In addition, the ASM command-line utility, ASMCMD, enables you to easily view and manipulate files and directories within disk groups.

This section provides details on administration with SQL*Plus, but includes background information that applies to all administration methods.

The SQL statements introduced in this section are only available in an ASM instance.


Note:

All sample SQL*Plus sessions in this section assume that the ORACLE_SID environment variable is changed to the ASM SID before starting SQL*Plus. Depending on your operating system and whether or not you installed ASM in its own Oracle home, other environment variables may have to be changed as well. For more information, see "Selecting an Instance with Environment Variables".

The following topics are contained in this section:


See Also:


Considerations and Guidelines for Configuring Disk Groups

The following are some considerations and guidelines to be aware of as you configure disk groups.

Determining the Number of Disk Groups

The following criteria can help you determine the number of disk groups that you create:

  • Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then it would be better to form several disk groups accordingly.

  • For recovery reasons, you might feel more comfortable having separate disk groups for your database files and flash recovery area files. Using this approach, even with the loss of one disk group, the database would still be intact.

Performance Characteristics when Grouping Disks

ASM eliminates the need for manual I/O tuning. However, to ensure consistent performance, you should avoid placing dissimilar disks in the same disk group. For example, the newest and fastest disks might reside in a disk group reserved for the database work area, and slower drives could reside in a disk group reserved for the flash recovery area.

ASM load balances file activity by uniformly distributing file extents across all disks in a disk group. For this technique to be effective, it is important that the disks in a disk group have similar performance characteristics.

There may be situations where it is acceptable for disks of different sizes and performance characteristics to coexist temporarily in a disk group, for example, when migrating from an old set of disks to a new set. The new disks are added and the old disks dropped; as the old disks are dropped, their storage is migrated to the new disks while the disk group remains online.

Effects of Adding and Dropping Disks from a Disk Group

ASM automatically rebalances whenever disks are added or dropped. For a normal drop operation (without the FORCE option), a disk is not released from a disk group until data is moved off of the disk through rebalancing. Likewise, a newly added disk cannot support its share of the I/O workload until rebalancing completes. It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data.

For a drop operation, when rebalance is complete, ASM takes the disk offline momentarily, and then drops it, setting disk header status to FORMER.

You can add or drop disks without shutting down the database. However, a performance impact on I/O activity may result.
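For example, one statement can both add a new disk and drop an old one, so that a single rebalance covers both changes (a sketch; the disk group name, path, and disk name are illustrative):

```sql
ALTER DISKGROUP dgroup1
  ADD DISK '/devices/diska5'
  DROP DISK diska1;
```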

How ASM Handles Disk Failures

Depending on the redundancy level of a disk group and how failure groups are defined, the failure of one or more disks could result in either of the following:

  • The disks are first taken offline and then automatically dropped.

    The disk group remains mounted and serviceable, and, thanks to mirroring, all disk group data remains accessible. Following the disk drop, ASM performs a rebalance to restore full redundancy for the data on the failed disks.

  • The entire disk group is automatically dismounted, which means loss of data accessibility.

Disk failure in this context means individual spindle failure or failure of another disk subsystem component, such as power supply, a controller, or host bus adapter. Here are the rules for how ASM handles disk failures:

  • A failure group is considered to have failed if at least one disk in the failure group fails.

  • A normal redundancy disk group can tolerate the failure of one failure group. If only one failure group fails, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks (including the surviving disks in the failed failure group) to restore redundancy for the data in the failed disks. If more than one failure group fails, ASM dismounts the disk group.

  • A high redundancy disk group can tolerate the failure of two failure groups. If one or two failure groups fail, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks to restore redundancy for the data in the failed disks. If more than two failure groups fail, ASM dismounts the disk group.

  • An external redundancy disk group cannot tolerate the failure of any disks in the disk group. Any kind of disk failure causes ASM to dismount the disk group.

When considering these rules, remember that if a disk is not explicitly assigned to a failure group with the CREATE DISKGROUP command, ASM puts the disk in its own failure group. Also, failure of one disk in a failure group does not affect the other disks in a failure group. For example, a failure group could consist of 6 disks connected to the same disk controller. If one of the 6 disks fails, the other 5 disks can continue to operate. The failed disk is dropped from the disk group and the other 5 remain in the disk group. Depending on the rules stated previously, the disk group may then remain mounted, or it may be dismounted.

When ASM drops a disk, the disk is not automatically added back to the disk group when it is repaired or replaced. You must issue an ALTER DISKGROUP...ADD DISK command to return the disk to the disk group. Similarly, when ASM automatically dismounts a disk group, you must issue an ALTER DISKGROUP...MOUNT command to remount the disk group.
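For example, after a failed disk is repaired or replaced, you add it back explicitly, and you remount a dismounted disk group explicitly (a sketch; the disk group name and path are illustrative):

```sql
-- Return a repaired or replaced disk to the disk group
ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska3';

-- Remount a disk group that ASM dismounted
ALTER DISKGROUP dgroup1 MOUNT;
```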



Failure Groups and Mirroring

Mirroring of metadata and user data is achieved through failure groups. System reliability can be hampered if an insufficient number of failure groups are provided. Consequently, failure group configuration is very important to creating a highly reliable system. Here are some rules and guidelines for failure groups:

  • Each disk in a disk group belongs to exactly one failure group.

  • After a disk has been assigned to a failure group, it cannot be reassigned to another failure group. If it needs to be in another failure group, it can be dropped from the disk group and then added back. Because the choice of failure group depends on hardware configuration, a disk does not need to be reassigned unless it is physically moved.

  • It is best if all failure groups are the same size. Failure groups of different sizes can lead to wasted disk space.

  • ASM requires at least two failure groups to create a normal redundancy disk group and at least three failure groups to create a high redundancy disk group. This implies that if you do not explicitly define failure groups, a normal redundancy disk group requires at least two disks, and a high redundancy disk group requires at least three disks.

  • Most systems do not need to explicitly define failure groups. The default behavior of putting every disk in its own failure group works well for most installations. Failure groups are only needed for large, complex systems that need to protect against failures other than individual spindle failures.

  • Choice of failure groups depends on the kinds of failures that need to be tolerated without loss of data availability. For small numbers of disks (fewer than 20) it is usually best to use default failure groups, where every disk is in its own failure group. This is true even for large numbers of disks when the main concern is spindle failure. If there is a need to protect against the simultaneous loss of multiple disk drives due to a single component failure, you can specify failure groups. For example, a disk group may be constructed from several small modular disk arrays. If the system needs to continue operation when an entire modular array fails, a failure group should consist of all the disks in one module. If one module fails, a rebalance occurs on the surviving modules to restore redundancy to the data that was on the failed module. You must place disks in the same failure group if they depend on a common piece of hardware whose failure needs to be tolerated with no loss of availability.

  • Having a small number of large failure groups may actually reduce availability in some cases. For example, half the disks in a disk group could be on one power supply, while the other half are on a different power supply. If this is used to divide the disk group into two failure groups, tripping the breaker on one power supply could drop half the disks in the disk group. Restoring redundancy for data on the dropped disks would require copying all the data from the surviving disks. This can be done online, but consumes a lot of I/O and leaves the disk group unprotected against a spindle failure during the copy. However, if each disk were in its own failure group, the disk group would be dismounted when the breaker tripped (assuming that this caused more failure groups to fail than the disk group can tolerate). Resetting the breaker would allow the disk group to be manually remounted and no data copying would be needed.

Managing Capacity in Disk Groups

You must have sufficient spare capacity in each disk group to handle the largest failure that you are willing to tolerate. After one or more disks fail, the process of restoring redundancy for all data requires space from the surviving disks in the disk group. If not enough space remains, some files may end up with reduced redundancy.

Reduced redundancy means that one or more extents in the file are not mirrored at the expected level. For example, a reduced redundancy file in a high redundancy disk group has at least one file extent with two or fewer total copies of the extent instead of three. In the case of unprotected files, data extents could be missing altogether. Other causes of reduced redundancy files are disks running out of space or an insufficient number of failure groups. The V$ASM_FILE column REDUNDANCY_LOWERED indicates a file with reduced redundancy.

The following guidelines help ensure that you have sufficient space to restore full redundancy for all disk group data after the failure of one or more disks.

  • In a normal redundancy disk group, it is best to have enough free space in your disk group to tolerate the loss of all disks in one failure group. The amount of free space should be equivalent to the size of the largest failure group.

  • In a high redundancy disk group, it is best to have enough free space to cope with the loss of all disks in two failure groups. The amount of free space should be equivalent to the sum of the sizes of the two largest failure groups.

The V$ASM_DISKGROUP view contains columns that help you manage capacity:

  • REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in the disk group to restore full redundancy after the worst failure that can be tolerated by the disk group.

  • USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files.

USABLE_FILE_MB is computed by subtracting REQUIRED_MIRROR_FREE_MB from the total free space in the disk group and then adjusting for mirroring. For example, in a normal redundancy disk group, where by default mirrored files take up disk space equal to twice their size, if 4 GB of free space remains after setting aside REQUIRED_MIRROR_FREE_MB, then USABLE_FILE_MB equals roughly 2 GB. You can then add a file of up to 2 GB.

The following query shows capacity metrics for a normal redundancy disk group that consists of six 1 GB (1024 MB) disks, each in its own failure group:

select name, type, total_mb, free_mb, required_mirror_free_mb, 
usable_file_mb from v$asm_diskgroup;

NAME         TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------ ------ ---------- ---------- ----------------------- --------------
DISKGROUP1   NORMAL       6144       3768                    1024           1372

The REQUIRED_MIRROR_FREE_MB column shows that 1 GB of extra capacity must be available to restore full redundancy after one or more disks fail. Note that the first three numeric columns in the query results are raw numbers. That is, they do not take redundancy into account. Only the last column is adjusted for normal redundancy. Notice that:

FREE_MB - REQUIRED_MIRROR_FREE_MB = 2 * USABLE_FILE_MB

or

3768 - 1024 = 2 * 1372 = 2744
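The relationship above can be sketched as a small helper (a hypothetical function, not an Oracle API; the redundancy factors assume the default 2-way and 3-way mirroring):

```python
# Sketch of the USABLE_FILE_MB arithmetic (hypothetical helper, not an Oracle API).
REDUNDANCY_FACTOR = {"EXTERN": 1, "NORMAL": 2, "HIGH": 3}

def usable_file_mb(free_mb, required_mirror_free_mb, group_type="NORMAL"):
    """Free space, adjusted for mirroring, that is available for new files."""
    factor = REDUNDANCY_FACTOR[group_type]
    return (free_mb - required_mirror_free_mb) // factor

# Values from the V$ASM_DISKGROUP query above:
print(usable_file_mb(3768, 1024))  # -> 1372
```

Note that the result goes negative when FREE_MB falls below REQUIRED_MIRROR_FREE_MB.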

Negative Values of USABLE_FILE_MB Due to the relationship between FREE_MB, REQUIRED_MIRROR_FREE_MB, and USABLE_FILE_MB, USABLE_FILE_MB can go negative. Although this is not necessarily a critical situation, it does mean that:

  • Depending on the value of FREE_MB, you may not be able to create new files.

  • The next failure may result in files with reduced redundancy.

If USABLE_FILE_MB becomes negative, it is strongly recommended that you add more space to the disk group as soon as possible.

Scalability

ASM imposes the following limits:

  • 63 disk groups in a storage system

  • 10,000 ASM disks in a storage system

  • 4 petabyte maximum storage for each ASM disk

  • 40 exabyte maximum storage for each storage system

  • 1 million files for each disk group

  • Maximum files sizes as shown in the following table:

    Disk Group Type Maximum File Size
    External redundancy 35 TB
    Normal redundancy 5.8 TB
    High redundancy 3.9 TB

Creating a Disk Group

You use the CREATE DISKGROUP statement to create disk groups. This statement enables you to assign a name to the disk group and to specify the disks that are to be formatted as ASM disks belonging to the disk group. You specify the disks as one or more operating system dependent search strings that ASM then uses to find the disks.

You can specify the disks as belonging to specific failure groups, and you can specify the redundancy level for the disk group.

If you want ASM to mirror files, you specify the redundancy level as NORMAL REDUNDANCY (2-way mirroring by default for most file types) or HIGH REDUNDANCY (3-way mirroring for all files). You specify EXTERNAL REDUNDANCY if you want no mirroring by ASM. For example, you might choose EXTERNAL REDUNDANCY if you want to use storage array protection features. See the Oracle Database SQL Reference for more information on redundancy levels. See "Overview of the Components of Automatic Storage Management" and "Failure Groups and Mirroring" for information on failure groups.

ASM programmatically determines the size of each disk. If for some reason this is not possible, or if you want to restrict the amount of space used on a disk, you can specify a SIZE clause for each disk. ASM creates operating system–independent names for the disks in a disk group that you can use to reference the disks in other SQL statements. Optionally, you can provide your own name for a disk with the NAME clause. Disk names are available in the V$ASM_DISK view.

The ASM instance ensures that any disk being included in the newly created disk group is addressable and is not already a member of another disk group. This requires reading the first block of the disk to determine if it already belongs to a disk group. If not, a header is written. It is not possible for a disk to be a member of multiple disk groups.

If a disk header indicates that the disk is part of another disk group, you can force it to become a member of the disk group you are creating by specifying the FORCE clause. For example, a disk with an ASM header might have failed temporarily, so that its header could not be cleared when it was dropped from its disk group. After the disk is repaired, it is no longer part of any disk group, but it still has an ASM header. The FORCE flag is required to use the disk in a new disk group. The original disk group must not be mounted, and the disk must have a disk group header; otherwise, the operation fails. Note that forcing a disk into a new disk group in this way may render another disk group unusable. If you specify NOFORCE, which is the default, you receive an error if you attempt to include a disk that already belongs to another disk group.

The CREATE DISKGROUP statement mounts the disk group for the first time, and adds the disk group name to the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used and you want the disk group to be automatically mounted at instance startup, then you must remember to add the disk group name to the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.

Creating a Disk Group: Example

The following examples assume that the ASM_DISKSTRING initialization parameter is set to '/devices/*' and that the following conditions hold:

  • ASM disk discovery identifies the following disks in directory /devices.

/devices/diska1
/devices/diska2
/devices/diska3
/devices/diska4
/devices/diskb1
/devices/diskb2
/devices/diskb3
/devices/diskb4
  • The disks diska1 - diska4 are on a separate SCSI controller from disks diskb1 - diskb4.

The following SQL*Plus session illustrates starting an ASM instance and creating a disk group named dgroup1.

% SQLPLUS /NOLOG
SQL> CONNECT / AS SYSDBA
Connected to an idle instance.
SQL> STARTUP NOMOUNT
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY 
  2  FAILGROUP controller1 DISK
  3 '/devices/diska1',
  4 '/devices/diska2',
  5 '/devices/diska3',
  6 '/devices/diska4'
  7 FAILGROUP controller2 DISK
  8 '/devices/diskb1',
  9 '/devices/diskb2',
 10 '/devices/diskb3',
 11 '/devices/diskb4';

In this example, dgroup1 is composed of eight disks that are defined as belonging to either failure group controller1 or controller2. Because NORMAL REDUNDANCY level is specified for the disk group, ASM provides mirroring for each type of database file according to the mirroring settings in the system default templates.

For example, in the system default templates shown in Table 12-5, default redundancy for the online redo log files (ONLINELOG template) for a normal redundancy disk group is MIRROR. This means that when one copy of a redo log file extent is written to a disk in failure group controller1, a mirrored copy of the file extent is written to a disk in failure group controller2. You can see that to support the default mirroring of a normal redundancy disk group, at least two failure groups must be defined.


Note:

If you do not specify failure groups, each disk is automatically placed in its own failure group.

Because no NAME clauses are provided for any of the disks being included in the disk group, the disks are assigned the names of dgroup1_0001, dgroup1_0002, ..., dgroup1_0008.


Note:

If you do not provide a NAME clause and you assigned a label to a disk through ASMLib, the label is used as the disk name.

Altering the Disk Membership of a Disk Group

After you create a disk group, you can change its composition by adding more disks, resizing disks, or dropping disks. You use clauses of the ALTER DISKGROUP statement to perform these actions, and you can perform multiple operations with one ALTER DISKGROUP statement.

ASM automatically rebalances when the composition of a disk group changes. Because rebalancing can be a long-running operation, the ALTER DISKGROUP statement by default does not wait until the operation is complete before returning. To monitor the progress of these long-running operations, query the V$ASM_OPERATION dynamic performance view.

If you want the ALTER DISKGROUP statement to wait until the rebalance operation is complete before returning, you can add the REBALANCE WAIT clause. This is especially useful in scripts. The statement also accepts a REBALANCE NOWAIT clause, which invokes the default behavior of conducting the rebalance operation asynchronously in the background. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the command to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the operation asynchronously. Typing CTRL-C does not cancel the rebalance operation or any disk add, drop, or resize operations.

To control the speed and resource consumption of the rebalance operation, you can include the REBALANCE POWER clause in statements that add, drop, or resize disks. See "Manually Rebalancing a Disk Group" for more information on this clause.

Adding Disks to a Disk Group

You use the ADD clause of the ALTER DISKGROUP statement to add disks to a disk group, or to add a failure group to the disk group. The ALTER DISKGROUP clauses that you can use when adding disks to a disk group are similar to those that can be used when specifying the disks to be included when initially creating a disk group. This is discussed in "Creating a Disk Group".

The new disks will gradually start to carry their share of the workload as rebalancing progresses.

ASM behavior when adding disks to a disk group is best illustrated through examples.

Adding Disks to a Disk Group: Example 1 The following statement adds disks to dgroup1:

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/diska5' NAME diska5,
     '/devices/diska6' NAME diska6;

Because no FAILGROUP clauses are included in the ALTER DISKGROUP statement, each disk is assigned to its own failure group. The NAME clauses assign names to the disks; otherwise, the disks would have been assigned system-generated names.


Note:

If you do not provide a NAME clause and you assigned a label to a disk through ASMLib, the label is used as the disk name.

Adding Disks to a Disk Group: Example 2 The statements presented in this example demonstrate the interactions of disk discovery with the ADD DISK operation.

Assume that disk discovery now identifies the following disks in directory /devices:


/devices/diska1 -- member of dgroup1
/devices/diska2 -- member of dgroup1
/devices/diska3 -- member of dgroup1
/devices/diska4 -- member of dgroup1
/devices/diska5 -- candidate disk
/devices/diska6 -- candidate disk
/devices/diska7 -- candidate disk
/devices/diska8 -- candidate disk

/devices/diskb1 -- member of dgroup1
/devices/diskb2 -- member of dgroup1
/devices/diskb3 -- member of dgroup1
/devices/diskb4 -- member of dgroup2

/devices/diskc1 -- member of dgroup2
/devices/diskc2 -- member of dgroup2
/devices/diskc3 -- member of dgroup3
/devices/diskc4 -- candidate disk

/devices/diskd1 -- candidate disk
/devices/diskd2 -- candidate disk
/devices/diskd3 -- candidate disk
/devices/diskd4 -- candidate disk
/devices/diskd5 -- candidate disk
/devices/diskd6 -- candidate disk
/devices/diskd7 -- candidate disk
/devices/diskd8 -- candidate disk

On a server running UNIX, Linux, or Solaris without ASMLib installed, issuing the following statement would successfully add disks /devices/diska5 through /devices/diska8 to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/diska[5678]';

The following statement would successfully add disks /devices/diska5 and /devices/diskd5 to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/disk*5';

The following statement would fail because /devices/diska1 - /devices/diska4 already belong to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/diska*';

The following statement would fail because the search string matches disks that are contained in other disk groups. Specifically, /devices/diska4 belongs to disk group dgroup1 and /devices/diskb4 belongs to disk group dgroup2.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/disk*4';

The following statement would successfully add /devices/diska5 and /devices/diskd1 through /devices/diskd8 to disk group dgroup1. It does not matter that /devices/diskd5 is included in both search strings. This statement runs with a rebalance power of 5, and does not return until the rebalance operation is complete.

ALTER DISKGROUP dgroup1 ADD DISK
      '/devices/disk*5',
      '/devices/diskd*'
       REBALANCE POWER 5 WAIT;

The following use of the FORCE clause enables /devices/diskc3 to be added to dgroup2, even though it is a current member of dgroup3.

ALTER DISKGROUP dgroup2 ADD DISK
     '/devices/diskc3' FORCE;

For this statement to succeed, dgroup3 cannot be mounted.

Dropping Disks from Disk Groups

To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP statement. You can also drop all of the disks in specified failure groups using the DROP DISKS IN FAILGROUP clause.
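For example, assuming a failure group named failgrp1 exists in dgroup1, the following statement drops all of its member disks in a single operation:

ALTER DISKGROUP dgroup1 DROP DISKS IN FAILGROUP failgrp1;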

When a disk is dropped, the disk group is rebalanced by moving all of the file extents from the dropped disk to other disks in the disk group. A drop disk operation may fail if not enough space is available on the other disks. If you intend to add some disks and drop others, it is prudent to add disks first to ensure that enough space is available for the drop operation. The best approach is to perform both the add and drop with the same ALTER DISKGROUP statement. This can reduce total time spent rebalancing.


WARNING:

The ALTER DISKGROUP...DROP DISK statement returns before the drop and rebalance operations are complete. Do not reuse, remove, or disconnect the dropped disk until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER. You can query the V$ASM_OPERATION view to determine the amount of time remaining for the drop/rebalance operation to complete. For more information, see Oracle Database SQL Reference and Oracle Database Reference.
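For example, after dropping diska5 you might confirm that it is safe to disconnect the disk with a query such as the following, waiting until HEADER_STATUS reports FORMER (ASM stores disk names in uppercase):

SELECT name, header_status FROM V$ASM_DISK WHERE name = 'DISKA5';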


If you specify the FORCE clause for the drop operation, the disk is dropped even if Automatic Storage Management cannot read or write to the disk. You cannot use the FORCE flag when dropping a disk from an external redundancy disk group.


Caution:

A DROP FORCE operation leaves data at reduced redundancy for as long as it takes for the subsequent rebalance operation to complete. This increases your exposure to data loss if there is a subsequent disk failure during rebalancing. DROP FORCE should be used only with great care.

Dropping Disks from Disk Groups: Example 1 This example drops diska5 (the operating-system-independent name assigned to /devices/diska5 in "Adding Disks to a Disk Group: Example 1") from disk group dgroup1.

ALTER DISKGROUP dgroup1 DROP DISK diska5;

Dropping Disks from Disk Groups: Example 2 This example drops diska5 from disk group dgroup1, and also illustrates how multiple actions are possible with one ALTER DISKGROUP statement.

ALTER DISKGROUP dgroup1 DROP DISK diska5
     ADD FAILGROUP failgrp1 DISK '/devices/diska9' NAME diska9;

Resizing Disks in Disk Groups

The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:

  • Resize all disks in the disk group

  • Resize specific disks

  • Resize all of the disks in a specified failure group

If you do not specify a new size in the SIZE clause then ASM uses the size of the disk as returned by the operating system. This could be a means of recovering disk space when you had previously restricted the size of the disk by specifying a size smaller than disk capacity.
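For example, assuming diska1 was originally added with a SIZE clause smaller than its capacity, the following statement (with no SIZE clause) expands the disk to the size reported by the operating system:

ALTER DISKGROUP dgroup1 RESIZE DISK diska1;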

The new size is written to the ASM disk header record and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent, otherwise the rebalance fails.

Resizing Disks in Disk Groups: Example The following example resizes all of the disks in failure group failgrp1 of disk group dgroup1. If the new size is greater than disk capacity, the statement will fail.

ALTER DISKGROUP dgroup1 
     RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G;

Undropping Disks in Disk Groups

The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups. If a drop disk operation has already completed, then this statement cannot be used to restore it. This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause.

Undropping Disks in Disk Groups: Example The following example cancels the dropping of disks from disk group dgroup1:

ALTER DISKGROUP dgroup1 UNDROP DISKS;  

Manually Rebalancing a Disk Group

You can manually rebalance the files in a disk group using the REBALANCE clause of the ALTER DISKGROUP statement. This is normally not required, because ASM automatically rebalances disk groups when their composition changes. You might want to rebalance manually to control the speed of what would otherwise be an automatic rebalance operation.

The POWER clause of the ALTER DISKGROUP...REBALANCE statement specifies the degree of parallelization, and thus the speed of the rebalance operation. It can be set to a value from 0 to 11. A value of 0 halts a rebalancing operation until the statement is either implicitly or explicitly reinvoked. The default rebalance power is set by the ASM_POWER_LIMIT initialization parameter. See "Tuning Rebalance Operations" for more information.

The power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new level.
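For example, if a rebalance of dgroup2 is already running at the default power, reissuing the statement with a higher value raises the power of the ongoing operation:

ALTER DISKGROUP dgroup2 REBALANCE POWER 11;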

The ALTER DISKGROUP...REBALANCE command by default returns immediately so that you can issue other commands while the rebalance operation takes place asynchronously in the background. You can query the V$ASM_OPERATION view for the status of the rebalance operation.

If you want the ALTER DISKGROUP...REBALANCE command to wait until the rebalance operation is complete before returning, you can add the WAIT keyword to the REBALANCE clause. This is especially useful in scripts. The command also accepts a NOWAIT keyword, which invokes the default behavior of conducting the rebalance operation asynchronously. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the command to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the rebalance operation asynchronously.

Additional rules for the rebalance operation include the following:

  • The ALTER DISKGROUP...REBALANCE statement uses the resources of the single node upon which it is started.

  • ASM can perform one rebalance at a time on a given instance.

  • Rebalancing continues across a failure of the ASM instance performing the rebalance.

  • The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP commands that add, drop, or resize disks.

Manually Rebalancing a Disk Group: Example The following example manually rebalances the disk group dgroup2. The command does not return until the rebalance operation is complete.

ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT;

Mounting and Dismounting Disk Groups

Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are mounted automatically at ASM instance startup. This makes them available to all database instances running on the same node as ASM. The disk groups are dismounted at ASM instance shutdown. ASM also automatically mounts a disk group when you initially create it, and dismounts a disk group if you drop it.

There may be times when you want to mount or dismount disk groups manually. For these actions, use the ALTER DISKGROUP...MOUNT or ALTER DISKGROUP...DISMOUNT statement. You can mount or dismount disk groups by name, or specify ALL.

If you try to dismount a disk group that contains open files, the statement will fail, unless you also specify the FORCE clause.
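For example, the following statement dismounts dgroup1 even if some of its files are open; use this form with care, because clients with open files in the disk group subsequently receive I/O errors:

ALTER DISKGROUP dgroup1 DISMOUNT FORCE;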

Dismounting Disk Groups: Example

The following statement dismounts all disk groups that are currently mounted to the ASM instance:

ALTER DISKGROUP ALL DISMOUNT;

Mounting Disk Groups: Example

The following statement mounts disk group dgroup1:

ALTER DISKGROUP dgroup1 MOUNT; 

Checking Internal Consistency of Disk Group Metadata

You can check the internal consistency of disk group metadata using the ALTER DISKGROUP...CHECK statement. Checking can be specified for specific files in a disk group, specific disks or all disks in a disk group, or specific failure groups within a disk group. The disk group must be mounted in order to perform these checks.

If any errors are detected, an error message is displayed and details of the errors are written to the alert log. Automatic Storage Management attempts to correct any errors, unless you specify the NOREPAIR clause in your ALTER DISKGROUP...CHECK statement.
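For example, to report metadata errors for all disks in dgroup1 without attempting any repairs, add the NOREPAIR clause:

ALTER DISKGROUP dgroup1 CHECK ALL NOREPAIR;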

The following statement checks for consistency in the metadata for all disks in the dgroup1 disk group:

ALTER DISKGROUP dgroup1 CHECK ALL;

See Oracle Database SQL Reference for additional CHECK clause syntax.

Dropping Disk Groups

The DROP DISKGROUP statement enables you to delete an ASM disk group and, optionally, all of its files. You can specify the INCLUDING CONTENTS clause if you want any files that may still be contained in the disk group also to be deleted. The default is EXCLUDING CONTENTS, which provides syntactic consistency and prevents you from dropping the disk group if it has any contents.

The ASM instance must be started and the disk group must be mounted with none of the disk group files open, in order for the DROP DISKGROUP statement to succeed. The statement does not return until the disk group has been dropped.

When you drop a disk group, ASM dismounts the disk group and removes the disk group name from the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used, and the disk group is mentioned in the ASM_DISKGROUPS initialization parameter, then you must remember to remove the disk group name from the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.
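ASM updates a server parameter file for you automatically when you drop a disk group; with a text initialization parameter file, you edit ASM_DISKGROUPS by hand. As a sketch, if you needed to reset the parameter manually on an ASM instance that uses a server parameter file, the statement would look like the following (the disk group names shown are illustrative):

ALTER SYSTEM SET ASM_DISKGROUPS = 'DGROUP2', 'DGROUP3' SCOPE=SPFILE;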

The following statement deletes dgroup1:

DROP DISKGROUP dgroup1;

After ensuring that none of the files contained in dgroup1 are open, ASM rewrites the header of each disk in the disk group to remove ASM formatting information. The statement does not specify INCLUDING CONTENTS, so the drop operation will fail if the disk group contains any files.

Managing Disk Group Directories

ASM disk groups contain a system-generated hierarchical directory structure for storing ASM files. The system-generated filename that ASM assigns to each file represents a path in this directory hierarchy. The following is an example of a system-generated filename:

+dgroup2/sample/controlfile/Current.256.541956473

The plus sign represents the root of the ASM file system. The dgroup2 directory is the parent directory for all files in the dgroup2 disk group. The sample directory is the parent directory for all files in the sample database, and the controlfile directory contains all control files for the sample database.

You can create your own directories within this hierarchy to store aliases that you create. Thus, in addition to having user-friendly alias names for ASM files, you can have user-friendly paths to those names.

This section describes how to use the ALTER DISKGROUP statement to create a directory structure for aliases. It also describes how you can rename a directory or drop a directory.


See Also:

  • The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases and directories in ASM disk groups.

  • "About ASM Filenames" for a discussion of ASM filenames and how they are formed


Creating a New Directory

Use the ADD DIRECTORY clause of the ALTER DISKGROUP statement to create a hierarchical directory structure for alias names for ASM files. Use the slash character (/) to separate components of the directory path. The directory path must start with the disk group name, preceded by a plus sign (+), followed by any subdirectory names of your choice.

The parent directory must exist before attempting to create a subdirectory or alias in that directory.

Creating a New Directory: Example 1 The following statement creates a hierarchical directory for disk group dgroup1, which can contain, for example, the alias name +dgroup1/mydir/control_file1:

ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir'; 

Creating a New Directory: Example 2 Assuming no subdirectory exists under the directory +dgroup1/mydir, the following statement fails:

ALTER DISKGROUP dgroup1
     ADD DIRECTORY '+dgroup1/mydir/first_dir/second_dir';

Renaming a Directory

The RENAME DIRECTORY clause of the ALTER DISKGROUP statement enables you to rename a directory. System created directories (those containing system-generated names) cannot be renamed.

Renaming a Directory: Example The following statement renames a directory:

ALTER DISKGROUP dgroup1 RENAME DIRECTORY '+dgroup1/mydir'
     TO '+dgroup1/yourdir';

Dropping a Directory

You can delete a directory using the DROP DIRECTORY clause of the ALTER DISKGROUP statement. You cannot drop a system created directory. You cannot drop a directory containing alias names unless you also specify the FORCE clause.

Dropping a Directory: Example This statement deletes a directory along with its contents:

ALTER DISKGROUP dgroup1 DROP DIRECTORY '+dgroup1/yourdir' FORCE;

Managing Alias Names for ASM Filenames

Alias names (or just "aliases") are intended to provide a more user-friendly means of referring to ASM files, rather than using the system-generated filenames.

You can create an alias for a file when you create it in the database, or you can add an alias to an existing file using the ADD ALIAS clause of the ALTER DISKGROUP statement. You can create an alias in any system-generated or user-created ASM directory. You cannot create an alias at the root level (+), however.
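For example, you can assign an alias when creating a datafile by using the alias where a filename is expected; the directory +dgroup1/mydir must already exist, and the tablespace and alias names here are illustrative:

CREATE TABLESPACE sample_ts
     DATAFILE '+dgroup1/mydir/sample01.dbf' SIZE 100M;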

The V$ASM_ALIAS view contains a row for every alias name known to the ASM instance. This view also contains ASM directories.


Note:

V$ASM_ALIAS also contains a row for every system-generated filename. These rows are indicated by the value 'Y' in the column SYSTEM_CREATED.


See Also:

The chapter on ASMCMD in Oracle Database Utilities. ASMCMD is a command line utility that you can use to easily create aliases.

Adding an Alias Name for an ASM Filename

Use the ADD ALIAS clause of the ALTER DISKGROUP statement to create an alias name for an ASM filename. The alias name must consist of the full directory path and the alias itself.

Adding an Alias Name for an ASM Filename: Example 1 The following statement adds a new alias name for a system-generated file name:

ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf'
     FOR '+dgroup1/sample/datafile/mytable.342.3';

Adding an Alias Name for an ASM Filename: Example 2 This statement illustrates another means of specifying the ASM filename for which the alias is to be created. It uses the numeric form of the ASM filename, which is an abbreviated and derived form of the system-generated filename.

ALTER DISKGROUP dgroup1 ADD ALIAS '+dgroup1/mydir/second.dbf'
     FOR '+dgroup1.342.3';

Renaming an Alias Name for an ASM Filename

Use the RENAME ALIAS clause of the ALTER DISKGROUP statement to rename an alias for an ASM filename. The old and the new alias names must consist of the full directory paths of the alias names.

Renaming an Alias Name for an ASM Filename: Example The following statement renames an alias:

ALTER DISKGROUP dgroup1 RENAME ALIAS '+dgroup1/mydir/datafile.dbf'
     TO '+dgroup1/payroll/compensation.dbf';

Dropping an Alias Name for an ASM Filename

Use the DROP ALIAS clause of the ALTER DISKGROUP statement to drop an alias for an ASM filename. The alias name must consist of the full directory path and the alias itself. The underlying file to which the alias refers is unchanged.

Dropping an Alias Name for an ASM Filename: Example 1 The following statement drops an alias:

ALTER DISKGROUP dgroup1 DROP ALIAS '+dgroup1/payroll/compensation.dbf';

Dropping an Alias Name for an ASM Filename: Example 2 The following statement will fail because it attempts to drop a system-generated filename. This is not allowed:

ALTER DISKGROUP dgroup1 
     DROP ALIAS '+dgroup1/sample/datafile/mytable.342.3';

Dropping Files and Associated Aliases from a Disk Group

You can delete ASM files and their associated alias names from a disk group using the DROP FILE clause of the ALTER DISKGROUP statement. You must use a fully qualified filename, a numeric filename, or an alias name when specifying the file that you want to delete.

Some reasons why you might need to delete files include:

  • Files created using aliases are not Oracle-managed files. Consequently, they are not automatically deleted.

  • A point-in-time recovery of a database might restore the database to a time before a tablespace was created. The restore does not delete the tablespace, but there is no reference to the tablespace (or its datafile) in the restored database. You can manually delete the datafile.

Dropping an alias does not drop the underlying file on the file system.

Dropping Files and Associated Aliases from a Disk Group: Example 1

The following statement uses the alias name for the file to delete both the file and the alias:

ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1/payroll/compensation.dbf';

Dropping Files and Associated Aliases from a Disk Group: Example 2

In this example the system-generated filename is used to drop the file and any associated alias:

ALTER DISKGROUP dgroup1
     DROP FILE '+dgroup1/sample/datafile/mytable.342.372642';

Managing Disk Group Templates

Templates are used to set redundancy (mirroring) and striping attributes of files created in an ASM disk group. When a file is created, redundancy and striping attributes are set for that file based on an explicitly named template or the system template that is the default template for the file type.

When a disk group is created, ASM creates a set of default templates for that disk group. The set consists of one template for each file type (data file, control file, redo log file, and so on) supported by ASM. For example, a template named ONLINELOG provides the default file redundancy and striping attributes for all redo log files written to ASM disks. Default template settings depend on the disk group type. For example, the default template for datafiles for a normal redundancy disk group sets 2-way mirroring, while the corresponding default template in a high redundancy disk group sets 3-way mirroring. You can modify these default templates. Table 12-5 lists the default templates and the attributes that they apply to matching files. As the table shows, the initial redundancy value of each default template depends on the type of disk group that the template belongs to.


Note:

The striping attribute of templates applies to all disk group types (normal redundancy, high redundancy, and external redundancy). However, the mirroring attribute of templates applies only to normal redundancy disk groups, and is ignored for high-redundancy disk groups (where every file is always 3-way mirrored) and external redundancy disk groups (where no files are mirrored by ASM). Nevertheless, each type of disk group gets a full set of templates, and the redundancy value in each template is always set to the proper default for the disk group type.

Using clauses of the ALTER DISKGROUP statement, you can add new templates to a disk group, modify existing ones, or drop templates. The reason to add templates is to create the right combination of attributes to meet unique requirements. You can then reference a template name when creating a file, thereby assigning desired attributes on an individual file basis rather than on the basis of file type. The V$ASM_TEMPLATE view lists all of the templates known to the ASM instance.

Template Attributes

Table 12-3 shows the permitted striping attribute values, and Table 12-4 shows the permitted redundancy values for ASM templates. These values correspond to the STRIPE and REDUND columns of V$ASM_TEMPLATE.

Table 12-3 Permitted Values for ASM Template Striping Attribute

Striping Attribute Value   Description
------------------------   -------------------------
FINE                       Striping in 128KB chunks.
COARSE                     Striping in 1MB chunks.


Table 12-4 Permitted Values for ASM Template Redundancy Attribute

Redundancy        Mirroring, Normal       Mirroring, High         Mirroring, External
Attribute Value   Redundancy Disk Group   Redundancy Disk Group   Redundancy Disk Group
---------------   ---------------------   ---------------------   ---------------------
MIRROR            2-way mirroring         3-way mirroring         (Not allowed)
HIGH              3-way mirroring         3-way mirroring         (Not allowed)
UNPROTECTED       No mirroring            (Not allowed)           No mirroring


Table 12-5 ASM System Default Templates Attribute Settings

                             Mirroring by Disk Group Type
Template Name     Striping   Normal Redundancy   High Redundancy   External Redundancy
---------------   --------   -----------------   ---------------   -------------------
CONTROLFILE       FINE       HIGH                HIGH              UNPROTECTED
DATAFILE          COARSE     MIRROR              HIGH              UNPROTECTED
ONLINELOG         FINE       MIRROR              HIGH              UNPROTECTED
ARCHIVELOG        COARSE     MIRROR              HIGH              UNPROTECTED
TEMPFILE          COARSE     MIRROR              HIGH              UNPROTECTED
BACKUPSET         COARSE     MIRROR              HIGH              UNPROTECTED
PARAMETERFILE     COARSE     MIRROR              HIGH              UNPROTECTED
DATAGUARDCONFIG   COARSE     MIRROR              HIGH              UNPROTECTED
FLASHBACK         FINE       MIRROR              HIGH              UNPROTECTED
CHANGETRACKING    COARSE     MIRROR              HIGH              UNPROTECTED
DUMPSET           COARSE     MIRROR              HIGH              UNPROTECTED
XTRANSPORT        COARSE     MIRROR              HIGH              UNPROTECTED
AUTOBACKUP        COARSE     MIRROR              HIGH              UNPROTECTED


Adding Templates to a Disk Group

To add a new template for a disk group, you use the ADD TEMPLATE clause of the ALTER DISKGROUP statement. You specify the name of the template, its redundancy attribute, and its striping attribute.


Note:

If the name of your new template is not one of the names listed in Table 12-5, it is not used as a default template for database file types. To use it, you must reference its name when creating a file. See "About ASM Filenames" for more information.

The syntax of the ALTER DISKGROUP command for adding a template is as follows:

ALTER DISKGROUP disk_group_name ADD TEMPLATE template_name 
  ATTRIBUTES ([{MIRROR|HIGH|UNPROTECTED}] [{FINE|COARSE}]);

Both types of attribute are optional. If no redundancy attribute is specified, the value defaults to MIRROR for a normal redundancy disk group, HIGH for a high redundancy disk group, and UNPROTECTED for an external redundancy disk group. If no striping attribute is specified, the value defaults to COARSE.

Adding Templates to a Disk Group: Example 1 The following statement creates a new template named reliable for the normal redundancy disk group dgroup2:

ALTER DISKGROUP dgroup2 ADD TEMPLATE reliable ATTRIBUTES (HIGH FINE);

Adding Templates to a Disk Group: Example 2 This statement creates a new template named unreliable that specifies files are to be unprotected (no mirroring). (Oracle discourages the use of unprotected files unless hardware mirroring is in place; this example is presented only to further illustrate how the attributes for templates are set.)

ALTER DISKGROUP dgroup2 ADD TEMPLATE unreliable ATTRIBUTES (UNPROTECTED);

See Also:

Oracle Database SQL Reference for more information on the ALTER DISKGROUP...ADD TEMPLATE command.

Modifying a Disk Group Template

The ALTER TEMPLATE clause of the ALTER DISKGROUP statement enables you to modify the attribute specifications of an existing system default or user-defined disk group template. Only specified template properties are changed. Unspecified properties retain their current value.

When you modify an existing template, only new files created by the template will reflect the attribute changes. Existing files maintain their attributes.

Modifying a Disk Group Template: Example The following example changes the striping attribute specification of the reliable template for disk group dgroup2.

ALTER DISKGROUP dgroup2 ALTER TEMPLATE reliable 
     ATTRIBUTES (COARSE);

Dropping Templates from a Disk Group

Use the DROP TEMPLATE clause of the ALTER DISKGROUP statement to drop one or more templates from a disk group. You can only drop templates that are user-defined; you cannot drop system default templates.

Dropping Templates from a Disk Group: Example This example drops the previously created template unreliable from dgroup2:

ALTER DISKGROUP dgroup2 DROP TEMPLATE unreliable;

Using Automatic Storage Management in the Database

This section discusses how you use Automatic Storage Management (ASM) to manage database files.


Note:

This section does not address Real Application Clusters environments. For this information, see Oracle Real Application Clusters Installation and Configuration Guide and Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

When you use ASM, Oracle database files are stored in ASM disk groups. Your Oracle Database then benefits from the improved performance, improved resource utilization, and higher availability provided by ASM. Most ASM files are Oracle-managed files. See Table 12-7 and Chapter 11, "Using Oracle-Managed Files" for more information.


Note:

ASM supports good forward and backward compatibility between 10.x versions of Oracle Database and 10.x versions of ASM. That is, any combination of versions 10.1.x.y and 10.2.x.y for either the ASM instance or the database instance works correctly, with one caveat: For a 10.1.x.y database instance to connect to a 10.2.x.y ASM instance, the database must be version 10.1.0.3 or later.

When mixing software versions, ASM functionality reverts to the functionality of the earliest version in use. For example, a 10.1.0.3 database working with a 10.2.0.0 ASM instance does not exploit new features in ASM 10.2.0.0. Conversely, a 10.2.0.0 database working with a 10.1.0.2 ASM instance behaves as a 10.1.0.2 database instance when interacting with ASM. Two columns in the view V$ASM_CLIENT, SOFTWARE_VERSION and COMPATIBLE_VERSION, display version information. These columns are described in Oracle Database Reference.


Oracle database files that are stored in disk groups are not visible to the operating system or its utilities, but are visible to RMAN, ASMCMD, and to the XML DB repository.

The following topics are contained in this section:

What Types of Files Does ASM Support?

ASM supports most file types required by the database. However, most administrative files cannot be stored in an ASM disk group. These include trace files, audit files, alert logs, backup files, export files, tar files, and core files.

Table 12-6 lists file types, indicates if they are supported, and lists the system default template that provides the attributes for file creation. Some of the file types shown in the table are related to specific products or features, and are not discussed in this book.

Table 12-6 File Types Supported by Automatic Storage Management

File Type                                           Supported   Default Template
-------------------------------------------------   ---------   ----------------
Control files                                       yes         CONTROLFILE
Datafiles                                           yes         DATAFILE
Redo log files                                      yes         ONLINELOG
Archive log files                                   yes         ARCHIVELOG
Trace files                                         no          n/a
Temporary files                                     yes         TEMPFILE
Datafile backup pieces                              yes         BACKUPSET
Datafile incremental backup pieces                  yes         BACKUPSET
Archive log backup pieces                           yes         BACKUPSET
Datafile copies                                     yes         DATAFILE
Persistent initialization parameter file (SPFILE)   yes         PARAMETERFILE
Disaster recovery configurations                    yes         DATAGUARDCONFIG
Flashback logs                                      yes         FLASHBACK
Change tracking file                                yes         CHANGETRACKING
Data Pump dumpset                                   yes         DUMPSET
Automatically generated control file backup         yes         AUTOBACKUP
Cross-platform transportable datafiles              yes         XTRANSPORT
Operating system files                              no          n/a



See Also:

"Managing Disk Group Templates" for a description of the system default templates

About ASM Filenames

Every file created in ASM gets a system-generated filename, otherwise known as a fully qualified filename. The fully qualified filename represents a complete path name in the ASM file system. An example of a fully qualified filename is:

+dgroup2/sample/controlfile/Current.256.541956473

You can use the fully qualified filename to reference (read or retrieve) an ASM file. You can also use other abbreviated filename formats, described later in this section, to reference an ASM file.

ASM generates a fully qualified filename upon any request to create a file. A creation request does not—in fact, cannot—specify a fully qualified filename. Instead, it uses a simpler syntax to specify a file, such as an alias or just a disk group name. ASM then creates the file, placing it in the correct ASM "path" according to file type, and then assigns an appropriate fully qualified filename. If you specify an alias in the creation request, ASM also creates the alias so that it references the fully qualified filename.

ASM file creation requests are either single file creation requests or multiple file creation requests.

Single File Creation Request This is a request to create a single file, such as a datafile or a control file. The form of the ASM filename in this type of request is either an alias (such as +dgroup2/control/ctl.f) or a disk group name preceded by a plus sign. You use the alias or disk group name where a filename is called for in a statement, such as CREATE TABLESPACE or CREATE CONTROLFILE.


Note:

'/' and '\' are interchangeable in filenames. Filenames are case insensitive, but case retentive.

Multiple File Creation Request This is a request that can occur multiple times to create an ASM file. For example, if you assign a value to the initialization parameter DB_CREATE_FILE_DEST, you can issue a CREATE TABLESPACE statement (without a filename specification) multiple times. Each time, ASM creates a different unique datafile name.

One form of the ASM filename to use in this type of request is an incomplete filename, which is just a disk group name preceded by a plus sign. In this case, you set DB_CREATE_FILE_DEST to an incomplete filename (for example, +dgroup2), and whenever a command is executed that must create a database file in DB_CREATE_FILE_DEST, the file is created in the designated disk group and assigned a unique fully qualified name. You can use an incomplete filename in other *_DEST initialization parameters.


Note:

Every time ASM creates a fully qualified name, it writes an alert log message containing the ASM-generated name. You can also find the generated name in database views displaying Oracle file names, such as V$DATAFILE and V$LOGFILE. You can use this name, or an abbreviated form of it, if you later need to reference an ASM file in a SQL statement. Like other Oracle database filenames, ASM filenames are kept in the control file and the RMAN catalog.

The sections that follow provide details on each of the six possible forms of an ASM filename.

The following table specifies the valid contexts for each filename form, and if the form is used for file creation, whether the created file is an Oracle-managed file (OMF).

Table 12-7 Valid Contexts for the ASM Filename Forms

Filename Form                      Reference  Single-File Creation  Multiple File Creation  Created as Oracle-Managed?
Fully qualified filename           Yes        No                    No                      No
Numeric filename                   Yes        No                    No                      No
Alias filename                     Yes        Yes                   No                      No
Alias with template filename       No         Yes                   No                      No
Incomplete filename                No         Yes                   Yes                     Yes
Incomplete filename with template  No         Yes                   Yes                     Yes



Note:

Fully qualified and numeric filenames can be used in single-file create if you specify the REUSE keyword, as described in "Using ASM Filenames in SQL Statements".

Fully Qualified ASM Filename

This form of ASM filename can be used for referencing existing ASM files. It is the filename that ASM always automatically generates when an ASM file is created.

A fully qualified filename has the following form:

+group/dbname/file_type/file_type_tag.file.incarnation 

where:

  • +group is the disk group name preceded by a plus sign.

    You can think of the plus sign (+) as the root directory of the ASM file system, like the slash (/) in Unix operating systems.

  • dbname is the DB_UNIQUE_NAME of the database to which the file belongs.

  • file_type is the Oracle file type and can be one of the file types shown in Table 12-8.

  • file_type_tag is type specific information about the file and can be one of the tags shown in Table 12-8.

  • file.incarnation is the file/incarnation pair, used to ensure uniqueness.

An example of a fully qualified ASM filename is:

+dgroup2/sample/controlfile/Current.256.541956473 

Table 12-8 Oracle File Types and Automatic Storage Management File Type Tags

CONTROLFILE
  Description: Control files and backup control files
  file_type_tag: Current, Backup
  Comments: --

DATAFILE
  Description: Datafiles and datafile copies
  file_type_tag: tsname
  Comments: Tablespace into which the file is added

ONLINELOG
  Description: Online logs
  file_type_tag: group_group#
  Comments: --

ARCHIVELOG
  Description: Archive logs
  file_type_tag: thread_thread#_seq_sequence#
  Comments: --

TEMPFILE
  Description: Tempfiles
  file_type_tag: tsname
  Comments: Tablespace into which the file is added

BACKUPSET
  Description: Datafile and archive log backup pieces; datafile incremental backup pieces
  file_type_tag: hasspfile_timestamp
  Comments: hasspfile can take one of two values: s indicates that the backup set includes the spfile; n indicates that the backup set does not include the spfile.

PARAMETERFILE
  Description: Persistent parameter files
  file_type_tag: spfile
  Comments: --

DATAGUARDCONFIG
  Description: Data Guard configuration file
  file_type_tag: db_unique_name
  Comments: Data Guard tries to use the service provider name if it is set. Otherwise the tag defaults to DRCname.

FLASHBACK
  Description: Flashback logs
  file_type_tag: log_log#
  Comments: --

CHANGETRACKING
  Description: Block change tracking data
  file_type_tag: ctf
  Comments: Used during incremental backups

DUMPSET
  Description: Data Pump dumpset
  file_type_tag: user_obj#_file#
  Comments: Dump set files encode the user name, the job number that created the dump set, and the file number as part of the tag.

XTRANSPORT
  Description: Datafile convert
  file_type_tag: tsname
  Comments: --

AUTOBACKUP
  Description: Automatic backup files
  file_type_tag: hasspfile_timestamp
  Comments: hasspfile can take one of two values: s indicates that the backup set includes the spfile; n indicates that the backup set does not include the spfile.


Numeric ASM Filename

The numeric ASM filename can be used for referencing existing files. It is derived from the fully qualified ASM filename and takes the form:

 +group.file.incarnation 

Numeric ASM filenames can be used in any interface that requires an existing file name.

An example of a numeric ASM filename is:

+dgroup2.257.541956473

Alias ASM Filenames

Alias ASM filenames, otherwise known as aliases, can be used both for referencing existing ASM files and for creating new ASM files. Alias names start with the disk group name preceded by a plus sign, after which you specify a name string of your choosing. Alias filenames are implemented using a hierarchical directory structure, with the slash (/) or backslash (\) character separating name components. You can create an alias in any system-generated or user-created ASM directory. You cannot create an alias at the root level (+), however.

When you create an ASM file with an alias filename, the file is created with a fully qualified name, and the alias filename is additionally created. You can then access the file with either name.
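As a sketch (the tablespace, disk group, and alias names here are hypothetical), creating a file with an alias filename might look like the following; the alias directory must already exist in the disk group:

```sql
-- Assumes disk group dgroup1 is mounted and the directory +dgroup1/myfiles
-- was created earlier (for example, with ALTER DISKGROUP ... ADD DIRECTORY).
-- ASM creates the underlying fully qualified file and the alias pointing to it.
CREATE TABLESPACE tspace5
  DATAFILE '+dgroup1/myfiles/tspace5_01.dbf' SIZE 200M;
```

Note that a file created this way is not an Oracle-managed file, so it is not deleted automatically when the tablespace is dropped.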

Alias ASM filenames are distinguished from fully qualified filenames or numeric filenames because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers. Examples of ASM alias filenames are:

+dgroup1/myfiles/control_file1
+dgroup2/mydir/second.dbf

Oracle Database references database files by their alias filenames, but only if you create the database files with aliases. If you create database files without aliases and then add aliases later, the database references the files by their fully qualified filenames. The following are examples of how the database uses alias filenames:

  • Alias filenames appear in V$ views. For example, if you create a tablespace and use an alias filename for the datafile, the V$DATAFILE view shows the alias filename.

  • When a controlfile points to datafiles and online redo log files, it can use alias filenames.

  • The CONTROL_FILES initialization parameter can use the alias filenames of the controlfiles. The Database Configuration Assistant (DBCA) creates controlfiles with alias filenames.


Note:

Files created using an alias filename are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.


See Also:


Alias ASM Filename with Template

An alias ASM filename with template is used only for ASM file creation operations. It has the following format:

+dgroup(template_name)/alias 

Alias filenames with template behave identically to alias filenames. The only difference is that a file created with an alias filename with template receives the mirroring and striping attributes specified by the named template. The template must belong to the disk group that the file is being created in.

The creation and maintenance of ASM templates is discussed in "Managing Disk Group Templates".

An example of an alias ASM filename with template is:

+dgroup1(my_template)/config1

Explicitly specifying a template name, as in this example, overrides the system default template for the type of file being created.


Note:

Files created using an alias filename with template are not considered Oracle-managed files and may require manual deletion in the future if they are no longer needed.

Incomplete ASM Filename

Incomplete ASM filenames are used only for file creation operations and are used for both single and multiple file creation. They consist only of the disk group name. ASM uses a system default template to determine the ASM file mirroring and striping attributes. The system template that is used is determined by the file type that is being created. For example, if you are creating a datafile for a tablespace, the datafile template is used.

An example of using an incomplete ASM filename is setting the DB_CREATE_FILE_DEST initialization parameter to:

+dgroup1

With this setting, every time you create a tablespace, a datafile is created in the disk group dgroup1, and each datafile is assigned a different fully qualified name. See "Creating ASM Files Using a Default Disk Group Specification" for more information.

Incomplete ASM Filename with Template

Incomplete ASM filenames with templates are used only for file creation operations and are used for both single and multiple file creation. They consist of the disk group name followed by the template name in parentheses. When you explicitly specify a template in a file name, ASM uses the specified template instead of the default template for that file type to determine mirroring and striping attributes for the file.

An example of using an incomplete ASM filename with template is setting the DB_CREATE_FILE_DEST initialization parameter to:

+dgroup1(my_template)

Starting the ASM and Database Instances

Start the ASM and database instances in the following order:

  1. Start the ASM Instance.

    You start the ASM instance on the same node as the database before you start the database instance. Starting an ASM instance is discussed in "Starting Up an ASM Instance"

  2. Start the database instance.

    Consider the following before you start your database instance:

    • When using SQL*Plus to start the database, you must first set the ORACLE_SID environment variable to the database SID. If ASM and the database have different Oracle homes, you must also set the ORACLE_HOME environment variable. Depending on platform, you may have to change other environment variables as well.

    • You must have the INSTANCE_TYPE initialization parameter set as follows:

      INSTANCE_TYPE = RDBMS

This is the default.

    • If you want ASM to be the default destination for creating database files, you must specify an incomplete ASM filename in one or more of the following initialization parameters (see "Creating ASM Files Using a Default Disk Group Specification"):

      • DB_CREATE_FILE_DEST

      • DB_CREATE_ONLINE_LOG_DEST_n

      • DB_RECOVERY_FILE_DEST

      • CONTROL_FILES

      • LOG_ARCHIVE_DEST_n

      • LOG_ARCHIVE_DEST

      • STANDBY_ARCHIVE_DEST

    • Some additional initialization parameter considerations:

      • LOG_ARCHIVE_FORMAT is ignored if a disk group is specified for LOG_ARCHIVE_DEST (for example, LOG_ARCHIVE_DEST = +dgroup1).

      • DB_BLOCK_SIZE must be one of the standard block sizes (2K, 4K, 8K, 16K or 32K bytes).

      • LARGE_POOL_SIZE must be set to at least 1 MB.

Your database instance is now able to create ASM files. You can keep your database instance open and running when you reconfigure disk groups. When you add or remove disks from a disk group, ASM automatically rebalances file data in the reconfigured disk group to ensure a balanced I/O load, even while the database is running.
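For example (the disk group and device names are hypothetical), a disk can be added to a mounted disk group while the database instance remains open:

```sql
-- The database can stay open during the reconfiguration; ASM automatically
-- rebalances existing file extents across all disks, including the new one.
ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska5';
```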

Creating and Referencing ASM Files in the Database

ASM files are Oracle-managed files unless you created the file using an alias. Any Oracle-managed file is automatically deleted when it is no longer needed. An ASM file is deleted if the creation fails.

Creating ASM Files Using a Default Disk Group Specification

Using the Oracle-managed files feature for operating system files, you can specify a directory as the default location for the creation of datafiles, tempfiles, redo log files, and control files. Using the Oracle-managed files feature for ASM, you can specify a disk group, in the form of an incomplete ASM filename, as the default location for creation of these files, and additional types of files, including archived log files. As for operating system files, the name of the default disk group is stored in an initialization parameter and is used whenever a file specification (for example, DATAFILE clause) is not explicitly specified during file creation.

The following initialization parameters accept the multiple file creation context form of ASM filenames as a destination:

Initialization Parameter Description
DB_CREATE_FILE_DEST Specifies the default disk group location in which to create:
  • Datafiles

  • Tempfiles

If DB_CREATE_ONLINE_LOG_DEST_n is not specified, then also specifies the default disk group for:

  • Redo log files

  • Control file

DB_CREATE_ONLINE_LOG_DEST_n Specifies the default disk group location in which to create:
  • Redo log files

  • Control files

DB_RECOVERY_FILE_DEST If this parameter is specified and DB_CREATE_ONLINE_LOG_DEST_n and CONTROL_FILES are not specified, then this parameter specifies a default disk group for a flash recovery area that contains a copy of:
  • Control file

  • Redo log files

If no local archive destination is specified, then this parameter implicitly sets LOG_ARCHIVE_DEST_10 to the USE_DB_RECOVERY_FILE_DEST value.

CONTROL_FILES Specifies a disk group in which to create control files.

The following initialization parameters accept the multiple file creation context form of the ASM filenames and ASM directory names as a destination:

Initialization Parameter Description
LOG_ARCHIVE_DEST_n Specifies a default disk group or ASM directory as destination for archiving redo log files
LOG_ARCHIVE_DEST Optional parameter to use to specify a default disk group or ASM directory as destination for archiving redo log files. Use when specifying only one destination.
STANDBY_ARCHIVE_DEST Relevant only for a standby database in managed recovery mode. It specifies a default disk group or ASM directory that is the location of archive logs arriving from a primary database. Not discussed in this book. See Oracle Data Guard Concepts and Administration.

The following example illustrates how an ASM file, in this case a datafile, might be created in a default disk group.

Creating a Datafile Using a Default Disk Group: Example Assume the following initialization parameter setting:

DB_CREATE_FILE_DEST = '+dgroup1'

The following statement creates tablespace tspace1.

CREATE TABLESPACE tspace1;

ASM automatically creates and manages the datafile for tspace1 on ASM disks in the disk group dgroup1. File extents are stored using the attributes defined by the default template for a datafile.

Using ASM Filenames in SQL Statements

You can specify ASM filenames in the file specification clause of your SQL statements. If you are creating a file for the first time, use the creation form of an ASM filename. If the ASM file already exists, you must use the reference context form of the filename, and if you are trying to re-create the file, you must add the REUSE keyword. The space will be reused for the new file. This usage might occur when, for example, trying to re-create a control file, as shown in "Creating a Control File in ASM".

If a reference context form is used with the REUSE keyword and the file does not exist, an error results.

Partially created files resulting from system errors are automatically deleted.
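As a hedged sketch of the REUSE usage (the tablespace name and the fully qualified filename below are invented for illustration), a creation statement can overwrite an existing ASM file by combining the reference context form of its name with the REUSE keyword:

```sql
-- REUSE lets the statement reuse the space of the existing ASM file;
-- without REUSE, a reference-form name in a creation statement is an error.
CREATE TABLESPACE tspace6
  DATAFILE '+dgroup2/sample/datafile/tspace6.270.541956600' REUSE;
```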

Using an ASM Filename in a SQL Statement: Example The following is an example of specifying an ASM filename in a SQL statement. In this case, it is used in the file creation context:

CREATE TABLESPACE  tspace2 DATAFILE '+dgroup2' SIZE 200M AUTOEXTEND ON;

The tablespace tspace2 is created and consists of one datafile of size 200M contained in the disk group dgroup2. The datafile is set to auto-extensible with an unlimited maximum size. An AUTOEXTEND clause can be used to override this default.

Creating a Database in ASM

The recommended method of creating your database is to use the Database Configuration Assistant (DBCA). However, if you choose to create your database manually using the CREATE DATABASE statement, then ASM enables you to create a database and all of its underlying files with a minimum of input from you.

The following is an example of using the CREATE DATABASE statement, where database files are created and managed automatically by ASM.

Creating a Database in ASM: Example

This example creates a database with the following ASM files:

  • A SYSTEM tablespace datafile in disk group dgroup1.

  • A SYSAUX tablespace datafile in disk group dgroup1. The tablespace is locally managed with automatic segment-space management.

  • A multiplexed online redo log is created with two online log groups, one member of each in dgroup1 and dgroup2 (flash recovery area).

  • If automatic undo management mode is enabled, then an undo tablespace datafile in directory dgroup1.

  • If no CONTROL_FILES initialization parameter is specified, then two control files, one in dgroup1 and another in dgroup2 (flash recovery area). The control file in dgroup1 is the primary control file.

The following initialization parameter settings are included in the initialization parameter file:

DB_CREATE_FILE_DEST = '+dgroup1'
DB_RECOVERY_FILE_DEST = '+dgroup2'
DB_RECOVERY_FILE_DEST_SIZE = 10G

The following statement is issued at the SQL prompt:

SQL> CREATE DATABASE sample;

Creating Tablespaces in ASM

When ASM creates a datafile for a permanent tablespace (or a tempfile for a temporary tablespace), the datafile is set to auto-extensible with an unlimited maximum size and 100 MB default size. You can use the AUTOEXTEND clause to override this default extensibility and the SIZE clause to override the default size.
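For instance (tspace4 is a hypothetical name), both defaults can be overridden in the same statement:

```sql
-- Override the 100 MB default size and disable the default unlimited
-- auto-extension for the ASM-managed datafile.
CREATE TABLESPACE tspace4
  DATAFILE '+dgroup2' SIZE 500M AUTOEXTEND OFF;
```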

Automatic Storage Management applies attributes to the datafile, as specified in the system default template for a datafile as shown in the table in "Managing Disk Group Templates". You can also create and specify your own template.

A tablespace may contain both ASM files and non-ASM files as a result of its history. RMAN commands enable non-ASM files to be relocated to an ASM disk group and enable ASM files to be relocated as non-ASM files.
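A hedged sketch of one relocation direction (the datafile number and disk group are hypothetical, and these are RMAN commands, not SQL*Plus statements):

```sql
-- Run at the RMAN prompt: copy a non-ASM datafile into the disk group,
-- then repoint the database to the copy. The datafile should be taken
-- offline before the SWITCH, then recovered and brought back online.
BACKUP AS COPY DATAFILE 5 FORMAT '+dgroup1';
SWITCH DATAFILE 5 TO COPY;
```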

The following are some examples of creating tablespaces using Automatic Storage Management. The examples assume that disk groups have already been configured.

Creating a Tablespace in ASM: Example 1

This example illustrates "out of the box" usage of Automatic Storage Management. You let Automatic Storage Management create and manage the tablespace datafile for you, using Oracle supplied defaults that are adequate for most situations.

Assume the following initialization parameter setting:

DB_CREATE_FILE_DEST = '+dgroup2'

The following statement creates the tablespace and its datafile:

CREATE TABLESPACE tspace2;

Creating a Tablespace in ASM: Example 2

The following statements create a tablespace that uses a user defined template (assume it has been defined) to specify the redundancy and striping attributes of the datafile:

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+dgroup1(my_template)';
SQL> CREATE TABLESPACE tspace3;

Creating a Tablespace in ASM: Example 3

The following statement creates an undo tablespace with a datafile that has an alias name, and with attributes that are set by the user defined template my_undo_template. It assumes a directory has been created in disk group dgroup3 to contain the alias name and that the user defined template exists. Because an alias is used to create the datafile, the file is not an Oracle-managed file and will not be automatically deleted when the tablespace is dropped.

CREATE UNDO TABLESPACE myundo 
     DATAFILE '+dgroup3(my_undo_template)/myfiles/my_undo_ts' SIZE 200M; 

The following statement drops the file manually after the tablespace has been dropped:

ALTER DISKGROUP dgroup3 DROP FILE '+dgroup3/myfiles/my_undo_ts';

Creating Redo Logs in ASM

Online redo logs can be created in multiple disk groups, either implicitly through the initialization parameter file or explicitly in an ALTER DATABASE...ADD LOGFILE statement. Each online redo log group should have one member in each of several disk groups. The filenames for log file members are automatically generated.

All partially created redo log files, created as a result of a system error, are automatically deleted.

Adding New Redo Log Files: Example

The following example creates a log file with a member in each of the disk groups dgroup1 and dgroup2.

The following parameter settings are included in the initialization parameter file:

DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup1'
DB_CREATE_ONLINE_LOG_DEST_2 = '+dgroup2'

The following statement is issued at the SQL prompt:

ALTER DATABASE ADD LOGFILE;

Creating a Control File in ASM

Control files can be explicitly created in multiple disk groups. The filenames for control files are automatically generated. If an attempt to create a control file fails, partially created control files are automatically deleted.

There may be times when you need to specify a control file by name. Alias filenames allow administrators to reference ASM files with human-understandable names. Specifying an alias when the control file is created allows the DBA to refer to the control file by that meaningful name later. Furthermore, an alias can be specified as a control file name in the CONTROL_FILES initialization parameter. Control files created without aliases can be given alias names at a later time. The ALTER DISKGROUP...ADD ALIAS statement is used for this purpose.
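As a sketch (both names are hypothetical, and the alias directory +dgroup1/mydir is assumed to exist), attaching an alias to an existing control file might look like:

```sql
-- Create a user-friendly alias that references the system-generated
-- fully qualified name of an existing ASM control file.
ALTER DISKGROUP dgroup1
  ADD ALIAS '+dgroup1/mydir/control01.ctl'
  FOR '+dgroup1/sample/controlfile/Current.256.541956473';
```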

When creating a control file, datafiles and log files stored in an ASM disk group should be given to the CREATE CONTROLFILE command using the file reference context form of their ASM filenames. However, the use of the RESETLOGS option requires the use of a file creation context form for the specification of the log files.

Creating a Control File in ASM: Example 1

The following CREATE CONTROLFILE statement is generated by an ALTER DATABASE BACKUP CONTROLFILE TO TRACE command for a database with datafiles and log files created on disk groups dgroup1 and dgroup2:

CREATE CONTROLFILE REUSE DATABASE "SAMPLE" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '+DGROUP1/db/onlinelog/group_1.258.541956457',
    '+DGROUP2/db/onlinelog/group_1.256.541956473'
  ) SIZE 100M,
  GROUP 2 (
    '+DGROUP1/db/onlinelog/group_2.257.541956477',
    '+DGROUP2/db/onlinelog/group_2.258.541956487'
  ) SIZE 100M
DATAFILE
  '+DGROUP1/db/datafile/system.260.541956497',
  '+DGROUP1/db/datafile/sysaux.259.541956511'
CHARACTER SET US7ASCII
;

Creating a Control File in ASM: Example 2

This example is a CREATE CONTROLFILE statement for a database with datafiles, but uses a RESETLOGS clause, and thus uses the creation context form for log files:

CREATE CONTROLFILE REUSE DATABASE "SAMPLE" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '+DGROUP1',
    '+DGROUP2'
  ) SIZE 100M,
  GROUP 2 (
    '+DGROUP1',
    '+DGROUP2'
  ) SIZE 100M
DATAFILE
  '+DGROUP1/db/datafile/system.260.541956497',
  '+DGROUP1/db/datafile/sysaux.259.541956511'
CHARACTER SET US7ASCII
;

Creating Archive Log Files in ASM

Disk groups can be specified as archive log destinations in the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n initialization parameters. When destinations are specified in this manner, the archive log filename will be unique, even if archived twice. All partially created archive files, created as a result of a system error, are automatically deleted.

If LOG_ARCHIVE_DEST is set to a disk group name, LOG_ARCHIVE_FORMAT is ignored. Unique filenames for archived logs are automatically created by the Oracle database. If LOG_ARCHIVE_DEST is set to a directory in a disk group, LOG_ARCHIVE_FORMAT has its normal semantics.

The following sample archive log names might be generated with DB_RECOVERY_FILE_DEST set to +dgroup2. SAMPLE is the value of the DB_UNIQUE_NAME parameter:

+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_1_seq_38.614.541956473
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_4_seq_35.609.541956477
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_23/thread_2_seq_34.603.541956487
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_3_seq_100.621.541956497
+DGROUP2/SAMPLE/ARCHIVELOG/2003_09_25/thread_1_seq_38.614.541956511

Recovery Manager (RMAN) and ASM

RMAN is critical to ASM and is responsible for tracking the ASM filenames and for deleting obsolete ASM files. Because ASM files cannot be copied through normal operating system interfaces (other than with the XML DB repository through FTP or HTTP/WebDAV), RMAN is the preferred means of copying ASM files. RMAN is the only method for performing backups of a database containing ASM files.

RMAN can also be used for moving databases or files into ASM storage.

Migrating a Database to Automatic Storage Management

With a new installation of Oracle Database and Automatic Storage Management (ASM), you initially create your database in ASM. If you have an existing Oracle database that stores database files in the operating system file system or on raw devices, you can migrate some or all of these database files to ASM.

There are two ways to migrate database files to ASM:

Accessing Automatic Storage Management Files with the XML DB Virtual Folder

Automatic Storage Management (ASM) files and directories can be accessed through a virtual folder in the XML DB repository. The repository path to the virtual folder is /sys/asm. The folder is virtual because its contents do not actually reside in the repository; they exist as normal ASM files and directories. /sys/asm provides a means to access and manipulate the ASM files and directories with programmatic APIs such as the DBMS_XDB package and with XML DB protocols such as FTP and HTTP/WebDAV.

A typical use for this capability might be to view /sys/asm as a Web Folder in a graphical user interface (with the WebDAV protocol), and then copy a Data Pump dumpset from an ASM disk group to an operating system file system by dragging and dropping.

To access /sys/asm with XML DB protocols, you must log in as a user other than SYS, and you must have been granted the DBA role.


Note:

The FTP protocol is initially disabled for a new XML DB installation. To enable it, you must set the FTP port to a non-zero value. The easiest way to do this is with the catxdbdbca.sql script. This script takes two arguments. The first is the FTP port number, and the second is the HTTP/WebDAV port number. The following example configures the FTP port number to 7787, and the HTTP/WebDAV port number to 8080:
SQL> @?/rdbms/admin/catxdbdbca.sql 7787 8080

Another way to set these port numbers is with the XDB Configuration page in Enterprise Manager.



See Also:

Oracle XML DB Developer's Guide for information on Oracle XML DB, including additional ways to configure port numbers for the XML DB protocol servers, and Oracle Database PL/SQL Packages and Types Reference for information on the DBMS_XDB package.

Inside /sys/asm

The ASM virtual folder is created by default during XML DB installation. If the database is not configured to use ASM, the folder is empty and no operations are permitted on it.

The ASM virtual folder contains folders and subfolders that follow the hierarchy defined by the structure of an ASM fully qualified file name. Thus, /sys/asm contains one subfolder for every mounted disk group, and each disk group folder contains one subfolder for each database that uses the disk group. (In addition, a disk group folder may contain files and folders corresponding to aliases created by the administrator.) Continuing the hierarchy, the database folders contain file type folders, which contain the ASM files. This hierarchy is shown in the following diagram, which for simplicity, excludes directories created for aliases.

(Figure admin075.gif: the /sys/asm folder hierarchy, with one level each for disk groups, databases, file types, and ASM files.)


Restrictions

The following are usage restrictions on /sys/asm:

  • You cannot create hard links to existing ASM files or directories with APIs such as DBMS_XDB.LINK.

  • You cannot rename (move) an ASM file to another disk group or to a directory outside ASM.

Sample FTP Session

In the following sample FTP session, the disk groups are DATA and RECOVERY, the database name is MFG, and dbs is a directory that was created for aliases. All files in /sys/asm are binary.

ftp> open myhost 7777
ftp> user system
ftp> passwd dba
ftp> cd /sys/asm
ftp> ls
DATA
RECOVERY
ftp> cd DATA
ftp> ls
dbs
MFG
ftp> cd dbs
ftp> ls
t_dbl.f
t_axl.f
ftp> binary
ftp> get t_dbl.f t_axl.f
ftp> put t_db2.f

Viewing Information About Automatic Storage Management

You can use these views to query information about Automatic Storage Management:

  • V$ASM_DISKGROUP: In an ASM instance, describes a disk group (number, name, size-related information, state, and redundancy type). In a DB instance, contains one row for every ASM disk group mounted by the local ASM instance. This view performs disk discovery every time it is queried.

  • V$ASM_DISK: In an ASM instance, contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a DB instance, contains rows only for disks in the disk groups in use by that DB instance. This view performs disk discovery every time it is queried.

  • V$ASM_DISKGROUP_STAT: Has the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISKGROUP instead.

  • V$ASM_DISK_STAT: Has the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISK instead.

  • V$ASM_FILE: In an ASM instance, contains one row for every ASM file in every disk group mounted by the ASM instance. In a DB instance, contains no rows.

  • V$ASM_TEMPLATE: In an ASM or DB instance, contains one row for every template present in every disk group mounted by the ASM instance.

  • V$ASM_ALIAS: In an ASM instance, contains one row for every alias present in every disk group mounted by the ASM instance. In a DB instance, contains no rows.

  • V$ASM_OPERATION: In an ASM instance, contains one row for every active long-running ASM operation executing in the ASM instance. In a DB instance, contains no rows.

  • V$ASM_CLIENT: In an ASM instance, identifies databases using disk groups managed by the ASM instance. In a DB instance, contains one row for the ASM instance if the database has any open ASM files.



See Also:

Oracle Database Reference for details on all of these dynamic performance views


14 Managing Space for Schema Objects

This chapter offers guidelines for managing space for schema objects. You should familiarize yourself with the concepts in this chapter before attempting to manage specific schema objects as described in later chapters.

This chapter contains the following topics:

Managing Tablespace Alerts

Oracle Database provides proactive help in managing disk space for tablespaces by alerting you when available space is running low. Two alert thresholds are defined by default: warning and critical. The warning threshold is the limit at which space is beginning to run low. The critical threshold is a serious limit that warrants your immediate attention. The database issues alerts at both thresholds.

There are two ways to specify alert thresholds for both locally managed and dictionary managed tablespaces:

Alerts for locally managed tablespaces are server-generated. For dictionary managed tablespaces, Enterprise Manager provides this functionality. See "Server-Generated Alerts" for more information.

New tablespaces are assigned alert thresholds as follows:


Note:

In a database that is upgraded from version 9.x or earlier to 10.x, database defaults for all locally managed tablespace alert thresholds are set to zero. This setting effectively disables the alert mechanism to avoid excessive alerts in a newly migrated database.

Setting Alert Thresholds

For each tablespace, you can set just percent-full thresholds, just free-space-remaining thresholds, or both types of thresholds simultaneously. Setting either type of threshold to zero disables it.

The ideal setting for the warning threshold is one that issues an alert early enough for you to resolve the problem before it becomes critical. The critical threshold should issue an alert early enough for you to take immediate action to avoid loss of service.

To set alert threshold values:

Example—Locally Managed Tablespace

The following example sets the free-space-remaining thresholds in the USERS tablespace to 10 MB (warning) and 2 MB (critical), and disables the percent-full thresholds.

BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
   metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
   warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
   warning_value           => '10240',
   critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
   critical_value          => '2048',
   observation_period      => 1,
   consecutive_occurrences => 1,
   instance_name           => NULL,
   object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
   object_name             => 'USERS');

DBMS_SERVER_ALERT.SET_THRESHOLD(
   metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
   warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GT,
   warning_value           => '0',
   critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GT,
   critical_value          => '0',
   observation_period      => 1,
   consecutive_occurrences => 1,
   instance_name           => NULL,
   object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
   object_name             => 'USERS');
END;
/

Note:

When setting non-zero values for percent-full thresholds, use the greater-than-or-equal-to operator, OPERATOR_GE.

Restoring a Tablespace to Database Default Thresholds

After explicitly setting values for locally managed tablespace alert thresholds, you can cause the values to revert to the database defaults by setting them to NULL with DBMS_SERVER_ALERT.SET_THRESHOLD.
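For example, the following call (a sketch following the pattern of the earlier example) restores the database default free-space-remaining thresholds for the USERS tablespace by passing NULL threshold values:

```sql
-- Passing NULL for the warning and critical values reverts the
-- USERS tablespace free-space thresholds to the database defaults.
BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
   metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
   warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
   warning_value           => NULL,
   critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
   critical_value          => NULL,
   observation_period      => 1,
   consecutive_occurrences => 1,
   instance_name           => NULL,
   object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
   object_name             => 'USERS');
END;
/
```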

Modifying Database Default Thresholds

To modify database default thresholds for locally managed tablespaces, invoke DBMS_SERVER_ALERT.SET_THRESHOLD as shown in the previous example, but set object_name to NULL. All tablespaces that use the database default are then switched to the new default.
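For example, the following sketch sets new database-wide default percent-full thresholds. The threshold values (85 and 97) are illustrative:

```sql
-- Set new database default percent-full thresholds. Passing NULL for
-- object_name modifies the database default rather than one tablespace.
BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
   metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
   warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
   warning_value           => '85',
   critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
   critical_value          => '97',
   observation_period      => 1,
   consecutive_occurrences => 1,
   instance_name           => NULL,
   object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
   object_name             => NULL);
END;
/
```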

Viewing Alerts

You view alerts by accessing the home page of Enterprise Manager Database Control.

[Figure: Enterprise Manager Database Control home page showing tablespace-full alerts]


You can also view alerts for locally managed tablespaces with the DBA_OUTSTANDING_ALERTS view. See "Viewing Alert Data" for more information.
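For example, a query such as the following (with an abbreviated column list) lists outstanding tablespace alerts:

```sql
-- List outstanding alerts raised against tablespaces
SELECT object_name, reason, suggested_action
  FROM DBA_OUTSTANDING_ALERTS
 WHERE object_type = 'TABLESPACE';
```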

Limitations

Threshold-based alerts have the following limitations:

  • Alerts are not issued for locally managed tablespaces that are offline or in read-only mode. However, the database reactivates the alert system for such tablespaces after they become read/write or available.

  • When you take a tablespace offline or put it in read-only mode, you should disable the alerts for the tablespace by setting the thresholds to zero. You can then reenable the alerts by resetting the thresholds when the tablespace is once again online and in read/write mode.


See Also:


Managing Space in Data Blocks

The following topics are contained in this section:

Specifying the INITRANS Parameter

INITRANS specifies the number of update transaction entries for which space is initially reserved in the data block header. Space is reserved in the headers of all data blocks in the associated segment.

As multiple transactions concurrently access the rows of the same data block, space is allocated for each update transaction entry in the block. Once the space reserved by INITRANS is depleted, space for additional transaction entries is allocated out of the free space in a block, if available. Once allocated, this space effectively becomes a permanent part of the block header.


Note:

In earlier releases of Oracle Database, the MAXTRANS parameter limited the number of transaction entries that could concurrently use data in a data block. This parameter has been deprecated. Oracle Database now automatically allows up to 255 concurrent update transactions for any data block, depending on the available space in the block.

When the COMPATIBLE initialization parameter is set to 10.0 or greater, the database ignores MAXTRANS if users specify it for new objects.


You should consider the following when setting the INITRANS parameter for a schema object:

  • The space you would like to reserve for transaction entries compared to the space you would reserve for database data

  • The number of concurrent transactions that are likely to touch the same data blocks at any given time

For example, if a table is very large and only a small number of users simultaneously access the table, the chances of multiple concurrent transactions requiring access to the same data block are low. Therefore, INITRANS can be set low, especially if space is at a premium in the database.

Alternatively, assume that a table is usually accessed by many users at the same time. In this case, you might consider preallocating transaction entry space by using a high INITRANS. This eliminates the overhead of having to allocate transaction entry space, as required when the object is in use.

In general, Oracle recommends that you not change the value of INITRANS from its default.
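If you do choose to raise INITRANS for a table expected to receive many concurrent updates to the same blocks, the syntax follows this pattern (the table name and values are illustrative):

```sql
-- Hypothetical table with space preallocated for 4 transaction
-- entries in each block header
CREATE TABLE hot_counters (
  id    NUMBER PRIMARY KEY,
  hits  NUMBER
) INITRANS 4;

-- INITRANS can also be changed later; the new value affects
-- only blocks formatted after the change
ALTER TABLE hot_counters INITRANS 8;
```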

Managing Storage Parameters

This section describes the storage parameters that you can specify for schema object segments to tell the database how to store the object in the database. Schema objects include tables, indexes, partitions, clusters, materialized views, and materialized view logs.

The following topics are contained in this section:

Identifying the Storage Parameters

Storage parameters determine space allocation for objects when their segments are created in a tablespace. Not all storage parameters can be specified for every type of database object, and not all storage parameters can be specified in both the CREATE and ALTER statements. Storage parameters for objects in locally managed tablespaces are supported mainly for backward compatibility.

The Oracle Database server manages extents for locally managed tablespaces. If you specified the UNIFORM clause when the tablespace was created, then the database creates all extents of a uniform size that you specified (or a default size) for any objects created in the tablespace. If you specified the AUTOALLOCATE clause, then the database determines the extent sizing policy for the tablespace. So, for example, if you specify the INITIAL clause when you create an object in a locally managed tablespace, you are telling the database to preallocate at least that much space. The database then determines the appropriate number of extents needed to allocate that much space.

Table 14-1 contains a brief description of each storage parameter. For a complete description of these parameters, including their default, minimum, and maximum settings, see the Oracle Database SQL Reference.

Table 14-1 Object Storage Parameters

Parameter Description

INITIAL

In a tablespace that is specified as EXTENT MANAGEMENT LOCAL, the database uses the value of INITIAL with the extent size for the tablespace to determine the initial amount of space to reserve for the object. For example, in a uniform locally managed tablespace with 5M extents, if you specify an INITIAL value of 1M, then the database must allocate one 5M extent. If the extent size of the tablespace is smaller than the value of INITIAL, then the initial amount of space allocated will in fact be more than one extent.

MINEXTENTS

In a tablespace that is specified as EXTENT MANAGEMENT LOCAL, MINEXTENTS is used to compute the initial amount of space that is allocated. The initial amount of space that is allocated is equal to INITIAL * MINEXTENTS. Thereafter, it is set to 1 (as seen in the DBA_SEGMENTS view).

BUFFER POOL

Defines a default buffer pool (cache) for a schema object. For information on the use of this parameter, see Oracle Database Performance Tuning Guide.


Specifying Storage Parameters at Object Creation

At object creation, you can specify storage parameters for each individual schema object. These parameter settings override any default storage settings. Use the STORAGE clause of the CREATE or ALTER statement for specifying storage parameters for the individual object.
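For example, the following sketch creates a hypothetical table with an explicit STORAGE clause. The object name and parameter values are illustrative:

```sql
-- Override the tablespace storage defaults for this one table.
-- In a locally managed tablespace, INITIAL is honored as a
-- minimum preallocation, as described earlier.
CREATE TABLE orders_hist (
  order_id   NUMBER,
  order_date DATE
)
TABLESPACE users
STORAGE (INITIAL 1M BUFFER_POOL KEEP);
```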

Setting Storage Parameters for Clusters

Use the STORAGE clause of the CREATE TABLE or ALTER TABLE statement to set the storage parameters for non-clustered tables.

In contrast, set the storage parameters for the data segments of a cluster using the STORAGE clause of the CREATE CLUSTER or ALTER CLUSTER statement, rather than the individual CREATE or ALTER statements that put tables into the cluster. Storage parameters specified when creating or altering a clustered table are ignored. The storage parameters set for the cluster override the table storage parameters.
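The following sketch illustrates the pattern; the cluster and table names are hypothetical:

```sql
-- Storage is set on the cluster itself
CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2))
  SIZE 600
  STORAGE (INITIAL 500K);

-- Clustered tables take their storage from the cluster,
-- so no STORAGE clause appears here
CREATE TABLE dept_c (
  deptno NUMBER(2),
  dname  VARCHAR2(14)
) CLUSTER emp_dept_cluster (deptno);
```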

Setting Storage Parameters for Partitioned Tables

With partitioned tables, you can set default storage parameters at the table level. When creating a new partition of the table, the default storage parameters are inherited from the table level (unless you specify them for the individual partition). If no storage parameters are specified at the table level, then they are inherited from the tablespace.

Setting Storage Parameters for Index Segments

Storage parameters for an index segment created for a table index can be set using the STORAGE clause of the CREATE INDEX or ALTER INDEX statement.

Storage parameters of an index segment created for the index used to enforce a primary key or unique key constraint can be set in either of the following ways:

  • In the ENABLE ... USING INDEX clause of the CREATE TABLE or ALTER TABLE statement

  • In the STORAGE clause of the ALTER INDEX statement
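For example (the table, constraint, and storage values are illustrative):

```sql
-- Specify storage for the index that enforces the primary key
-- constraint at table creation time
CREATE TABLE customers (
  cust_id NUMBER,
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
    USING INDEX TABLESPACE users STORAGE (INITIAL 1M)
);

-- Or adjust alterable storage parameters of the index afterward
ALTER INDEX customers_pk STORAGE (BUFFER_POOL KEEP);
```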

Setting Storage Parameters for LOBs, Varrays, and Nested Tables

A table or materialized view can contain LOB, varray, or nested table column types. These entities can be stored in their own segments. LOBs and varrays are stored in LOB segments, while a nested table is stored in a storage table. You can specify a STORAGE clause for these segments that will override storage parameters specified at the table level.


See Also:


Changing Values of Storage Parameters

You can alter default storage parameters for tablespaces and specific storage parameters for individual objects if you so choose. Default storage parameters can be reset for a tablespace; however, changes affect only new objects created in the tablespace or new extents allocated for a segment. As discussed previously, default storage parameters cannot be specified for locally managed tablespaces, so resetting tablespace defaults applies only to dictionary managed tablespaces.

The INITIAL and MINEXTENTS storage parameters cannot be altered for an existing table, cluster, or index. If only NEXT is altered for a segment, the next incremental extent is the size of the new NEXT, and subsequent extents can grow by PCTINCREASE as usual.

If both NEXT and PCTINCREASE are altered for a segment, the next extent is the new value of NEXT, and from that point forward, NEXT is calculated using PCTINCREASE as usual.

Understanding Precedence in Storage Parameters

Starting with default values, the storage parameters in effect for a database object at a given time are determined by the following, listed in order of precedence (where higher numbers take precedence over lower numbers):

  1. Oracle Database default values

  2. DEFAULT STORAGE clause of CREATE TABLESPACE statement

  3. DEFAULT STORAGE clause of ALTER TABLESPACE statement

  4. STORAGE clause of CREATE [TABLE | CLUSTER | MATERIALIZED VIEW | MATERIALIZED VIEW LOG | INDEX] statement

  5. STORAGE clause of ALTER [TABLE | CLUSTER | MATERIALIZED VIEW | MATERIALIZED VIEW LOG | INDEX] statement

Any storage parameter specified at the object level overrides the corresponding option set at the tablespace level. When storage parameters are not explicitly set at the object level, they default to those at the tablespace level. When storage parameters are not set at the tablespace level, Oracle Database system defaults apply. If storage parameters are altered, the new options apply only to the extents not yet allocated.


Note:

The storage parameters for temporary segments always use the default storage parameters set for the associated tablespace.

Managing Resumable Space Allocation

Oracle Database provides a means for suspending, and later resuming, the execution of large database operations in the event of space allocation failures. This enables you to take corrective action instead of the Oracle Database server returning an error to the user. After the error condition is corrected, the suspended operation automatically resumes. This feature is called resumable space allocation. The statements that are affected are called resumable statements.

This section contains the following topics:

Resumable Space Allocation Overview

This section provides an overview of resumable space allocation. It describes how resumable space allocation works, and specifically defines qualifying statements and error conditions.

How Resumable Space Allocation Works

The following is an overview of how resumable space allocation works. Details are contained in later sections.

  1. A statement executes in a resumable mode only if its session has been enabled for resumable space allocation by one of the following actions:

    • The RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value.

    • The ALTER SESSION ENABLE RESUMABLE statement is issued.

  2. A resumable statement is suspended when one of the following conditions occurs (these conditions result in corresponding errors being signalled for non-resumable statements):

    • Out of space condition

    • Maximum extents reached condition

    • Space quota exceeded condition

  3. When the execution of a resumable statement is suspended, there are mechanisms to perform user supplied operations, log errors, and to query the status of the statement execution. When a resumable statement is suspended the following actions are taken:

    • The error is reported in the alert log.

    • The system issues the Resumable Session Suspended alert.

    • If the user registered a trigger on the AFTER SUSPEND system event, the user trigger is executed. A user supplied PL/SQL procedure can access the error message data using the DBMS_RESUMABLE package and the DBA_ or USER_RESUMABLE view.

  4. Suspending a statement automatically results in suspending the transaction. Thus all transactional resources are held through a statement suspend and resume.

  5. When the error condition is resolved (for example, as a result of user intervention or perhaps sort space released by other queries), the suspended statement automatically resumes execution and the Resumable Session Suspended alert is cleared.

  6. A suspended statement can be forced to throw the exception using the DBMS_RESUMABLE.ABORT() procedure. This procedure can be called by a DBA, or by the user who issued the statement.

  7. A suspension time out interval is associated with resumable statements. A resumable statement that is suspended for the timeout interval (the default is two hours) wakes up and returns the exception to the user.

  8. A resumable statement can be suspended and resumed multiple times during execution.

What Operations are Resumable?

The following operations are resumable:

  • Queries

    SELECT statements that run out of temporary space (for sort areas) are candidates for resumable execution. When using OCI, the calls OCIStmtExecute() and OCIStmtFetch() are candidates.

  • DML

    INSERT, UPDATE, and DELETE statements are candidates. The interface used to execute them does not matter; it can be OCI, SQLJ, PL/SQL, or another interface. Also, INSERT INTO...SELECT from external tables can be resumable.

  • Import/Export

    As with SQL*Loader, a command line parameter controls whether statements are resumable after recoverable errors.

  • DDL

    The following statements are candidates for resumable execution:

    • CREATE TABLE ... AS SELECT

    • CREATE INDEX

    • ALTER INDEX ... REBUILD

    • ALTER TABLE ... MOVE PARTITION

    • ALTER TABLE ... SPLIT PARTITION

    • ALTER INDEX ... REBUILD PARTITION

    • ALTER INDEX ... SPLIT PARTITION

    • CREATE MATERIALIZED VIEW

    • CREATE MATERIALIZED VIEW LOG

What Errors are Correctable?

There are three classes of correctable errors:

  • Out of space condition

    The operation cannot acquire any more extents for a table/index/temporary segment/undo segment/cluster/LOB/table partition/index partition in a tablespace. For example, the following errors fall in this category:

    ORA-1653 unable to extend table ... in tablespace ...
    ORA-1654 unable to extend index ... in tablespace ...
    
    
  • Maximum extents reached condition

    The number of extents in a table/index/temporary segment/undo segment/cluster/LOB/table partition/index partition equals the maximum extents defined on the object. For example, the following errors fall in this category:

    ORA-1631 max # extents ... reached in table ...
    ORA-1632 max # extents ... reached in index ...
    
    
  • Space quota exceeded condition

    The user has exceeded his assigned space quota in the tablespace. Specifically, this is noted by the following error:

    ORA-1536 space quota exceeded for tablespace string 
    

Resumable Space Allocation and Distributed Operations

In a distributed environment, if a user enables or disables resumable space allocation, or if you, as a DBA, alter the RESUMABLE_TIMEOUT initialization parameter, only the local instance is affected. In a distributed transaction, sessions or remote instances are suspended only if RESUMABLE has been enabled in the remote instance.

Parallel Execution and Resumable Space Allocation

In parallel execution, if one of the parallel execution server processes encounters a correctable error, that server process suspends its execution. Other parallel execution server processes will continue executing their respective tasks, until either they encounter an error or are blocked (directly or indirectly) by the suspended server process. When the correctable error is resolved, the suspended process resumes execution and the parallel operation continues execution. If the suspended operation is terminated, the parallel operation aborts, throwing the error to the user.

Different parallel execution server processes may encounter one or more correctable errors. This may result in firing an AFTER SUSPEND trigger multiple times, in parallel. Also, if a parallel execution server process encounters a non-correctable error while another parallel execution server process is suspended, the suspended statement is immediately aborted.

For parallel execution, every parallel execution coordinator and server process has its own entry in the DBA_ or USER_RESUMABLE view.

Enabling and Disabling Resumable Space Allocation

Resumable space allocation is only possible when statements are executed within a session that has resumable mode enabled. There are two means of enabling and disabling resumable space allocation. You can control it at the system level with the RESUMABLE_TIMEOUT initialization parameter, or users can enable it at the session level using clauses of the ALTER SESSION statement.


Note:

Because suspended statements can hold up some system resources, users must be granted the RESUMABLE system privilege before they are allowed to enable resumable space allocation and execute resumable statements.

Setting the RESUMABLE_TIMEOUT Initialization Parameter

You can enable resumable space allocation system wide and specify a timeout interval by setting the RESUMABLE_TIMEOUT initialization parameter. For example, the following setting of the RESUMABLE_TIMEOUT parameter in the initialization parameter file causes all sessions to initially be enabled for resumable space allocation and sets the timeout period to 1 hour:

RESUMABLE_TIMEOUT  = 3600

If this parameter is set to 0, then resumable space allocation is disabled initially for all sessions. This is the default.

You can use the ALTER SYSTEM SET statement to change the value of this parameter at the system level. For example, the following statement will disable resumable space allocation for all sessions:

ALTER SYSTEM SET RESUMABLE_TIMEOUT=0;

Within a session, a user can issue the ALTER SESSION SET statement to set the RESUMABLE_TIMEOUT initialization parameter, thereby enabling resumable space allocation, changing a timeout value, or disabling resumable mode.
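For example (the timeout value is illustrative):

```sql
-- Enable resumable space allocation for this session with a
-- one-hour timeout
ALTER SESSION SET RESUMABLE_TIMEOUT = 3600;

-- Disable it again for this session
ALTER SESSION SET RESUMABLE_TIMEOUT = 0;
```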

Using ALTER SESSION to Enable and Disable Resumable Space Allocation

A user can enable resumable mode for a session, using the following SQL statement:

ALTER SESSION ENABLE RESUMABLE;

To disable resumable mode, a user issues the following statement:

ALTER SESSION DISABLE RESUMABLE;

The default for a new session is resumable mode disabled, unless the RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value.

The user can also specify a timeout interval, and can provide a name used to identify a resumable statement. These are discussed separately in following sections.

Specifying a Timeout Interval

A timeout period, after which a suspended statement will error if no intervention has taken place, can be specified when resumable mode is enabled. The following statement specifies that resumable transactions will time out and error after 3600 seconds:

ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;

The value of TIMEOUT remains in effect until it is changed by another ALTER SESSION ENABLE RESUMABLE statement, it is changed by another means, or the session ends. The default timeout interval when using the ENABLE RESUMABLE TIMEOUT clause to enable resumable mode is 7200 seconds.


See Also:

"Setting the RESUMABLE_TIMEOUT Initialization Parameter" for other methods of changing the timeout interval for resumable space allocation

Naming Resumable Statements

Resumable statements can be identified by name. The following statement assigns a name to resumable statements:

ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'insert into table';

The NAME value remains in effect until it is changed by another ALTER SESSION ENABLE RESUMABLE statement, or the session ends. The default value for NAME is 'User username(userid), Session sessionid, Instance instanceid'.

The name of the statement is used to identify the resumable statement in the DBA_RESUMABLE and USER_RESUMABLE views.
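For example, the following query uses the name assigned above to locate the statement (an abbreviated column list is shown):

```sql
-- Find resumable statements by their assigned name
SELECT name, status, error_number, sql_text
  FROM DBA_RESUMABLE
 WHERE name = 'insert into table';
```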

Using a LOGON Trigger to Set Default Resumable Mode

Another method of setting default resumable mode, other than setting the RESUMABLE_TIMEOUT initialization parameter, is that you can register a database level LOGON trigger to alter a user's session to enable resumable and set a timeout interval.
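The following sketch shows one way to do this; the trigger name and the BATCH_USER account are assumptions for illustration:

```sql
-- Hypothetical database-level LOGON trigger that enables resumable
-- mode with a one-hour timeout for a specific application account
CREATE OR REPLACE TRIGGER set_resumable_on_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'BATCH_USER' THEN  -- assumed account name
    EXECUTE IMMEDIATE
      'ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME ''batch load''';
  END IF;
END;
/
```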


Note:

If there are multiple triggers registered that change default mode and timeout for resumable statements, the result will be unspecified because Oracle Database does not guarantee the order of trigger invocation.

Detecting Suspended Statements

When a resumable statement is suspended, the error is not raised to the client. In order for corrective action to be taken, Oracle Database provides alternative methods for notifying users of the error and for providing information about the circumstances.

Notifying Users: The AFTER SUSPEND System Event and Trigger

When a resumable statement encounters a correctable error, the system internally generates the AFTER SUSPEND system event. Users can register triggers for this event at both the database and schema level. If a user registers a trigger to handle this system event, the trigger is executed after a SQL statement has been suspended.

SQL statements executed within an AFTER SUSPEND trigger are always non-resumable and are always autonomous. Transactions started within the trigger use the SYSTEM rollback segment. These conditions are imposed to overcome deadlocks and reduce the chance of the trigger experiencing the same error condition as the statement.

Users can use the USER_RESUMABLE or DBA_RESUMABLE views, or the DBMS_RESUMABLE.SPACE_ERROR_INFO function, within triggers to get information about the resumable statements.

Triggers can also call the DBMS_RESUMABLE package to terminate suspended statements and modify resumable timeout values. In the following example, the default system timeout is changed by creating a system wide AFTER SUSPEND trigger that calls DBMS_RESUMABLE to set the timeout to 3 hours:

CREATE OR REPLACE TRIGGER resumable_default_timeout
AFTER SUSPEND
ON DATABASE
BEGIN
   DBMS_RESUMABLE.SET_TIMEOUT(10800);
END;
/

See Also:

Oracle Database Application Developer's Guide - Fundamentals for information about system events, triggers, and attribute functions

Using Views to Obtain Information About Suspended Statements

The following views can be queried to obtain information about the status of resumable statements:

View Description
DBA_RESUMABLE

USER_RESUMABLE

These views contain rows for all currently executing or suspended resumable statements. They can be used by a DBA, AFTER SUSPEND trigger, or another session to monitor the progress of, or obtain specific information about, resumable statements.
V$SESSION_WAIT When a statement is suspended the session invoking the statement is put into a wait state. A row is inserted into this view for the session with the EVENT column containing "statement suspended, wait error to be cleared".


See Also:

Oracle Database Reference for specific information about the columns contained in these views

Using the DBMS_RESUMABLE Package

The DBMS_RESUMABLE package helps control resumable space allocation. The following procedures can be invoked:

Procedure Description
ABORT(sessionID) This procedure aborts a suspended resumable statement. The parameter sessionID is the session ID in which the statement is executing. For parallel DML/DDL, sessionID is any session ID which participates in the parallel DML/DDL.

Oracle Database guarantees that the ABORT operation always succeeds. It may be called either inside or outside of the AFTER SUSPEND trigger.

The caller of ABORT must be the owner of the session with sessionID, have ALTER SYSTEM privilege, or have DBA privileges.

GET_SESSION_TIMEOUT(sessionID) This function returns the current timeout value of resumable space allocation for the session with sessionID. This returned timeout is in seconds. If the session does not exist, this function returns -1.
SET_SESSION_TIMEOUT(sessionID, timeout) This procedure sets the timeout interval of resumable space allocation for the session with sessionID. The parameter timeout is in seconds. The new timeout setting applies to the session immediately. If the session does not exist, no action is taken.
GET_TIMEOUT() This function returns the current timeout value of resumable space allocation for the current session. The returned value is in seconds.
SET_TIMEOUT(timeout) This procedure sets a timeout value for resumable space allocation for the current session. The parameter timeout is in seconds. The new timeout setting applies to the session immediately.
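For example, the following anonymous block (illustrative) reads and then raises the timeout for the current session:

```sql
-- Query and adjust the resumable timeout for the current session
DECLARE
  t NUMBER;
BEGIN
  t := DBMS_RESUMABLE.GET_TIMEOUT();   -- current timeout in seconds
  DBMS_OUTPUT.PUT_LINE('Current timeout: ' || t);
  DBMS_RESUMABLE.SET_TIMEOUT(7200);    -- raise it to two hours
END;
/
```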

Operation-Suspended Alert

When a resumable session is suspended, an operation-suspended alert is issued on the object that needs allocation of a resource for the operation to complete. Once the resource is allocated and the operation completes, the operation-suspended alert is cleared. See "Managing Tablespace Alerts" for more information on system-generated alerts.

Resumable Space Allocation Example: Registering an AFTER SUSPEND Trigger

In the following example, a system wide AFTER SUSPEND trigger is created and registered as user SYS at the database level. Whenever a resumable statement is suspended in any session, this trigger can have either of two effects:

  • If an undo segment has reached its space limit, then a message is sent to the DBA and the statement is aborted.

  • If any other recoverable error has occurred, the timeout interval is reset to 8 hours.

Here are the statements for this example:

CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
   /* declare transaction in this trigger is autonomous */
   /* this is not required because transactions within a trigger
      are always autonomous */
   PRAGMA AUTONOMOUS_TRANSACTION;
   cur_sid           NUMBER;
   cur_inst          NUMBER;
   errno             NUMBER;
   -- VARCHAR2 declarations require a length; the sizes below are illustrative
   err_type          VARCHAR2(64);
   object_owner      VARCHAR2(64);
   object_type       VARCHAR2(64);
   table_space_name  VARCHAR2(64);
   object_name       VARCHAR2(256);
   sub_object_name   VARCHAR2(256);
   error_txt         VARCHAR2(4000);
   msg_body          VARCHAR2(4000);
   ret_value         BOOLEAN;
   mail_conn         UTL_SMTP.CONNECTION;
BEGIN
   -- Get session ID
   SELECT DISTINCT(SID) INTO cur_SID FROM V$MYSTAT;

   -- Get instance number
   cur_inst := userenv('instance');

   -- Get space error information
   ret_value := 
   DBMS_RESUMABLE.SPACE_ERROR_INFO(err_type,object_type,object_owner,
        table_space_name,object_name, sub_object_name);
   /*
   -- If the error is related to undo segments, log error, send email
   -- to DBA, and abort the statement. Otherwise, set timeout to 8 hours.
   -- 
   -- sys.rbs_error is a table which is to be
   -- created by a DBA manually and defined as
   -- (sql_text VARCHAR2(1000), error_msg VARCHAR2(4000),
   -- suspend_time DATE)
   */

   IF OBJECT_TYPE = 'UNDO SEGMENT' THEN
       /* LOG ERROR */
       INSERT INTO sys.rbs_error (
           SELECT SQL_TEXT, ERROR_MSG, SUSPEND_TIME
           FROM DBA_RESUMABLE
           WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst
        );
       SELECT ERROR_MSG INTO error_txt FROM DBA_RESUMABLE 
           WHERE SESSION_ID = cur_sid and INSTANCE_ID = cur_inst;

        -- Send email to recipient via UTL_SMTP package
        msg_body:='Subject: Space Error Occurred

                   Space limit reached for undo segment ' || object_name || 
                   ' on ' || TO_CHAR(SYSDATE, 'Month dd, YYYY, HH:MIam') ||
                   '. Error message was ' || error_txt;

        mail_conn := UTL_SMTP.OPEN_CONNECTION('localhost', 25);
        UTL_SMTP.HELO(mail_conn, 'localhost');
        UTL_SMTP.MAIL(mail_conn, 'sender@localhost');
        UTL_SMTP.RCPT(mail_conn, 'recipient@localhost');
        UTL_SMTP.DATA(mail_conn, msg_body);
        UTL_SMTP.QUIT(mail_conn);

        -- Abort the statement
        DBMS_RESUMABLE.ABORT(cur_sid);
    ELSE
        -- Set timeout to 8 hours
        DBMS_RESUMABLE.SET_TIMEOUT(28800);
    END IF;

    /* commit autonomous transaction */
    COMMIT;   
END;
/

Reclaiming Wasted Space

This section explains how to reclaim wasted space, and also introduces the Segment Advisor, which is the Oracle Database component that identifies segments that have space available for reclamation.


Understanding Reclaimable Unused Space

Over time, updates and deletes on objects within a tablespace can create pockets of empty space that individually are not large enough to be reused for new data. This type of empty space is referred to as fragmented free space.

Objects with fragmented free space can result in much wasted space, and can impact database performance. The preferred way to defragment and reclaim this space is to perform an online segment shrink. This process consolidates fragmented free space below the high water mark and compacts the segment. After compaction, the high water mark is moved, resulting in new free space above the high water mark. That space above the high water mark is then deallocated. The segment remains available for queries and DML during most of the operation, and no extra disk space need be allocated.

You use the Segment Advisor to identify segments that would benefit from online segment shrink. Only segments in locally managed tablespaces with automatic segment space management (ASSM) are eligible. Other restrictions on segment type exist. For more information, see "Shrinking Database Segments Online".
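A quick way to check which tablespaces meet these criteria is to query DBA_TABLESPACES. The following is a sketch; the exact column values are documented in Oracle Database Reference:

```sql
-- Sketch: list locally managed tablespaces that use automatic segment
-- space management (ASSM). Only segments in these tablespaces are
-- eligible for online segment shrink.
SELECT tablespace_name
  FROM dba_tablespaces
 WHERE extent_management        = 'LOCAL'
   AND segment_space_management = 'AUTO';
```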

If a table with reclaimable space is not eligible for online segment shrink, or if you want to make changes to logical or physical attributes of the table while reclaiming space, you can use online table redefinition as an alternative to segment shrink. Online redefinition is also referred to as reorganization. Unlike online segment shrink, it requires extra disk space to be allocated. See "Redefining Tables Online" for more information.
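As a rough sketch only (the complete procedure, including copying dependent objects and intermediate synchronization, is covered in "Redefining Tables Online"), an online redefinition with the DBMS_REDEFINITION package follows this outline. The interim table name here is hypothetical:

```sql
BEGIN
  -- Verify that the table can be redefined online.
  DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'EMPLOYEES');
END;
/

-- Create an interim table (hypothetical name HR.EMPLOYEES_INTERIM) with
-- the desired logical or physical attributes, then start and finish the
-- redefinition:
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMPLOYEES', 'EMPLOYEES_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMPLOYEES', 'EMPLOYEES_INTERIM');
END;
/
```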

Using the Segment Advisor

The Segment Advisor identifies segments that have space available for reclamation. It performs its analysis by examining usage and growth statistics in the Automatic Workload Repository (AWR), and by sampling the data in the segment. It is configured to run automatically at regular intervals, and you can also run it on demand (manually). The regularly scheduled Segment Advisor run is known as the Automatic Segment Advisor.

The Segment Advisor generates the following types of advice:

  • If the Segment Advisor determines that an object has a significant amount of free space, it recommends online segment shrink. If the object is a table that is not eligible for shrinking, as in the case of a table in a tablespace without automatic segment space management, the Segment Advisor recommends online table redefinition.

  • If the Segment Advisor encounters a table with row chaining above a certain threshold, it records the fact that the table has an excess of chained rows.


    Note:

    The Segment Advisor flags only the type of row chaining that results from updates that increase row length.

If you receive a space management alert, or if you decide that you want to reclaim space, you should start with the Segment Advisor.

To use the Segment Advisor:

  1. Check the results of the Automatic Segment Advisor.

    To understand the Automatic Segment Advisor, see "Automatic Segment Advisor", later in this section. For details on how to view results, see "Viewing Segment Advisor Results".

  2. (Optional) Obtain updated results on individual segments by rerunning the Segment Advisor manually.

    See "Running the Segment Advisor Manually", later in this section.

Automatic Segment Advisor

The Automatic Segment Advisor is started by a Scheduler job that is configured to run during the default maintenance window. The default maintenance window is specified in the Scheduler, and is initially defined as follows:

  • Weeknights, Monday through Friday, from 10:00 p.m. to 6:00 a.m. (8 hours each night)

  • Weekends, from Saturday morning at 12:00 a.m. to Monday morning at 12:00 a.m. (for a total of 48 hours)

The Automatic Segment Advisor does not analyze every database object. Instead, it examines database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold

  • Segments that have the most activity

  • Segments that have the highest growth rate

If an object is selected for analysis but the maintenance window expires before the Segment Advisor can process the object, the object is included in the next Automatic Segment Advisor run.

You cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor job, change the times during which the Automatic Segment Advisor is scheduled to run, or adjust Automatic Segment Advisor system resource utilization. See "Configuring the Automatic Segment Advisor Job" for more information.

Running the Segment Advisor Manually

You can manually run the Segment Advisor at any time with Enterprise Manager or with PL/SQL package procedure calls. Reasons to manually run the Segment Advisor include the following:

  • You want to analyze a tablespace or segment that was not selected by the Automatic Segment Advisor.

  • You want to repeat the analysis of an individual tablespace or segment to get more up-to-date recommendations.

You can request advice from the Segment Advisor at three levels:

  • Segment level—Advice is generated for a single segment, such as an unpartitioned table, a partition or subpartition of a partitioned table, an index, or a LOB column.

  • Object level—Advice is generated for an entire object, such as a table or index. If the object is partitioned, advice is generated on all the partitions of the object. In addition, if you run Segment Advisor manually from Enterprise Manager, you can request advice on the object's dependent objects, such as indexes and LOB segments for a table.

  • Tablespace level—Advice is generated for every segment in a tablespace.

The OBJECT_TYPE column of Table 14-3 shows the types of objects for which you can request advice.

To run the Segment Advisor, you must have ADVISOR and CREATE JOB or CREATE ANY JOB privileges.
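For example, a DBA might grant these privileges as follows (STEVE is a hypothetical account used for illustration):

```sql
-- Sketch: privileges needed to run the Segment Advisor manually.
GRANT ADVISOR TO steve;
GRANT CREATE JOB TO steve;
```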

Running the Segment Advisor Manually with Enterprise Manager

There are two ways to run the Segment Advisor manually with Enterprise Manager:

  • Using the Segment Advisor Wizard

    This method enables you to request advice at the tablespace level or object level. At the object level, you can request advice on tables, indexes, table partitions, and index partitions. Dependent objects such as LOB segments cannot be included in the analysis.

  • Using the Run Segment Advisor command on a schema object display page.

    For example, if you display a list of tables on the Tables page (in the Administration section of Enterprise Manager Database Control), you can select a table and then select the Run Segment Advisor command from the Actions menu.

    Figure 14-1 Tables page



    This method enables you to include the schema object's dependent objects in the Segment Advisor run. For example, if you select a table and select the Run Segment Advisor command, Enterprise Manager displays the table's dependent objects, such as partitions, index segments, LOB segments, and so on. You can then select dependent objects to include in the run.

In both cases, Enterprise Manager creates the Segment Advisor task as an Oracle Database Scheduler job. You can schedule the job to run immediately, or can take advantage of advanced scheduling features offered by the Scheduler.

To run the Segment Advisor manually with the Segment Advisor Wizard:

  1. From the database Home page, under Related Links, click Advisor Central.

    The Advisor Central page appears. (See Figure 14-2.)

  2. Under Advisors, click Segment Advisor.

    The first page of the Segment Advisor wizard appears.

  3. Follow the wizard steps to schedule the Segment Advisor job, and then click Submit on the final wizard page.

    The Advisor Central page reappears, with the new Segment Advisor job at the top of the list under the Results heading. The job status is SCHEDULED. (If you don't see your job, use the search fields above the list to display it.)

  4. Check the status of the job. If it is not COMPLETED, click the Refresh button at the top of the page repeatedly. (Do not use your browser's Refresh icon.)

    When the job status changes to COMPLETED, select the job by clicking in the Select column, and then click View Result.

    Figure 14-2 Advisor Central page




See Also:

Chapter 27, "Using the Scheduler" for more information about the advanced scheduling features of the Scheduler.

Running the Segment Advisor Manually with PL/SQL

You can also run the Segment Advisor with the DBMS_ADVISOR package. You use package procedures to create a Segment Advisor task, set task arguments, and then execute the task. Table 14-2 shows the procedures that are relevant for the Segment Advisor. Refer to Oracle Database PL/SQL Packages and Types Reference for more details on these procedures.

Table 14-2 DBMS_ADVISOR package procedures relevant to the Segment Advisor

Package Procedure Name Description

CREATE_TASK

Use this procedure to create the Segment Advisor task. Specify 'Segment Advisor' as the value of the ADVISOR_NAME parameter.

CREATE_OBJECT

Use this procedure to identify the target object for segment space advice. The parameter values of this procedure depend upon the object type. Table 14-3 lists the parameter values for each type of object.

Note: To request advice on an IOT overflow segment, use an object type of TABLE, TABLE PARTITION, or TABLE SUBPARTITION. Use the following query to find the overflow segment for an IOT and to determine the overflow segment table name to use with CREATE_OBJECT:

select table_name, iot_name, iot_type from dba_tables;

SET_TASK_PARAMETER

Use this procedure to describe the segment advice that you need. Table 14-4 shows the relevant input parameters of this procedure. Parameters not listed here are not used by the Segment Advisor.

EXECUTE_TASK

Use this procedure to execute the Segment Advisor task.


Table 14-3 Input for DBMS_ADVISOR.CREATE_OBJECT

OBJECT_TYPE         ATTR1            ATTR2         ATTR3                    ATTR4
------------------  ---------------  ------------  -----------------------  ---------------------
TABLESPACE          tablespace name  NULL          NULL                     Unused. Specify NULL.
TABLE               schema name      table name    NULL                     Unused. Specify NULL.
INDEX               schema name      index name    NULL                     Unused. Specify NULL.
TABLE PARTITION     schema name      table name    table partition name     Unused. Specify NULL.
INDEX PARTITION     schema name      index name    index partition name     Unused. Specify NULL.
TABLE SUBPARTITION  schema name      table name    table subpartition name  Unused. Specify NULL.
INDEX SUBPARTITION  schema name      index name    index subpartition name  Unused. Specify NULL.
LOB                 schema name      segment name  NULL                     Unused. Specify NULL.
LOB PARTITION       schema name      segment name  lob partition name       Unused. Specify NULL.
LOB SUBPARTITION    schema name      segment name  lob subpartition name    Unused. Specify NULL.


Table 14-4 Input for DBMS_ADVISOR.SET_TASK_PARAMETER

time_limit

    Description: The time limit for the Segment Advisor run, specified in seconds.
    Possible values: Any number of seconds
    Default value: UNLIMITED

recommend_all

    Description: Whether the Segment Advisor should generate findings for all segments.
    Possible values:
        TRUE: Findings are generated for all segments specified, whether or not space reclamation is recommended.
        FALSE: Findings are generated only for those objects that generate recommendations for space reclamation.
    Default value: TRUE


Example

The example that follows shows how to use the DBMS_ADVISOR procedures to run the Segment Advisor for the sample table hr.employees. The user executing these package procedures must have the EXECUTE object privilege on the package or the ADVISOR system privilege.

Note that passing an object type of TABLE to DBMS_ADVISOR.CREATE_OBJECT amounts to an object level request. If the table is not partitioned, the table segment is analyzed (without any dependent segments like index or LOB segments). If the table is partitioned, the Segment Advisor analyzes all table partitions and generates separate findings and recommendations for each.

variable id number;

declare
  name    varchar2(100);
  descr   varchar2(500);
  obj_id  number;
begin
  name  := 'Manual_Employees';
  descr := 'Segment Advisor Example';

  dbms_advisor.create_task (
    advisor_name => 'Segment Advisor',
    task_id      => :id,
    task_name    => name,
    task_desc    => descr);

  dbms_advisor.create_object (
    task_name    => name,
    object_type  => 'TABLE',
    attr1        => 'HR',
    attr2        => 'EMPLOYEES',
    attr3        => NULL,
    attr4        => NULL,
    attr5        => NULL,
    object_id    => obj_id);

  dbms_advisor.set_task_parameter (
    task_name    => name,
    parameter    => 'recommend_all',
    value        => 'TRUE');

  dbms_advisor.execute_task(name);
end;
/

Viewing Segment Advisor Results

The Segment Advisor creates several types of results: recommendations, findings, actions, and objects. You can view results in the following ways:

  • With Enterprise Manager

  • By querying the DBA_ADVISOR_* views

  • By calling the DBMS_SPACE.ASA_RECOMMENDATIONS procedure

Table 14-5 describes the various result types and their associated DBA_ADVISOR_* views.

Table 14-5 Segment Advisor Result Types

Result Type Associated View Description

Recommendations

DBA_ADVISOR_RECOMMENDATIONS

If a segment would benefit from a segment shrink or reorganization, the Segment Advisor generates a recommendation for the segment. Table 14-6 shows examples of generated findings and recommendations.

Findings

DBA_ADVISOR_FINDINGS

Findings are a report of what the Segment Advisor observed in analyzed segments. Findings include space used and free space statistics for each analyzed segment. Not all findings result in a recommendation. (There may be only a few recommendations, but there could be many findings.) When running the Segment Advisor manually with PL/SQL, if you specify 'TRUE' for recommend_all in the SET_TASK_PARAMETER procedure, then the Segment Advisor generates a finding for each segment that qualifies for analysis, whether or not a recommendation is made for that segment. For row chaining advice, the Automatic Segment Advisor generates findings only, and not recommendations. If the Automatic Segment Advisor has no space reclamation recommendations to make, it does not generate findings.

Actions

DBA_ADVISOR_ACTIONS

Every recommendation is associated with a suggested action to perform: either segment shrink or online redefinition (reorganization). The DBA_ADVISOR_ACTIONS view provides either the SQL that you need to perform a segment shrink, or a suggestion to reorganize the object.

Objects

DBA_ADVISOR_OBJECTS

All findings, recommendations, and actions are associated with an object. If the Segment Advisor analyzes more than one segment, as with a tablespace or partitioned table, then one entry is created in the DBA_ADVISOR_OBJECTS view for each analyzed segment. Table 14-3 defines the columns in this view to query for information on the analyzed segments. You can correlate the objects in this view with the objects in the findings, recommendations, and actions views.





Viewing Segment Advisor Results with Enterprise Manager

With Enterprise Manager (EM), you can view Segment Advisor results for both Automatic Segment Advisor runs and manual Segment Advisor runs. You can view the following types of results:

  • All recommendations (multiple automatic and manual Segment Advisor runs)

  • Recommendations from the last Automatic Segment Advisor run

  • Recommendations from a specific run

  • Row chaining findings

You can also view a list of the segments that were analyzed by the last Automatic Segment Advisor run.

To view Segment Advisor results with EM—All runs:

  1. On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.



    The Segment Advisor Recommendations page appears. Recommendations are organized by tablespace.

    Figure 14-3 Segment Advisor Recommendations page



  2. If any recommendations are present, click in the Select column to select a tablespace, and then click Recommendation Details.

    The Recommendation Details page appears. You can initiate the recommended activity from this page (shrink or reorganize).

    Figure 14-4 Recommendation Details page




    Tip:

    The list entries are sorted in descending order by reclaimable space. You can click column headings to change the sort order or to change from ascending to descending order.

To view Segment Advisor results with EM—Last Automatic Segment Advisor run:

  1. On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.

    The Segment Advisor Recommendations page appears. (See Figure 14-3.)

  2. In the View drop-down list, select Recommendations from Last Automatic Run.

  3. If any recommendations are present, click in the Select column to select a tablespace, and then click Recommendation Details.

    The Recommendation Details page appears. (See Figure 14-4.) You can initiate the recommended activity from this page (shrink or reorganize).

To view Segment Advisor results with EM—Specific run:

  1. Start at the Advisor Central page.

    If you ran the Segment Advisor with the Enterprise Manager wizard, the Advisor Central page appears after you submit the Segment Advisor task. Otherwise, to get to this page, on the database Home page, under Related Links, click Advisor Central.

  2. Check that your task appears in the list under the Results heading. If it does not, complete these steps (See Figure 14-2):

    1. In the Search section of the page, under Advisor Tasks, select Segment Advisor in the Advisory Type list.

    2. Enter the task name. Or, in the Advisor Runs list, select Last Run.

    3. Click Go.

      Your Segment Advisor task appears in the Results section.

  3. Check the status of the job. If it is not COMPLETED, click the Refresh button at the top of the page until your task status shows COMPLETED. (Do not use your browser's refresh icon.)

  4. Click the task name.

    The Segment Advisor Task page appears, with recommendations organized by tablespace.

  5. Select a tablespace in the list, and then click Recommendation Details.

    The Recommendation Details page appears. (See Figure 14-4.) You can initiate the recommended activity from this page (shrink or reorganize).

To view row chaining findings

  1. On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.

    The Segment Advisor Recommendations page appears. (See Figure 14-3.)

  2. Under the Related Links heading, click Chained Row Analysis.

    The Chained Row Analysis page appears, showing all segments that have chained rows, with a chained rows percentage for each.

To view the list of segments that were analyzed by the last Automatic Segment Advisor run:

  1. On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.

    The Segment Advisor Recommendations page appears.

  2. Under the Related Links heading, click Automatic Segment Advisor Job.

    The Automatic Segment Advisor Job page appears.

  3. Under the Last Run heading, click the View Processed Segments link.

    The Segments Processed In Last Run page appears. Use the search fields above the list to limit the segments displayed.

Viewing Segment Advisor Results by Querying the DBA_ADVISOR_* Views

The headings of Table 14-6 show the columns in the DBA_ADVISOR_* views that contain output from the Segment Advisor. Refer to Oracle Database Reference for a description of these views. The table contents summarize the possible outcomes. In addition, Table 14-3 defines the columns in the DBA_ADVISOR_OBJECTS view that contain information on the analyzed segments.

Before querying the DBA_ADVISOR_* views, you can check that the Segment Advisor task is complete by querying the STATUS column in DBA_ADVISOR_TASKS.

select task_name, status from dba_advisor_tasks
   where owner = 'STEVE' and advisor_name = 'Segment Advisor';
 
TASK_NAME                      STATUS
------------------------------ -----------
Manual_Employees               COMPLETED

The following example shows how to query the DBA_ADVISOR_* views to retrieve findings from all Segment Advisor runs submitted by user STEVE:

select af.task_name, ao.attr2 segname, ao.attr3 partition, ao.type, af.message 
  from dba_advisor_findings af, dba_advisor_objects ao
  where ao.task_id = af.task_id
  and ao.object_id = af.object_id
  and ao.owner = 'STEVE';


TASK_NAME          SEGNAME      PARTITION       TYPE             MESSAGE
------------------ ------------ --------------- ---------------- --------------------------
Manual_Employees   EMPLOYEES                    TABLE            The free space in the obje
                                                                 ct is less than 10MB.
 
Manual_Salestable4 SALESTABLE4  SALESTABLE4_P1  TABLE PARTITION  Perform shrink, estimated
                                                                 savings is 74444154 bytes.
 
Manual_Salestable4 SALESTABLE4  SALESTABLE4_P2  TABLE PARTITION  The free space in the obje
                                                                 ct is less than 10MB.

Table 14-6 Segment Advisor Outcomes: Summary

Each entry lists the MESSAGE and MORE_INFO columns of DBA_ADVISOR_FINDINGS, the BENEFIT_TYPE column of DBA_ADVISOR_RECOMMENDATIONS, and the ATTR1 column of DBA_ADVISOR_ACTIONS.

  • MESSAGE: Insufficient information to make a recommendation.
    MORE_INFO: None. BENEFIT_TYPE: None. ATTR1: None.

  • MESSAGE: The free space in the object is less than 10MB.
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: None. ATTR1: None.

  • MESSAGE: The object has some free space but cannot be shrunk because...
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: None. ATTR1: None.

  • MESSAGE: The free space in the object is less than the size of the last extent.
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: None. ATTR1: None.

  • MESSAGE: Perform shrink, estimated savings is xxx bytes.
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: Perform shrink, estimated savings is xxx bytes.
    ATTR1: The command to execute. For example: ALTER object SHRINK SPACE;

  • MESSAGE: Enable row movement of the table schema.table and perform shrink, estimated savings is xxx bytes.
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: Enable row movement of the table schema.table and perform shrink, estimated savings is xxx bytes.
    ATTR1: The command to execute. For example: ALTER object SHRINK SPACE;

  • MESSAGE: Perform re-org on the object object, estimated savings is xxx bytes.
    (Note: This finding is for objects with reclaimable space that are not eligible for online segment shrink.)
    MORE_INFO: Allocated Space:xxx: Used Space:xxx: Reclaimable Space:xxx
    BENEFIT_TYPE: Perform re-org on the object object, estimated savings is xxx bytes.
    ATTR1: Perform reorg

  • MESSAGE: The object has chained rows that can be removed by re-org.
    MORE_INFO: xx percent chained rows can be removed by re-org.
    BENEFIT_TYPE: None. ATTR1: None.


Viewing Segment Advisor Results with DBMS_SPACE.ASA_RECOMMENDATIONS

The ASA_RECOMMENDATIONS procedure in the DBMS_SPACE package returns a nested table object that contains findings or recommendations for Automatic Segment Advisor runs and, optionally, manual Segment Advisor runs. Calling this procedure may be easier than working with the DBA_ADVISOR_* views, because the procedure performs all the required joins for you and returns information in an easily consumable format.

The following query returns recommendations by the most recent run of the Auto Segment Advisor, with the suggested command to run to follow the recommendations:

select tablespace_name, segment_name, segment_type, partition_name,
recommendations, c1 from
table(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'));


TABLESPACE_NAME                SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------------------ --------------
PARTITION_NAME
------------------------------
RECOMMENDATIONS
-----------------------------------------------------------------------------
C1
-----------------------------------------------------------------------------
TVMDS_ASSM                     ORDERS1                        TABLE PARTITION
ORDERS1_P2
Perform shrink, estimated savings is 57666422 bytes.
alter table "STEVE"."ORDERS1" modify partition "ORDERS1_P2" shrink space
 
TVMDS_ASSM                     ORDERS1                        TABLE PARTITION
ORDERS1_P1
Perform shrink, estimated savings is 45083514 bytes.
alter table "STEVE"."ORDERS1" modify partition "ORDERS1_P1" shrink space
 
TVMDS_ASSM_NEW                 ORDERS_NEW                     TABLE
 
Perform shrink, estimated savings is 155398992 bytes.
alter table "STEVE"."ORDERS_NEW" shrink space
 
TVMDS_ASSM_NEW                 ORDERS_NEW_INDEX               INDEX
 
Perform shrink, estimated savings is 102759445 bytes.
alter index "STEVE"."ORDERS_NEW_INDEX" shrink space

See Oracle Database PL/SQL Packages and Types Reference for details on DBMS_SPACE.ASA_RECOMMENDATIONS.

Configuring the Automatic Segment Advisor Job

The Automatic Segment Advisor is run by a Scheduler job. As such, you can use Enterprise Manager or PL/SQL package procedure calls to modify job attributes to suit your needs. The following are examples of modifications that you can make:

  • Disable or enable the job

  • Change the job schedule

  • Adjust system resources consumed by the job

You can call DBMS_SCHEDULER package procedures to make these changes, but the easier way is to use Enterprise Manager.
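For example, a sketch of disabling and re-enabling the Automatic Segment Advisor job (SYS.AUTO_SPACE_ADVISOR_JOB) with DBMS_SCHEDULER follows:

```sql
-- Sketch: temporarily disable the Automatic Segment Advisor job.
BEGIN
  DBMS_SCHEDULER.DISABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/

-- Re-enable the job later.
BEGIN
  DBMS_SCHEDULER.ENABLE('SYS.AUTO_SPACE_ADVISOR_JOB');
END;
/
```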

To configure the Automatic Segment Advisor job with Enterprise Manager:

  1. Log in to Enterprise Manager as user SYS or as a user with the following privileges:

    • ALTER privilege on the Automatic Segment Advisor job SYS.AUTO_SPACE_ADVISOR_JOB

    • MANAGE SCHEDULER system privilege

  2. On the database Home page, under the Space Summary heading, click the numeric link next to the title Segment Advisor Recommendations.



    The Segment Advisor Recommendations page appears.

  3. Under the Related Links heading, click the link entitled Automatic Segment Advisor Job.

    The Automatic Segment Advisor Job page appears.

  4. Click Configure.

    The Edit Job page appears. This is the generic Scheduler page that enables you to modify job attributes.



  5. Modify job attributes as needed, including enabling or disabling the job. Click the Help link at the top of the page for information on the Scheduler and on modifying job attributes.

  6. Modify the job schedule, job resource consumption, or other job attributes using the generic Scheduler pages in Enterprise Manager.

    • To adjust the job schedule, modify the window group SYS.MAINTENANCE_WINDOW_GROUP or its member windows.

    • To adjust system resources consumed by the job, either modify the job class AUTO_TASKS_JOB_CLASS, associating it with a different resource consumer group, or modify the resource consumer group AUTO_TASK_CONSUMER_GROUP.

Viewing Automatic Segment Advisor Information

The following views display information specific to the Automatic Segment Advisor. For details, see Oracle Database Reference.

DBA_AUTO_SEGADV_SUMMARY

    Each row of this view summarizes one Automatic Segment Advisor run. Fields include the number of tablespaces and segments processed, and the number of recommendations made.

DBA_AUTO_SEGADV_CTL

    Contains control information that the Automatic Segment Advisor uses to select and process segments. Each row contains information on a single object (tablespace or segment), including whether the object has been processed, and if so, the task ID under which it was processed and the reason for selecting it.
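For example, to review recent runs you can query these views directly; this sketch simply selects everything from the summary view:

```sql
-- Sketch: summarize recent Automatic Segment Advisor runs.
-- See Oracle Database Reference for the view's column definitions.
SELECT * FROM dba_auto_segadv_summary;
```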

Shrinking Database Segments Online

You use online segment shrink to reclaim fragmented free space below the high water mark in an Oracle Database segment. The benefits of segment shrink are these:

  • Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.

  • The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.

Segment shrink is an online, in-place operation. DML operations and queries can be issued during the data movement phase of segment shrink. Concurrent DML operations are blocked for a short time at the end of the shrink operation, when the space is deallocated. Indexes are maintained during the shrink operation and remain usable after the operation is complete. Segment shrink does not require extra disk space to be allocated.

Segment shrink reclaims unused space both above and below the high water mark. In contrast, space deallocation reclaims unused space only above the high water mark. In shrink operations, by default, the database compacts the segment, adjusts the high water mark, and releases the reclaimed space.

Segment shrink requires that rows be moved to new locations. Therefore, you must first enable row movement in the object you want to shrink and disable any rowid-based triggers defined on the object. You enable row movement in a table with the ALTER TABLE ... ENABLE ROW MOVEMENT command.
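For example, preparing and then shrinking the sample table hr.employees might look like this (a sketch; any rowid-based triggers on the table would first have to be disabled):

```sql
-- Sketch: row movement must be enabled before a segment shrink,
-- because shrink relocates rows to new rowids.
ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
ALTER TABLE hr.employees SHRINK SPACE;
```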

Shrink operations can be performed only on segments in locally managed tablespaces with automatic segment space management (ASSM). Within an ASSM tablespace, all segment types are eligible for online segment shrink except these:

  • IOT mapping tables

  • Tables with rowid based materialized views

  • Tables with function-based indexes


See Also:

Oracle Database SQL Reference for more information on the ALTER TABLE command.

Invoking Online Segment Shrink

Before invoking online segment shrink, view the findings and recommendations of the Segment Advisor. For more information, see "Using the Segment Advisor".

You invoke online segment shrink with Enterprise Manager (EM) or with SQL commands in SQL*Plus. The remainder of this section discusses the command line method.


Note:

You can invoke segment shrink directly from the Recommendation Details page in EM. (See Figure 14-4.) Or, to invoke segment shrink for an individual table in EM, display the table on the Tables page, select the table, and then click Shrink Segment in the Actions list. (See Figure 14-1.) Perform a similar operation in EM to shrink indexes, materialized views, and so on.

You can shrink space in a table, index-organized table, index, partition, subpartition, materialized view, or materialized view log. You do this with an ALTER TABLE, ALTER INDEX, ALTER MATERIALIZED VIEW, or ALTER MATERIALIZED VIEW LOG statement that includes the SHRINK SPACE clause. Refer to Oracle Database SQL Reference for the syntax and additional information on shrinking a database object, including restrictions.

Two optional clauses let you control how the shrink operation proceeds:

  • The COMPACT clause lets you divide the shrink segment operation into two phases. When you specify COMPACT, Oracle Database defragments the segment space and compacts the table rows but postpones the resetting of the high water mark and the deallocation of the space until a future time. This option is useful if you have long-running queries that might span the operation and attempt to read from blocks that have been reclaimed. The defragmentation and compaction results are saved to disk, so the data movement does not have to be redone during the second phase. You can reissue the SHRINK SPACE clause without the COMPACT clause during off-peak hours to complete the second phase.

  • The CASCADE clause extends the segment shrink operation to all dependent segments of the object. For example, if you specify CASCADE when shrinking a table segment, all indexes of the table will also be shrunk. (You need not specify CASCADE to shrink the partitions of a partitioned table.) To see a list of dependent segments of a given object, you can run the OBJECT_DEPENDENT_SEGMENTS procedure of the DBMS_SPACE package.
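Taken together, the two clauses support a two-phase shrink. The following sketch, using a hypothetical sales_history table, compacts rows during normal hours and completes the operation off-peak:

```sql
-- Phase 1: defragment and compact the rows; the high water mark is not
-- reset, so long-running queries spanning the operation are unaffected
ALTER TABLE sales_history SHRINK SPACE COMPACT;

-- Phase 2 (off-peak): reset the high water mark and release the space;
-- the data movement from phase 1 is saved to disk and is not redone
ALTER TABLE sales_history SHRINK SPACE;
```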

As with other DDL operations, segment shrink causes subsequent SQL statements to be reparsed because of invalidation of cursors unless you specify the COMPACT clause.

Examples

Shrink a table and all of its dependent segments (including LOB segments):

ALTER TABLE employees SHRINK SPACE CASCADE;

Shrink a LOB segment only:

ALTER TABLE employees MODIFY LOB (perf_review) (SHRINK SPACE);

Shrink a single partition of a partitioned table:

ALTER TABLE customers MODIFY PARTITION cust_P1 SHRINK SPACE;

Shrink an IOT index segment and the overflow segment:

ALTER TABLE cities SHRINK SPACE CASCADE;

Shrink an IOT overflow segment only:

ALTER TABLE cities OVERFLOW SHRINK SPACE;

Deallocating Unused Space

When you deallocate unused space, the database frees the unused space at the unused (high water mark) end of the database segment and makes the space available for other segments in the tablespace.

Prior to deallocation, you can run the UNUSED_SPACE procedure of the DBMS_SPACE package, which returns information about the position of the high water mark and the amount of unused space in a segment. For segments in locally managed tablespaces with automatic segment space management, use the SPACE_USAGE procedure for more accurate information on unused space.


See Also:

Oracle Database PL/SQL Packages and Types Reference contains the description of the DBMS_SPACE package

The following statements deallocate unused space in a segment (table, index, or cluster):

ALTER TABLE table DEALLOCATE UNUSED KEEP integer;
ALTER INDEX index DEALLOCATE UNUSED KEEP integer;
ALTER CLUSTER cluster DEALLOCATE UNUSED KEEP integer;

The KEEP clause is optional and lets you specify the amount of space retained in the segment. You can verify that the deallocated space is freed by examining the DBA_FREE_SPACE view.
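For example, to release all space above the high water mark except a retained amount (the table name and KEEP value here are only illustrative):

```sql
-- Retain 100 KB of unused space beyond the high water mark; release the rest
ALTER TABLE hr.employees DEALLOCATE UNUSED KEEP 100K;

-- Omitting KEEP releases all unused space above the high water mark
ALTER INDEX hr.emp_name_ix DEALLOCATE UNUSED;
```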


See Also:


Understanding Space Usage of Datatypes

When creating tables and other data structures, you need to know how much space they will require. Each datatype has different space requirements. The Oracle Database PL/SQL User's Guide and Reference and Oracle Database SQL Reference contain extensive descriptions of datatypes and their space requirements.

Displaying Information About Space Usage for Schema Objects

Oracle Database provides data dictionary views and PL/SQL packages that allow you to display information about the space usage of schema objects. Views and packages that are unique to a particular schema object are described in the chapter of this book associated with that object. This section describes views and packages that are generic in nature and apply to multiple schema objects.

Using PL/SQL Packages to Display Information About Schema Object Space Usage

These Oracle-supplied PL/SQL packages provide information about schema objects:

Package and Procedure/Function Description
DBMS_SPACE.UNUSED_SPACE Returns information about unused space in an object (table, index, or cluster).
DBMS_SPACE.FREE_BLOCKS Returns information about free data blocks in an object (table, index, or cluster) whose segment free space is managed by free lists (segment space management is MANUAL).
DBMS_SPACE.SPACE_USAGE Returns information about free data blocks in an object (table, index, or cluster) whose segment space management is AUTO.


See Also:

Oracle Database PL/SQL Packages and Types Reference for a description of PL/SQL packages

Example: Using DBMS_SPACE.UNUSED_SPACE

The following SQL*Plus example uses the DBMS_SPACE package to obtain unused space information.

SQL> VARIABLE total_blocks NUMBER
SQL> VARIABLE total_bytes NUMBER
SQL> VARIABLE unused_blocks NUMBER
SQL> VARIABLE unused_bytes NUMBER
SQL> VARIABLE lastextf NUMBER
SQL> VARIABLE last_extb NUMBER
SQL> VARIABLE lastusedblock NUMBER
SQL> exec DBMS_SPACE.UNUSED_SPACE('SCOTT', 'EMP', 'TABLE', :total_blocks, -
>    :total_bytes,:unused_blocks, :unused_bytes, :lastextf, -
>    :last_extb, :lastusedblock);

PL/SQL procedure successfully completed.

SQL> PRINT

TOTAL_BLOCKS
------------
           5

TOTAL_BYTES
-----------
      10240

...

LASTUSEDBLOCK
-------------
            3
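For a segment in an ASSM tablespace, DBMS_SPACE.SPACE_USAGE instead reports blocks grouped by how full they are. A sketch along the same lines as the preceding example, assuming the SCOTT.EMP table resides in an ASSM tablespace:

```sql
SET SERVEROUTPUT ON
DECLARE
  unf  NUMBER; unfb  NUMBER;             -- unformatted blocks/bytes
  fs1  NUMBER; fs1b  NUMBER;             -- blocks with 0-25% free space
  fs2  NUMBER; fs2b  NUMBER;             -- 25-50% free
  fs3  NUMBER; fs3b  NUMBER;             -- 50-75% free
  fs4  NUMBER; fs4b  NUMBER;             -- 75-100% free
  full NUMBER; fullb NUMBER;             -- full blocks/bytes
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
    unf, unfb, fs1, fs1b, fs2, fs2b, fs3, fs3b, fs4, fs4b, full, fullb);
  DBMS_OUTPUT.PUT_LINE('Full blocks:          ' || full);
  DBMS_OUTPUT.PUT_LINE('Mostly empty blocks:  ' || fs4);
END;
/
```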

Using Views to Display Information About Space Usage in Schema Objects

These views display information about space usage in schema objects:

View Description
DBA_SEGMENTS

USER_SEGMENTS

DBA view describes storage allocated for all database segments. User view describes storage allocated for segments for the current user.
DBA_EXTENTS

USER_EXTENTS

DBA view describes extents comprising all segments in the database. User view describes extents comprising segments for the current user.
DBA_FREE_SPACE

USER_FREE_SPACE

DBA view lists free extents in all tablespaces. User view shows free space information for tablespaces for which the user has quota.

The following sections contain examples of using some of these views.


See Also:

Oracle Database Reference for a complete description of data dictionary views

Example 1: Displaying Segment Information

The following query returns the name and size of each index segment in schema hr:

SELECT SEGMENT_NAME, TABLESPACE_NAME, BYTES, BLOCKS, EXTENTS
    FROM DBA_SEGMENTS
    WHERE SEGMENT_TYPE = 'INDEX'
    AND OWNER='HR'
    ORDER BY SEGMENT_NAME;

The query output is:

SEGMENT_NAME              TABLESPACE_NAME    BYTES BLOCKS EXTENTS
------------------------- --------------- -------- ------ -------
COUNTRY_C_ID_PK           EXAMPLE            65536     32       1
DEPT_ID_PK                EXAMPLE            65536     32       1
DEPT_LOCATION_IX          EXAMPLE            65536     32       1
EMP_DEPARTMENT_IX         EXAMPLE            65536     32       1
EMP_EMAIL_UK              EXAMPLE            65536     32       1
EMP_EMP_ID_PK             EXAMPLE            65536     32       1
EMP_JOB_IX                EXAMPLE            65536     32       1
EMP_MANAGER_IX            EXAMPLE            65536     32       1
EMP_NAME_IX               EXAMPLE            65536     32       1
JHIST_DEPARTMENT_IX       EXAMPLE            65536     32       1
JHIST_EMPLOYEE_IX         EXAMPLE            65536     32       1
JHIST_EMP_ID_ST_DATE_PK   EXAMPLE            65536     32       1
JHIST_JOB_IX              EXAMPLE            65536     32       1
JOB_ID_PK                 EXAMPLE            65536     32       1
LOC_CITY_IX               EXAMPLE            65536     32       1
LOC_COUNTRY_IX            EXAMPLE            65536     32       1
LOC_ID_PK                 EXAMPLE            65536     32       1
LOC_STATE_PROVINCE_IX     EXAMPLE            65536     32       1
REG_ID_PK                 EXAMPLE            65536     32       1

19 rows selected.

Example 2: Displaying Extent Information

Information about the currently allocated extents in a database is stored in the DBA_EXTENTS data dictionary view. For example, the following query identifies the extents allocated to each index segment in the hr schema and the size of each of those extents:

SELECT SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENT_ID, BYTES, BLOCKS
    FROM DBA_EXTENTS
    WHERE SEGMENT_TYPE = 'INDEX'
    AND OWNER='HR'
    ORDER BY SEGMENT_NAME;

The query output is:

SEGMENT_NAME              SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID    BYTES BLOCKS
------------------------- ------------ --------------- --------- -------- ------
COUNTRY_C_ID_PK           INDEX        EXAMPLE                 0    65536     32
DEPT_ID_PK                INDEX        EXAMPLE                 0    65536     32
DEPT_LOCATION_IX          INDEX        EXAMPLE                 0    65536     32
EMP_DEPARTMENT_IX         INDEX        EXAMPLE                 0    65536     32
EMP_EMAIL_UK              INDEX        EXAMPLE                 0    65536     32
EMP_EMP_ID_PK             INDEX        EXAMPLE                 0    65536     32
EMP_JOB_IX                INDEX        EXAMPLE                 0    65536     32
EMP_MANAGER_IX            INDEX        EXAMPLE                 0    65536     32
EMP_NAME_IX               INDEX        EXAMPLE                 0    65536     32
JHIST_DEPARTMENT_IX       INDEX        EXAMPLE                 0    65536     32
JHIST_EMPLOYEE_IX         INDEX        EXAMPLE                 0    65536     32
JHIST_EMP_ID_ST_DATE_PK   INDEX        EXAMPLE                 0    65536     32
JHIST_JOB_IX              INDEX        EXAMPLE                 0    65536     32
JOB_ID_PK                 INDEX        EXAMPLE                 0    65536     32
LOC_CITY_IX               INDEX        EXAMPLE                 0    65536     32
LOC_COUNTRY_IX            INDEX        EXAMPLE                 0    65536     32
LOC_ID_PK                 INDEX        EXAMPLE                 0    65536     32
LOC_STATE_PROVINCE_IX     INDEX        EXAMPLE                 0    65536     32
REG_ID_PK                 INDEX        EXAMPLE                 0    65536     32

19 rows selected.

For the hr schema, no segment has more than one extent allocated to it.

Example 3: Displaying the Free Space (Extents) in a Tablespace

Information about the free extents (extents not allocated to any segment) in a database is stored in the DBA_FREE_SPACE data dictionary view. For example, the following query reveals the amount of free space available as free extents in the SMUNDO tablespace:

SELECT TABLESPACE_NAME, FILE_ID, BYTES, BLOCKS
    FROM DBA_FREE_SPACE
    WHERE TABLESPACE_NAME='SMUNDO';

The query output is:

TABLESPACE_NAME  FILE_ID    BYTES BLOCKS
--------------- -------- -------- ------
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3    65536     32
SMUNDO                 3   131072     64
SMUNDO                 3   131072     64
SMUNDO                 3    65536     32
SMUNDO                 3  3407872   1664

10 rows selected.

Example 4: Displaying Segments that Cannot Allocate Additional Extents

It is possible that an extent cannot be allocated to a segment for any of the following reasons:

  • The tablespace containing the segment does not have enough room for the next extent.

  • The segment has the maximum number of extents.

  • The segment has the maximum number of extents allowed by the data block size, which is operating system specific.

The following query returns the names, owners, and tablespaces of all segments that satisfy any of these criteria:

SELECT a.SEGMENT_NAME, a.SEGMENT_TYPE, a.TABLESPACE_NAME, a.OWNER 
    FROM DBA_SEGMENTS a
    WHERE a.NEXT_EXTENT >= (SELECT MAX(b.BYTES)
        FROM DBA_FREE_SPACE b
        WHERE b.TABLESPACE_NAME = a.TABLESPACE_NAME)
    OR a.EXTENTS = a.MAX_EXTENTS
    OR a.EXTENTS = 'data_block_size' ;

Note:

When you use this query, replace data_block_size with the data block size for your system.

Once you have identified a segment that cannot allocate additional extents, you can solve the problem in either of two ways, depending on its cause:

  • If the tablespace is full, add a datafile to the tablespace or extend the existing datafile.

  • If the segment has too many extents, and you cannot increase MAXEXTENTS for the segment, perform the following steps.

    1. Export the data in the segment.

    2. Drop and re-create the segment, giving it a larger INITIAL storage parameter setting so that it does not need to allocate so many extents. Alternatively, you can adjust the PCTINCREASE and NEXT storage parameters to allow for more space in the segment.

    3. Import the data back into the segment.

Capacity Planning for Database Objects

Oracle Database provides two ways to plan capacity for database objects: with Enterprise Manager and with the DBMS_SPACE PL/SQL package.

This section discusses the PL/SQL method. Refer to Enterprise Manager online help and Oracle Database 2 Day DBA for details on capacity planning with Enterprise Manager.

Three procedures in the DBMS_SPACE package enable you to predict the size of new objects and monitor the size of existing database objects. This section discusses those procedures and contains the following sections:

Estimating the Space Use of a Table

The size of a database table can vary greatly depending on tablespace storage attributes, tablespace block size, and many other factors. The CREATE_TABLE_COST procedure of the DBMS_SPACE package lets you estimate the space use cost of creating a table. Please refer to Oracle Database PL/SQL Packages and Types Reference for details on the parameters of this procedure.

The procedure has two variants. The first variant uses average row size to estimate size. The second variant uses column information to estimate table size. Both variants require as input the following values:

  • TABLESPACE_NAME: The tablespace in which the object will be created. The default is the SYSTEM tablespace.

  • ROW_COUNT: The anticipated number of rows in the table.

  • PCT_FREE: The percentage of free space you want to reserve in each block for future expansion of existing rows due to updates.

In addition, the first variant also requires as input a value for AVG_ROW_SIZE, which is the anticipated average row size in bytes.

The second variant also requires for each anticipated column values for COLINFOS, which is an object type comprising the attributes COL_TYPE (the datatype of the column) and COL_SIZE (the number of characters or bytes in the column).

The procedure returns two values:

  • USED_BYTES: The actual bytes used by the data, including overhead for block metadata, PCT_FREE space, and so forth.

  • ALLOC_BYTES: The amount of space anticipated to be allocated for the object taking into account the tablespace extent characteristics.
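A sketch of the first variant, estimating space for a hypothetical 1,000,000-row table with an average row size of 120 bytes in the USERS tablespace:

```sql
SET SERVEROUTPUT ON
DECLARE
  used_bytes  NUMBER;
  alloc_bytes NUMBER;
BEGIN
  DBMS_SPACE.CREATE_TABLE_COST (
    tablespace_name => 'USERS',
    avg_row_size    => 120,      -- anticipated average row size in bytes
    row_count       => 1000000,  -- anticipated number of rows
    pct_free        => 10,       -- free space reserved in each block
    used_bytes      => used_bytes,
    alloc_bytes     => alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Used bytes:      ' || used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated bytes: ' || alloc_bytes);
END;
/
```

ALLOC_BYTES is typically larger than USED_BYTES because it is rounded up to the tablespace extent size.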

Estimating the Space Use of an Index

The CREATE_INDEX_COST procedure of the DBMS_SPACE package lets you estimate the space use cost of creating an index on an existing table.

The procedure requires as input the following values:

  • DDL: The CREATE INDEX statement that would create the index. The table specified in this DDL statement must be an existing table.

  • [Optional] PLAN_TABLE: The name of the plan table to use. The default is NULL.

The results returned by this procedure depend on statistics gathered on the segment. Therefore, be sure to obtain statistics shortly before executing this procedure. In the absence of recent statistics, the procedure does not issue an error, but it may return inappropriate results. The procedure returns the following values:

  • USED_BYTES: The number of bytes representing the actual index data.

  • ALLOC_BYTES: The amount of space allocated for the index in the tablespace.
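A minimal sketch, assuming the hr.employees table exists and has fresh optimizer statistics (the index name in the DDL string is hypothetical, since the index need not exist):

```sql
SET SERVEROUTPUT ON
DECLARE
  used_bytes  NUMBER;
  alloc_bytes NUMBER;
BEGIN
  -- Statistics should be gathered on the table shortly before this call,
  -- for example with DBMS_STATS.GATHER_TABLE_STATS('HR', 'EMPLOYEES')
  DBMS_SPACE.CREATE_INDEX_COST (
    ddl         => 'CREATE INDEX hr.emp_last_name_ix ON hr.employees (last_name)',
    used_bytes  => used_bytes,
    alloc_bytes => alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Index data bytes: ' || used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated bytes:  ' || alloc_bytes);
END;
/
```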

Obtaining Object Growth Trends

The OBJECT_GROWTH_TREND procedure of the DBMS_SPACE package produces a table of one or more rows, where each row describes the space use of the object at a specific time. The procedure retrieves the space use totals from the Automatic Workload Repository or computes current space use and combines it with historic space use changes retrieved from the Automatic Workload Repository. Please refer to Oracle Database PL/SQL Packages and Types Reference for detailed information on the parameters of this procedure.

The procedure requires as input the following values:

  • OBJECT_OWNER: The owner of the object.

  • OBJECT_NAME: The name of the object.

  • PARTITION_NAME: The name of the table or index partition, if relevant. Specify NULL otherwise.

  • OBJECT_TYPE: The type of the object.

  • START_TIME: A TIMESTAMP value indicating the beginning of the growth trend analysis.

  • END_TIME: A TIMESTAMP value indicating the end of the growth trend analysis. The default is "NOW".

  • INTERVAL: The length in minutes of the reporting interval during which the procedure should retrieve space use information.

  • SKIP_INTERPOLATED: Determines whether the procedure should omit values based on recorded statistics before and after the INTERVAL ('YES') or not ('NO'). This setting is useful when the result table will be displayed as a table rather than a chart, because you can see more clearly how the actual recording interval relates to the requested reporting interval.

The procedure returns a table, each row of which provides space use information on the object for one interval. If the return table is very large, the results are pipelined so that another application can consume the information as it is being produced. The output table has the following columns:

  • TIMEPOINT: A TIMESTAMP value indicating the time of the reporting interval.

    Records are not produced for values of TIME that precede the oldest recorded statistics for the object.

  • SPACE_USAGE: The number of bytes actually being used by the object data.

  • SPACE_ALLOC: The number of bytes allocated to the object in the tablespace at that time.

  • QUALITY: A value indicating how well the requested reporting interval matches the actual recording of statistics. This information is useful because there is no guaranteed reporting interval for object size use statistics, and the actual reporting interval varies over time and from object to object.

    The values of the QUALITY column are:

    • GOOD: The value whenever the value of TIME is based on recorded statistics with a recorded timestamp within 10% of the INTERVAL specified in the input parameters.

    • INTERPOLATED: The value did not meet the criteria for GOOD, but was based on recorded statistics before and after the value of TIME. Current in-memory statistics can be collected across all instances in a cluster and treated as the "recorded" value for the present time.

    • PROJECTION: The value of TIME is in the future as of the time the table was produced. In a Real Application Clusters environment, the rules for recording statistics allow each instance to choose independently which objects will be selected.

    The output returned by this procedure is an aggregation of values recorded across all instances in a RAC environment. Each value can be computed from a combination of GOOD and INTERPOLATED values. The aggregate value returned is marked GOOD if at least 80% of that value was derived from GOOD instance values.
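Because the output is pipelined, the procedure is typically queried as a table function. A sketch requesting a 30-day trend for the hr.employees table with one row per day (the parameter names follow the DBMS_SPACE documentation; the time window is only illustrative):

```sql
SELECT timepoint, space_usage, space_alloc, quality
  FROM TABLE(DBMS_SPACE.OBJECT_GROWTH_TREND(
         object_owner   => 'HR',
         object_name    => 'EMPLOYEES',
         partition_name => NULL,
         object_type    => 'TABLE',
         start_time     => SYSTIMESTAMP - INTERVAL '30' DAY,
         end_time       => SYSTIMESTAMP,
         interval       => INTERVAL '1' DAY));
```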


32 Distributed Transactions Concepts

This chapter describes what distributed transactions are and how Oracle Database maintains their integrity. The following topics are contained in this chapter:

What Are Distributed Transactions?

A distributed transaction includes one or more statements that, individually or as a group, update data on two or more distinct nodes of a distributed database. For example, assume the database configuration depicted in Figure 32-1:

Figure 32-1 Distributed System

Description of Figure 32-1  follows


The following distributed transaction executed by scott updates the local sales database, the remote hq database, and the remote maint database:

UPDATE scott.dept@hq.us.acme.com
  SET loc = 'REDWOOD SHORES'
  WHERE deptno = 10;
UPDATE scott.emp
  SET deptno = 11
  WHERE deptno = 10;
UPDATE scott.bldg@maint.us.acme.com
  SET room = 1225
  WHERE room = 1163;
COMMIT;

Note:

If all statements of a transaction reference only a single remote node, then the transaction is remote, not distributed.

There are two types of permissible operations in distributed transactions:

DML and DDL Transactions

The following are the DML and DDL operations supported in a distributed transaction:

  • CREATE TABLE AS SELECT

  • DELETE

  • INSERT (default and direct load)

  • LOCK TABLE

  • SELECT

  • SELECT FOR UPDATE

  • UPDATE

You can execute DML and DDL statements in parallel, and INSERT direct load statements serially, but note the following restrictions:

  • All remote operations must be SELECT statements.

  • These statements must not be clauses in another distributed transaction.

  • If the table referenced in the table_expression_clause of an INSERT, UPDATE, or DELETE statement is remote, then execution is serial rather than parallel.

  • You cannot perform remote operations after issuing parallel DML/DDL or direct load INSERT.

  • If the transaction begins using XA or OCI, it executes serially.

  • No loopback operations can be performed on the transaction originating the parallel operation. For example, you cannot reference a remote object that is actually a synonym for a local object.

  • If you perform a distributed operation other than a SELECT in the transaction, no DML is parallelized.

Transaction Control Statements

The following are the supported transaction control statements:

Session Trees for Distributed Transactions

As the statements in a distributed transaction are issued, the database defines a session tree of all nodes participating in the transaction. A session tree is a hierarchical model that describes the relationships among sessions and their roles. Figure 32-2 illustrates a session tree:

Figure 32-2 Example of a Session Tree

Description of Figure 32-2  follows


All nodes participating in the session tree of a distributed transaction assume one or more of the following roles:

Role Description
Client A node that references information in a database belonging to a different node.
Database server A node that receives a request for information from another node.
Global coordinator The node that originates the distributed transaction.
Local coordinator A node that is forced to reference data on other nodes to complete its part of the transaction.
Commit point site The node that commits or rolls back the transaction as instructed by the global coordinator.

The role a node plays in a distributed transaction is determined by:

Clients

A node acts as a client when it references information from a database on another node. The referenced node is a database server. In Figure 32-2, the node sales is a client of the nodes that host the warehouse and finance databases.

Database Servers

A database server is a node that hosts a database from which a client requests data.

In Figure 32-2, an application at the sales node initiates a distributed transaction that accesses data from the warehouse and finance nodes. Therefore, sales.acme.com has the role of client node, and warehouse and finance are both database servers. In this example, sales is a database server and a client because the application also modifies data in the sales database.

Local Coordinators

A node that must reference data on other nodes to complete its part in the distributed transaction is called a local coordinator. In Figure 32-2, sales is a local coordinator because it coordinates the nodes it directly references: warehouse and finance. The node sales also happens to be the global coordinator because it coordinates all the nodes involved in the transaction.

A local coordinator is responsible for coordinating the transaction among the nodes it communicates directly with by:

  • Receiving and relaying transaction status information to and from those nodes

  • Passing queries to those nodes

  • Receiving queries from those nodes and passing them on to other nodes

  • Returning the results of queries to the nodes that initiated them

Global Coordinator

The node where the distributed transaction originates is called the global coordinator. The database application issuing the distributed transaction is directly connected to the node acting as the global coordinator. For example, in Figure 32-2, the transaction issued at the node sales references information from the database servers warehouse and finance. Therefore, sales.acme.com is the global coordinator of this distributed transaction.

The global coordinator becomes the parent or root of the session tree. The global coordinator performs the following operations during a distributed transaction:

  • Sends all of the distributed transaction SQL statements, remote procedure calls, and so forth to the directly referenced nodes, thus forming the session tree

  • Instructs all directly referenced nodes other than the commit point site to prepare the transaction

  • Instructs the commit point site to initiate the global commit of the transaction if all nodes prepare successfully

  • Instructs all nodes to initiate a global rollback of the transaction if there is an abort response

Commit Point Site

The job of the commit point site is to initiate a commit or roll back operation as instructed by the global coordinator. The system administrator always designates one node to be the commit point site in the session tree by assigning all nodes a commit point strength. The node selected as commit point site should be the node that stores the most critical data.

Figure 32-3 illustrates an example of a distributed system, with sales serving as the commit point site:

Figure 32-3 Commit Point Site

Description of Figure 32-3  follows


The commit point site is distinct from all other nodes involved in a distributed transaction in these ways:

  • The commit point site never enters the prepared state. Consequently, if the commit point site stores the most critical data, this data never remains in-doubt, even if a failure occurs. In failure situations, failed nodes remain in a prepared state, holding necessary locks on data until in-doubt transactions are resolved.

  • The commit point site commits before the other nodes involved in the transaction. In effect, the outcome of a distributed transaction at the commit point site determines whether the transaction at all nodes is committed or rolled back: the other nodes follow the lead of the commit point site. The global coordinator ensures that all nodes complete the transaction in the same manner as the commit point site.

How a Distributed Transaction Commits

A distributed transaction is considered committed after all non-commit-point sites are prepared, and the transaction has been actually committed at the commit point site. The redo log at the commit point site is updated as soon as the distributed transaction is committed at this node.

Because the commit point log contains a record of the commit, the transaction is considered committed even though some participating nodes may still be only in the prepared state and the transaction not yet actually committed at these nodes. In the same way, a distributed transaction is considered not committed if the commit has not been logged at the commit point site.

Commit Point Strength

Every database server must be assigned a commit point strength. If a database server is referenced in a distributed transaction, the value of its commit point strength determines which role it plays in the two-phase commit. Specifically, the commit point strength determines whether a given node is the commit point site in the distributed transaction and thus commits before all of the other nodes. This value is specified using the initialization parameter COMMIT_POINT_STRENGTH. This section explains how the database determines the commit point site.
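The parameter is set in the initialization parameter file; the value shown below is only illustrative. Relative, not absolute, values matter: among the participating nodes, the one with the highest strength is favored as the commit point site.

```
# Hypothetical initialization parameter entry: give this instance a high
# commit point strength so that it is favored as the commit point site
COMMIT_POINT_STRENGTH = 200
```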

The commit point site, which is determined at the beginning of the prepare phase, is selected only from the nodes participating in the transaction. The following sequence of events occurs:

  1. Of the nodes directly referenced by the global coordinator, the database selects the node with the highest commit point strength as the commit point site.

  2. The initially-selected node determines if any of the nodes from which it has to obtain information for this transaction has a higher commit point strength.

  3. Either the node with the highest commit point strength directly referenced in the transaction or one of its servers with a higher commit point strength becomes the commit point site.

  4. After the final commit point site has been determined, the global coordinator sends prepare messages to all nodes participating in the transaction.

Figure 32-4 shows in a sample session tree the commit point strengths of each node (in parentheses) and shows the node chosen as the commit point site:

Figure 32-4 Commit Point Strengths and Determination of the Commit Point Site

Description of Figure 32-4  follows


The following conditions apply when determining the commit point site:

  • A read-only node cannot be the commit point site.

  • If multiple nodes directly referenced by the global coordinator have the same commit point strength, then the database designates one of these as the commit point site.

  • If a distributed transaction ends with a rollback, then the prepare and commit phases are not needed. Consequently, the database never determines a commit point site. Instead, the global coordinator sends a ROLLBACK statement to all nodes and ends the processing of the distributed transaction.

As Figure 32-4 illustrates, the commit point site and the global coordinator can be different nodes of the session tree. The commit point strength of each node is communicated to the coordinators when the initial connections are made. The coordinators retain the commit point strengths of each node they are in direct communication with so that commit point sites can be efficiently selected during two-phase commits. Therefore, it is not necessary for the commit point strength to be exchanged between a coordinator and a node each time a commit occurs.


See Also:


Two-Phase Commit Mechanism

Unlike a transaction on a local database, a distributed transaction involves altering data on multiple databases. Consequently, distributed transaction processing is more complicated, because the database must coordinate the committing or rolling back of the changes in a transaction as a self-contained unit. In other words, the entire transaction commits, or the entire transaction rolls back.

The database ensures the integrity of data in a distributed transaction using the two-phase commit mechanism. In the prepare phase, the initiating node in the transaction asks the other participating nodes to promise to commit or roll back the transaction. During the commit phase, the initiating node asks all participating nodes to commit the transaction. If this outcome is not possible, then all nodes are asked to roll back.

All participating nodes in a distributed transaction should perform the same action: they should either all commit or all perform a rollback of the transaction. The database automatically controls and monitors the commit or rollback of a distributed transaction and maintains the integrity of the global database (the collection of databases participating in the transaction) using the two-phase commit mechanism. This mechanism is completely transparent, requiring no programming on the part of the user or application developer.

The commit mechanism has the following distinct phases, which the database performs automatically whenever a user commits a distributed transaction:

  • Prepare phase: The initiating node, called the global coordinator, asks participating nodes other than the commit point site to promise to commit or roll back the transaction, even if there is a failure. If any node cannot prepare, the transaction is rolled back.

  • Commit phase: If all participants respond to the coordinator that they are prepared, then the coordinator asks the commit point site to commit. After it commits, the coordinator asks all other nodes to commit the transaction.

  • Forget phase: The global coordinator forgets about the transaction.
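Because the mechanism is transparent, committing a distributed transaction requires no special syntax. A hypothetical sketch (the table names, columns, and the database link warehouse.acme.com are assumptions for illustration):

```sql
-- Two statements touching two databases form one distributed
-- transaction; the single COMMIT below transparently drives the
-- prepare, commit, and forget phases across both nodes.
UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id = 100;

UPDATE inventory@warehouse.acme.com
   SET quantity = quantity - 1
 WHERE item_id = 42;

COMMIT;
```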

This section contains the following topics:

Prepare Phase

The first phase in committing a distributed transaction is the prepare phase. In this phase, the database does not actually commit or roll back the transaction. Instead, all nodes referenced in a distributed transaction (except the commit point site, described in "Commit Point Site") are told to prepare to commit. By preparing, a node:

  • Records information in the redo logs so that it can subsequently either commit or roll back the transaction, regardless of intervening failures

  • Places a distributed lock on modified tables, which prevents reads

When a node responds to the global coordinator that it is prepared to commit, the prepared node promises to either commit or roll back the transaction later, but does not make a unilateral decision on whether to commit or roll back the transaction. The promise means that if an instance failure occurs at this point, the node can use the redo records in the online log to recover the database back to the prepare phase.


Note:

Queries that start after a node has prepared cannot access the associated locked data until all phases complete. The delay is typically brief unless a failure occurs (see "Deciding How to Handle In-Doubt Transactions").

Types of Responses in the Prepare Phase

When a node is told to prepare, it can respond in the following ways:

  • Prepared: Data on the node has been modified by a statement in the distributed transaction, and the node has successfully prepared.

  • Read-only: No data on the node has been, or can be, modified (only queried), so no preparation is necessary.

  • Abort: The node cannot successfully prepare.

Prepared Response

When a node has successfully prepared, it issues a prepared message. The message indicates that the node has records of the changes in the online log, so it is prepared either to commit or perform a rollback. The message also guarantees that locks held for the transaction can survive a failure.

Read-Only Response

When a node is asked to prepare, and the SQL statements affecting the database do not change any data on the node, the node responds with a read-only message. The message indicates that the node will not participate in the commit phase.

There are three cases in which all or part of a distributed transaction is read-only:

  • Partially read-only: Any of the following occurs: only queries are issued at one or more nodes; no data is changed; or changes are rolled back because of triggers firing or constraint violations. The read-only nodes recognize their status when asked to prepare and give their local coordinators a read-only response. Thus, the commit phase completes faster because the database eliminates read-only nodes from subsequent processing.

  • Completely read-only with prepare phase: No data changes, and the transaction is not started with the SET TRANSACTION READ ONLY statement. All nodes recognize that they are read-only during the prepare phase, so no commit phase is required. The global coordinator, not knowing whether all nodes are read-only, must still perform the prepare phase.

  • Completely read-only without two-phase commit: No data changes, and the transaction is started with the SET TRANSACTION READ ONLY statement. Only queries are allowed in the transaction, so the global coordinator does not have to perform the two-phase commit. Changes by other transactions do not degrade global transaction-level read consistency because of global SCN coordination among the nodes. The transaction does not use undo segments.

Note that if a distributed transaction is set to read-only, then it does not use undo segments. If many users connect to the database and their transactions are not set to READ ONLY, then they allocate undo space even if they are only performing queries.
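The third case above can be requested explicitly at the start of the transaction. A sketch, assuming the same hypothetical tables and database link as the case study later in this chapter:

```sql
-- Starting the transaction read-only means only queries are allowed,
-- no undo segments are used, and no two-phase commit is performed.
SET TRANSACTION READ ONLY;

SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM inventory@warehouse.acme.com;

-- Ends the read-only transaction; no prepare or commit phase occurs.
COMMIT;
```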

Abort Response

When a node cannot successfully prepare, it performs the following actions:

  1. Releases resources currently held by the transaction and rolls back the local portion of the transaction.

  2. Responds to the node that referenced it in the distributed transaction with an abort message.

These actions then propagate to the other nodes involved in the distributed transaction so that they can roll back the transaction and guarantee the integrity of the data in the global database. This response enforces the primary rule of a distributed transaction: all nodes involved in the transaction either all commit or all roll back the transaction at the same logical time.

Steps in the Prepare Phase

To complete the prepare phase, each node excluding the commit point site performs the following steps:

  1. The node requests that its descendants, that is, the nodes subsequently referenced, prepare to commit.

  2. The node checks to see whether the transaction changes data on itself or its descendants. If there is no change to the data, then the node skips the remaining steps and returns a read-only response (see "Read-Only Response").

  3. The node allocates the resources it needs to commit the transaction if data is changed.

  4. The node saves redo records corresponding to changes made by the transaction to its redo log.

  5. The node guarantees that locks held for the transaction are able to survive a failure.

  6. The node responds to the initiating node with a prepared response (see "Prepared Response") or, if its attempt or the attempt of one of its descendants to prepare was unsuccessful, with an abort response (see "Abort Response").

These actions guarantee that the node can subsequently commit or roll back the transaction on the node. The prepared nodes then wait until a COMMIT or ROLLBACK request is received from the global coordinator.

After the nodes are prepared, the distributed transaction is said to be in-doubt (see "In-Doubt Transactions"). It retains in-doubt status until all changes are either committed or rolled back.

Commit Phase

The second phase in committing a distributed transaction is the commit phase. Before this phase occurs, all nodes other than the commit point site referenced in the distributed transaction have guaranteed that they are prepared, that is, they have the necessary resources to commit the transaction.

Steps in the Commit Phase

The commit phase consists of the following steps:

  1. The global coordinator instructs the commit point site to commit.

  2. The commit point site commits.

  3. The commit point site informs the global coordinator that it has committed.

  4. The global and local coordinators send a message to all nodes instructing them to commit the transaction.

  5. At each node, the database commits the local portion of the distributed transaction and releases locks.

  6. At each node, the database records an additional redo entry in the local redo log, indicating that the transaction has committed.

  7. The participating nodes notify the global coordinator that they have committed.

When the commit phase is complete, the data on all nodes of the distributed system is consistent.

Guaranteeing Global Database Consistency

Each committed transaction has an associated system change number (SCN) to uniquely identify the changes made by the SQL statements within that transaction. The SCN functions as an internal timestamp that uniquely identifies a committed version of the database.

In a distributed system, the SCNs of communicating nodes are coordinated when all of the following actions occur:

  • A connection occurs using the path described by one or more database links

  • A distributed SQL statement executes

  • A distributed transaction commits

Among other benefits, the coordination of SCNs among the nodes of a distributed system ensures global read-consistency at both the statement and transaction level. If necessary, global time-based recovery can also be completed.

During the prepare phase, the database determines the highest SCN at all nodes involved in the transaction. The transaction then commits with this highest SCN at the commit point site. The commit SCN is then sent to all prepared nodes with the commit decision.
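The SCN that anchors this coordination can be observed directly on any node. A sketch (the V$DATABASE.CURRENT_SCN column is available in Oracle Database 10g and later):

```sql
-- The most recent SCN recorded by the local database.
SELECT current_scn FROM v$database;

-- The same information through the DBMS_FLASHBACK package.
SELECT dbms_flashback.get_system_change_number FROM dual;
```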


See Also:

"Managing Read Consistency" for information about managing time lag issues in read consistency

Forget Phase

After the participating nodes notify the commit point site that they have committed, the commit point site can forget about the transaction. The following steps occur:

  1. After receiving notice from the global coordinator that all nodes have committed, the commit point site erases status information about this transaction.

  2. The commit point site informs the global coordinator that it has erased the status information.

  3. The global coordinator erases its own information about the transaction.

In-Doubt Transactions

The two-phase commit mechanism ensures that all nodes either commit or perform a rollback together. What happens if any of the three phases fails because of a system or network error? The transaction becomes in-doubt.

Distributed transactions can become in-doubt in the following ways:

  • A server machine running one of the participating databases crashes.

  • A network connection between two or more databases involved in the transaction fails.

  • An unhandled software error occurs.

The RECO process automatically resolves in-doubt transactions when the machine, network, or software problem is resolved. Until RECO can resolve the transaction, the data is locked for both reads and writes. The database blocks reads because it cannot determine which version of the data to display for a query.
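While a transaction is in-doubt, its state is visible in the data dictionary. A sketch using the DBA_2PC_PENDING view described later in this chapter:

```sql
-- One row per pending distributed transaction on this node;
-- STATE shows values such as 'prepared' or 'collecting'.
SELECT local_tran_id, global_tran_id, state, mixed, advice
  FROM dba_2pc_pending;
```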

This section contains the following topics:

Automatic Resolution of In-Doubt Transactions

In the majority of cases, the database resolves the in-doubt transaction automatically. Assume that there are two nodes, local and remote, in the following scenarios. The local node is the commit point site. User scott connects to local and executes and commits a distributed transaction that updates local and remote.

Failure During the Prepare Phase

Figure 32-5 illustrates the sequence of events when there is a failure during the prepare phase of a distributed transaction:

Figure 32-5 Failure During Prepare Phase



The following steps occur:

  1. User scott connects to local and executes a distributed transaction.

  2. The global coordinator, which in this example is also the commit point site, requests all databases other than the commit point site to promise to commit or roll back when told to do so.

  3. The remote database crashes before issuing the prepare response back to local.

  4. The transaction is ultimately rolled back on each database by the RECO process when the remote site is restored.

Failure During the Commit Phase

Figure 32-6 illustrates the sequence of events when there is a failure during the commit phase of a distributed transaction:

Figure 32-6 Failure During Commit Phase



The following steps occur:

  1. User scott connects to local and executes a distributed transaction.

  2. The global coordinator, which in this case is also the commit point site, requests all databases other than the commit point site to promise to commit or roll back when told to do so.

  3. The commit point site receives a prepared message from remote saying that it will commit.

  4. The commit point site commits the transaction locally, then sends a commit message to remote asking it to commit.

  5. The remote database receives the commit message, but cannot respond because of a network failure.

  6. The transaction is ultimately committed on the remote database by the RECO process after the network is restored.


    See Also:

    "Deciding How to Handle In-Doubt Transactions" for a description of failure situations and how the database resolves intervening failures during two-phase commit

Manual Resolution of In-Doubt Transactions

You should only need to resolve an in-doubt transaction in the following cases:

  • The in-doubt transaction has locks on critical data or undo segments.

  • The cause of the machine, network, or software failure cannot be repaired quickly.

Resolution of in-doubt transactions can be complicated. The procedure requires that you do the following:

  • Identify the transaction identification number for the in-doubt transaction.

  • Query the DBA_2PC_PENDING and DBA_2PC_NEIGHBORS views to determine whether the databases involved in the transaction have committed.

  • If necessary, force a commit using the COMMIT FORCE statement or a rollback using the ROLLBACK FORCE statement.
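These steps can be sketched as follows (the transaction ID '1.21.17' is a hypothetical example; the real value comes from the first query, and forcing requires appropriate privileges):

```sql
-- Identify the in-doubt transaction and its current state.
SELECT local_tran_id, state FROM dba_2pc_pending;

-- Force the local portion of the transaction to commit...
COMMIT FORCE '1.21.17';

-- ...or, if the other nodes rolled back, force a rollback instead:
-- ROLLBACK FORCE '1.21.17';
```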


    See Also:

    The following sections explain how to resolve in-doubt transactions:

Relevance of System Change Numbers for In-Doubt Transactions

A system change number (SCN) is an internal timestamp for a committed version of the database. The Oracle Database server uses the SCN clock value to guarantee transaction consistency. For example, when a user commits a transaction, the database records an SCN for this commit in the redo log.

The database uses SCNs to coordinate distributed transactions among different databases. For example, the database uses SCNs in the following way:

  1. An application establishes a connection using a database link.

  2. The distributed transaction commits with the highest global SCN among all the databases involved.

  3. The commit global SCN is sent to all databases involved in the transaction.

SCNs are important for distributed transactions because they function as a synchronized commit timestamp of a transaction, even if the transaction fails. If a transaction becomes in-doubt, an administrator can use this SCN to coordinate changes made to the global database. The global SCN for the transaction commit can also be used to identify the transaction later, for example, in distributed recovery.
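When forcing a commit, the global commit SCN can be supplied so that the forced commit is consistent with the commit already recorded at the other nodes. A sketch (both the transaction ID and the SCN are hypothetical examples):

```sql
-- Commit the in-doubt transaction at the known global commit SCN,
-- preserving SCN-based consistency with the nodes that committed.
COMMIT FORCE '1.21.17', 3518700;
```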

Distributed Transaction Processing: Case Study

In this scenario, a company has separate Oracle Database servers, sales.acme.com and warehouse.acme.com. As users insert sales records into the sales database, associated records are updated at the warehouse database.

This case study of distributed processing illustrates how the session tree is defined, how the commit point site is determined, and how the two-phase commit mechanism proceeds through its prepare, commit, and forget stages.

Stage 1: Client Application Issues DML Statements

At the Sales department, a salesperson uses SQL*Plus to enter a sales order and then commit it. The application issues a number of SQL statements to enter the order into the sales database and update the inventory in the warehouse database:

CONNECT scott/tiger@sales.acme.com ...;
INSERT INTO orders ...;
UPDATE inventory@warehouse.acme.com ...;
INSERT INTO orders ...;
UPDATE inventory@warehouse.acme.com ...;
COMMIT;

These SQL statements are part of a single distributed transaction, guaranteeing that all issued SQL statements succeed or fail as a unit. Treating the statements as a unit prevents the possibility of an order being placed and then inventory not being updated to reflect the order. In effect, the transaction guarantees the consistency of data in the global database.

As each of the SQL statements in the transaction executes, the session tree is defined, as shown in Figure 32-7.

Figure 32-7 Defining the Session Tree



Note the following aspects of the transaction:

  • An order entry application running on the sales database initiates the transaction. Therefore, sales.acme.com is the global coordinator for the distributed transaction.

  • The order entry application inserts a new sales record into the sales database and updates the inventory at the warehouse. Therefore, the nodes sales.acme.com and warehouse.acme.com are both database servers.

  • Because sales.acme.com updates the inventory, it is a client of warehouse.acme.com.

This stage completes the definition of the session tree for this distributed transaction. Each node in the tree has acquired the necessary data locks to execute the SQL statements that reference local data. These locks remain even after the SQL statements have been executed until the two-phase commit is completed.

Stage 2: Oracle Database Determines Commit Point Site

The database determines the commit point site immediately following the COMMIT statement. sales.acme.com, the global coordinator, is determined to be the commit point site, as shown in Figure 32-8.


See Also:

"Commit Point Strength" for more information about how the commit point site is determined

Figure 32-8 Determining the Commit Point Site



Stage 3: Global Coordinator Sends Prepare Message

The prepare stage involves the following steps:

  1. After the database determines the commit point site, the global coordinator sends the prepare message to all directly referenced nodes of the session tree, excluding the commit point site. In this example, warehouse.acme.com is the only node asked to prepare.

  2. Node warehouse.acme.com tries to prepare. If a node can guarantee that it can commit the locally dependent part of the transaction and can record the commit information in its local redo log, then the node can successfully prepare. In this example, only warehouse.acme.com receives a prepare message because sales.acme.com is the commit point site.

  3. Node warehouse.acme.com responds to sales.acme.com with a prepared message.

As each node prepares, it sends a message back to the node that asked it to prepare. Depending on the responses, one of the following can happen:

  • If any of the nodes asked to prepare responds with an abort message to the global coordinator, then the global coordinator tells all nodes to roll back the transaction, and the operation is completed.

  • If all nodes asked to prepare respond with a prepared or a read-only message to the global coordinator, that is, they have successfully prepared, then the global coordinator asks the commit point site to commit the transaction.

Figure 32-9 Sending and Acknowledging the Prepare Message



Stage 4: Commit Point Site Commits

The committing of the transaction by the commit point site involves the following steps:

  1. Node sales.acme.com, receiving acknowledgment that warehouse.acme.com is prepared, instructs the commit point site to commit the transaction.

  2. The commit point site now commits the transaction locally and records this fact in its local redo log.

Even if warehouse.acme.com has not yet committed, the outcome of this transaction is predetermined. In other words, the transaction will be committed at all nodes even if the ability of a given node to commit is delayed.

Stage 5: Commit Point Site Informs Global Coordinator of Commit

This stage involves the following steps:

  1. The commit point site tells the global coordinator that the transaction has committed. Because the commit point site and global coordinator are the same node in this example, no operation is required. The commit point site knows that the transaction is committed because it recorded this fact in its online log.

  2. The global coordinator confirms that the transaction has been committed on all other nodes involved in the distributed transaction.

Stage 6: Global and Local Coordinators Tell All Nodes to Commit

The committing of the transaction by all the nodes in the transaction involves the following steps:

  1. After the global coordinator has been informed of the commit at the commit point site, it tells all other directly referenced nodes to commit.

  2. In turn, any local coordinators instruct their servers to commit, and so on.

  3. Each node, including the global coordinator, commits the transaction and records appropriate redo log entries locally. As each node commits, the resource locks that were being held locally for that transaction are released.

In Figure 32-10, sales.acme.com, which is both the commit point site and the global coordinator, has already committed the transaction locally. sales.acme.com now instructs warehouse.acme.com to commit the transaction.

Figure 32-10 Instructing Nodes to Commit



Stage 7: Global Coordinator and Commit Point Site Complete the Commit

The completion of the commit of the transaction occurs in the following steps:

  1. After all referenced nodes and the global coordinator have committed the transaction, the global coordinator informs the commit point site of this fact.

  2. The commit point site, which has been waiting for this message, erases the status information about this distributed transaction.

  3. The commit point site informs the global coordinator that it is finished. In other words, the commit point site forgets about committing the distributed transaction. This action is permissible because all nodes involved in the two-phase commit have committed the transaction successfully, so they will never have to determine its status in the future.

  4. The global coordinator finalizes the transaction by forgetting about the transaction itself.

After the commit and forget phases complete, the distributed transaction is itself complete. The steps described are accomplished automatically and in a fraction of a second.

