Index

A  B  C  D  E  F  G  H  I  L  M  N  O  P  Q  R  S  T  U  V  W 

A

Active Data Guard option
assessing database waits, 2.6.6.1
Active Session History (ASH) reports, 2.2.2.2
Advanced Queuing (AQ), 2.6.4.4
after failovers, 2.6.4.4
alerts
Enterprise Manager, 3.2.1
allocation units
increasing for large database, 2.1.16
large databases, 2.1.16
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY statement, 2.6.8
ALTER DATABASE statement
specifying a default temporary tablespace, 2.2.2.7
ALTER DISKGROUP ALL MOUNT statement, 2.1.13
ALTER SESSION ENABLE RESUMABLE statement, 2.2.2.8
ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE, 2.6.6.1
application failover
DBMS_DG.INITIATE_FS_FAILOVER, 4.2.4
in an Oracle Data Guard configuration, 4.2.4
application failover, in an Oracle RAC configuration, 4.2.4
application workloads
database performance requirements for, 2.1.1
applications
defining as services, 2.3.1.9
failover, 4.2.4
Fast Application Notification (FAN), 4.2.4
fast failover, 2.9
login storms, 2.9.4
monitoring response times, 4.2.4
service brownouts, 3.2.3
tracking performance with Beacon, 3.2.2
upgrades, 5.2.8
Apply Lag
event in Grid Control, 3.2.5
architecture
high availability, 1.1
archival backups
keeping, 2.7.2.2
ARCHIVELOG mode, 2.2.1.1
archiver (ARCn) processes
reducing, 2.6.7.1.1
archiving strategy, 2.6.4.5
ASM, 2.1.19
See Automatic Storage Management (ASM)
ASM home
separate from Oracle home, 2.3.1.6
ASM_DISKGROUPS initialization parameter, 2.1.13
asm_diskstring parameter, 2.1.6
ASM_POWER_LIMIT initialization parameter
rebalancing ASM disk groups, 2.1.19
ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter
in extended clusters, 2.5.3
ASMCMD command-line utility
storage management, 2.1.20
ASMLib, 2.1.4
ASMLib disks
disk labels, 2.1.17
asynchronous disk I/O, 2.2.1.8
asynchronous I/O
enabling, 2.4.3
V$IOSTAT_FILE view, 2.2.1.8
AUTOBACKUP statement
RMAN, 2.7.2.6
Automatic Database Diagnostic Monitor (ADDM), 2.2.2.2
automatic performance tuning, 2.2.2.2
automatic segment space management, 2.2.2.6
using, 2.2.2.6
Automatic Shared Memory Management, 2.2.1.10
Automatic Storage Management (ASM)
allocation units, 2.1.16
asm_diskstring parameter, 2.1.6
ASMLib, 2.1.4
clustering to enable the storage grid, 2.1.8
database file management, 2.1.3
disk device allocation, 2.1.5
disk failures, 4.2.5.2
disk group size, 2.1.7, 2.1.11
failure groups, 2.5.3
failure groups and redundancy, 2.1.15
handling disk errors, 2.1.23
HARD-compliant storage, 2.1.15
imbalanced disk groups, 2.1.18
managing memory with MEMORY_TARGET parameter, 2.1.10
managing with ASMCMD, 2.1.20
migrating databases to and from, 2.1.3, 5.2.4.1
multiple disk failures, 4.2.5.4
power limit for faster rebalancing, 2.1.2.2
REBALANCE POWER, 2.1.19
rebalancing, 2.1.14
rebalancing disks after a failure, 4.2.5.2
recovery, 4.2.5
redundancy, 2.1.7
redundancy disk groups, 2.2.1.6
server-based mirroring, 2.5.3
SYSASM role, 2.1.12
using disk labels, 2.1.17
using normal or high redundancy, 2.5.3
variable size extents, 2.1.16
volume manager, 2.5.3
with disk multipathing software, 2.1.6
automatic tablespace point-in-time recovery (TSPITR), 4.2.7.4
automatic undo management
described, 2.2.2.4
Automatic Workload Repository (AWR), 2.2.2.2
best practices, 2.2.2.2
evaluating performance requirements, 2.1.1
AWR
See Automatic Workload Repository (AWR)

B

backup and recovery
best practices, 2.7
checksums calculated during, 2.2.1.6
enabling with ARCHIVELOG mode, 2.2.1.1
recommendations, 2.7
backup files
flash recovery area disk group failure, 4.2.5.4
backup options
comparing, 2.7.3
backups
automatic, 2.7.2.6
configuring, 2.7
creating and synchronizing, 2.7.2.4
determine a retention policy, 2.7.2.2
keeping archival (long term), 2.7.2.2
maintaining offsite, 2.7.4.3
OCR, 2.3.1.12
Oracle Secure Backup, 2.7.4.2
performing regularly, 2.7.5.3
RMAN recovery catalog, 2.7.2.3
Beacons, 3.2.2
configuring, 3.2.2
benefits
Data Guard broker, 2.6.4.4
Grid Control, 1.4
high availability best practices, 1.2
best practices, 1.1
AWR, 2.2.2.2
backup and recovery, 2.7
Data Guard configurations, 2.6
failover (fast-start), 2.6.7.2.3, 4.2.2.2
failover (manual), 2.6.7.2.4, 4.2.2.3
operational, 1.4
Oracle RAC configurations, 2.4
security policy, 1.4
storage subsystems, 2.1
switchover, 2.6.7.1.1, 5.2.1.2
upgrades, 5.2.3.1
block checksums, 2.2.1.6
block media recovery, 4.2.6.3
broker
benefits, 2.6.4.4
using FAN/AQ, 2.6.4.4
brownouts, 3.2.3

C

capacity planning, 2.3.1.2
change tracking
for incremental backups, 2.7.2.5
checkpointing
bounding Mean Time To Recover (MTTR), 2.4.1
CLB_GOAL attribute of the DBMS_SERVICE PL/SQL package, 2.3.1.10
client connections
migrating to and from nodes, 2.3.1.5
client failover
best practices, 2.9.2
clients
application failover, 4.2.4
configuring for failover, 2.9.1
load balancing, 2.3.1.10
cluster file system
using shared during software patching, 2.3.1.3
Cluster Ready Services (CRS)
described, 4.3.1.1
moving services, 4.2.3.2
recovering service availability, 4.3.1.1
relationship to OCR, 4.2.3.3
clustered ASM
enabling the storage grid, 2.1.8
clusters
extended, 2.5
clusterwide outage
restoring the standby database after, 4.3.4
cold failover clusters, 2.3.2
compatibility
software releases in an Oracle Clusterware environment, 2.3.1.1
complete site failover
recovery time objective (RTO), 4.2.1.2
compression
redo transport, 2.6.5.4
configurations
Oracle RAC, 2.4
Oracle Streams, 2.8
configuring databases for high availability
with the MAA Advisor, 3.3.4
configuring Oracle Database for shared server, 2.9.4
connection load balancing
setting goals for each service, 2.3.1.10
connection pools
adjusting number of, 2.9.4
Connection Rate Limiter
listener, 2.9.4
connect-time failover, 4.3.1.2
control files
in a flash recovery area disk group failure, 4.2.5.4
coordinated, time-based, distributed database recovery, 4.2.8
corruptions
checking database files, 2.7.5.1
DB_BLOCK_CHECKSUMS, 2.2.1.6
detecting in-memory, 2.2.1.6
preventing with Data Recovery Advisor, 2.2.1.7
preventing with OSB, 2.2.1.6
recovery, 4.2.6
crash recovery
understanding, 2.4.1
CREATE DISKGROUP statement
examples, 2.1.5, 2.1.7
CRS
See Cluster Ready Services (CRS)
CRSD process
OCR backups, 2.3.1.12
cumulative incremental backup set, 2.7.3

D

Dark Fiber
Dense Wavelength Division Multiplexing (DWDM), 2.4.4
data
criticality and RPO, 2.7.2.2
protecting outside of the database, 2.6.11
recovering backups and RTO, 2.7.2.2
data area disk group failure
recovery options, 4.2.5.3
data corruptions
detecting and correcting with Flashback Database, 2.2.1.4
preventing with DB_LOST_WRITE_PROTECT, 2.2.1.6
protection through ASM redundancy disk groups, 2.2.1.6
recovery with Data Guard, 4.2.6.2
recovery with Data Recovery Advisor, 4.2.6.1
data failure
manual re-creation, 4.2.6.5
recovery, 4.2.6, 4.2.6.2
restoring fault tolerance on standby database, 4.3.5
RMAN block media recovery, 4.2.6.3
RMAN data file media recovery, 4.2.6.4
data file block corruption
recovery, 4.2.6
data files
fast open for large databases, 2.1.16
Data Guard
adding to an Oracle RAC primary, 6.3.2
archiving strategies, 2.6.4.5
broker, 2.6.4.4
failover
best practices (fast-start), 2.6.7.2.3
best practices (manual), 2.6.7.2.4
recovery for data area disk group failures, 4.2.5.3
when to perform, 4.2.2.1
log apply services, 2.6.6
managing targets, 3.3.3
monitoring, 3.2.5
multiple standby databases, 2.6.9
performance, 2.6.12
recovery from data corruption and data failure, 4.2.6.2
redo transport services, 2.6.5
restoring standby databases, 4.3.2
role transitions, 2.6.7
snapshot standby databases, 2.6.8
switchover
best practices, 2.6.7.1.1
using SQL*Plus, 5.2.1.3.1
Data Guard Status
events in Grid Control, 3.2.5
data protection modes, 2.6.2
Data Pump
moving the contents of the SYSTEM tablespace, 5.2.6.5
Data Recovery Advisor
detect and prevent data corruption, 2.2.1.7
recovery from data corruption, 4.2.6.1
data retention for backups, 2.7.2.2
data type restrictions
resolving with Extended Datatype Support (EDS), 5.2.5.2, 5.2.6.3
data area disk group failure
See Also Data Guard failover, fast-start failover, local recovery
database area
contents, 2.1.5
disk partitioning, 2.1.5
database configuration
recommendations, 2.2
database files
ASM integration, 2.1.3
management optimizations, 2.1.3
recovery-related, 2.1.5
database patch upgrades
recommendations, 5.2.3.1
Database Resource Manager, 2.2.2.9
Database Upgrade Assistant (DBUA), 5.2.5.1
databases
checking files for corruption, 2.7.5.1
configuring with the MAA Advisor, 3.3.4
evaluating performance requirements, 2.1.1
migration, 5.2.6.2
object reorganization, 5.2.9
recovery in a distributed environment, 4.2.8
resolving inconsistencies, 4.2.7.3
switching primary and standby roles among, 5.2.1.1
upgrades, 5.2.5
DB_BLOCK_CHECKSUM initialization parameter
detecting redo and data block corruptions, 2.2.1.6
DB_CACHE_SIZE initialization parameter, 2.6.6.1
DB_CREATE_FILE_DEST initialization parameter
enabling Oracle managed files (OMF), 2.1.5
DB_CREATE_ONLINE_LOG_DEST_n initialization parameter
location of Oracle managed files, 2.1.5
DB_FLASHBACK_RETENTION_TARGET initialization parameter, 2.2.1.4
DB_KEEP_CACHE_SIZE initialization parameter, 2.6.6.1
DB_LOST_WRITE_PROTECT initialization parameter
preventing corruptions due to lost writes, 2.2.1.6
DB_RECOVERY_FILE_DEST initialization parameter
flash recovery area, 2.2.1.3
DB_RECOVERY_FILE_DEST_SIZE initialization parameter
limit for flash recovery area, 2.2.1.3
DB_RECYCLE_CACHE_SIZE initialization parameter, 2.6.6.1
DBCA
balancing client connections, 2.3.1.10
DBMS_DG.INITIATE_FS_FAILOVER PL/SQL procedure
application failover, 4.2.4
DBMS_LOGSTDBY.SKIP procedure
skipping database objects, 2.6.6.2.3
DBMS_REDEFINITION PL/SQL package, 5.2.9
DBMS_RESOURCE_MANAGER.CALIBRATE_IO PL/SQL procedure, 2.1.1
DBMS_SERVICE PL/SQL package
GOAL and CLB_GOAL attributes, 2.3.1.10
DBVERIFY utility, 2.6.6.1
decision support systems (DSS)
application workload, 2.1.1
decision-support systems
setting the PRESERVE_COMMIT_ORDER parameter, 2.6.6.2.2
default temporary tablespace
specifying, 2.2.2.7
DEFAULT TEMPORARY TABLESPACE clause
CREATE DATABASE statement, 2.2.2.7
Dense Wavelength Division Multiplexing (DWDM or Dark Fiber), 2.4.4
Device Mapper
disk multipathing, 2.1.6
differential incremental backup set, 2.7.3
disabling parallel recovery, 2.2.1.11
disaster-recovery site
distanced from the primary site, 2.6.2
disk backup methods, 2.7.3
disk devices
ASMLib disk name defaults, 2.1.17
configuration, 2.1.5, 2.1.7, 2.1.11
disk labels, 2.1.17
multipathing, 2.1.6
naming
asm_diskstring parameter, 2.1.6
ASMLib, 2.1.4
partitioning for ASM, 2.1.5
protecting from failures, 2.1.7
disk errors
mining vendor logs, 2.1.23
disk failures
protection from, 2.1.7
restoring redundancy after, 2.1.7
disk groups
checking with V$ASM_DISK_IOSTAT view, 2.1.18
configuration, 2.1.5
determining proper size of, 2.1.7
determining size of, 2.1.7, 2.1.11
failure of flash recovery area, 4.2.5.4
imbalanced, 2.1.18
mounting, 2.1.13
offline after failures, 4.2.5.4
SYSASM access to ASM instances, 2.1.12
disk multipathing, 2.1.6
DISK_ASYNCH_IO initialization parameter, 2.2.1.8, 2.6.6.1
disks
ASM failures, 4.2.5.2
distances
between the disaster-recovery site and the primary site, 2.6.2
distributed databases
recovering, 4.2.8
DNS failover, 4.2.1.3
downtime
reducing, 1.4
dropped tablespace
fix using Flashback Database, 4.2.7.3
dropping database objects, 4.2.7.1
dual failures
restoring, 4.3.7
DWDM
Dense Wavelength Division Multiplexing, 2.4.4
dynamic instance registration
LISTENER.ORA file example, A.2.2
SQLNET.ORA file example, A.2.1
TNSNAMES.ORA file example, A.2.3

E

endian format
determining, 5.2.6
Enterprise Manager
alerts, 3.2.1
Database Targets page, 3.2.3
managing patches, 3.3.2
metrics, 3.2.1, 3.2.4
notification rules, 3.2.2, 3.2.3
performance, 3.2.3
Enterprise Manager Beacon
application failover, 4.2.4
equations
standby redo log files, 2.6.4.6
Estimated Failover Time
event in Grid Control, 3.2.5
events
setting for Data Guard in Grid Control, 3.2.5
Exadata Cell, 2.1.2
extended clusters
configuring a third site for a voting disk, 2.5.4
overview, 2.5
setting the ASM_PREFERRED_READ_FAILURE_GROUPS parameter, 2.5.3
extents
ASM mirrored, 2.2.1.6
external redundancy
ASM disk failures, 4.2.5.2
ASM server-based mirroring, 2.5.3
EXTERNAL REDUNDANCY clause
on the CREATE DISKGROUP statement, 2.1.7
Extraction, Transformation, and Loading (ETL)
application workload, 2.1.1

F

failovers
application, 4.2.4
comparing manual and fast-start failover, 2.6.7.2.1
complete site, 4.2.1
defined, 4.2.2
described, 4.2.2.2
effect on network routes, 4.2.1.3
Fast Application Notification (FAN), 2.6.4.4
Fast Connection Failover, 2.9
nondisruptive, 2.1.6
restoring standby databases after, 4.3.2
failovers (manual)
best practices, 2.6.7.2.4
when to perform, 2.6.7.2.1, 4.2.2.1
failure detection
CRS response, 4.2.3.2
failure groups
ASM redundancy, 2.1.15
defining, 2.1.7
multiple disk failures, 4.2.5.4
specifying in an extended cluster, 2.5.3
failures
rebalancing ASM disks, 4.2.5.2
space allocation, 2.2.2.8
Fast Application Notification (FAN), 4.2.4
after failovers, 2.6.4.4
Fast Connection Failover, 2.9
fast local restart
after flash recovery area disk group failure, 4.2.5.4
FAST_START_MTTR_TARGET initialization parameter, 2.2.1.11, 2.6.6.1
controlling instance recovery time, 2.2.1.5
setting in a single-instance environment, 2.4.1
FAST_START_PARALLEL_ROLLBACK initialization parameter
determining how many processes are used for transaction recovery, 2.4.2
fast-start failover
comparing to manual failover, 2.6.7.2.1
requires Flashback Database, 2.2.1.4
fast-start fault recovery
instance recovery, 2.2.1.5
FastStartFailoverAutoReinstate configuration property, 4.3.2
fault tolerance
configuring storage subsystems, 2.1
restoring, 4.3
restoring after OPEN RESETLOGS, 4.3.6
files
opening faster ASM, 2.1.16
flash recovery area
contents, 2.1.5
disk group failures, 4.2.5.4
disk partitioning, 2.1.5
local recovery steps, 4.2.5.4
using, 2.2.1.3
Flashback Database, 4.2.7, 4.2.7.3
detecting and correcting human errors, 2.2.1.4
enabling, 2.2.1.4
for rolling upgrades, 2.2.1.4
for switchovers, 2.6.7.1.1
in Data Guard configurations, 2.6.4.2
required by snapshot standby databases, 2.2.1.4
required for fast-start failover, 2.2.1.4
setting maximum memory, 2.2.1.9
Flashback Drop, 4.2.7, 4.2.7.1
flashback logs
flash recovery area disk group failure, 4.2.5.4
Flashback Query, 4.2.7, 4.2.7.2
Flashback Table, 4.2.7, 4.2.7.1
flashback technology
example, 4.2.7.2
recovering from user error, 4.2.7
resolving database-wide inconsistencies, 4.2.7.3
resolving tablespace inconsistencies, 4.2.7.4
solutions, 4.2.7
Flashback Transaction, 4.2.7
Flashback Transaction Query, 4.2.7, 4.2.7.2
Flashback Version Query, 4.2.7, 4.2.7.2
FORCE LOGGING mode, 2.6.4.3
full data file copy, 2.7.3
full or level 0 backup set, 2.7.3

G

gap resolution
compression, 2.6.5.4
setting the PRESERVE_COMMIT_ORDER parameter, 2.6.6.2.2
GOAL attribute
of the DBMS_SERVICE PL/SQL package, 2.3.1.10
Grid Control
migrating to MAA, 6.2
See Also Oracle Grid Control, Enterprise Manager
guaranteed restore point
for snapshot standby databases, 2.2.1.4
guaranteed restore points
rolling upgrades, 2.2.1.4
GV$SYSSTAT view
gathering workload statistics, 2.1.1

H

Hardware Assisted Resilient Data (HARD)
when using ASM, 2.1.15
hardware RAID storage subsystem
deferring mirroring to, 2.5.3
high availability
described, 1.1
restoring after fast-start failover, 4.3.2
High Availability (HA) Console
monitoring databases, 3.3.3
high redundancy
ASM disk failures, 4.2.5.2
home directories
creating separate, 2.3.1.6
host bus adapters (HBA)
load balancing across, 2.1.6
hosts
using dynamic instance registration
LISTENER.ORA file example, A.2.2
SQLNET.ORA file example, A.2.1
TNSNAMES.ORA file example, A.2.3
HR service
scenarios, 4.3.1.1
human errors
detecting and correcting with Flashback Database, 2.2.1.4
recovery, 4.2.7

I

imbalanced disk groups
checking, 2.1.18
incremental backups
block change tracking, 2.7.2.5
incrementally updated backup, 2.7.3
initialization parameters
primary and physical standby example, 2.6.4.5
in-memory corruption
detecting, 2.2.1.6
installations
out-of-place patch set, 2.3.1.4
instance failures
recovery, 2.2.1.5
single, 4.2.3.1
instance recovery
controlling with fast-start fault recovery, 2.2.1.5
versus crash recovery, 2.4.1
interconnect subnet
verification with Oracle ORADEBUG utility, 2.3.1.13
verifying, 2.3.1.13
interim patches, 5.2.2
I/O operations
load balancing, 2.1.6
tuning, 2.6.6.1
I/O Resource Management (IORM)
usage with Storage Grid, 2.1.2.2

L

library
ASMLib support for ASM, 2.1.4
licensing
Oracle Advanced Compression, 2.6.5.4
listener connection rate throttling, 2.9.4
LISTENER.ORA file sample, A.2.2
listeners
balancing clients across, 2.3.1.10
Connection Rate Limiter, 2.9.4
LISTENER.ORA file example, A.2.2
running, 2.3.1.7
SQLNET.ORA file example, A.2.1
TNSNAMES.ORA file example, A.2.3
load balancing
client connections, 2.3.1.10
I/O operations, 2.1.6
through disk multipathing, 2.1.6
LOAD_BALANCE parameter
balancing client connections, 2.3.1.10
load-balancing application services, 4.3.1.2
local homes
use during rolling patches, 2.3.1.3
local recovery
after flash recovery area disk group failure, 4.2.5.4
for data area disk group failures, 4.2.5.3
for flash recovery area disk group failures, 4.2.5.4
locally managed tablespaces, 2.2.2.5
described, 2.2.2.5
location migration, 5.2.6
log apply services
best practices, 2.6.6
LOG_ARCHIVE_FORMAT initialization parameter, 2.6.4.5
LOG_ARCHIVE_MAX_PROCESSES initialization parameter
setting in a multiple standby environment, 2.6.5.2.3
setting in an Oracle RAC, 2.6.5.2.3
LOG_BUFFER initialization parameter, 2.2.1.9
LOG_FILE_NAME_CONVERT initialization parameter, 2.6.7.1.1
logical standby databases
effect of the MAX_SERVERS parameter, 2.6.6.2.1
failover, 4.2.2.3
setting the PRESERVE_COMMIT_ORDER parameter, 2.6.6.2.2
skipping database objects, 2.6.6.2.3
switchover, 5.2.1.3.2
upgrades on, 5.2.5.2
when to use, 2.6.1
logical unit numbers (LUNs), 2.1.7
defined, Glossary
login storms
controlling with shared server, 2.9.4
preventing, 2.9.4
low bandwidth networks
compression on, 2.6.5.4
low-cost storage subsystems, 2.1.1
LUNs
See logical unit numbers (LUNs)

M

MAA
See Oracle Maximum Availability Architecture (MAA)
manageability
improving, 2.2.2
managing scheduled outages, 5.1.1
manual failover
best practices, 2.6.7.2.4, 4.2.2.3
comparing to fast-start failover, 2.6.7.2.1
when to perform, 2.6.7.2.1, 4.2.2.1
MAX_SERVERS initialization parameter, 2.6.6.2.1
Maximum Availability Architecture (MAA) Advisor page, 3.3.4
maximum availability mode
described, 2.6.2
redo transport requirements, 2.6.5.1
when to use, 2.6.2
maximum number of connections
adjusting in the mid tier connection pool, 2.9.4
maximum performance mode
described, 2.6.2
redo transport requirements, 2.6.5.1
when to use, 2.6.2
maximum protection mode
described, 2.6.2
initialization parameter example, 2.6.4.5
when to use, 2.6.2
Mean Time To Recover (MTTR)
checkpointing, 2.4.1
reducing with Data Recovery Advisor, 2.2.2.1
media failure
recovery, 4.2.6
memory consumption
managing with MEMORY_TARGET parameter, 2.1.10
memory management, 2.2.1.10
MEMORY_TARGET initialization parameter, 2.1.10
metrics
Enterprise Manager, 3.2.1
mid tier connection pool
adjusting maximum number of connections, 2.9.4
migrating
Data Guard to an Oracle RAC primary, 6.3.2
databases to and from ASM, 2.1.3
to MAA, 6
to Oracle RAC from a single instance, 6.3.1
transportable database, 5.2.6.2
migration
to MAA using Grid Control, 6.2
minimizing space usage, 2.7.3
minimizing system resource consumption, 2.7.3
mining vendor logs for disk errors, 2.1.23
mirrored extents
protection from data corruptions, 2.2.1.6
mirroring
across storage arrays, 2.1.7
deferring to RAID storage subsystem, 2.5.3
monitoring
application response time, 4.2.4
Oracle Grid Control, 1.4, 3.2
rebalance operations, 5.2.4.2
mounting disk groups, 2.1.13
multipathing (disks)
path abstraction, 2.1.6
multiple disk failures, 4.2.5.4

N

Network Attached Storage (NAS), 2.6.6.1
network detection and failover
Oracle Clusterware and Oracle RAC, 2.3.1.13
network routes
after site failover, 4.2.1.3
before site failover, 4.2.1.3
network server processes (LNSn), Glossary
NOCATALOG mode
creating backups, 2.7.2.4
node failures
multiple, 4.2.3.1
nodes
migrating client connections, 2.3.1.5
non-database object corruption and recommended repair, 4.2.6
nondisruptive failovers, 2.1.6
normal redundancy
ASM disk failures, 4.2.5.2
NORMAL REDUNDANCY clause
on the CREATE DISKGROUP statement, 2.1.7
notification rules
recommended, 3.2.3
service-level requirement influence on monitoring, 3.2.2
notifications
application failover, 4.2.4

O

OCR
backups of, 2.3.1.12
described, 2.3.1.11
recovering, 4.2.3.3
ocrconfig -showbackup command, 2.3.1.12
offsite backups, 2.7.4.3
OMF
See Oracle managed files
online log groups
minimum, 2.2.1.2
online patching, 5.2.2
online redo log files
multiplex, 2.2.1.2
Online Reorganization and Redefinition, 5.2.9
Online Transaction Processing (OLTP)
application workload, 2.1.1
opatch command-line utility, 5.2.3
operational best practices, 1.4
optimizing
recovery times, 2.7.3
Oracle Advanced Compression option, 2.6.5.4
Oracle Cluster Registry (OCR)
failure of, 4.2.3.3
See OCR
Oracle Clusterware
capacity planning, 2.3.1.2
cold failover clusters, 2.3.2
configuring a third site for a voting disk, 2.5.4
OCR mirroring, 2.3.1.11
software release compatibility, 2.3.1.1
system maintenance, 5.2.10
verifying the interconnect subnet, 2.3.1.13
Oracle Data Pump
platform migrations, 5.2.6.4
Oracle Database 11g
configuration recommendations, 2.2
Data Guard, 2.6
extended cluster configurations, 2.5
Oracle RAC configuration recommendations, 2.4
Oracle Enterprise Manager
High Availability (HA) Console, 3.3.3
MAA Advisor page, 3.3.4
Oracle Enterprise Manager Grid Control
migrating to MAA, 6.2
Oracle Exadata Storage Server Software, 2.1.2
Oracle Flashback Database
restoring fault tolerance to configuration, 4.3.2
Oracle Grid Control
benefits, 1.4
home page, 3.2.1
managing Data Guard targets, 3.3.3
monitoring, 3.2
Policy Violations, 3.3.1
Oracle homes
mixed software versions, 2.3.1.1
separate from ASM, 2.3.1.6
Oracle Implicit Connection Cache (ICC), 2.3.1.10
Oracle managed files (OMF)
database file management, 2.1.5
disk and disk group configuration, 2.1.5
flash recovery area, 2.2.1.3
Oracle Management Agent, 3.2
monitoring targets, 3.2
Oracle Maximum Availability Architecture (MAA)
defined, Glossary
described, 1.3
environment, 6.1
Web site, 1.3
Oracle Net
configuration file examples, A.2
Oracle Notification Service (ONS)
after failovers, 2.6.4.4
Oracle ORADEBUG utility
verifying interconnect subnet, 2.3.1.13
Oracle Real Application Clusters (Oracle RAC)
adding Data Guard, 6.3.2
adding disks to nodes, 2.1.4
application failover, 4.2.4
client failover, 2.9.2
configurations, 2.4
extended clusters, 2.5
LISTENER.ORA file sample for, A.2.2
migrating from a single instance, 6.3.1
network detection and failover, 2.3.1.13
preparing for switchovers, 2.6.7.1.1
recovery from unscheduled outages, 4.2.3
restoring failed nodes or instances, 4.3.1
rolling upgrades, 5.2.3
setting LOG_ARCHIVE_MAX_PROCESSES initialization parameter, 2.6.5.2.3
SQLNET.ORA file sample for, A.2.1
system maintenance, 5.2.10
TNSNAMES.ORA file sample for, A.2.3
using redundant dedicated connections, 2.4.4
verifying the interconnect subnet, 2.3.1.13
voting disk, 2.3.1.11
Oracle Secure Backup
OCR backups, 2.3.1.12
protecting data outside of the database, 2.6.11
Oracle Secure Backup (OSB)
fast tape backups, 2.7.4.2
preventing corruptions, 2.2.1.6
Oracle Storage Grid, 2.1.2
Oracle Streams
configuring, 2.8
database migration, 5.2.6.3
upgrades using, 5.2.5.3
Oracle Universal Installer, 5.2.3.2
outages
scheduled, 5.1
unscheduled, 4.1
out-of-place patch set installation, 2.3.1.4

P

parallel recovery
disabling, 2.2.1.11
partitions
allocating disks for ASM use, 2.1.5
patch sets
out-of-place installations, 2.3.1.4
rolling upgrades, 5.2.3
patches
managing with Enterprise Manager, 3.3.2
rolling, 2.3.1.3
using shared cluster file system, 2.3.1.3
path failures
protection from, 2.1.6
performance
application, tracking with Beacon, 3.2.2
asynchronous disk I/O, 2.2.1.8
automatic tuning, 2.2.2.2
Data Guard, 2.6.12
database, gathering requirements, 2.1.1
PGA memory
usage, 2.6.6.2.1
physical standby databases
as snapshot standby databases, 2.6.8
failover, 4.2.2.3
location migrations, 5.2.6.6
real-time query, 2.6.10
switchover, 5.2.1.3.1
when to use, 2.6.1
planned maintenance
IORM, 2.1.2.2
platform migration
endian format for, 5.2.6
platform migrations, 5.2.5, 5.2.6
point-in-time recovery
TSPITR, 4.2.7.4
pool
resizing, 2.2.1.10
power limit
setting for rebalancing, 2.1.2.2
preferred read failure groups
specifying ASM, 2.5.3
PRESERVE_COMMIT_ORDER initialization parameter, 2.6.6.2.2
preventing login storms, 2.9.4
primary database
distance from the disaster-recovery site, 2.6.2
reinstating after a fast-start failover, 4.3.2
restoring fault tolerance, 4.3.6
protection modes
described, 2.6.2
determining appropriate, 2.6.2
See Also data protection modes, maximum protection mode, maximum availability mode, maximum performance mode

Q

query SCN, 2.6.10.2

R

RAID protection, 2.1.7
real-time apply
configuring for switchover, 2.6.7.1.1
real-time query
Active Data Guard option, 2.6.10
rebalance operations, 2.1.19
ASM disk partitions, 2.1.5
monitoring, 5.2.4.2
REBALANCE POWER
limits, 2.1.19
rebalancing, 2.1.14
ASM disks after failure, 4.2.5.2
setting ASM power limit, 2.1.2.2
rebalancing ASM disk groups, 2.1.18
recommendations
database configuration, 2.2
recovery
coordinated, time-based, distributed database recovery, 4.2.8
options for flash recovery area, 4.2.5.4
testing procedures, 2.7.5.2
recovery catalog
including in regular backups, 2.7.5.3
RMAN repository, 2.7.2.3
recovery files
created in the recovery area location, 2.2.1.3
Recovery Manager (RMAN)
creating standby databases, 2.6.4.1
TSPITR, 4.2.7.4
recovery point objective (RPO)
criticality of data, 2.7.2.2
defined, Glossary
for data area disk group failures, 4.2.5.3
solutions for disk group failures, 4.2.5.4
recovery steps for unscheduled outages, 4.1.1
recovery time objective (RTO)
defined, Glossary
described, 4.2.1.2
for data area disk group failures, 4.2.5.3
recovery time, 2.7.2.2
solutions for disk group failures, 4.2.5.4
recovery times
optimizing, 2.7.3
RECOVERY_ESTIMATED_IOS initialization parameter
for parallel recovery, 2.2.1.11
RECOVERY_PARALLELISM initialization parameter, 2.2.1.11
recycle bin, 4.2.7.1
Redo Apply
real-time query, 2.6.10
Redo Apply Rate
event in Grid Control, 3.2.5
redo data
compressing, 2.6.5.4
redo log files and groups
size, 2.2.1.2
redo log members
flash recovery area disk group failure, 4.2.5.4
redo transport mode
setting for compression, 2.6.5.4
redo transport services
best practices, 2.6.5
redundancy
CREATE DISKGROUP DATA statement, 2.1.7
dedicated connections, 2.4.4
disk devices, 2.1.7
restoring after disk failures, 2.1.7
reinstatement, 4.3.2
FastStartFailoverAutoReinstate property, 4.3.2
remote archiving, 2.6.4.5
reporting systems
setting the PRESERVE_COMMIT_ORDER parameter, 2.6.6.2.2
resetlogs on primary database
restoring standby database, 4.3.6
resource consumption
minimizing, 2.7.3
resource management
using Database Resource Manager, 2.2.2.9
response times
detecting slowdown, 4.2.4
restore points
for rolling upgrades, 2.2.1.4
restoring
client connections, 4.3.1.2
failed instances, 4.3.1
failed nodes, 4.3.1
services, 4.3.1.1
resumable space allocation, 2.2.2.8
space allocation
failures, 2.2.2.8
RESUMABLE_TIMEOUT initialization parameter, 2.2.2.8
RESYNC CATALOG command
resynchronize backup information, 2.7.2.4
RETENTION GUARANTEE clause, 2.2.2.4
retention policy for backups, 2.7.2.2
RMAN
calculates checksums, 2.2.1.6
recovery catalog, 2.7.2.3
RMAN BACKUP VALIDATE command, 2.6.6.1, 4.2.6.3
RMAN block media recovery, 4.2.6.3
RMAN data file media recovery, 4.2.6.4
RMAN RECOVER BLOCK command, 4.2.6.3
role transitions
best practices, 2.6.7
role-based destinations, 2.6.4.5
rolling patches, 2.3.1.3
rolling upgrades
Flashback Database and guaranteed restore points, 2.2.1.4
patch set, 5.2.3
row and transaction inconsistencies, 4.2.7.2
RPO
See recovery point objective (RPO)
RTO
See recovery time objective (RTO)

S

SALES scenarios
setting initialization parameters, 2.6.4.5
SAME
See stripe and mirror everything (SAME)
scenarios
ASM disk failure and repair, 4.2.5.2.1
fast-start failover, 4.3.2.1
HR service, 4.3.1.1
object reorganization, 5.2.9
recovering from human error, 4.2.7.2
SALES, 2.6.4.5
verifying interconnect subnet, 2.3.1.13
scheduled outages
described, 5.1
recommended solutions, 5.1.1
reducing downtime for, 5.2
types of, 5.1
See Also unscheduled outages
secondary site outage
restoring the standby database after, 4.3.4
security
recommendations, 1.4
server parameter file
See SPFILE
server-based mirroring
ASM, 2.5.3
service availability
recovering, 4.3.1.1
service level agreements (SLA), 1.3
effect on monitoring and notification, 3.2.2
operational best practices, 1.4
service tests and Beacons
configuring, 3.2.2
services
automatic relocation, 4.2.3.2
making highly available, 2.3.1.8
Oracle RAC application failover, 4.2.4
Oracle RAC application workloads, 2.3.1.9
relocation after application failover, 4.2.4
tools for administration, 2.3.1.9
SGA_TARGET initialization parameter, 2.2.1.10
shared server
configuring Oracle Database, 2.9.4
site failover
network routes, 4.2.1.3
skipping
database objects that do not require replication to the standby database, 2.6.6.2.3
SLA
See service level agreements (SLA)
SMON process
in a surviving instance, 2.4.1
snapshot standby databases
guaranteed restore points, 2.2.1.4
require Flashback Database, 2.2.1.4
sort operations
improving, 2.2.2.7
space management, 2.2.2.6
space usage
minimizing, 2.7.3
SPFILE
samples, A.1
SQL Access Advisor, 2.2.2.2
SQL Tuning Advisor, 2.2.2.2
SQLNET.ORA file sample, A.2.1
standby databases
choosing physical versus logical, 2.6.1
configuring multiple, 2.6.9
creating, 2.6.4.1
distance from the primary site, 2.6.2
restoring, 4.3.2
standby redo log files
determining number of, 2.6.4.6
Statspack
assessing database waits, 2.6.6.1
storage
mirroring to RAID, 2.5.3
Oracle Exadata Storage Server Software, 2.1.2
Storage Area Network (SAN), 2.6.6.1
storage arrays
determining maximum capacity of, 2.1.1
mirroring across, 2.1.7
multiple disk failures in, 4.2.5.4
storage grid
through clustered ASM, 2.1.8
storage migration, 2.1.21
storage subsystems, 2.1
configuring ASM, 2.1.3
configuring redundancy, 2.1.7
performance requirements, 2.1.1
stripe and mirror everything (SAME), 2.1.3
switchovers
configuring real-time apply, 2.6.7.1.1
described, 5.2.1.1
in Oracle RAC, 2.6.7.1.1
preparing Flashback Database, 2.6.7.1.1
querying V$DATAGUARD_STATS view, 5.2.1.3.1
reducing archiver (ARCn) processes, 2.6.7.1.1
See Also Data Guard
setting the LOG_FILE_NAME_CONVERT initialization parameter, 2.6.7.1.1
to a logical standby database, 5.2.1.3.2
to a physical standby database, 5.2.1.3.1
SYSASM role
ASM Authentication, 2.1.12
system failure
recovery, 2.2.1.5
system maintenance, 5.2.10
system resources
assessing, 2.6.6.1
SYSTEM tablespace
moving the contents of, 5.2.6.5

T

table inconsistencies, 4.2.7.1
tablespace point-in-time recovery (TSPITR), 4.2.7.4
tablespaces
locally managed, 2.2.2.5
renaming, 5.2.9
resolving inconsistencies, 4.2.7.4
temporary, 2.2.2.7
targets
in Oracle Grid Control, 3.2
monitoring, 3.2
TCP Nagle algorithm
disabling, 2.6.5.3.3
temporary tablespaces, 2.2.2.7
test environments
operational best practices for, 1.4
third site
for a voting disk, 2.5.4
TNSNAMES.ORA file sample, A.2.3
transaction recovery
determining how many processes are used, 2.4.2
Transport Lag
event in Grid Control, 3.2.5
transportable database, 5.2.6.2
transportable tablespaces
database upgrades, 5.2.5.4
platform migration, 5.2.6.5

U

undo retention
tuning, 2.2.2.4
undo space
managing, 2.2.2.4
UNDO_MANAGEMENT initialization parameter
automatic undo management, 2.2.2.4
UNDO_RETENTION initialization parameter
automatic undo management, 2.2.2.4
UNDO_TABLESPACE initialization parameter
automatic undo management, 2.2.2.4
unscheduled outages
described, 4.1
Oracle RAC recovery, 4.2.3
recovery from, 4.1.1, 4.2
types, 4.1
See Also scheduled outages
upgrades
application, 5.2.8
applying interim patches, 5.2.2
best practices, 5.2.3.1
Database Upgrade Assistant (DBUA), 5.2.5.1
methods, 5.2.5
online patching, 5.2.2
USABLE_FILE_MB column
on the V$ASM_DISKGROUP view, 2.1.7
user error
flashback technology, 4.2.7

V

V$ASM_DISK view, 2.6.6.1
V$ASM_DISK_IOSTAT view
checking disk group imbalance, 2.1.18
V$ASM_DISKGROUP view
REQUIRED_MIRROR_FREE_MB column, 2.1.7
USABLE_FILE_MB column, 2.1.7
V$ASM_OPERATION view
monitoring rebalance operations, 5.2.4.2
V$DATAGUARD_STATS view
querying during switchover, 5.2.1.3.1
V$EVENT_HISTOGRAM view, 2.6.6.1
V$INSTANCE_RECOVERY view
tuning recovery processes, 2.2.1.11
V$IOSTAT_FILE view
asynchronous I/O, 2.2.1.8
V$OSSTAT view, 2.6.6.1
V$SESSION_WAIT view, 2.6.6.1
V$SYSTEM_EVENT view, 2.6.6.1
VALID_FOR attribute, 2.6.4.5
VALIDATE option
on the RMAN BACKUP command, 2.6.6.1
validation
checksums during RMAN backup, 2.2.1.6
variable size extents, 2.1.16
large ASM data files, 2.1.16
verifying the interconnect subnet, 2.3.1.13
VIP address
connecting to applications, 2.3.1.9
described, 2.3.1.9
during recovery, 4.3.1.1
workload management, 2.3.1.9
Virtual Internet Protocol (VIP) Address
See VIP address
Virtual Internet Protocol Configuration Assistant (VIPCA)
configuration, 2.3.1.9
volume manager
ASM, 2.5.3
voting disk (Oracle RAC)
best practices, 2.3.1.11
configuring a third site, 2.5.4

W

wait events
assessing with Active Data Guard and Statspack, 2.6.6.1
Web sites
ASMLib, 2.1.4
MAA, 1.3
workload management
connecting through VIP address, 2.3.1.9
workloads
examples, 2.1.1
gathering statistics, 2.1.1