6 Database Management

This chapter introduces the concept of database management.

Introduction to Database Management

Database management involves the monitoring, administration, and maintenance of the databases and database groups in your enterprise. Enterprise Manager is the premier tool for managing your database environment.

With Enterprise Manager, you receive:

  • A complete set of integrated features for managing Oracle Databases

  • Unparalleled scalability that lets you manage a single database or thousands of instances

  • An intuitive management product that leads the industry in ease of deployment and use

Database Control Versus Grid Control

Enterprise Manager provides two separate consoles that you can use to monitor your database: Database Control and Grid Control.

  • Database Control is the Enterprise Manager Web-based application for managing Oracle Database 11g Release 1 (11.1) and later. Database Control is installed and available with every Oracle Database 11g installation. From Database Control, you can monitor and administer a single Oracle Database instance or a clustered database.

  • Grid Control is the Enterprise Manager console you use to centrally manage your entire Oracle environment. Within Grid Control, you access the multiple database targets using the Targets tab, then Databases.

Database Home Page as the Management Hub

The Enterprise Manager Database Home page (Figure 6-1) shows important status and performance information about your database instance from a single source, including:

  • A quick view of the status of the database and basic information about the database

  • Relative CPU utilization of the Oracle host

  • The amount of time the instance spent consuming CPU and I/O resources, and the amount of time it spent waiting in bottlenecks

  • Current response of the tracked set of SQL versus the reference collection response

  • The number of Automatic Database Diagnostic Monitor (ADDM) findings, the number of policy violations (Database Control only), and the Alert Log status

  • Storage-related issues and recommendations for improved performance, and information about space violations and ADDM

  • The most recent backup time and backup status

  • Outstanding alerts and related alerts

  • Rolled-up number of Oracle policy violations across all member targets in a system

  • A quick view of database security

  • Job executions that show the scheduled, running, suspended, and problem executions

Starting from the Database Home page, you can access additional details and management functions by drilling down through the user interface. In addition, the Database Home page provides a list of Related Links. From these links, you can perform activities such as editing metric thresholds, analyzing job activity and metric collection errors, and accessing a number of advisors to help you improve the performance of your database.

See Also:

"Oracle Database Home Page" in the Enterprise Manager online help.

Figure 6-1 Database Home Page

Enterprise Manager Database Home page
Description of "Figure 6-1 Database Home Page"

Monitoring Databases

Comprehensive database monitoring enables you to identify the problem areas in your database environment that are degrading performance. After you have identified the areas to improve, you can tune your database's performance using the Enterprise Manager administration capabilities.

Enterprise Manager uses data from the Automatic Workload Repository (AWR) to display performance information and initiate database alerts. The user interface provides several real-time performance charts and drill-downs for the targets you manage. Both aggregate and instance-specific performance statistics are displayed using color-coded charts for easier viewing. To help you identify the source of a problem and resolve it, you can click a legend link next to a chart to display a detail page that provides comprehensive information.

The basic workflow in monitoring performance is to first go to the Database Performance page, which provides a high-level, comprehensive overview of important performance indicators, then diagnose problems either interactively or automatically.


For more information about monitoring databases not covered in the following sections, see the Oracle Database 2 Day + Performance Tuning Guide and the Oracle Database Performance Tuning Guide.

Assessing Database Performance

Database tuning is much easier and more effective when all the information for an in-depth performance diagnosis is available on a single screen. On the Database Performance page (Figure 6-2), which is accessible from the Database Home page, you can quickly view performance data for the instance and its underlying host. Additionally, you can analyze trends for all critical performance metrics and compare the trends to those of other database instances.

The most direct method for investigating and diagnosing performance problems from this page consists of accessing the Automatic Database Diagnostic Monitor (ADDM). After you start your Oracle database, the Automatic Workload Repository (AWR) begins taking snapshots of database activity once every hour by default, and ADDM executes when the snapshots are collected. ADDM uses these snapshots to perform a top-down analysis of your database activity and then provide tuning recommendations.
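Outside the console, the same snapshot-driven analysis can be triggered manually with the DBMS_ADDM package (available in Oracle Database 11g). This is an illustrative sketch: the task name is hypothetical and the snapshot IDs are placeholders you would replace with real values from DBA_HIST_SNAPSHOT.

```sql
-- List recent AWR snapshots to choose an analysis window.
SELECT snap_id, begin_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id DESC;

-- Run ADDM across a pair of snapshots, then print its findings.
DECLARE
  l_task VARCHAR2(100) := 'manual_addm';   -- hypothetical task name
BEGIN
  DBMS_ADDM.ANALYZE_INST(l_task, 101, 102); -- placeholder begin/end snapshot IDs
END;
/
SELECT DBMS_ADDM.GET_REPORT('manual_addm') FROM dual;
```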

Another method for investigating and diagnosing performance problems consists of observing which of the wait classes beside the Average Active Sessions chart appears to be consuming an excessive amount of time, shown as a spike above the Maximum CPU line, then drilling down for more information. This provides a way for you to visualize the data for which ADDM has made its recommendations.

Either method assists you in diagnosing and resolving problems. The first automated method produces textual findings, whereas the second interactive method produces graphical findings.

Figure 6-2 Database Performance Page

Enterprise Manager Database Performance page
Description of "Figure 6-2 Database Performance Page"

The charts on the Database Performance page display current and recent metric information on a common time axis that enables visual correlation of metrics. These charts provide context-sensitive drill-downs where you can find additional details for rapid problem diagnosis.

Table 6-1 Performance Page Charts


Host: Runnable Processes

This chart provides an overview of overall CPU usage. The Non-Database Host CPU value represents all other CPU work not done by this instance. The Instance Background CPU value represents Oracle processes that you cannot tune. The Instance Foreground CPU value represents any active work, such as database queries, that is tunable.

For information about the associated checkboxes that enhance the chart, see the online help for this page.

Average Active Sessions

This chart is the centerpiece of Oracle performance monitoring, and shows potential problems inside the database. It displays a profile of the amount of time sessions are either working or waiting to work in the database instance.

Categories, called wait classes, show how much of the database is waiting for a resource, such as CPU or disk I/O. The chart displays the load on the instance and identifies bottlenecks in performance.

For information about this chart, see the online help for this page.

Throughput charts

These charts show any contention that appears in the Average Active Sessions chart, and show how much work the database is performing for the user.

I/O charts

These charts show the following information:

  • Latency for Synchronous Single Block Reads

    Most systems are performing satisfactorily if this chart shows that the latency is less than 10 msec.

  • Function Charts

    Show components such as RMAN, direct reads, and direct writes.

  • I/O Type Charts

    Show components such as small reads, small writes, large reads, and large writes.

Parallel Execution charts

These charts show system metrics related to parallel queries, including parallel queries that were waiting for a particular wait event that accounted for the highest percentages of sampled session activity.

Services chart

Shows the top services waiting for the corresponding wait event during the time period shown. Only active services are shown.

The Additional Monitoring Links section enables you to access several related pages. Among those, the following pages have real-time diagnostic capabilities:

  • Top Activity

  • Top Consumers

  • Instance Activity

  • Historical SQL (appears only when View Data is set to Historical)

See Also:

"Performance Page" in the Enterprise Manager online help

Diagnosing Problems Interactively

You can manually diagnose problems using an interactive method by first investigating a wait class that you think may be problematic, then drilling down from there for either SQL details or session details. You can alternatively investigate all wait classes in one location and then drill down from there. The following sections explain these methods.

Investigating a Wait Class

Wait class drill-downs enable you to investigate a specific wait class where it appears that time is accumulating. You can then determine whether the problem is caused by one or several SQL statements that are accumulating an unusual amount of time, or whether the problem is caused by one or several sessions.

The wait class drill-downs are called Active Sessions Waiting pages. The pages are sourced from the Active Session History (ASH), which samples session activity every second. It continuously records what is happening, such as which sessions are using CPU and which sessions are waiting on I/O.
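Because ASH samples are also exposed through the V$ACTIVE_SESSION_HISTORY view, a rough version of this wait-class breakdown can be computed directly. This is an illustrative query, not what the console itself executes:

```sql
-- Approximate average active sessions per wait class over the last 5 minutes.
-- Each ASH sample represents roughly one second of session activity.
SELECT CASE WHEN session_state = 'ON CPU' THEN 'CPU' ELSE wait_class END
         AS wait_class,
       ROUND(COUNT(*) / 300, 2) AS avg_active_sessions
  FROM v$active_session_history
 WHERE sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
 GROUP BY CASE WHEN session_state = 'ON CPU' THEN 'CPU' ELSE wait_class END
 ORDER BY avg_active_sessions DESC;
```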

Select a wait class link beside the Average Active Sessions chart to obtain detailed information for the wait class. For instance, if you click User I/O, the Active Sessions Waiting: User I/O page appears, as shown in Figure 6-3.

Figure 6-3 Active Sessions Waiting Page

Enterprise Manager Active Sessions Waiting page
Description of "Figure 6-3 Active Sessions Waiting Page"

By default, the largest consumers of resources appear at the top of the detail tables. Look for skewed activity for either Top SQL or Top Sessions. If an excessive accumulation of activity appears to originate from an SQL source, you can click its associated SQL ID to go to the SQL Details page, which shows the SQL statement and activity. If excessive accumulation appears to originate from a session source, you can click its associated Session ID to go to the Session Details page, where you can abort the session if needed. You can also view the wait events associated with the session.

See Also:

"Active Sessions Waiting Page" in the Enterprise Manager online help
Viewing SQL Details

The SQL Details page, shown in Figure 6-4, displays the SQL statement you select in the Active Sessions Waiting page. You can also:

  • Examine the activity of this SQL statement over time.

  • View SQL-level statistics.

  • Investigate the SQL plan.

  • Access any prior Tuning Advisor run.

  • Schedule a Tuning Advisor run.

Figure 6-4 SQL Details Page

Enterprise Manager SQL Details page

See Also:

"SQL Details Page" in the Enterprise Manager online help
Viewing Session Details

The Session Details page, shown in Figure 6-5, displays the wait events associated with the session you selected in the Active Sessions Waiting page. You can also:

  • View the current values for metrics associated with the current session.

  • View the list of currently open cursors in the selected session, including their hash value and SQL text.

  • View sessions that are blocking other sessions.

Figure 6-5 Session Details Page

Enterprise Manager Session Details page.

See Also:

"Session Details Page" in the Enterprise Manager online help

Viewing Top Activity

The Top Activity page, shown in Figure 6-6, is essentially the sum of all wait class drill-downs. The interface format is the same as the Active Sessions Waiting page, but rather than showing the average active sessions for a particular wait class, the Top Activity page shows the average active sessions for all wait classes. The page also shows the Top SQL and Top Sessions consumers for all wait classes. As with the Active Sessions Waiting pages, look for skewed activity for either Top SQL or Top Sessions.
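A comparable top-SQL ranking can be derived from the same ASH samples. As an illustrative sketch (the time window is arbitrary):

```sql
-- Top 10 SQL statements by sampled activity in the last 5 minutes.
SELECT *
  FROM (SELECT sql_id, COUNT(*) AS samples
          FROM v$active_session_history
         WHERE sql_id IS NOT NULL
           AND sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
         GROUP BY sql_id
         ORDER BY COUNT(*) DESC)
 WHERE ROWNUM <= 10;
```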

Figure 6-6 Top Activity Page

Enterprise Manager Top Activity page
Description of "Figure 6-6 Top Activity Page"

See Also:

"Top Activity Page" in the Enterprise Manager online help

Monitoring Real-time SQL

The real-time SQL monitoring feature of Oracle Database enables you to monitor the performance of SQL statements while they are executing. By default, SQL monitoring starts automatically when a SQL statement executes in parallel, or when it has consumed at least 5 seconds of CPU or I/O time in a single execution.

Each row in the table in the Monitored SQL Executions page, shown in Figure 6-7, represents overall information and statistics for one execution of a SQL statement that Oracle has monitored. This table shows only the last few monitored executions, because the database server reclaims the space occupied by old monitoring data to store the monitoring data for new executions (that is, monitoring data is recycled).

Figure 6-7 Monitored SQL Executions Page

Enterprise Manager Monitored SQL Executions page

To view detailed information about the execution of a SQL statement beyond what this page provides, you can either click the Status icon or right-click any row in the table, and then select the Monitored SQL Execution Details page, shown in Figure 6-8. You can click the View Report button to generate an active SQL Monitoring Report that provides the SQL text, detailed global information, and global statistics.
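The same monitoring data is also available in SQL through the V$SQL_MONITOR view and the DBMS_SQLTUNE.REPORT_SQL_MONITOR function. The sql_id value below is a placeholder:

```sql
-- List recently monitored executions.
SELECT sql_id, status, username, elapsed_time
  FROM v$sql_monitor
 ORDER BY sql_exec_start DESC;

-- Generate a text report for one monitored statement (placeholder sql_id).
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => 'abc123xyz', type => 'TEXT')
  FROM dual;
```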


For more information, see "Real-time SQL Monitoring" in the Oracle Database Performance Tuning Guide.

Figure 6-8 Monitored SQL Execution Details Page

Enterprise Manager Monitored SQL Execution Details page

Diagnosing Problems Automatically

As mentioned in Assessing Database Performance, you can use the Automatic Database Diagnostic Monitor (ADDM) to automatically investigate and diagnose performance problems from either the Database Home page or the Database Performance page. Using regularly scheduled snapshots of the database activity, ADDM identifies the most resource-intensive components or operations, and determines whether or not these components or operations are acting as performance bottlenecks. If one or more problems have occurred, ADDM diagnoses these potential problems and provides advice, which may recommend that you run an advisor or change your database configuration.

You can view ADDM findings over a predetermined period of recent time or at the present time. You can view ADDM findings for the most recent time interval by clicking the ADDM Findings link in the Diagnostic Summary section of the Database Home page. You can also view ADDM findings for this interval by clicking the small ADDM icon below the Average Active Sessions chart in the Database Performance page. Either option displays the Automatic Database Diagnostic Monitor page, shown in Figure 6-9. To determine ADDM findings at the present time, you can click the Run ADDM button on the Database Performance page.

By default, the database takes snapshots at 60-minute intervals. You can use the Automatic Workload Repository to change the snapshot interval, ranging from 10 minutes to 2 hours.
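The same settings can be changed from SQL*Plus with the DBMS_WORKLOAD_REPOSITORY package; the values below are examples only:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,         -- minutes between snapshots
    retention => 8 * 1440);  -- retention window in minutes (8 days here)
END;
/
```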

Figure 6-9 Automatic Database Diagnostic Monitor (ADDM) Page

Automatic Database Diagnostic Monitor (ADDM) Page

See Also:

"Performance Finding Details Page" in the Enterprise Manager online help

Using Additional Diagnostic Pages

Besides the primary diagnostic pages discussed above, other important secondary pages can assist you in diagnosing and correcting performance problems. The following sections discuss these diagnostic pages:

  • Top Consumers

  • Instance Activity

  • Historical SQL

Top Consumers

The Top Consumers link provides global summary information for the top database consumers of system resources. You can access detailed metrics data for a specific top consumer, such as sessions, services, modules, and clients. This enables you to pinpoint the most problematic areas on which to focus your database tuning efforts. Figure 6-10 shows the Top Modules page of Top Consumers, where you can perform tasks such as enabling and disabling aggregation and SQL traces for the consumer. You can use an SQL trace to trace statistics of SQL statements, such as CPU time, elapsed time, and the Explain Plan.

Figure 6-10 Top Consumers Top Modules Page

Top Consumers Top Modules page

See Also:

"Top Consumers Page" in the Enterprise Manager online help

Instance Activity

The Instance Activity link displays database activity for specific metric groups, such as cursors, sessions, and transactions (Figure 6-11). For instance, the Cursors metric group displays information about opened cursors and session cursors, along with authentications and parse counts. The legend links below the chart in the graphic view, or the name links in the table view, enable you to access the Top Sessions page for more detailed information.

Figure 6-11 Instance Activity Page

Enterprise Manager Instance Activity page

See Also:

"Instance Activity Page" in the Enterprise Manager online help

Historical SQL

The Historical SQL (AWR) link displays statements stored in the Automatic Workload Repository (AWR) for a 24-hour period. This link is available in the Additional Monitoring Links section when you select the Historical data view (Figure 6-12).

The table at the bottom of the page shows an analysis of all SQL statements for performance and resource consumption. You can select a statement link to see SQL details (statistics, activity, SQL plan, and tuning information) for the statement. You can also run the SQL Tuning Advisor to receive recommendations for one or more statements.

Figure 6-12 Historical SQL Page

Enterprise Manager Historical SQL page

See Also:

"Historical SQL (AWR) Page" in the Enterprise Manager online help

Using Other Performance Tools

Enterprise Manager offers several tools that provide additional assistance ranging from viewing the relationships between system components to collecting system statistics when database performance is degraded. The following sections discuss these tools:

  • Topology Viewer

  • AWR Baselines

  • Memory Access Mode

  • Hang Analysis

Topology Viewer

Enterprise Manager provides a Topology Viewer for several applications. The Topology Viewer allows you to view the relationships between components, nodes, or objects within different Oracle applications. You can zoom, pan, see selection details and summary information, and evaluate aggregate components. Individually distinct icons are used for each object type, and standardized visual indicators are used across all applications.

The Topology Viewer is available for the following database applications:

  • Scheduler

  • SQL Details

  • SQL Explain Plans

  • Oracle Real Application Clusters

  • Cluster Databases

Figure 6-13 shows the Topology Viewer for the SQL Details Plan page. The Topology Viewer provides a graphical representation of your SQL steps as modeled in Enterprise Manager.

Figure 6-13 Topology for SQL Details Plan

Enterprise Manager SQL Details Topology page.
Description of "Figure 6-13 Topology for SQL Details Plan"

AWR Baselines

A baseline is a named period of time associated with a target and used as a reference for evaluating target performance (Figure 6-14). Baselines are an effective way to diagnose performance problems. AWR supports the capture of baseline data by enabling you to specify and preserve a pair of snapshots as a baseline.
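A baseline can also be created directly with DBMS_WORKLOAD_REPOSITORY; the snapshot IDs and baseline name below are illustrative:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 101,              -- illustrative snapshot pair
    end_snap_id   => 105,
    baseline_name => 'monday_peak');   -- hypothetical baseline name
END;
/
```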

Figure 6-14 AWR Baselines Page

AWR Baselines page

See Also:

"AWR Baselines Page" in the Enterprise Manager online help

Memory Access Mode

When the database server experiences performance issues, diagnostic queries could further impact the system. Memory Access Mode assists you in diagnosing performance-related problems by collecting system statistics even when the database is either slow or hung.

Instead of relying on the normal SQL engine, Memory Access Mode retrieves data by reading tables directly from the System Global Area (SGA). The data collection is quick and does not further impact a system that is already slow. Potentially, this allows sub-second sampling of performance metrics without placing any additional load on the system.

You can disable the standard SQL Access Mode and switch to Memory Access Mode by clicking the Monitor in Memory Access Mode link in the Related Links section.

See Also:

"Database Performance Page" in the Enterprise Manager online help

Hang Analysis

Hang Analysis enables you to diagnose locking issues that can slow down a system or cause it to hang. Normally, if a system is slow or hung, diagnostic queries are also either extremely slow or never return a result. This utility instead bypasses the typical query engine and uses the Oradebug API to return results quickly, even on systems that appear to be hung.

The Hang Analysis page shows a visual map of the sessions that are either blocking or are blocked. A tree view of the sessions is displayed, and the problem session that is blocking other sessions is located at the root of the tree. Each session is shown color-coded, signifying how long the session has been blocked. Clicking on the session box brings up another page that shows session details. Using information from this page, you can cancel the problem session and return the system to its normal state.
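The blocking tree shown on this page corresponds roughly to the BLOCKING_SESSION column of V$SESSION. A simplified query (not the low-level mechanism the page itself uses) might look like:

```sql
-- Sessions involved in a blocking chain, indented under their blocker.
SELECT LPAD(' ', 2 * (LEVEL - 1)) || sid AS session_tree,
       blocking_session, event, seconds_in_wait
  FROM v$session
 CONNECT BY PRIOR sid = blocking_session
 START WITH blocking_session IS NULL
        AND sid IN (SELECT blocking_session
                      FROM v$session
                     WHERE blocking_session IS NOT NULL);
-- A problem session can then be terminated with:
--   ALTER SYSTEM KILL SESSION 'sid,serial#';
```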

You can access the Hang Analysis page by clicking the Hang Analysis link in the Additional Monitoring Links section.

See Also:

"Hang Analysis Page" in the Enterprise Manager online help

Administering Databases

Oracle Enterprise Manager helps keep your Oracle Databases available and running efficiently, and assists database administrators with everyday tasks. Specifically, it provides a graphical user interface for managing database storage structures and schemas. As with database monitoring, administration of the Oracle Database begins with the Oracle Database Home page. From this page, you can display an overview of your database properties and performance. You can also use the Administration section of the page to perform common administration tasks such as the following:

  • Allocating system storage and planning future storage requirements for the database system

  • Creating and managing primary database storage structures (tablespaces)

  • Creating and managing primary objects (tables, views, indexes)

  • Enrolling users and maintaining system security by controlling and monitoring user access to the database

  • Backing up and restoring the database

Just as Enterprise Manager monitoring identifies problem areas in your database and database groups, you can administer your database using the Enterprise Manager administration tools. The administration tools allow you to manage database objects and initiate database operations inside an Oracle Database. The following sections provide an overview of the database administrative features available to you in Enterprise Manager.

Managing Storage Objects

As an administrator, you can use the administration tools in Oracle Enterprise Manager to optimize database performance. Using these tools, you can manage storage structures such as control files, tablespaces, datafiles, and archive logs. In addition to viewing, editing, and deleting these structures, you can also perform other functions, such as making tablespaces locally managed, displaying the dependencies of a datafile, or backing up a control file to a trace file.

Using Database Configuration Features

Oracle Enterprise Manager incorporates many features that help you manage your database configuration. For example, you can use Enterprise Manager to manage the memory sizes of the System Global Area and Program Global Area of your system. You can also use the Undo Management feature to explicitly specify the amount of undo information to retain, ensuring that undo information is not overwritten.

You can create or edit initialization parameters for the current database, setting these parameters to specific values to initialize many of the memory and process settings of an Oracle Database instance. You can also display a list of database features, showing how often each feature is used in the operation of the database. Support groups and other organizations can use this information to understand how the system is being used and to help apportion resources as necessary.
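For example, a memory parameter can be set and then verified with ordinary SQL; the value below is illustrative:

```sql
-- Set a memory parameter in both the running instance and the spfile.
ALTER SYSTEM SET sga_target = 800M SCOPE = BOTH;

-- Confirm the new setting.
SELECT name, value FROM v$parameter WHERE name = 'sga_target';
```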

Using Automatic Storage Management

Databases can use Automatic Storage Management (ASM) to automate and simplify the layout of datafiles, control files, and log files. You can use Enterprise Manager to migrate an existing database to ASM. If the database is already using ASM, you can use Enterprise Manager to create data files on disk groups.

Automatic Storage Management (ASM) simplifies database administration by letting you manage a small number of disk groups instead of thousands of individual database files. Database files are automatically distributed across all available disks, and database storage is rebalanced whenever the storage configuration changes. ASM internally creates, deletes, and manages files as needed, using the storage space from the disk groups. ASM also includes storage reliability features, such as mirroring and parity protection, commonly found in logical volume managers. These features are available to a single Oracle database or to Real Application Clusters (RAC).
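Disk groups are created with ordinary SQL run against the ASM instance; the device paths below are illustrative:

```sql
-- Create a normal-redundancy (two-way mirrored) disk group.
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/sdb1', '/dev/sdc1';   -- illustrative device paths
```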

An ASM instance is a special Oracle instance that coordinates ASM activities and provides services to Oracle databases on a node. On a single node, there is typically one ASM instance, which manages all disk groups. In a RAC environment, there is typically one ASM instance on each node, which manages all disk groups for its node in a coordinated manner.

Automatic Storage Management files can be configured for use as general storage containers or volume devices. The Oracle ASM Cluster File System (Oracle ACFS) is a general-purpose file system that supports customer application data files, both database and non-database, including those associated with an Oracle home. This support applies to both single-host and cluster configurations.

See Also:

"Administering Oracle ASM with Oracle Enterprise Manager" in the Oracle Database Storage Administrator's Guide.

Converting Single Instances to Oracle Real Application Clusters

Oracle Real Application Clusters (RAC) provides a high-availability database environment spanning multiple hosts. Each cluster can comprise multiple cluster databases, each of which consists of multiple cluster database instances. A cluster database is available as long as one of its instances is available. You can use Enterprise Manager to asynchronously convert a single-instance database to an Oracle RAC database.

Converting to a Locally-Managed Tablespace

An added feature allows you to convert a dictionary-managed tablespace to a locally-managed tablespace, thereby simplifying space allocation, improving performance, and reducing reliance on the data dictionary.
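Outside Enterprise Manager, the same conversion can be performed with the DBMS_SPACE_ADMIN package; the tablespace name is illustrative:

```sql
-- Convert a dictionary-managed tablespace to local extent management.
BEGIN
  DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');  -- illustrative name
END;
/
```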

Controlling Resources with Resource Manager

The Database Resource Manager controls the distribution of resources among various sessions by controlling the execution schedule inside the database. By controlling which sessions use more or less CPU, the Database Resource Manager can ensure that resource distribution matches the plan directives and, hence, the business objectives. You can use the Database Resource Manager to automatically assign sessions to specific consumer groups by setting up mappings between session attributes and consumer groups. You can also map consumer groups in specified categories to users, client programs, modules, or services.

Resource consumer groups let you group user sessions together by resource requirements. Resource consumer groups are different from user roles; one database user can have different sessions assigned to different resource consumer groups. You can then use a resource plan to specify how the resources are to be distributed among various resource consumer groups.

Resource plans specify the resource consumer groups belonging to the plan and contain directives for how resources are to be allocated among these groups. Plan information is stored in tables in the data dictionary. Several views are available for viewing plan data. In addition to containing resource consumer groups, a plan can contain subplans. Use Enterprise Manager to manage all aspects of the resource plan.
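As a sketch, the equivalent objects can be created with the DBMS_RESOURCE_MANAGER package. The group name, plan name, and percentages are illustrative; note that every plan must also cover the built-in OTHER_GROUPS group:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORTING',
                                              'Ad hoc reporting sessions');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME', 'Business-hours resource plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME', group_or_subplan => 'REPORTING',
    comment => 'Cap reporting work', mgmt_p1 => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else', mgmt_p1 => 80);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```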

Tracking Statistics to Improve Database Performance

The Workload Repository provides you with a mechanism for gathering database statistics for specific time intervals. You can use the Optimizer Statistics feature of Enterprise Manager to simplify the management of Optimizer Statistics operations such as gathering, restoring, deleting, locking, and unlocking statistics. Use these statistics to improve the performance of your SQL statements.
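The underlying operations map to the DBMS_STATS package; the HR.EMPLOYEES sample-schema table is used here for illustration:

```sql
BEGIN
  -- Gather, then lock, optimizer statistics for one table.
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
  DBMS_STATS.LOCK_TABLE_STATS('HR', 'EMPLOYEES');
END;
/
```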

Using Oracle Scheduler

Oracle Scheduler enables database administrators and application developers to control when and where various tasks occur in the database environment. Using the Oracle Scheduler in Enterprise Manager helps to improve the management and planning of these tasks. By separating a task into its component parts such as time, location, and database object, the Oracle Scheduler provides an easier way to manage your database environment. Database administrators can schedule and monitor recurring database maintenance jobs, such as backups or nightly data warehousing loads and extracts, and they can schedule job execution based on time or events.

With Enterprise Manager, you can enable and disable Scheduler jobs, alter the settings for existing jobs, start or stop current jobs, and view Scheduler information.
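These operations use the DBMS_SCHEDULER package under the covers. The job name and stored procedure below are hypothetical:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_LOAD',           -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'LOAD_PKG.RUN_NIGHTLY',   -- hypothetical procedure
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',    -- every day at 2:00 a.m.
    enabled         => TRUE);
END;
/
```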

Working With Database Schemas

A schema is a collection of database objects: logical structures that directly refer to the data in the database. Schema objects include structures such as tables, views, and indexes. You can create and manage these schema objects using the tools available in Oracle Enterprise Manager.

Managing Database Objects

Oracle Enterprise Manager provides a comprehensive set of tools that allows you to manage all aspects of database objects such as tables, indexes, and views. You can use the tools available in Enterprise Manager for fundamental tasks such as creating, editing, and viewing object properties. You can also use Enterprise Manager for more comprehensive tasks, such as running the Segment Advisor to evaluate a table for block and space usage and to determine whether space can be saved by shrinking highly fragmented segments. The space gained by implementing these recommendations is returned to the tablespace.
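Accepting a shrink recommendation corresponds to the SQL below; row movement must be enabled first, and the table name is illustrative:

```sql
ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
ALTER TABLE hr.employees SHRINK SPACE CASCADE;  -- CASCADE also shrinks dependent segments
```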

Indexes are optional structures associated with tables that can be created to increase the performance of data retrieval. When you manage indexes in Enterprise Manager, you can perform functions such as shrinking segments to compact segments and free the recovered space to the current tablespace. Alternatively, you can also eliminate space problems by reorganizing space usage while changing the storage settings and location of the index.

Views are customized presentations of data in one or more tables or other views. In addition to creating, deleting, and managing views, you can also view the objects that are dependencies of the view. The Dependencies table shows you the Object Name and the Object Type that is dependent on the current view. Conversely, Enterprise Manager can also show you the objects on which the current view is dependent.

In general, use the Action menu on the Search page or Property page of any database object to see a list of the available functions you can perform on that object.

Enterprise Manager allows you to manage program structures, such as packages, package bodies, functions, and triggers, in the same way. In addition to creating and viewing these elements, you can, for example, compile Java sources or generate the data definition language (DDL) code for a specified object such as a package. For more information about the functions available for a specific object, refer to the online Help.

XML Database Features in Oracle Enterprise Manager

XML (eXtensible Markup Language) is the standard way to identify and describe data on the Web. Oracle XML DB treats XML as a native datatype in the database, letting you store, query, update, and transform XML data while accessing it with SQL.

Oracle XML DB offers a number of easy ways to create XML documents from relational tables. The result of any SQL query can be automatically converted into an XML document. Oracle also includes a set of utilities, available in Java and C++, to simplify the task of creating XML documents.
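For example, the SQL/XML functions can turn the result of an ordinary query into XML. The following sketch (table and column names follow the Oracle sample HR schema) generates one XML element per employee row:

```sql
SELECT XMLELEMENT("Employee",
         XMLFOREST(e.employee_id AS "Id",
                   e.last_name   AS "Name",
                   e.salary      AS "Salary")) AS emp_xml
FROM   hr.employees e
WHERE  e.department_id = 10;
```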

Oracle Enterprise Manager helps you perform the following tasks related to Oracle XML DB:

  • Configure Oracle XML DB — View or edit parameters for the Oracle XML DB configuration file, /xdbconfig.xml.

  • Search, create, edit, and undelete Oracle XML DB Repository resources and their associated access control lists (ACLs).

  • Search, create, edit, and delete XMLType tables and views.

  • Search, create, register, and delete XML schemas.

  • Create function-based indexes based on XPath expressions.
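As a sketch of the last two tasks (the table name and XPath expression are hypothetical), an XMLType table and a function-based index over an XPath expression might be created as follows:

```sql
-- An XMLType table that stores XML documents natively
CREATE TABLE po_xml_tab OF XMLTYPE;

-- A function-based index on an XPath expression into the documents
CREATE INDEX po_ref_ix ON po_xml_tab x
  (EXTRACTVALUE(VALUE(x), '/PurchaseOrder/Reference'));
```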


Managing Materialized Views

Materialized views are schema objects that can be used to summarize, compute, replicate, and distribute data. They are suitable in various computing environments such as data warehousing, decision support, and distributed or mobile computing. Enterprise Manager lets you create and manage materialized views and provides additional tools for performing specific actions on a view. The Explain Materialized View feature helps you determine what is possible with a materialized view by indicating whether the view is fast refreshable, what types of query rewrite it supports, and whether PCT (partition change tracking) refresh is possible.

Oracle maintains the data in materialized views by refreshing them after changes are made to their master tables. The refresh method can be incremental (fast refresh) or complete. For materialized views that use the fast refresh method, a materialized view log keeps a record of changes to the master tables. A complete refresh occurs when the materialized view is initially defined, unless it references a prebuilt table, and a complete refresh may be requested at any time during the life of the materialized view. Because the complete refresh involves reading the detail table to compute the results for the materialized view, this can be a very time-consuming process, especially if very large amounts of data need to be read and processed.
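A minimal sketch of a fast-refreshable materialized view and the materialized view log it requires (table and column names are hypothetical):

```sql
-- The log records changes to the master table for incremental refresh
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount_sold)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_by_prod_mv
  BUILD IMMEDIATE           -- complete refresh at creation time
  REFRESH FAST ON DEMAND    -- incremental refreshes afterward
  ENABLE QUERY REWRITE
AS
  SELECT prod_id,
         SUM(amount_sold)   AS total_sold,
         COUNT(amount_sold) AS cnt_sold,  -- aggregate counts needed
         COUNT(*)           AS cnt        -- for fast refresh
  FROM   sales
  GROUP  BY prod_id;
```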

You can use Enterprise Manager to advise you when a query can be rewritten. You can then take the action required to make the query rewrite possible. Query rewrite transforms a SQL statement expressed in terms of tables or views into a statement that accesses one or more materialized views defined on the detail tables.

Materialized views can be refreshed either on demand or at regular time intervals. You can use the Action menu in Enterprise Manager to manually refresh a materialized view.
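From SQL, an on-demand refresh is a call to the DBMS_MVIEW package; the materialized view name here is hypothetical:

```sql
BEGIN
  -- method => 'F' requests a fast (incremental) refresh;
  -- 'C' forces a complete refresh
  DBMS_MVIEW.REFRESH('SALES_MV', method => 'F');
END;
/
```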

About Change Management

A dictionary baseline is an object containing a set of database definitions captured at a certain time. Baselines are stored internally in the Enterprise Manager repository, and are in a form that other Change Management applications can use. You can use Enterprise Manager to capture the database object definitions at a particular point in time and create reusable baseline scope specifications. By capturing a dictionary baseline, you can compare different database objects at different points in time and track changes applied to the database objects.

A dictionary comparison identifies differences in database object definitions between two databases, between a database and a baseline, or between two schemas within a single database or baseline. With Enterprise Manager, you can compare, view, and track differences between two sets of database object definitions captured at different points in time. The comparison result displays identical objects, objects or attributes that differ, and objects present in only the left or right source.

A dictionary synchronization synchronizes differences in database object definitions between two databases or a database and a baseline.

Synchronizations are generated using synchronization specifications. For synchronizations, the scope specification does not include the names of individual objects; it can specify only the object types and the schemas to be included or excluded. You can additionally supply a prefix to limit the selection to objects whose names start with that prefix.

Dictionary synchronizations synchronize differences in any attribute value between objects of any type. Use synchronization specifications to create multiple versions of a synchronization. Each version has a unique version number and a synchronization date. Use these versions to associate synchronizations of databases or schemas made over time.

For more information about these features, see Change Management for Databases.

Using Advisors in Oracle Enterprise Manager

Advisors are procedures that you can invoke directly, or that Enterprise Manager can invoke internally, to analyze a designated object. An advisor can report on a variety of database aspects and describe a recommended action for each condition worthy of user intervention. The advisor might report that the condition can be corrected by an automated task that it provides.

Some advisors also provide what-if analysis for specific situations. For example, the Undo Advisor provides an analysis of the impact of changing the retention period for undo records on the size of the undo tablespace. Additionally, the Memory Advisor graphically displays the impact on performance of changing the size of a component of the SGA.

You can invoke an advisor from the Advisor Central Home page, which is displayed when you click the Advisor Central link under the Related Links heading on the Database Home page, or on other pages where it is listed. You can also invoke advisors in the context of recommendations from alerts. Advisors are powerful tools for tuning your database. In general, advisors produce more comprehensive recommendations than alerts, because alert generation is intended to be low cost and have minimal impact on performance. Advisors, on the other hand, are often user-invoked, so they can consume more resources and perform more detailed analysis. Their automated analysis can provide more results than you could normally produce manually in the time afforded as part of regular operations. This analysis, along with the what-if capability of some advisors, provides vital tuning information that cannot be procured from any other source.

Managing Database Security

The following sections discuss various aspects of database security.


For other topics not discussed in the following sections, see Overview of Database Security in Oracle Database Concepts.

Managing Users, Roles, Profiles, and Audit Settings

Oracle includes security features that control how a database is accessed and used. Privileges and roles control user access to data and the types of SQL statements that can be executed. There are two types of privileges: system privileges and object privileges. A system privilege is an Oracle-defined privilege that enables you to perform a specific database operation, such as creating a table or altering a user. An object privilege is an Oracle-defined privilege that controls access to a specific object.

Users, usually administrators, can create a role to group together privileges and other roles. This facilitates granting multiple privileges and roles to users.

Privileges and roles can be granted to other users by users who possess the authority to do so through the ADMIN OPTION (for system privileges and roles) or the GRANT OPTION (for object privileges). Oracle Enterprise Manager allows you to create and manage users, roles, and profiles. You can also expire passwords and lock or unlock accounts by applying those actions against one or more users. When managing roles, you can display all users and roles assigned to a specified role by using the Show Grantees function.
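The statements behind these operations are standard SQL; the following sketch uses hypothetical user and role names:

```sql
-- Group privileges into a role
CREATE ROLE app_developer;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO app_developer;

-- An object privilege; WITH GRANT OPTION would let the grantee pass it on
GRANT SELECT ON hr.employees TO app_developer;

-- Grant the role; WITH ADMIN OPTION lets the grantee grant it further
GRANT app_developer TO jsmith WITH ADMIN OPTION;

-- Account maintenance actions also available from Enterprise Manager
ALTER USER jsmith PASSWORD EXPIRE;
ALTER USER jsmith ACCOUNT LOCK;
```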

Auditing is the monitoring and recording of selected user database actions. It can be based on individual actions, such as the type of SQL statement run, or on combinations of factors such as user name, application, and time. Security policies can trigger auditing when specified elements in an Oracle Database are accessed or altered, including content. You can set up and adjust audit settings easily within the Enterprise Manager interface. With Enterprise Manager, you can view the database audit configuration and manage audited objects, privileges, and statements. You can also view the contents of the audit trail. Out of the box, Enterprise Manager also provides audit reports to help you monitor successful and failed login attempts, as well as SYS user operations.
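A sketch of equivalent audit settings and a query against the audit trail (user and object names are hypothetical):

```sql
-- Audit a statement type for one user, with one record per access
AUDIT SELECT TABLE BY jsmith BY ACCESS;

-- Audit failed access attempts against a specific object
AUDIT SELECT, UPDATE ON hr.employees WHENEVER NOT SUCCESSFUL;

-- Inspect failed actions recorded in the audit trail
SELECT username, action_name, timestamp
FROM   dba_audit_trail
WHERE  returncode <> 0;
```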

Transparent Data Encryption

Oracle Advanced Security provides transparent data encryption to support your compliance efforts. Applications do not have to be modified, and existing applications continue to work seamlessly as before. Data is automatically encrypted when it is written to disk and automatically decrypted when the application accesses it. Key management is built-in, eliminating the complex task of creating, managing, and securing encryption keys.

Oracle Advanced Security transparent data encryption enables you to encrypt individual table columns or a tablespace. When a user inserts data into an encrypted column, the database automatically encrypts the data. When users select the column, the data is decrypted. This form of encryption is transparent, provides high performance, and is easy to implement.

Transparent data encryption includes industry-standard encryption algorithms such as the Advanced Encryption Standard (AES) as well as built-in key management.

The Enterprise Manager interface simplifies your use of transparent data encryption, enabling you to manage the wallet and the master encryption key.
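As a sketch (all names are hypothetical, and the wallet must already be open), column and tablespace encryption look like this in SQL:

```sql
-- Encrypt a sensitive column in a new table
CREATE TABLE hr.emp_private (
  employee_id NUMBER,
  ssn         VARCHAR2(11) ENCRYPT USING 'AES256'
);

-- Add encryption to an existing column
ALTER TABLE hr.employees MODIFY (phone_number ENCRYPT);

-- An encrypted tablespace: everything stored in it is encrypted on disk
CREATE TABLESPACE secure_ts
  DATAFILE 'secure_ts01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```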

Oracle Label Security (OLS)

Oracle Label Security (OLS) is a security option that enables you to classify data and control access using security labels, addressing privacy and regulatory compliance requirements. OLS provides multi-level security and row-level access control out of the box, and provides elements that can be used inside Oracle Database Vault.

OLS can do the following:

  • Control access based on data classification, adding a powerful dimension to the access control decision process.

  • Enforce traditional multi-level security (MLS) policies for government and defense applications.

The Enterprise Manager interface enables you to manage OLS Policies, assign policies to tables or schemas, and assign labels to users. You can assign a label to both data and users. When assigned to data, the label can be attached as a hidden column to existing tables, providing transparency to existing SQL. For example, rows that contain highly sensitive data can be labeled HIGHLY SENSITIVE, while rows that are less sensitive can be labeled SENSITIVE, and so on.

When a user attempts to access data, OLS compares the user label with the data label and determines whether access should be granted. Unlike VPD, OLS provides an out-of-the-box security policy and the metadata repository for defining and storing labels.
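An abbreviated sketch of an OLS setup using the SA_* administration packages (all names and level numbers are hypothetical; consult the Oracle Label Security documentation for the full parameter lists):

```sql
BEGIN
  -- Create a policy whose label column is hidden from existing SQL
  SA_SYSDBA.CREATE_POLICY(
    policy_name     => 'hr_ols',
    column_name     => 'ols_label',
    default_options => 'HIDE');

  -- Define sensitivity levels for the policy
  SA_COMPONENTS.CREATE_LEVEL('hr_ols', 20, 'SENS',  'SENSITIVE');
  SA_COMPONENTS.CREATE_LEVEL('hr_ols', 30, 'HSENS', 'HIGHLY SENSITIVE');

  -- Protect a table and authorize a user up to a given level
  SA_POLICY_ADMIN.APPLY_TABLE_POLICY('hr_ols', 'HR', 'EMPLOYEES');
  SA_USER_ADMIN.SET_USER_LABELS('hr_ols', 'JSMITH', 'HSENS');
END;
/
```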

Virtual Private Database (VPD)

Virtual Private Database (VPD) enables you to enforce security at the row and column level. A security policy establishes methods for protecting a database from accidental or malicious destruction of data or damage to the database infrastructure.

VPD is useful when security protections such as privileges and roles are not sufficiently fine-grained. For example, you can allow all users to access the employees table, but create security policies to restrict access to employees in the same department as the user. The Enterprise Manager interface enables you to manage VPD policies and various operations associated with it.

Essentially, the database adds a dynamic WHERE clause to a SQL statement issued against the table, view, or synonym to which an Oracle VPD security policy was applied. The WHERE clause allows only users whose credentials pass the security policy to access the protected data.
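A minimal sketch of a VPD policy (the schema, function, and context names are hypothetical): the policy function returns the predicate that the database appends, and DBMS_RLS.ADD_POLICY attaches it to the table.

```sql
-- Policy function: returns the WHERE-clause predicate to append
CREATE OR REPLACE FUNCTION sec_admin.dept_policy (
  schema_name IN VARCHAR2,
  table_name  IN VARCHAR2)
  RETURN VARCHAR2
AS
BEGIN
  RETURN 'department_id = SYS_CONTEXT(''emp_ctx'', ''dept_id'')';
END;
/

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'emp_dept_policy',
    function_schema => 'SEC_ADMIN',
    policy_function => 'DEPT_POLICY',
    statement_types => 'SELECT');
END;
/
```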

Application Contexts

Application contexts enable you to write applications that draw upon certain aspects of a user's session information. They provide a method of defining, setting, and accessing attributes that an application can use.

An application context is a namespace. Each application context namespace can have multiple attributes (name/value pairs), which are associated with a user session and can be set and accessed multiple times.
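A sketch of a context namespace and the trusted package that sets its attributes (all names are hypothetical):

```sql
-- Only the named package may set attributes in this namespace
CREATE CONTEXT emp_ctx USING sec_admin.emp_ctx_pkg;

CREATE OR REPLACE PACKAGE sec_admin.emp_ctx_pkg AS
  PROCEDURE set_dept(p_dept IN NUMBER);
END;
/

CREATE OR REPLACE PACKAGE BODY sec_admin.emp_ctx_pkg AS
  PROCEDURE set_dept(p_dept IN NUMBER) IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('emp_ctx', 'dept_id', TO_CHAR(p_dept));
  END;
END;
/

-- Any session code can then read the attribute
SELECT SYS_CONTEXT('emp_ctx', 'dept_id') FROM dual;
```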

You can store Globally Initialized Application Context attributes in Oracle Internet Directory and assign them to one or more enterprise users. An enterprise user can retrieve the attributes automatically upon login and then use them to initialize an application context.

Enterprise User Security

Enterprise User Security, an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, enables you to manage database users and authorizations centrally in one location. This dramatically reduces the cost of user provisioning and password resets. Consider this feature for new application development as well as for existing applications.

Enterprise User Security increases security in various ways, such as:

  • Centralizing provisioning and deprovisioning of database users.

  • Centralizing password management and self-service password reset capability.

  • Centralizing management of authorizations using global database roles.

Oracle Database Vault

The Payment Card Industry (PCI) standards, Sarbanes-Oxley (SOX), the EU Privacy Directive, and the Health Insurance Portability and Accountability Act (HIPAA) all require strong internal controls on access, disclosure, or modification of sensitive information that could lead to fraud, identity theft, financial irregularities, and financial penalties.

Oracle Database Vault addresses common regulatory compliance requirements and reduces the risk of insider threats by:

  • Preventing highly privileged users (such as DBAs) from accessing application data.

  • Enforcing separation of duty.

  • Providing controls over who, when, where, and how applications, data, and databases can be accessed.

When you enable Database Vault in your database, Enterprise Manager provides an interface you can use to monitor Database Vault policies and propagate policies to multiple databases.

Masking Sensitive Data for Non-production Use

Data masking (also known as data scrambling or data anonymization) is the process of replacing sensitive information copied from production databases to test or non-production databases with realistic, but scrubbed, data based on masking rules. Data masking is ideal for virtually any situation in which confidential or regulated data needs to be shared with non-production users: internal users such as application developers, or external business partners such as offshore testing companies, suppliers, and customers. These non-production users need access to some of the original data, but do not need to see every column of every table, especially when the information is protected by government regulations.

Data masking allows organizations to generate realistic and fully functional data, with characteristics similar to the original data, to replace sensitive or confidential information. This contrasts with encryption or Virtual Private Database, which merely conceal data that can still be retrieved with the appropriate access or key. With data masking, the original sensitive data cannot be retrieved or accessed.

Names, addresses, phone numbers, and credit card details are examples of data that require protection of the information content from inappropriate visibility. Live production database environments contain valuable and confidential data — access to this information is tightly controlled. However, each production system usually has replicated development copies, and the controls on such test environments are less stringent. This greatly increases the risks that the data might be used inappropriately. Data masking can modify sensitive database records so that they remain usable, but contain no confidential or personally identifiable information. Yet, the masked test data resembles the original in appearance to ensure the integrity of the application.

Security and Regulatory Compliance

Masked data is a sensible precaution from a business security standpoint, because masked test information can help prevent accidental data escapes. In many cases, masked data is a legal obligation. The Enterprise Manager Data Masking Pack can help organizations fulfill legal obligations and comply with global regulatory requirements, such as Sarbanes-Oxley, the California Database Security Breach Notification Act (CA Senate Bill 1386), and the European Union Data Protection Directive.

The legal requirements vary from country to country, but most countries now have regulations of some form to protect the confidentiality and integrity of personal consumer information. For example, in the United States, the Right to Financial Privacy Act of 1978 creates statutory Fourth Amendment protection for financial records, and a host of individual state laws impose similar requirements. Similarly, the U.S. Health Insurance Portability and Accountability Act (HIPAA) created protection of personal medical information.

Roles of Data Masking Users

The following types of users participate in the data masking process for a typical enterprise:

  • Application database administrator or application developer

    This user is knowledgeable about the application and its database objects. This user may add custom database objects or extensions to packaged applications, such as the Oracle E-Business Suite.

  • Information security administrator

    This user defines information security policies, enforces security best practices, and also recommends the data to be hidden and protected.

Format Libraries and Masking Definitions

To mask data, the Data Masking Pack provides two main features:

  • Masking format libraries

    The format library contains a collection of ready-to-use masking formats. The library consists of format routines that you can use for masking. A masking format can either be one that you create, or one from the list of Oracle-supplied default masking formats.

    As a matter of best practice, organizations should create masking formats for all common regulated information so that they can be applied to the sensitive data regardless of which database the sensitive data resides in. This ensures that all sensitive data is consistently masked across the entire organization.

  • Masking definitions

    A masking definition defines a data masking operation to be implemented on one or more tables in a database. Masking definitions associate table columns with the formats to use for masking the data. Through related columns, they also maintain relationships between columns that are not formally declared in the database.

You can create a new masking definition or use an existing definition for a masking operation. To create a masking definition, you specify the table columns to be masked and the format of the masked data. If the columns being masked are involved in unique, primary key, or foreign key constraints, data masking generates the values so that the constraints are not violated. Masking ensures uniqueness per character using decimal arithmetic. For example, a 5-character string generates a maximum of only 99999 unique values. Similarly, a 1-character string generates a maximum of only 9 unique values.

You would typically export masking definitions to files and import them on other systems. This is important when the test and production sites reside on different Oracle Management Systems or reside on entirely different sites.

Data masking is an iterative and evolving process, handled by the security administrator and orchestrated by the database administrator. When you first configure data masking, try the masking definition on a test system, then add more columns to the definition and test it to make sure it functions correctly and does not break any application constraints. During this process, take care to remove all embedded references to the real data while maintaining referential integrity. After data masking is configured to your satisfaction, you can use the existing definition to repeatedly mask after cloning. The masking definition, however, must evolve as new schema changes require new data and columns to be masked.

See Also:

"Creating a Data Masking Definition" in the Enterprise Manager online help as well as the help for each Data Masking page.

Recommended Data Masking Workflow

Figure 6-15 shows that the production database is cloned to a staging region and then masked there. During the masking process, the staging and test areas are tightly controlled like a production site.

After the masking process is complete, you can distribute the database for wide availability. If you need to ship the database to another third-party site, you are required to use the Data Pump Export utility, and then ship the dump file to the remote site. However, if you are retaining the masked data in-house, see the next section.

Figure 6-15 Data Masking Workflow


Data Masking Task Sequence

The task sequence in this section demonstrates the data masking workflow and refers you to additional information elsewhere in this chapter for some of the tasks in the sequence. Before reviewing this sequence, note that there are two options for completing this process:

  • You can clone the production database to a staging area, mask it, then export/import it to another database before delivering it to in-house testers or external customers. This is the most secure approach.

  • You can clone the production database to a mask staging area, then make the staging area the new test region. In this case, you should not grant testers SYSDBA access or access to the database files. Doing so would compromise security. The masked database contains the original data in unused blocks and in the free list. You can only purge this information by exporting/importing the data to another database.

    The following steps guide you through the data masking process. Note the references to other sections for supporting information.

  1. Review the application database and identify the sources of sensitive information.

  2. Define mask formats for the sensitive data. The mask formats may be simple or complex depending on the information security needs of the organization.

    See "Defining New Masking Formats" for more information.

  3. Create a masking definition to associate table columns to these mask formats. Data masking determines the database foreign key relationships and adds foreign key columns to the mask.

    See "Creating a Masking Definition" for more information.

  4. Optionally declare dependent columns that are not defined in the database but are enforced by the applications. Masking honors these additional constraints as well.

    This requires knowledge of the application schema. Consult the application documentation to identify the relationship between the tables and the columns containing sensitive data to ensure complete coverage of your application data.

    See "Adding Dependent Columns" for more information.

  5. Save the masking definition and generate the masking script.

  6. Clone the production database to a staging area, selecting the masking definition to be used after cloning. Note that you can perform cloning using Enterprise Manager, which enables you to add masking to the Enterprise Manager clone workflow. However, if you perform cloning outside of Enterprise Manager, you must initiate masking from Enterprise Manager after the cloning is complete. The cloned database should be controlled with the same privileges as the production system, because it still contains sensitive production data.

    After cloning, be sure to change the passwords as well as update or disable any database links, streams, or references to external data sources. Back up the cloned database or, at a minimum, the tables that contain masked data. This will help you restore the original data if the masking definition needs to be refined further.

    See "Cloning the Production Database" for more information.

  7. Verify that the masked data meets the information security requirements. If it does not, refine the masking definition, restore the altered tables, and reapply the masking definition until the optimal set of masking definitions has been identified.

  8. After masking, test all of your applications, reports, and business processes to ensure they are functional. If everything is working, you can export the masking definition to keep it as a backup.

  9. After masking the staging site, make sure to drop any tables named MGMT_DM_TT before cloning to a test region. These temporary tables contain a mapping between the original sensitive column value and the mask values and are therefore sensitive in nature.

    During masking, Enterprise Manager automatically drops these temporary tables for you with the default "Drop temporary tables created during masking" option. However, you can preserve these temporary tables by unselecting this option. In this case, you are responsible for deleting the temporary tables before cloning to the test region.

  10. After masking is complete, ensure that all tables loaded for use by the substitute column format or table column format are dropped. These tables contain the mask values that table column or substitute formats use. It is recommended that you purge this information for security reasons.

    See "Deterministic Masking" for more information.

  11. Clone the database to a test region or use it as the new test region. When cloning the database to an external or unsecured site, you should use Export or Import, and only supply the data in the database rather than the database files themselves.

  12. Provide the masking definition to the application database administrator to use in masking the database as part of cloning production for testing.

Defining New Masking Formats

A masking definition requires one or more masking formats for any columns included in the masking definition. When adding columns to a masking definition, you can either create masking formats manually or import them from the format library. It is often more efficient to work with masking formats from the format library.

To create a masking format in the format library, do the following:

  1. Navigate to the Databases targets roll-up page. The Databases page appears.

  2. Click the Data Masking Format Library link. The Format Library appears with predefined formats that Oracle Enterprise Manager provides.

  3. Click Create. The Create Format page appears, where you can define a masking format.

    For information on page user controls, see the online help for the Format page.

  4. Provide a required name for the new format, select a format entry type from the Add drop-down list, then click Go.

    A page appears that enables you to provide input for the format entry you have selected. For instance, if you select Array List, the subsequent page enables you to enter a list of values, such as New York, New Jersey, and New Hampshire.

  5. Continue adding additional format entries as needed.

  6. When done, provide an optional user-defined or post-processing function (see "Providing User-defined and Post-processing Functions"), then click OK to save the masking format.

    The Format Library page reappears with your newly created format displayed in the Format Library table.

    For information on page user controls, see the online help for the Format Library and Create Format pages.

Providing User-defined and Post-processing Functions

If desired, you can provide user-defined and post-processing functions on the Create Format page. A User Defined Function choice is available in the Add drop-down list, and a Post Processing Function field is available at the bottom of the page.

  • User-defined functions

    To provide a user-defined function, select User Defined Function from the Add drop-down, then click Go to access the input fields.

    A user-defined function passes in the original value as input and returns a mask value. The data type and uniqueness of the output values must be compatible with the original values; otherwise, a failure occurs when the job runs. A user-defined function is a PL/SQL function that can be invoked in a SELECT statement. Its signature is as follows:

    Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2;

    • rowid is the min (rowid) of the rows that contain the value original_value (the third argument).

    • column_name is the name of the column being masked.

    • original_value is the value being masked.

    That is, it accepts an input string that is the original value and returns the mask value.

    Both input and output values are varchar2. For instance, a user-defined function to mask a number could receive 100 as input, the string representation of the number 100, and return 99, the string representation of the number 99. Values are cast appropriately when inserting to the table. If the value is not castable, masking fails.

  • Post-processing functions

    To provide a post-processing function, enter it in the Post Processing Function field.

    A post-processing function has the same signature as a user-defined function, but passes in the mask value the masking engine generates, and returns the mask value that should be used for masking, as shown in the following example:

    Function post_proc_udf_func (rowid varchar2, column_name varchar2, mask_value varchar2) returns varchar2; 
    • rowid is the min (rowid) of the rows that contain the value mask_value.

    • column_name is the name of the column being masked.

    • mask_value is the mask value generated by the masking engine.
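To illustrate how these two kinds of functions interact, the following Python sketch mimics the masking engine's calling sequence. The actual functions are PL/SQL functions with the signatures shown above; the function bodies and the driver here are illustrative assumptions only.

```python
def udf_func(rowid: str, column_name: str, original_value: str) -> str:
    """User-defined function: receives the original value, returns a mask value.
    Illustrative body: mask a numeric string by subtracting 1 ('100' -> '99')."""
    return str(int(original_value) - 1)

def post_proc_udf_func(rowid: str, column_name: str, mask_value: str) -> str:
    """Post-processing function: receives the generated mask value and returns
    the value actually used for masking. Illustrative body: zero-pad to width 5."""
    return mask_value.rjust(5, "0")

def mask_one_value(rowid: str, column_name: str, original_value: str) -> str:
    # The engine first generates a mask value, then post-processes it.
    masked = udf_func(rowid, column_name, original_value)
    return post_proc_udf_func(rowid, column_name, masked)

print(mask_one_value("AAAx", "SALARY", "100"))  # -> 00099
```

Both values are passed and returned as strings, matching the varchar2 input and output of the PL/SQL signatures.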

Using Masking Format Templates

After you have created at least one format, you can use the format definition as a template in the Create Format page, where you can implement most of the format under a different name and change the entries as needed, rather than having to create a new format from scratch.

To create a new format that is similar to an existing format, select a format on the Format Library page and click Create Like. The masking format you select can either be one you have previously defined yourself, or one from the list of out-of-box masking formats. You can use these generic masking format definitions for different applications.

See "Using Oracle-supplied Predefined Masking Formats" for instructional details about the various Oracle-supplied predefined masking format definitions and how to modify them to suit your needs.

Using Oracle-supplied Predefined Masking Formats

The following sections discuss the various Oracle-supplied format definitions and how to modify them to suit your needs.

Patterns of Format Definitions

All of the format definitions adhere to these typical patterns:

  • Generate a random number or random digits.

  • Perform post-processing on the above-generated value to ensure that the final result is a valid, realistic value.

For example, a valid credit card number must pass Luhn's check; that is, the last digit of any credit card number is a checksum digit computed from the preceding digits. Also, the first few digits indicate the card type (MasterCard, Amex, Visa, and so forth). Consequently, the format definition of a credit card number would be as follows:

  • Generate random and unique 10-digit numbers.

  • Using a post-processing function, transform the values above into proper credit card numbers by adding well-known card type prefixes and computing the last digit.

This format is capable of generating 10 billion unique credit card numbers.
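The checksum step of this pattern can be sketched in Python as follows. The helper names and the "40128" prefix are illustrative assumptions; in the actual format definition this logic lives in the post-processing function.

```python
def luhn_check_digit(partial_number: str) -> str:
    """Compute the Luhn check digit for a number missing its last digit."""
    total = 0
    # Walk right to left; double every second digit starting with the rightmost,
    # because that digit sits second from the right once the check digit is appended.
    for i, ch in enumerate(reversed(partial_number)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_credit_card(random_digits: str, prefix: str) -> str:
    """Prefix a card-type marker onto the random digits, then append the
    Luhn check digit so the result passes standard validation."""
    partial = prefix + random_digits
    return partial + luhn_check_digit(partial)

# "40128" is an illustrative Visa-style prefix: 5 + 10 + 1 digits = 16.
card = make_credit_card("1111111111", "40128")
print(card)  # -> 4012811111111116
```

Because the check digit is derived from the other fifteen digits, every number produced this way passes Luhn validation.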

Category Definitions

The following sections discuss different categories of these definitions. By default, these mask formats are also available in different format styles, such as a hyphen ( - ) format. If needed, you can modify the format style.

Credit Card Numbers

Out of the box, the format library provides many different formats for credit cards. The credit card numbers generated by these formats pass the standard credit card validation tests applied by applications, thereby making them appear to be valid credit card numbers.

Some of the credit card formats you can use are:

  • MasterCard numbers

  • Visa card numbers

  • American Express card numbers

  • Discover Card numbers

  • Any credit card number (numbers belonging to any of the card types)

You may want to use different styles for storing credit card numbers, such as:

  • Pure numbers

  • 'Space' for every four digits

  • 'Hyphen' ( - ) for every four digits, and so forth

To implement the masked values in a certain format style, you can set the DM_CC_FORMAT variable of the DM_FMTLIB package. To install the DM_FMTLIB package, see "Installing the DM_FMTLIB Package" .
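As an illustration of these format styles, the following Python sketch groups a masked number into four-digit blocks. The DM_CC_FORMAT variable name comes from the text above; the grouping logic shown here is an assumption for illustration only.

```python
def apply_style(digits: str, separator: str = "") -> str:
    """Insert a separator after every four digits, mimicking the effect of a
    format-style setting such as DM_CC_FORMAT (hypothetical grouping logic)."""
    groups = [digits[i:i + 4] for i in range(0, len(digits), 4)]
    return separator.join(groups)

n = "4012811111111116"
print(apply_style(n))        # pure numbers: 4012811111111116
print(apply_style(n, " "))   # space every four digits: 4012 8111 1111 1116
print(apply_style(n, "-"))   # hyphen every four digits: 4012-8111-1111-1116
```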

United States Social Security Numbers

Out of the box, you can generate valid U.S. Social Security numbers (SSNs). These SSNs pass the normal application tests of a valid SSN.

You can affect the format style by setting the DM_SSN_FORMAT variable of the DM_FMTLIB package. For example, if you set this variable to '-', the typical social security number would appear as '123-45-6789'.

ISBN Numbers

Using the format library, you can generate either 10-digit or 13-digit ISBN numbers. These numbers adhere to standard ISBN number validation tests. All of these ISBN numbers are random in nature. Similar to other format definitions, you can affect the "style" of the ISBN format by setting values to DM_ISBN_FORMAT.
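The ISBN formats follow the same generate-then-checksum pattern described earlier. As a sketch (illustrative Python, not the library's implementation), an ISBN-13 check digit is computed with alternating weights of 1 and 3 so that the weighted sum of all thirteen digits is a multiple of 10:

```python
def isbn13_check_digit(first_12_digits: str) -> str:
    """ISBN-13 check digit: digits weighted alternately 1 and 3; the check
    digit brings the weighted sum to a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first_12_digits))
    return str((10 - total % 10) % 10)

# "978" is the standard book prefix; the remaining nine digits are arbitrary here.
isbn = "978123456789" + isbn13_check_digit("978123456789")
print(isbn)  # -> 9781234567897
```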

UPC Numbers

Using the format library, you can generate valid UPC numbers. They adhere to standard tests for valid UPC numbers. You can affect the formatting style by setting the DM_UPC_FORMAT value of the DM_FMTLIB package.

Canadian Social Insurance Numbers

Using the format library, you can generate valid Canadian Social Insurance Numbers (SINs). These numbers adhere to standard tests of Canadian SINs. You can affect the formatting style by setting the DM_CN_SIN_FORMAT value of the DM_FMTLIB package.

North American Phone Numbers

Out of the box, the format library provides various possible U.S. and Canadian phone numbers. These are valid, realistic-looking numbers that can pass standard phone number validation tests employed by applications. You can generate the following types of numbers:

  • Any North American phone numbers

  • Any Canadian phone number

  • Any U.S.A. phone number

Installing the DM_FMTLIB Package

The predefined masking formats use functions defined in the DM_FMTLIB package. This package is automatically installed in the DBSNMP schema of your Enterprise Manager repository database. To use the predefined masking formats on a target database (other than the repository database), you must manually install the DM_FMTLIB package on that database. To do this, follow these instructions:

  1. Locate the following scripts in your Enterprise Manager installation:

  2. Copy these scripts to a directory in your target database installation and execute them using SQL*Plus, connected as a user that can create packages in the DBSNMP schema.

After you have done this, you can use the predefined masking formats in your masking definition.

Agent Compatibility for Data Masking

Data Masking supports Oracle Database 9i and later. Enterprise Manager 11.1 Grid Control requires the 11.1 Agent. If you have an Agent version prior to 11.1, you can use it by implementing the following workaround.

Replace the following file...


... with this file:


Supported Data Types

The list of supported data types varies by release.

  • Grid Control 10g Release 5 and Database 11g Release 2 (11.2)

    • Numeric Types

      The following Numeric Types can use Array List, Delete, Fixed Number, Null Value, Post Processing Function, Preserve Original Data, Random Decimal Numbers, Random Numbers, Shuffle, SQL Expression, Substitute, Table Column, Truncate, and User Defined Function formats:

      • NUMBER

      • FLOAT

      • RAW



    • String Types

      The following String Types can use Array List, Delete, Fixed Number, Fixed String, Null Value, Post Processing Function, Preserve Original Data, Random Decimal Numbers, Random Digits, Random Numbers, Random Strings, Shuffle, SQL Expression, Substitute, Substring, Table Column, Truncate, and User Defined Function formats:

      • CHAR

      • NCHAR

      • VARCHAR2

      • NVARCHAR2

    • Date Types

      The following Date Types can use Array List, Delete, Null Value, Post Processing Function, Preserve Original Data, Random Dates, Shuffle, SQL Expression, Substitute, Table Column, Truncate, and User Defined Function formats:

      • DATE


  • Grid Control 11g Release 1

    Besides the support described above for Grid Control 10.2 and Database 11.2, Grid Control 11.1 also supports the following LOB types, which can use the Fixed String, Fixed Number, and Null Value formats:

    • CLOB

    • NCLOB

    • BLOB

Creating a Masking Definition

Before creating a masking definition, be aware of the following advisory information:

  • Make sure the format you select does not violate check constraints and does not break any applications that use the data.

  • For triggers and PL/SQL packages, data masking recompiles the object.

  • Exercise caution when masking partitioned tables, especially if you are masking the partition key. In this circumstance, the row may move to another partition.

  • Data Masking does not support clustered tables, object tables, XML tables, or virtual columns; only relational tables are supported for masking.

  • If objects are layered on top of a table such as views, materialized views, and PL/SQL packages, they are recompiled to be valid.

To create a masking definition, do the following:

  1. Navigate to the Databases targets roll-up page. The Databases page appears.

  2. Click the Data Masking Definitions link under the Related Links section. The Data Masking Definitions page appears, where you can create and schedule new masking definitions and manage existing masking definitions.


    For information on page user controls, see the online help for the Data Masking Definitions page.
  3. Click Create to go to the Create Masking Definition page. A masking definition includes information regarding table columns and the format for each column. You can choose which columns to mask, leaving the remaining columns intact.


    For information on page user controls, see the online help for the Create Masking Definition page.
  4. Specify a required database name. The Database field shows the name of the database target you selected before accessing the Data Masking Definitions page. You can specify any of the available databases. To search the list, click the flashlight icon.

  5. Click Add to go to the Add Columns page, where you can add one or more columns for masking and automatically add foreign key columns. You need to add at least one column in the masking definition.


    For information on page user controls, see the online help for the Add Columns page.
  6. In the Search section, minimally provide a schema name and optionally provide input for the other search fields to filter the results, then click Search.

  7. Either select one or more columns to format later on the Create Masking Definition page, or format them now if the data types of the columns you have selected are identical.


    For information on data types, see "Supported Data Types".
  8. Decide if you want to mask selected columns as a group.

    Select this checkbox if you want to mask more than one column together, rather than separately. When you select two or more columns and later define the format on the Define Group Mask page, the columns appear together, and any choices you make for format type or masking table apply collectively to all of the columns.

    Columns that you want to mask as a group must all be from the same table.

  9. Either click Add to add the column to the masking definition, return to the Create Masking Definition page, and define the format of the column later, or click Define Format and Add to define the format for the column now.

    The Define Format and Add feature can save you significant time. When you select multiple columns to add that have the same data type, you do not need to define the format for each column as you would when you click Add. For instance, if you search for social security numbers (SSN) and this yields 100 SSN columns, you could select them all, then click Define Format and Add to import the SSN format for all of them.

  10. If you clicked Add in the previous step...

    ... you will eventually need to define the format of the column on the Create Masking Definition page before you can proceed. When you are ready to do so, click the icon in the Format column for the column you want to format. Depending on whether you decided to mask selected columns as a group on the Add Columns page, either the Define Column Mask or the Define Group Mask page appears. Read further in this step for instructions for both cases.

    If you clicked Define Format and Add in the previous step and did not check "Mask selected columns as a group"...

    ... the Define Column Mask page appears, where you can define the format for the column before adding the column to the Create Masking Definition page.

    1. Provide a format entry for the required Default condition by either selecting a format entry from the drop down list and clicking Add, or clicking Import Format, selecting a predefined format on the Import Format page, then clicking Import.

      See "Using Oracle-supplied Predefined Masking Formats" for information about Oracle-supplied predefined masking format definitions.


      For information on page user controls, see the online help for the Define Column Mask page.
    2. Add another condition by clicking Add Condition to add a new condition row, then provide one or more format entries as described in the previous step. Figure 6-16 shows an example of user input.

      Figure 6-16 Sample Input for Define Column Mask Page

    3. When you have finished formatting the column, click OK to return to the Create Masking Definition page.

    If you clicked Define Format and Add in the previous step and checked "Mask selected columns as a group"...

    ... the Define Group Mask page appears, where you can add format entries for group columns that appear in the Create Masking Definition page.

    1. Select one of the available format types. For complete information on the format types, see the online help.


      For information on page user controls, see the online help for the Define Group Mask page.
    2. Optionally add a column to the group.

    3. When you have finished formatting the group, click OK to return to the Create Masking Definition page. Your configuration appears in the Columns table.

  11. Click OK to save your definition and return to the Data Masking Definitions page.

    At this point, super administrators can see each other's masking definitions.

  12. Select the definition and click Generate Script to view the script for the list of database commands used to mask the columns you selected earlier.

    From the Script Generation Results page that appears, you can clone and mask the database or schedule the data masking job.


    For information on page user controls, see the online help for the Script Generation Results page.


    If any tables included in the masking definition or reorganization have columns of data type LONG, a warning message may appear. See Using Data Masking with LONG Columns for more information.
  13. Make one of the following choices:

    • If you are working with a production database, click Clone and Mask.

    • If you are already working with a test database and want to directly mask the data in this database, click Schedule Job.

Adding Dependent Columns

The following prerequisites apply for the column to be defined as dependent:

  • A valid dependent column should not already be included for masking.

  • The column should not be a foreign key column or referenced by a foreign key column.

  • The column data should conform to that in the parent column.

If the column does not meet these criteria, an "Invalid Dependent Columns" message appears when you attempt to add the dependent column.

To add dependent columns, do the following:

  1. From the Create Masking Definition page, click the Add + icon in the Dependent Columns Add column for the column you want to format. The Add Dependent Columns page appears.


    For information on page user controls, see the online help for the Add Dependent Columns page.
  2. In the Search section, provide at least a schema name, either typing it or selecting it with the flashlight search icon, then click Search.

  3. Select one or more column names from the list that appears below, then click Add.

    The Create Masking Definition page reappears and displays your added dependent column(s) in the Dependent Columns table at the bottom of the page. The dependent column(s) will be masked using the same format as that specified for the parent column.

Cloning the Production Database

To clone and optionally mask the masking definition's target database, do the following:

  1. Ensure that you have a Provisioning and Patch Automation pack license before proceeding. The Clone Database feature requires this license.

  2. From the Data Masking Definitions page, select the masking definition you want to clone, select Clone Database from the Actions drop-down, then click Go. The Clone Database wizard appears, where you can create a test system to run the mask.

  3. Proceed through the wizard steps as you ordinarily would to clone a database. For assistance, refer to the online help for each step.

  4. In the Database Configuration step of the wizard, add a masking definition.

  5. Schedule and then run the clone job.

Using Reorganize Objects with LONG Columns

When Reorganize Objects script generation completes, an impact report appears. If any tables included in the reorganization have columns of data type LONG, the following warning message may be displayed in the impact report:

Reorganization includes a table with a LONG column. To support reorganization of tables with LONG columns that are greater than 32KB, the external procedure MGMT$REORG_MOVELONGCOMMAND must be configured properly. It has been determined this external procedure is not currently configured as expected.

If this message is displayed in the impact report, examine the data in the LONG columns. If the data is less than 32KB, you can ignore this message. If the data is greater than 32KB, return from the Reorganize Objects wizard and use the following procedure to configure the database for external procedure connections.

  1. Shut down the target database and its listener.

  2. Edit $ORACLE_HOME/network/admin/tnsnames.ora and add the following entry:

        extproc_connection_data =
            (DESCRIPTION =
                (ADDRESS_LIST =
                    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)))
                (CONNECT_DATA =
                    (SID = PLSExtProc)))
  3. Save the changes to tnsnames.ora and exit the editor.

  4. Edit $ORACLE_HOME/network/admin/listener.ora and make the following changes:

    1. An entry in listener.ora looks similar to the following example:

      LISTENER =
          (DESCRIPTION =
              (ADDRESS_LIST =
                  (ADDRESS = (PROTOCOL = tcp)(HOST = host1.us.oracle.com)(PORT = 1521))))

      Add a new ADDRESS_LIST parameter to this entry:

      LISTENER =
          (DESCRIPTION =
              (ADDRESS_LIST =
                  (ADDRESS = (PROTOCOL = tcp)(HOST = host1.us.oracle.com)(PORT = 1521)))
              (ADDRESS_LIST =
                  (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))))
    2. A second entry in listener.ora looks similar to the following example:

          (SID_LIST =
              (SID_DESC = (SID_NAME = db1)(ORACLE_HOME = /scratch/oracle/db1)))

      Add a new SID_DESC parameter to this entry:

          (SID_LIST =
              (SID_DESC = (SID_NAME = db1)(ORACLE_HOME = /scratch/oracle/db1))
              (SID_DESC =
                  (SID_NAME = PLSExtProc)
                  (ORACLE_HOME = /scratch/oracle/db1)
                  (PROGRAM = extproc)
                  (ENVS = 
  5. Save the changes to listener.ora and exit the editor.

  6. Ensure that the following file exists in your database installation:

  7. Restart the listener and the database.

  8. Launch Reorganize Objects and generate the script. The warning message should no longer appear in the impact report.

  9. Schedule and submit the job. The following message will appear in the job output log:

    Copy with PLSQL block failed due to size of long. Using Mgmt$Reorg_MoveLongCommand to perform the reorganization.

    This message is expected. Any tables with LONG columns are masked or reorganized successfully.

Using Data Masking with LONG Columns

Data Masking supports tables with LONG columns differently than tables without them. Tables with LONG columns are updated using UPDATE SQL statements, and only the columns that need to be masked are changed; the LONG column itself is left intact. Tables without LONG columns are masked by dropping, recreating, and reloading the table data with masked values. This process performs faster than updates and is therefore used except for tables with LONG columns, which SQL does not support in that operation.

When tables with LONG columns are masked, the UPDATE statements produce undo information that can be used to retrieve the old values using a flashback query. To avoid this, you must export and import the data to another database, purge the undo information, or disable flashback query.

Deterministic Masking

You may occasionally need to mask multiple, distinct databases consistently. For instance, if you run HR, payroll, and benefits applications, each with an employee ID concept, on three separate databases, that concept must remain consistent across all of them, in that an employee's ID can be used to retrieve the employee's HR, payroll, or benefits information. Given this premise, if you were to mask the employee's ID because it actually contains his/her social security number, you would have to mask it consistently across these three databases.

Deterministic masking provides a solution for this problem. You can use the Substitute format to mask employee ID column(s) in all three databases. The Substitute format uses a table of values from which to substitute the original value with a mask value. As long as this table of values does not change, the mask is deterministic or consistent across the three databases.


For more information on the Substitute format, see the online help for the Define Column Mask page.
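A minimal sketch of the idea behind deterministic substitution, assuming a hash-based lookup (the actual Substitute format draws from a table of values you supply; the hashing scheme and table contents here are illustrative assumptions):

```python
import hashlib

# Hypothetical substitution table; in practice this is a table of mask values
# kept identical across the HR, payroll, and benefits databases.
SUBSTITUTION_TABLE = ["100001", "100002", "100003", "100004", "100005"]

def substitute(original_value: str) -> str:
    """Map an original value to a mask value deterministically: the same
    input always selects the same row of the substitution table."""
    digest = hashlib.sha256(original_value.encode("utf-8")).hexdigest()
    return SUBSTITUTION_TABLE[int(digest, 16) % len(SUBSTITUTION_TABLE)]

# Masking the same employee ID on any of the three databases yields the
# same mask value, so cross-database lookups still line up.
assert substitute("123-45-6789") == substitute("123-45-6789")
```

As long as the substitution table does not change, the mapping remains consistent across every database that uses it.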

Use Cases

The following use cases show some of the more important data masking tasks that you can perform.

Mask production data for an application maintained in a staging environment

This use case applies to the application database administrator except for step 6.

  1. Log into the system as the application database administrator to access databases and their associated privileges.

  2. Search the schema for tables that contain columns with confidential or regulated information. The system responds by displaying tables and columns based on search criteria.

  3. Select the table columns as well as the appropriate mask formats. The system responds by adding the table column and all of its associated foreign key columns into the masking definition.

  4. Assign additional table columns to the masking definition wherein explicit referential integrity relationships are not defined in the database. The system responds by adding these table columns as dependent columns to the masking definition.

  5. Notify the information security administrator of the masking definition.

  6. The information security administrator reviews the masking definition, verifies its completeness, then recommends that the application database administrator proceed with the masking process.

  7. Initiate the data masking process using the masking definition created in step 4. The system responds by validating the masking definition, verifying space availability, then applying the mask formats to the data in the test-staging database.

  8. Verify the completeness of the masking process, and confirm that masked data has replaced sensitive data.

Use an existing masking definition to mask a new database

This use case applies to the application database administrator.

  1. Create a new masking definition based on a previously created masking definition that is then applied to a new database. The system responds by copying the masking definition, including table columns and mask formats, and associating it with the new database.

  2. Use the newly created masking definition to initiate masking of the new database. The system responds by validating the masking definition, verifying space availability, then applying the mask formats to the data in the test-staging database.

Export and import masking definitions

This use case applies to the application developer and application database administrator.

  1. The developer logs into the system to access databases and their associated privileges.

  2. The developer requests the export of a valid masking definition into a transportable file format. The system responds by generating an XML file with the masking definition and mask formats.

  3. The database administrator downloads or receives mask templates.

  4. The database administrator imports the mask template. The system responds by validating the mask template and creating a masking definition based on the template.

  5. The database administrator uses the newly created masking definition to initiate masking of the database. The system responds by validating the masking definition, verifying space availability, then applying the mask formats to the data in the test-staging database.

Related Oracle Security Offerings

Besides data masking, Oracle offers the following security products:

  • Virtual Private Database or Oracle Label Security — Hides rows and data depending on user access grants.

  • Transparent Data Encryption — Hides information stored on disk using encryption. Clients see unencrypted information.

  • DBMS_CRYPTO — Provides server packages that enable you to encrypt user data.

  • Database Vault — Provides greater access controls on data.

Maintaining Databases

You can use Oracle Enterprise Manager to control the flow of data between or outside Oracle Databases. The following sections provide an overview of the functions available to you that can help you maintain your Oracle Database.

Using Backup

Backup of an Oracle Database generally refers to physical backup: protecting the files that make up your database. The files protected by the backup and recovery facilities built into Oracle Enterprise Manager include datafiles, control files, and archived redo log files. The backup mechanisms that work at the physical level protect against damage at the file level, such as the accidental deletion of a datafile or the failure of a disk drive. The focus in Oracle backup and recovery is generally on the physical backup of database files, which permits the full reconstruction of your database.

Enterprise Manager's physical backup and recovery features are built on Oracle's Recovery Manager (RMAN) command-line client. Enterprise Manager carries out its backup tasks by composing RMAN commands and sending them to the RMAN client. Enterprise Manager makes available much of the functionality of RMAN, as well as wizards and automatic strategies to simplify and further automate implementing RMAN-based backup and recovery.

Managing backups consists of two things: managing the backups themselves as they exist on disk or tape, and managing the record of backups kept in the RMAN repository.

Datafiles or archived redo logs copied at the operating system level can be cataloged and added to the RMAN repository, after which they can be used in data restore and recovery operations just as if they had been created with RMAN. Backup maintenance provided in Enterprise Manager includes the following:

  • Viewing lists of backups (backup sets and image copies) recorded in the RMAN repository

  • Cross-checking your repository, which marks as expired any backups that are in the repository but not accessible at the time of the crosscheck

  • Deleting expired backups from your RMAN repository

  • Deleting obsolete backups from the repository and from disk. Note that if you use a recovery area for your backup storage, many maintenance activities are reduced or eliminated because of the flash recovery area's automatic management of disk space.


Media recovery using Enterprise Manager can be either complete recovery or point-in-time recovery. In complete recovery, all changes from the logs are applied and the database returns to its state at the time of failure. You can then reopen the database with no loss of data.

In point-in-time recovery, you can choose any System Change Number (SCN) between the time of the datafile backup and the last change in the redo logs, and apply only changes up through that SCN. You can thus return your database to any SCN (and thus any point-in-time) between the time of your backup and the most recent SCN in the redo logs. This technique can be used to recover from situations such as user errors that cause logical corruption in the database. Point-in-time recovery is sometimes referred to as incomplete recovery, since not all changes are applied.

Media recovery requires a control file, datafiles, and all online and archived redo logs from the time the datafiles were backed up. It is typically used only in the case of database failure.

Crash recovery is used to recover from a failure either when a single-instance database crashes, or all instances of an Oracle RAC database crash. Instance recovery refers to the case where a surviving instance recovers a failed instance in an Oracle RAC database. Datafile media recovery is used to recover from a lost or damaged current datafile or control file. Block media recovery is a technique for restoring and recovering individual data blocks while all database files remain online and available.

Flashback Recovery

Enterprise Manager's flashback features provide a range of physical and logical data recovery tools as efficient, easy-to-use alternatives to physical and logical backups. Flashback table allows you to revert a table to its contents at a time in the recent past; and flashback drop allows you to rescue dropped database tables. Neither requires advance preparation such as creating logical-level exports to allow for retrieval of your lost data, and both can be used while your database is available.

Flashback database lets you quickly recover an Oracle Database to a previous time to correct problems caused by logical data corruptions or user errors. If a flash recovery area is configured, then you can return the database to a prior time. Flashback table lets you recover tables to a specified point in time. You can restore table data along with all its associated attributes, such as indexes, triggers, and so on. This is done while the database is online by rolling back only the changes to the given tables. You can revert the table and its contents to a specific time or user-specified SCN. Use flashback table with flashback query and row versions to find a time to which the table should be restored.

The larger the flash recovery area, the more useful it becomes. Ideally, the flash recovery area should be large enough to hold two complete backup copies of your datafiles, plus any incremental backups and archive logs required to restore your database to any point in time during your recovery window.

Backup and Recovery Settings

You can use Enterprise Manager to configure a number of settings and policies that determine how backups are stored, which data is backed up, how backups perform, and how long backups are retained before being purged from the recovery area. You can also configure settings to improve backup performance. For disks, you can configure the default format for storing backups, the location on disk where backups are stored, and whether backup tasks are run in parallel for improved performance. Oracle backups to disk can be saved as image copies or backup sets. Backups to tape and similar media management devices must be stored as backup sets.

Managing Restore Points

You can use Enterprise Manager to create a restore point as a designated point in time to which you can restore your database. A restore point is a name associated with a past point-in-time of the database. You can flash back the database to a restore point if the required flashback logs and archived logs exist. A Guaranteed Restore Point is a restore point to which you can always flash back the database. Each restore point has a name and creation time. The restore points are sorted by creation time beginning with the most recent.
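From SQL*Plus, restore points might be created and listed as follows (the names are placeholders; flashing back to a restore point requires the database to be mounted, not open):

```sql
-- An ordinary restore point and a guaranteed restore point.
CREATE RESTORE POINT before_upgrade;
CREATE RESTORE POINT before_batch GUARANTEE FLASHBACK DATABASE;

-- List restore points, most recent first, as on the EM page.
SELECT name, scn, time, guarantee_flashback_database
  FROM v$restore_point
 ORDER BY time DESC;

-- With the database mounted:
-- FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
```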

Overview of Data Guard

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle Databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage.

Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability. With Data Guard, administrators can optionally improve production database performance by offloading resource-intensive backup and reporting operations to standby systems.

Enterprise Manager provides comprehensive Data Guard setup, management, and monitoring capabilities, including:

  • Automated creation of physical, logical, and snapshot standby databases

  • Monitoring of Data Guard status and performance

  • Management of key Data Guard features, such as protection modes, switchover, failover, and fast-start failover
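Outside Enterprise Manager, the same role transitions can be driven from the Data Guard broker command line; a sketch, assuming a broker configuration with a standby database named chicago (all names and credentials are placeholders):

```sql
DGMGRL> CONNECT sys/password
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO chicago;
```

After a primary failure, FAILOVER TO chicago would be used instead, and ENABLE FAST_START FAILOVER arms automatic failover through the broker's observer process.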

Additional Maintenance Features

Enterprise Manager allows you to easily move data from files or databases into an existing database. You can export and import data, and you can clone databases using the tools available in Enterprise Manager. The following sections describe the features available in the Maintenance area of the Database Home page.

Exporting and Importing Features

The Export to Files feature enables you to move existing data in Oracle format to and from Oracle Databases. For example, export files can archive database data or move data among different Oracle Databases that run on the same or different operating systems. Using Export to Files, you can back up logical database objects while the database is open and available for use. It writes a read-consistent view of the database's objects to an operating system file. The full database export excludes some schemas and therefore does not include all database objects. The following system schemas are not exported as part of a full export because the metadata they contain is exported as part of other objects in the dump file set: SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS.

For more information about exporting files, see the Moving Data Using Data Pump and Export/Import chapter in the Oracle Database Upgrade Guide 11g Release 2. For more information about Data Pump see the Data Pump Export chapter in Oracle Database Utilities 11g Release 2.
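Under the covers, Export to Files drives Data Pump; a hypothetical expdp invocation (the credentials, the dpump_dir directory object, and the file names are placeholders):

```shell
expdp system/password FULL=Y DIRECTORY=dpump_dir \
      DUMPFILE=full_%U.dmp LOGFILE=full_exp.log PARALLEL=2
```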

Conversely, you can use the Import from Database feature in Enterprise Manager to import an entire database, schemas within a database and the objects in those schemas, or one or more tables from within a schema. Use the Load Data from File feature to load data from a non-Oracle database into an Oracle Database.

Use the Monitor Export and Import Job feature to view the status of an import or export operation, such as a full database export or a tablespace export. You can change the state of a job by suspending it, canceling it, or resuming it if it was previously suspended. You can also increase the number of threads dedicated to the job and thus increase its resources.
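The same control operations are available from Data Pump's interactive command mode; a sketch, where the job name is a placeholder:

```shell
$ expdp system/password ATTACH=SYS_EXPORT_FULL_01

Export> STATUS
Export> PARALLEL=4
Export> STOP_JOB
Export> START_JOB
Export> KILL_JOB
```

STATUS reports progress, PARALLEL changes the number of worker threads, STOP_JOB suspends the job so a later START_JOB can resume it, and KILL_JOB cancels it outright.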

Cloning Databases

You can use the Enterprise Manager Clone Database tool to clone an Oracle Database instance to an existing Oracle home. After you have an Oracle Database instance in a known state, you may want to clone that database to another existing Oracle home.

Transporting Tablespaces

Use the Transport Tablespaces feature to transport tablespaces between different machine architectures and operating systems. Transportable tablespaces entirely bypass the unload and reload steps. Using transportable tablespaces, Oracle data files (containing table data, indexes, and almost every other Oracle Database object) can be directly transported from one database to another. You can use the transportable tablespaces feature to move a subset of an Oracle Database and plug it in to another Oracle Database, essentially moving tablespaces between the databases.
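A minimal transportable-tablespace flow, sketched with placeholder names (moves between platforms of different endian formats additionally require RMAN CONVERT):

```shell
-- On the source database: make the tablespace read-only.
SQL> ALTER TABLESPACE sales_ts READ ONLY;

-- Export only the tablespace metadata; the datafiles travel as-is.
$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp \
        TRANSPORT_TABLESPACES=sales_ts

-- Copy sales_ts01.dbf and sales_ts.dmp to the target host, then:
$ impdp system/password DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp \
        TRANSPORT_DATAFILES='/u01/oradata/trgt/sales_ts01.dbf'

-- On the target: return the tablespace to read/write.
SQL> ALTER TABLESPACE sales_ts READ WRITE;
```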


The Integrate TTS wizard fails while reading header information when attempting to integrate into a database target monitored by an Agent configured with an 11.1 Grid Control OMS.

Use the following workaround:

Copy: $OMS_HOME/sysman/admin/scripts/db/dataUtilities/tts.pl

to: $AGENT_HOME/sysman/admin/scripts/db/dataUtilities/tts.pl

Overview of Oracle Streams

Oracle Streams enables the propagation and management of data, transactions, and events in a data stream, either within a database or from one database to another. The stream routes published information to subscribed destinations. As users' needs change, they can simply implement a new capability of Oracle Streams without sacrificing existing capabilities.

Oracle Streams can stream data between databases, nodes, or blade farms in a grid and can keep two or more copies in sync as updates are applied. It also provides a unified framework for information sharing, combining message queuing, replication, events, data warehouse loading, notifications, and publish/subscribe into a single technology.

Oracle Streams provides a set of elements that lets users control what information is put into a stream, how the stream flows or is routed from node to node, what happens to events in the stream as they flow into each node, and how the stream terminates. By specifying the configuration of the elements acting on the stream, you can address specific requirements, such as message queuing or data replication.

Use the Streams Management features in Oracle Enterprise Manager to set up, manage, monitor, and troubleshoot your Oracle Streams environment.

Working with Software Configurations

The Oracle Management Agent on a host collects host configuration information for the host, database configuration information for the Oracle Databases on the host, and client configuration information, and communicates that information over HTTPS to the Oracle Management Service, which stores it in the Oracle Management Repository. Enterprise Manager lets you compare these configurations to determine differences between two or more hosts, clients, or databases. The Generic Compare feature allows you to compare various types of current or saved configurations with one or more other current or saved configurations.

Using this feature, you can compare the current configuration of a selected target type with one or more current configurations of other targets of the same type, or compare saved configurations with one or more saved configurations of the same or other targets. You can also compare saved configurations with one or more current configurations of the same or other targets, or compare a specific configuration with another configuration and list the differences immediately. Lastly, you can compare a specific configuration with another configuration and schedule the comparison as a job.

Using Database Software Patching

Use Enterprise Manager to simplify the patching of Oracle software on any host where an Oracle Management Agent is running, and to provide critical patch advisories. Enterprise Manager simplifies the process of patching Oracle software. Oracle Patch Advisories describe critical software patches for Oracle products. To help ensure a secure and reliable configuration, all relevant and current critical Oracle patches should be applied. To promote critical patch application, Enterprise Manager performs an assessment of vulnerabilities by examining the host configurations collected for your enterprise to determine the Oracle homes that require one or more critical patches to be installed. All the Critical Patch Advisories are listed with their corresponding Impact areas, a brief description of each advisory, the number of Affected Hosts, and Oracle homes for each advisory.

You can connect to My Oracle Support via Enterprise Manager, perform a search, download the required patches/patch sets, and apply the patches. You can perform all the patching activities from the Patch Cache. That is, even when the OMS is not connected to My Oracle Support via the Internet, you can perform a search, download, and apply a patch/patch set.

Monitoring Oracle Real Application Clusters

Oracle Real Application Clusters (RAC) provides a high-availability database environment spanning multiple hosts. Each cluster can comprise multiple cluster databases, each of which consists of multiple cluster database instances. A cluster database is available as long as one of its instances is available. Enterprise Manager provides performance pages to monitor all levels of a cluster environment, including the cluster, the cluster database, and the cluster database instances. Managing Oracle Real Application Clusters databases and instances is similar to managing single-instance databases. Using Enterprise Manager, you can perform various tasks, such as:

  • Managing clusters, cluster databases, and cluster database instances

  • Monitoring key performance statistics

  • Performing administration and maintenance tasks

  • Managing Oracle Real Application Clusters features, such as Cluster Managed Database Services

Oracle RAC enables each computer (or host) that is a member of the cluster to share access to the database. If one cluster host fails or is taken offline, the other hosts of the cluster continue operating, and the entire Oracle RAC database remains available for applications. This means that two or more computers with typical performance appear to applications as a single, much more powerful computer. To increase the performance, availability, and reliability of a two-host Oracle RAC database, you can add cluster hosts. Because data is not partitioned between hosts, adding hosts to the cluster does not create instability; instead, applications can run faster or support more users. The more hosts your Oracle RAC database has, the less the loss of any individual host affects the database.

Cluster Cache Coherency

Concurrent read and write activity on shared data in a cluster occurs frequently. Depending on the service requirements, this activity does not normally cause performance problems. However, when global cache requests cause a performance problem as indicated on the Cluster Database page, a successful strategy for performance tuning is to optimize SQL plans and the schema to achieve effective local cache hit efficiency and minimize I/O. To assist you in resolving the problem, the Cluster Cache Coherency page enables you to view cache coherency metrics for the entire cluster database, identify processing trends, and optimize performance for your Oracle Real Application Clusters environment (Figure 6-17).

You can access the Cluster Cache Coherency page by clicking the Cluster Cache Coherency link in the Additional Monitoring Links section.

See Also:

"Cluster Cache Coherency Page" in the Enterprise Manager online help

Figure 6-17 Cluster Cache Coherency Page

Enterprise Manager Cluster Cache Coherency page
Description of "Figure 6-17 Cluster Cache Coherency Page"

The charts have the following purposes:

  • Global Cache Block Access Latency represents the end-to-end elapsed time or latency for a block request.

  • Global Cache Block Transfer Rate shows the total aggregated number of data blocks received by all instances in the cluster through an interconnect.

  • Global Cache Block Transfers and Physical Reads shows the percentage of logical reads that read data from the buffer cache of other instances through Direct Memory Access and from disk.

  • Active Session History represents the active sessions for the cluster wait classes shown. You can view the top SQL and top sessions for a time period in the tables below by moving the grey shaded box in this chart to the desired time period.
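The raw statistics behind these charts can also be queried directly; a sketch using the global (GV$) views:

```sql
-- Per-instance global cache (gc) block counts and receive times;
-- these underlie the transfer-rate and latency charts.
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('gc cr blocks received',
                'gc current blocks received',
                'gc cr block receive time',
                'gc current block receive time')
 ORDER BY inst_id, name;
```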

Cluster Interconnects

The Cluster Interconnects page enables you to view the current state of interfaces on hosts (Figure 6–18). You can use this page to monitor the interconnect interfaces, determine configuration issues, and identify transfer rate-related issues, such as excess traffic. This page helps determine the load added by individual instances and databases on the interconnect. Sometimes, you can immediately identify interconnect delays due to applications outside the Oracle Database.

See Also:

"Cluster Interconnects Page" in the Enterprise Manager online help

Figure 6-18 Cluster Interconnects Page

Cluster Interconnects page

Cluster-Managed Database Services

Services are groups or classifications of applications that comprise business components corresponding to application workloads. Examples of services are Accounts Payable, Customer Relationship Management, and so on. Services in Oracle RAC enable continuous, uninterrupted database operations and support multiple workloads across multiple instances.

Services enable Oracle RAC to integrate cluster database resources into a single system image to optimize cluster manageability. This simplifies system deployment, testing, disaster recovery, and administrative overhead. With services, users connect to a database without regard for which instance executes the application's SQL.

You assign services to run on one or more instances, and alternate instances can serve as backup instances in case the primary instance fails. If a primary instance fails, Oracle moves the service from the failed instance to a surviving alternate instance.

Services enable you to model and deploy both planned and unplanned operations for all types of high availability or disaster recovery scenarios. During outages, Oracle RAC automatically restarts key components. Components that are eligible for automatic restart include instances, Oracle Net Services listeners, and the database as well as several database subcomponents.

You can create and edit services using the Create Services page in Enterprise Manager, which you can access from the Cluster Managed Database Services link in the Enterprise Manager Availability property page (Figure 6-19).

See Also:

"Create Service Page" in the Enterprise Manager online help

Figure 6-19 Create Service Page

Create Service page
Description of "Figure 6-19 Create Service Page"

Policy-managed and Administrator-managed Databases

When you create an Oracle RAC database that is a policy-managed database, you specify the number of servers that are needed for the database, and a server pool is automatically created for the database. Oracle Clusterware populates the server pool with the servers it has available. If you do not use server pools, you can create an administrator-managed database.

You can define services for both policy-managed and administrator-managed databases.

  • Policy-managed Database

    When you define services for a policy-managed database, you assign the service to a server pool where the database is running. You can define the service as either uniform (running on all instances in the server pool) or singleton (running on only one instance in the server pool).

  • Administrator-managed Database

    When you define a service for an administrator-managed database, you define which instances normally support that service. These are known as the PREFERRED instances. You can also define other instances, known as AVAILABLE instances, to support the service if a preferred instance fails.
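At the command line, SRVCTL expresses the same distinction; a sketch with placeholder database, instance, service, and server pool names:

```shell
# Administrator-managed: preferred on orcl1, available on orcl2.
srvctl add service -d orcl -s reporting -r orcl1 -a orcl2
srvctl start service -d orcl -s reporting

# Policy-managed: bind the service to a server pool; UNIFORM runs it
# on every instance in the pool, SINGLETON on exactly one.
srvctl add service -d orcl -s reporting -g srvpool1 -c UNIFORM
```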

See also:

Oracle Database 2 Day + Real Application Clusters Guide for more information on policy-managed and administrator-managed databases, including "Creating an Instance on the New Node."

Oracle Clusterware and High Availability

When you combine Oracle Clusterware and Oracle RAC, you can achieve excellent scalability and high availability. To maintain high availability, the Oracle Clusterware components can respond to status changes to restart applications and processes according to high availability rules you can specify in the Create Service page (Figure 6-19). Oracle Clusterware achieves high availability with the components in Table 6-2.

Table 6-2 Oracle Clusterware High Availability Components

Component Description

Voting Disk

The voting disk manages cluster membership by using a health check, and arbitrates cluster ownership among the instances in case of network failures. Oracle RAC uses the voting disk to determine which instances are members of a cluster.

Oracle Cluster Registry (OCR)

The OCR maintains database and cluster database configuration information as well as configuration information about any cluster database within the cluster. The OCR also manages information about processes that Oracle Clusterware controls.

Application Programming Interface

Oracle Clusterware provides a high availability application programming interface (API) that you can use to manage applications or processes that run on single-instance Oracle databases or Oracle RAC databases. This enables you to provide high availability for all of your applications.

Database High Availability

The Database High Availability Console is a one-stop, dashboard-style page for monitoring your database high availability (HA). You can use the High Availability Console page to display high availability events, view the availability summary (including the status of the database), view the last backup status, view Flash Recovery Area usage, view the Data Guard summary, and view the RAC Services summary, including Top Services. You can view the Database High Availability Console page in Advanced View or Basic View, depending on how much detail you want to display.

You can use the Database High Availability Console Options page to set up event options for availability events, configure formatting options for the header bar and target and host names, define refresh options on the High Availability Console page, and specify whether and how to display historical charts.

You can use the Database Maximum Availability Architecture (MAA) Advisor page to view recommended Oracle solutions for each outage type (computer failures, storage failures, human errors, data corruptions, and site failures), view configuration status and Enterprise Manager links for each solution, and view the benefits of each offered solution. The MAA Advisor page contains a table that lists the Outage Type, Oracle Solution, Recommendation Level, Configuration Status, and Benefits. You can navigate to a page where you can set up, manage, and configure the specific solution area. Once a solution has been configured, you can click the Advisor Details link to see the updated values.