6 Managing I/O Resources

I/O Resource Management (IORM) is a tool for managing how multiple databases and the workloads within the databases share the I/O resources of Oracle Exadata System Software.

To manage workloads within a database, the Oracle Database Resource Manager has been enhanced to work with IORM to provide database resource management.

6.1 Overview of Resource Management

Storage is often shared by multiple types of workloads and databases. Shared storage has several benefits over dedicated storage. One benefit is that sharing lowers administration costs by reducing the number of storage systems that need to be maintained. Another benefit is that sharing leads to much more efficient use of storage, both from a space and a bandwidth standpoint. When a storage system is dedicated to a single database, the administrator must size the storage system based on the database's peak anticipated load and size. This practice leads to unused I/O bandwidth and space for some databases and, if the estimate was not large enough, then there is insufficient bandwidth and space for other databases. The correct balance of storage resources across the multiple databases is seldom achieved because real world workloads are very dynamic.

On the other hand, running multiple types of workloads and databases on shared storage often leads to performance problems. For example, large parallel queries on one production data warehouse can impact the performance of critical queries on another production data warehouse. Also, a data load on a data warehouse can impact the performance of critical queries also running on it. You can mitigate these problems by over-provisioning the storage system, but this diminishes the cost-savings benefit of shared storage. You can also schedule non-critical tasks at off-peak hours, but this manual process is laborious. It becomes impossible when databases have different administrators who do not coordinate their activities.

I/O Resource Management allows workloads and databases to share Oracle Exadata Storage Servers according to user-defined policies. To manage workloads within a database, you can define database resource plans using Database Resource Manager, which has been enhanced to manage Oracle Exadata Storage Server I/O resources. To manage workloads within a container database (CDB), define a CDB resource plan that manages its pluggable databases. To manage multiple databases, define interdatabase plans.

Flash I/O Resource Management protects the latency of critical OLTP I/O requests in flash cache. When table scans run on flash concurrently with OLTP I/O requests, the OLTP latency is impacted significantly. Flash IORM queues and throttles table scans and other low-priority I/O requests, while critical OLTP I/O requests are never queued. When the flash disks are not busy serving critical OLTP I/O requests, the queued I/O requests are issued based on the resource allocations in the interdatabase plan.

6.2 Understanding I/O Resource Management

I/O Resource Management (IORM) manages the Oracle Exadata Storage Server I/O resources on a per-cell basis. Whenever the I/O requests start to saturate a cell's capacity, IORM schedules incoming I/O requests according to the configured resource plans. IORM schedules I/O by immediately issuing some I/O requests and queuing others. The I/O requests that are immediately issued are from workloads that have not exceeded their resource allocation, according to the resource plans. The I/O requests that are queued are from workloads that have exceeded their resource allocation. They are issued when their workload no longer exceeds its resource allocation or when the cell is operating below capacity. When the cell is operating below capacity, IORM does not queue I/O requests, and it allows a workload to exceed its resource allocation because the system has spare capacity.

For example, if a production database and test database are sharing Oracle Exadata Storage Server, you can configure resource plans that give priority to the production database. In this case, whenever the test database load would affect the production database performance, IORM schedules the I/O requests such that the production database I/O performance is not impacted. This means that the test database I/O requests are queued until they can be issued without disturbing the production database I/O performance.

IORM provides many features for managing resource allocations. Each feature can be used independently or in conjunction with other features.

Database resource management enables you to manage workloads within a database. Database resource management is configured at the database level, using Database Resource Manager to create database resource plans. You should use this feature if you have multiple types of workloads within a database and you need to define a policy for specifying how these workloads share the database resource allocation. If only one database is using the Oracle Exadata Storage Server, then this is the only IORM feature that you need.

Interdatabase resource management enables you to manage multiple databases. Interdatabase resource management is configured by using the CellCLI utility to create interdatabase plans. The interdatabase plans specify a resource allocation for each database. You should use this feature if you have multiple databases using Oracle Exadata Storage Server.

If an interdatabase plan has been configured, then each database can have a database plan and a resource allocation. A database resource plan specifies how the database resource allocation should be divided across its workloads, or consumer groups. If a database does not have a database resource plan enabled, then the database resource allocation is not divided, and all I/O requests from the database are treated as a single workload.

Category resource management is an advanced feature. It is useful when Oracle Exadata Storage Server is hosting multiple databases, and you want to allocate resources primarily by the category of the work being done. For example, suppose all databases have three categories of workloads: OLTP, reports, and maintenance. To allocate the I/O resources based on these workload categories you would use category resource management.

The concept of a maximum utilization limit (limit) is supported by I/O Resource Management. In addition to specifying the resource allocation values, you can also provide a maximum utilization limit for a given database. This directive ensures that the database never utilizes I/O resources beyond the specified limit. For example, if a production database and a test database are sharing the Exadata cell, then set a maximum utilization limit for the test database to limit its I/O utilization.

If a maximum utilization limit is specified, then excess capacity is never used by the databases. It is possible that the disks are running below full capacity when maximum utilization limits are specified.
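For example, a maximum utilization limit can be attached to an interdatabase plan directive with CellCLI. This is a sketch; the database names prod and test and the 40% value are illustrative:

CellCLI> ALTER IORMPLAN dbplan=((name=prod, share=4), (name=test, share=1, limit=40))

With this plan, the test database never uses more than 40 percent of the disk resources, even when prod is idle.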

IORM supports flash cache and flash log management. The ALTER IORMPLAN flashcache attribute can be set to off to prevent databases from using the flash cache. Similarly, the ALTER IORMPLAN flashlog attribute can be set to off to prevent databases from using the flash log. These attributes allow flash cache and flash log to be reserved for mission-critical databases, especially in consolidated environments.

In addition to simply turning off flash cache for certain databases, IORM supports specifying a quota for each flash cache group. A flash cache group can be either a database or a pluggable database. These quotas reserve space in flash cache for critical groups and also prevent less important or rogue databases or pluggable databases from consuming the entire flash cache. The minimum and maximum quotas can be specified using either the interdatabase plan or the CDB resource plan.

You can specify a soft maximum that enables a group to grow beyond its allocation if the flash cache is not full. You also have the option of partitioning the flash cache for Cloud and "pay for performance" deployments, where the database or PDB cannot exceed its allocation even when the flash cache is not full.

I/O Resource Management (IORM) interdatabase plans support profiles to ease the management and configuration of interdatabase plans for hundreds of databases. Without profiles, the storage administrator must specify resources for every database in the interdatabase plan, and the plan must be updated each time a new database is created. IORM profiles alleviate this problem. The first step is to create profile directives that define different profile types based on performance requirements. The second step is to map new and existing databases to one of the profiles defined in the interdatabase plan; each database then automatically inherits its attributes from the profile directive.

6.2.1 About Database Resource Management

For each database, you can use the Database Resource Manager for the following tasks:

  • Create resource consumer groups

    A database often has many types of workloads. These workloads may differ in their performance requirements and the amount of I/O that they issue. Resource consumer groups provide a way to group sessions that comprise a particular workload. For example, if your database is running four different applications, then you can create four consumer groups, one for each application. If your data warehouse has three types of workloads, such as critical queries, normal queries, and ETL (extraction, transformation, and loading), then you can create a consumer group for each type of workload.

  • Map user sessions to consumer groups

    Once you have created the consumer groups, you must create rules that specify how sessions are mapped to consumer groups. Database Resource Manager allows you to create mapping rules based on session attributes such as the Oracle user name, the service that the session used to connect to the database, client machine, client program name, client user name, and so on. If you are creating consumer groups for each application and each application has a dedicated service, then you should create mapping rules based on service names. If you want to dedicate a consumer group to a particular set of users, then you should create mapping rules based on their user names. Sessions that are not explicitly assigned to a consumer group are placed in the OTHER_GROUPS consumer group.
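    As a sketch, a service-based mapping rule can be created with the DBMS_RESOURCE_MANAGER package; the service name SALES_SVC and consumer group SALES_GROUP below are hypothetical:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      -- Map all sessions that connect through the SALES_SVC service
      -- to the SALES_GROUP consumer group
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
        value          => 'SALES_SVC',
        consumer_group => 'SALES_GROUP');
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /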

  • Create CDB resource plans

    A container database resource plan, or CDB plan, specifies how CPU and I/O resources are allocated among the different pluggable databases that are part of the same container. The CDB plan is created using Database Resource Manager. The CDB plan contains a directive for each pluggable database. The directive defines the number of shares that are allocated to that pluggable database. The shares define the relative priority of that pluggable database as compared to other pluggable databases in the plan.

    A maximum utilization limit can be specified for a pluggable database.

    A CDB resource plan also lets you specify memory_min and memory_limit for each PDB. These parameters specify the flash cache minimum and maximum quotas for each PDB.
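    As a sketch, a CDB plan directive with a flash cache quota can be created using DBMS_RESOURCE_MANAGER; the plan name my_cdb_plan and PDB name sales_pdb are hypothetical, and memory_min is expressed as a percentage:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
        plan    => 'my_cdb_plan',
        comment => 'CDB plan with a flash cache quota');
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan               => 'my_cdb_plan',
        pluggable_database => 'sales_pdb',
        shares             => 3,
        utilization_limit  => 100,
        memory_min         => 20);  -- minimum flash cache quota (%)
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /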

  • Create resource plans

    The database resource plan, also known as an intradatabase resource plan, specifies how CPU and I/O resources are allocated among consumer groups in its database. The resource plan is created using Database Resource Manager. It contains a resource allocation directive for each consumer group, which consists of a percentage and a level. You can specify up to eight levels. Consumer groups at level 2 get resources that were not allocated at level 1 or were not consumed by a consumer group at level 1. Similarly, consumer groups at level 3 are allocated resources only when some allocation remains from levels 1 and 2. The same rules apply to levels 4 through 8. Multiple levels not only provide a way of prioritizing, they also provide a way of explicitly specifying how all primary and leftover resources are to be used. You can construct resource plans that allocate resources across consumer groups using percentages, priorities, or a combination of the two.

    You can also specify a maximum utilization limit for a consumer group. This works in the same way as a maximum utilization limit for a database, and limits the I/O utilization for the consumer group to the specified value.

    In addition to a CDB plan, each pluggable database can also create a resource plan to manage the workloads running within the pluggable database. Pluggable databases only support single level plans with a maximum of 8 consumer groups.
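    A two-level resource plan of this kind can be sketched as follows; the plan name day_plan and the consumer groups OLTP_GROUP and REPORTS_GROUP are hypothetical and assumed to already exist:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan => 'day_plan', comment => 'Two-level intradatabase plan');
      -- Level 1: critical OLTP gets 80 percent
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'day_plan', group_or_subplan => 'OLTP_GROUP',
        comment => 'critical OLTP', mgmt_p1 => 80);
      -- Level 2: reports receive leftover resources, capped at 50 percent utilization
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'day_plan', group_or_subplan => 'REPORTS_GROUP',
        comment => 'reports', mgmt_p2 => 70, utilization_limit => 50);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'day_plan', group_or_subplan => 'OTHER_GROUPS',
        comment => 'required catch-all group', mgmt_p2 => 30);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /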

  • Enable a resource plan

    A database resource plan can be manually enabled with the RESOURCE_MANAGER_PLAN initialization parameter or automatically enabled with the job scheduler window.
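    For example, to manually enable one of the predefined plans on all instances:

    SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'mixed_workload_plan' SID='*';

    Prefixing the plan name with FORCE: prevents an active scheduler window from changing the plan.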

When you set a database resource plan on the database, a description of the plan is automatically sent to each cell. When an Oracle Real Application Clusters (Oracle RAC) database uses Oracle Exadata Storage Server, all instances in the Oracle RAC cluster must be set to the same resource plan. When a new cell is added or an existing cell is restarted, the current plan of the database is automatically sent to the cell. The resource plan is used to manage resources on both the database server and cells.

Background I/Os are scheduled based on their priority relative to the user I/Os. For example, redo writes, and control file reads and writes are critical to performance and are always prioritized above all user I/Os. The database writer process (DBWR) writes are scheduled at the same priority level as the user I/Os. If a resource plan is not enabled for a database, then all user I/Os are treated equally, and background I/Os are treated as described in this paragraph.

Oracle provides several predefined plans. The most commonly used are mixed_workload_plan, dss_plan, and default_maintenance_plan.

See Also:

Oracle Database Administrator's Guide for additional information about pluggable databases, CDB plans, consumer groups, assigning user sessions to consumer groups, and managing resource plans

6.2.2 About Interdatabase Resource Management

Interdatabase resource management is managed with an interdatabase plan.

An interdatabase plan specifies how resources are allocated by percentage or share among multiple databases for each cell. The directives in an interdatabase plan specify allocations to databases, rather than consumer groups. The interdatabase plan is configured and enabled with the CellCLI utility at each cell.

Interdatabase plan directives are specified using the db_unique_name of the database as the identifier. IORM does not support specifying directives using db_name unless the role attribute is also specified in the directive; this applies only to Oracle Data Guard deployments.

For example, you might specify that 70 percent of the I/O resources are allocated to database SALES, 30 percent to HR, and any of their unused allocations are reallocated to the TEST_SALES database. The interdatabase plan is similar to a database resource plan, in that each directive consists of an allocation amount and a level from 1 to 8. For a given plan, the total allocations at any level must be less than or equal to 100 percent. An interdatabase plan differs from a database resource plan in that it cannot contain subplans and only contains I/O resource directives. Only one interdatabase plan can be active on a cell at any given time.
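The SALES, HR, and TEST_SALES example above can be expressed as a percentage-based interdatabase plan using CellCLI; this is a sketch using lowercase db_unique_name values:

CellCLI> ALTER IORMPLAN dbplan=((name=sales, level=1, allocation=70), \
         (name=hr, level=1, allocation=30),                           \
         (name=test_sales, level=2, allocation=100))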

Share-based plans use a relative share instead of percentage allocations and levels. These plans are simpler to implement, but just as effective as percentage allocations. Each database is given a share value, which is an integer between 1 and 32. The sum of the shares can be greater than 100. Share-based plans support up to 1024 directives within the interdatabase plan. For example, if a critical database, FINANCE, has 4 shares, and a low-priority database, REPORTING, has 1 share, then the FINANCE database is four times more likely to issue I/Os than the REPORTING database.
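The FINANCE and REPORTING example can be written as a share-based plan:

CellCLI> ALTER IORMPLAN dbplan=((name=finance, share=4), (name=reporting, share=1))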

Oracle Exadata System Software uses the IORM and database resource plans together to allocate I/O resources.

  • First, the interdatabase plan allocates the I/O resources to individual databases. Any unused resources are reallocated to other databases, as specified by the plan. This is similar to database resource plans.

  • Next, the database resource plan for each database allocates the I/O resources to consumer groups. If a database does not have an active database resource plan, all user I/Os are treated the same. Background I/Os are automatically prioritized relative to the user I/Os based on their importance.

As a best practice, you should create a directive for each database that uses the same cell storage. This is done automatically for share-based plans, but not for percentage allocation plans. Each database that is not explicitly mapped in a share-based plan gets the default share of 1. To ensure that any database without an explicit directive can be managed with percentage allocation plans, create an allocation named OTHER. Databases without explicit directives are then managed using the allocation of the OTHER group directive.

Share-based plans use the DEFAULT directive to specify the default values for the different IORM attributes for databases not explicitly named in the plan. The default share is 1, and the limit is set to 100%. Exadata Smart Flash Cache and Exadata Smart Flash Logging are enabled by default, and there are no minimum or maximum flash cache quotas set up by default.
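As a sketch, the DEFAULT directive can override these defaults for all databases not named in the plan; the share and limit values here are illustrative:

CellCLI> ALTER IORMPLAN dbplan=((name=finance, share=4), \
         (name=default, share=1, limit=50))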

Another use case for interdatabase resource management is consolidation, for example, consolidating four different applications on the same Oracle Exadata Storage Servers. If all the applications have similar priority requirements, each can be allocated 25% of the I/O resources.

However, you may want to isolate each application in case another application experiences a sudden increase in its workload. Use the maximum utilization limit to isolate the applications, for example by specifying a maximum utilization limit of 40% for each application. In this scenario, each application can utilize at most 40% of the I/O resources and cannot monopolize the system. Maximum utilization limits are useful in such consolidation scenarios.

Certain service providers have a pay-for-performance model. The providers want to guarantee performance to their customers corresponding to the level of service being purchased. A customer that is not purchasing the gold tier service should not receive gold level performance. This is another case where maximum utilization limits make sense. Flash cache minimum and maximum quotas would also be useful in this model.

Another resource that is managed using interdatabase plans is flash cache space. With multiple databases, and pluggable databases sharing the storage, capacity in flash cache becomes a critical resource that needs to be managed. I/O Resource Management can provide predictable performance by guaranteeing space in flash cache. It also prevents one database or pluggable database (PDB) from taking over the entire flash cache. This is achieved using the flashcachemin and flashcachelimit attributes.

  • flashcachemin — specifies the minimum size in flash cache that is guaranteed for a database even if the blocks are cold

  • flashcachelimit — specifies a "soft" maximum size in flash cache: if the flash cache is not full, a database can exceed the flashcachelimit value.

The flashcachesize attribute partitions the flash cache to reserve space for a database. A flash cache group with flashcachesize specified cannot exceed its quota even if flash cache is not full. However, if the flash cache usage for the database was already over the flashcachesize value prior to setting the resource plan, Oracle Exadata System Software does not proactively evict the data from flash cache. The data is lazily evicted when there is space pressure in flash cache.

Since flashcachemin and flashcachesize are guaranteed reservations, the sum of flashcachemin and flashcachesize across all directives should be less than the size of the flash cache to ensure the groups get their respective quotas.
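A sketch of an interdatabase plan combining these attributes; the database names and sizes are illustrative, and the flashcachemin and flashcachesize values are assumed to sum to less than the flash cache size:

CellCLI> ALTER IORMPLAN dbplan=((name=prod, share=8, flashcachemin=4G, flashcachelimit=10G), \
         (name=dev, share=2, flashcachesize=2G))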

Note the following when managing resource plans:

  • If Oracle Exadata Storage Server is only hosting one database, then an interdatabase plan is not needed.

  • If an interdatabase plan is not specified, then all databases receive an equal allocation.

  • If only one database is mapped to the OTHER directive and all other databases have an explicit directive, then Oracle Exadata Storage Server uses the database resource plan of that database to determine how the allocation of the OTHER database is redistributed among the consumer groups in that database.

  • If multiple databases are mapped to the OTHER directive, then Oracle Exadata Storage Server does not use Oracle Database Resource Manager for these databases. All of the I/O requests are treated the same.

  • For share-based plans, each database gets its own directive even when it is not explicitly named in the plan. Oracle Exadata Storage Server uses the database resource plan of the database to determine how the allocation is to be distributed between the consumer groups in the database.

  • If the container database (CDB) plan specifies memory_min or memory_limit, and the interdatabase plan specifies flashcachesize, then memory_min in the CDB plan is ignored.

6.2.3 About I/O Resource Management Profiles

I/O Resource Management interdatabase plans support profiles to ease the management and configuration of interdatabase plans for hundreds of databases. Profiles introduce a way to allocate I/O resources for a database using the database initialization parameter db_performance_profile. Database administrators can classify databases as GOLD, SILVER, or BRONZE by setting the db_performance_profile parameter. As with DBRM plans, the db_performance_profile information is automatically pushed to all the storage cells in the Oracle Exadata storage grid. The following SQL shows how the profile parameter can be set for a database:

SQL> alter system set db_performance_profile=gold scope=spfile;

Profiles are specified as directives for the interdatabase plan and are configured using the CellCLI utility. A profile directive consists of an identifier (name) and a set of attributes. To differentiate between a database directive and a profile directive, a qualifier attribute called type is used. The type attribute can be set to either database or profile. The following is an example of the type attribute syntax:

CellCLI> ALTER IORMPLAN DBPLAN=((name=gold, share=10, limit=100, type=profile),  \
(name=silver, share=5, limit=60, type=profile), (name=bronze, share=1, limit=20, \
 type=profile))

The preceding example contains three profile directives: GOLD, SILVER, and BRONZE. All databases with db_performance_profile set to GOLD automatically get 10 shares and a 100% limit on the cells. The same behavior applies to databases that map to the SILVER and BRONZE profiles. When adding a new database, you can set db_performance_profile and restart the database. The database automatically inherits the profile attributes on Oracle Exadata Storage Server without any modification of the interdatabase plan. You can also create interdatabase plans with a mix of profile directives and database directives.

Note the following when managing interdatabase profile plans:

  • The db_performance_profile parameter is not a dynamic parameter, so profile updates require a database restart.

  • If the type attribute is not specified, then the directive defaults to the database directive.

  • An interdatabase plan can specify up to 8 profile directives and up to 1024 database directives.

  • Level, allocation, and role cannot be specified with a profile directive.

  • The words OTHER and DEFAULT are reserved; a profile name cannot be OTHER or DEFAULT.

  • The type attribute cannot be specified with category plans.

  • Profiles cannot be specified in conjunction with category plans.

6.2.4 About Category Resource Management

Database Resource Manager enables you to specify a category for every consumer group. While consumer groups represent collections of users within a database, categories represent collections of consumer groups across all databases. You can manage I/O resources based on categories by creating a category plan. For example, you can give precedence to consumer groups in the interactive category over consumer groups in the batch category for all databases sharing Oracle Exadata Storage Server. The predefined categories provided in Oracle Database are described in the following table, along with sample percentages.

You can add any number of categories, or modify the predefined categories. You should map consumer groups to the appropriate category for all databases that use the same cell storage. Any consumer group without an explicitly specified category defaults to the OTHER category.

When a category plan is enabled, the category plan is first used to allocate resources among the categories. For each category that is selected, the interdatabase plan is used to select those databases that have consumer groups with the selected category. Finally, the database resource plan of the selected database is used to select one of its consumer groups.

Category plans are configured and enabled using the CellCLI utility on the cell. Only one category plan can be enabled at a time. A sample category plan is shown in the following table.

Table 6-1 Sample Category Plan

Category Name    Category Description                              Level 1 (%)  Level 2 (%)  Level 3 (%)

ADMINISTRATIVE   For extremely high-priority work, such as         80
                 urgent administrative tasks. This category
                 is required.

INTERACTIVE      For high-priority, performance-sensitive                       70
                 work, such as OLTP transactions.

BATCH            For low-priority work, such as noncritical                                  70
                 reports and backup.

MAINTENANCE      For low-priority work, such as automated                                    10
                 tasks.

OTHER            For all consumer groups that do not have a                                  20
                 category label or reference a category that
                 is not in the current category plan. This
                 category is required.

The sample plan shown in the previous table prioritizes administrative activity across all databases. It also prioritizes interactive activity over batch, maintenance, and other activities. In the sample plan, the following are the resource allocations:

  • Level 1 is given 80 percent of the I/O resources. The ADMINISTRATIVE category is the only category in level 1.

  • Level 2 is given all resources that were unallocated or unused by level 1. In this example, level 2 is given 20 percent of the I/O resources and any resources unused by the ADMINISTRATIVE category. The INTERACTIVE category gets 70 percent of the level 2 amount.

  • Level 3 categories are given the remaining resources, including those not used by the INTERACTIVE category. Of the remaining resources, the BATCH category gets 70 percent, the OTHER category gets 20 percent, and the MAINTENANCE category gets 10 percent.
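The sample plan in Table 6-1 could be configured with CellCLI along these lines (a sketch):

CellCLI> ALTER IORMPLAN catplan=((name=administrative, level=1, allocation=80), \
         (name=interactive, level=2, allocation=70),                            \
         (name=batch, level=3, allocation=70),                                  \
         (name=maintenance, level=3, allocation=10),                            \
         (name=other, level=3, allocation=20))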

All administrative consumer groups in all databases should be mapped to the ADMINISTRATIVE category. All high-priority user activity, such as consumer groups for important OLTP transactions and time-critical reports, should be mapped to the INTERACTIVE category. All low-priority user activity, such as reports, maintenance, and low-priority OLTP transactions, should be mapped to the BATCH, MAINTENANCE, and OTHER categories.

6.3 About Consumer Groups and Resource Plans

Oracle Exadata Database Machine provides out-of-the-box consumer groups and resource plans specifically designed for data warehouses that use Oracle Exadata System Software.

These resource plans can be modified to suit the needs of your environment.

The following consumer groups are for data warehouses:

  • ETL_GROUP: Consumer group for ETL (extract, transform, and load) jobs.

  • DSS_GROUP: Consumer group for non-critical decision support system (DSS) queries.

  • DSS_CRITICAL_GROUP: Consumer group for critical DSS queries.

The following resource plans are for data warehouses:

6.3.1 DSS_PLAN Resource Plan

The DSS_PLAN resource plan is designed for data warehouses that prioritize critical DSS queries over non-critical DSS queries and ETL jobs. In this plan, SYS_GROUP has the highest priority, followed by DSS_CRITICAL_GROUP, DSS_GROUP, and then a combination of ETL_GROUP and BATCH_GROUP. No consumer group is allowed to consume all the bandwidth.

Table 6-2 DSS_PLAN Resource Plan for Data Warehouses

Consumer Group          Level 1 (%)  Level 2 (%)  Level 3 (%)  Level 4 (%)

SYS_GROUP               75
DSS_CRITICAL_GROUP                   75
DSS_GROUP                                         75
ETL_GROUP                                                      45
BATCH_GROUP                                                    45
ORA$DIAGNOSTICS                      5
ORA$AUTOTASK_SUB_PLAN                5
OTHER_GROUPS                                                   10

As shown in the previous table, the DSS_CRITICAL_GROUP group is only allocated 75 percent at level 2. Any unused allocation goes to the next level, not to other consumer groups at the same level. That means that if the DSS_CRITICAL_GROUP group does not completely consume its allocation, then the allocation is not given to the ORA$DIAGNOSTICS or ORA$AUTOTASK_SUB_PLAN groups at the same level. According to the plan definition, the allocation is given to the DSS_GROUP group at level 3.

6.3.2 ETL_CRITICAL_PLAN Resource Plan

The ETL_CRITICAL_PLAN prioritizes ETL over DSS queries. In this plan, the SYS_GROUP group is given 75 percent of the bandwidth. The remaining bandwidth is divided between the other consumer groups in the ratios specified by the level 2 allocations. The ETL_GROUP and DSS_CRITICAL_GROUP groups have a higher allocation (35 percent) than the DSS_GROUP and BATCH_GROUP groups (10 percent).

Table 6-3 ETL_CRITICAL_PLAN Resource Plan for Data Warehouses

Consumer Group          Level 1 (%)  Level 2 (%)

SYS_GROUP               75
DSS_CRITICAL_GROUP                   35
DSS_GROUP                            10
ETL_GROUP                            35
BATCH_GROUP                          10
ORA$DIAGNOSTICS                      3
ORA$AUTOTASK_SUB_PLAN                3
OTHER_GROUPS                         3

6.4 About CDB Plans and Pluggable Databases

A multitenant container database (CDB) supports many user-defined pluggable databases (PDBs). A CDB can have multiple workloads within multiple PDBs competing for resources. In a CDB, resources are managed at the following levels:

  • CDB level: Resource Manager manages the workloads for multiple pluggable databases (PDBs) that are contending for system and CDB resources. The administrator can specify how resources are allocated to PDBs, and can limit the resource utilization of specific PDBs.

  • PDB level: Resource Manager manages the workloads within each PDB.

The following CDB plan contains three PDBs named SALES, SERVICES, and HR. The PDBs have different shares and maximum utilization limits in the CDB plan.

PDB Name  Shares  Utilization Limit  Memory_min (%)  Memory_limit (%)

SALES     3       Unlimited          20
SERVICES  3       Unlimited          20
HR        1       70                                 50

6.5 Administering IORM

This section describes the tasks for I/O Resource Management (IORM). To perform the tasks, use the DBMS_RESOURCE_MANAGER package to define database resource plans on the database hosts, and the CellCLI utility to specify the IORM and category plans for each cell.


6.5.1 Enabling IORM to Prioritize Latency

IORM is enabled by default. In the default state, IORM manages flash cache and flash log. It also guards against extremely high latencies for log writes, buffer cache reads, and other critical I/Os. Running very heavy I/O loads on hard disks results in high disk I/O latencies. Without IORM, the only way to lower the disk I/O latency is to reduce the overall I/O throughput by reducing smart scans, and other disk intensive I/O operations. To manage these I/O intensive operations, set the objective option to specify the optimization mode for IORM. The default objective option is basic.

When the objective option is set to basic, the database resource plan maximum utilization limits are not enforced. Database resource plan allocations are only used to guard against extremely high latencies, and there is no consideration for plan conformance.

Smart scan and low priority I/Os to flash devices are queued by IORM in this mode. The critical I/Os usually do not keep the disks busy all the time. During these idle intervals, IORM uses the interdatabase plan allocations to issue the queued smart scan, and low priority I/Os to flash devices. This is done independently of how I/Os to hard disks are processed.

IORM also prioritizes different scan workloads among themselves when no critical OLTP I/Os are being issued. Flash IORM is enabled on hybrid systems with flash cache, and on Exadata Extreme Flash systems. The flash IORM device queue depth varies based on the hardware and flash device type.

For stricter plan conformance, and enforcement of maximum utilization limits, the objective option must be set to something other than basic. The supported IORM objectives are auto, low_latency, balanced, and high_throughput. The recommended objective option is auto which allows IORM to continuously monitor the workloads, and select the best mode based on the active workloads currently on the cells. If the objective option is set to something other than basic, then IORM prioritizes the I/O resources as follows:

  1. IORM manages disk I/O if any database has a database resource plan.

  2. IORM manages disk I/O if there is a configured interdatabase or category plan.

To deactivate disk I/O prioritization and throttling, set the IORM objective to basic.

Flash IORM behavior cannot be controlled by changing the IORM objective except on Exadata Extreme Flash storage. Flash IORM is designed to protect the OLTP latency and does not take the objective into account while protecting the critical I/Os.

On Exadata Extreme Flash storage, the IORM objective provides some control over the behavior of flash IORM. The default objective is basic. The objective values of auto and balanced have the same behavior. If the degradation in scan throughput is considered too high, the objective can be changed to high_throughput, which increases scan throughput at the cost of critical I/O latency. The objective can also be changed to low_latency, which provides very good latency for critical I/Os but significantly degrades scan throughput when both workloads run concurrently.
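For example, the objective can be set and then verified on a cell with the CellCLI utility (shown here as a minimal sketch; the cell prompt is illustrative):

CellCLI> ALTER IORMPLAN objective=auto
CellCLI> LIST IORMPLAN ATTRIBUTES objective

Setting the objective back to basic returns IORM to its default, latency-guarding behavior.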

6.5.2 Administering Database Resource Management

To set up database resource management, you must use Database Resource Manager to configure the consumer groups, assign sessions to consumer groups, create a database resource plan, and enable it.


6.5.2.1 Setting Up Consumer Groups and Categories

Consumer groups and categories are set up with the procedures in the PL/SQL DBMS_RESOURCE_MANAGER package.

You can create new consumer groups and categories, or use one of the predefined consumer groups or categories. You do not need to set up categories if you are not planning on using a category plan.

Note:

Consumer groups and categories are created in the database and cannot be created explicitly on a cell.

Before running the DBMS_RESOURCE_MANAGER procedures for administering consumer groups and categories, you must first create a pending area. You must have the system privilege ADMINISTER_RESOURCE_MANAGER to run the procedures in the DBMS_RESOURCE_MANAGER PL/SQL package.

The following PL/SQL commands are used with consumer groups and categories:

  • To manage categories: CREATE_CATEGORY(), DELETE_CATEGORY(), and UPDATE_CATEGORY()

  • To manage consumer groups: CREATE_CONSUMER_GROUP() and UPDATE_CONSUMER_GROUP()

  • To assign consumer groups to categories: CREATE_CONSUMER_GROUP() or UPDATE_CONSUMER_GROUP()

In addition to the consumer groups that you set up, the database contains predefined consumer groups. The DBA_RSRC_CONSUMER_GROUPS view displays information about consumer groups, and the DBA_RSRC_CATEGORIES view displays information about categories in the database.

Example 6-1 Setting Up Consumer Groups and Categories with PL/SQL in the Database

This example shows how to set up consumer groups and categories in a database. The MAINTENANCE category is predefined, and is not created in this example.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CATEGORY(
     CATEGORY => 'dss',
     COMMENT => 'DSS consumer groups');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'critical_dss',
     CATEGORY => 'dss',
     COMMENT => 'performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'normal_dss',
     CATEGORY => 'dss',
     COMMENT => 'non performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'etl',
     CATEGORY => 'maintenance',
     COMMENT => 'data import operations');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Example 6-2 Consumer Groups and Categories in an Oracle Database

This example shows a query on the DBA_RSRC_CONSUMER_GROUPS view.

SQL> SELECT consumer_group, category FROM DBA_RSRC_CONSUMER_GROUPS where 
     consumer_group not like 'ORA%' ORDER BY category;

CONSUMER_GROUP                 CATEGORY
------------------------------ ------------------------------
SYS_GROUP                      ADMINISTRATIVE
ETL_GROUP                      BATCH
BATCH_GROUP                    BATCH
DSS_GROUP                      BATCH
CRITICAL_DSS                   DSS
NORMAL_DSS                     DSS
DSS_CRITICAL_GROUP             INTERACTIVE
INTERACTIVE_GROUP              INTERACTIVE
ETL                            MAINTENANCE
LOW_GROUP                      OTHER
OTHER_GROUPS                   OTHER
AUTO_TASK_CONSUMER_GROUP       OTHER
DEFAULT_CONSUMER_GROUP         OTHER
 
13 rows selected

6.5.2.2 Assigning Sessions to Consumer Groups

You can assign a session to a resource consumer group manually, or automatically using consumer group mapping rules. For both approaches, you must give a user explicit permission to switch to a consumer group. To control which consumer groups a user can switch to, use the PL/SQL procedure DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP().

The consumer group mapping rules are based on session attributes such as the user name, the name of the service that the session used to connect to the database, and the name of the client program. To create a consumer group mapping rule, use the SET_CONSUMER_GROUP_MAPPING procedure, as shown in Example 6-3. Before running the SET_CONSUMER_GROUP_MAPPING procedure, you must first create a pending area.

You can also manually switch a session to a particular consumer group, using the PL/SQL DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER() or SWITCH_CONSUMER_GROUP_FOR_SESS() procedures.
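As a minimal sketch of a manual switch, the following block moves all sessions of a hypothetical user SCOTT into the CRITICAL_DSS consumer group. The calling session requires the ADMINISTER_RESOURCE_MANAGER privilege, and SCOTT must have been granted permission to switch into CRITICAL_DSS:

BEGIN
  -- SCOTT is a hypothetical user name used for illustration
  DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER('SCOTT', 'CRITICAL_DSS');
END;
/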

See Also:

Oracle Database Administrator's Guide for additional information about consumer group mapping rules and switching consumer groups

Example 6-3 Creating Consumer Group Mapping Rules, Based on Service and User Name

BEGIN
DBMS_SERVICE.CREATE_SERVICE('SALES', 'SALES');
DBMS_SERVICE.CREATE_SERVICE('AD_HOC', 'AD_HOC');
 
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.ORACLE_USER, 'SYS', 'CRITICAL_DSS');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'SALES', 'CRITICAL_DSS');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'AD_HOC', 'NORMAL_DSS');
 
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
   GRANTEE_NAME   => 'PUBLIC',
   CONSUMER_GROUP => 'CRITICAL_DSS',
   GRANT_OPTION   =>  FALSE);
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
   GRANTEE_NAME   => 'PUBLIC',
   CONSUMER_GROUP => 'NORMAL_DSS',
   GRANT_OPTION   =>  FALSE);
END;
/

6.5.2.3 Creating a CDB Plan

CDB plans are created using the PL/SQL procedures DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN() and CREATE_CDB_PLAN_DIRECTIVE(). The CDB plan can only be configured from the CDB root (CDB$ROOT). The CDB plan manages CPU resources on the database instances, and flash cache space and I/O bandwidth on the Exadata cells.

Example 6-4 Using a CDB Plan to Distribute Resources Between PDBs

This example shows how to distribute resources between three PDBs named SALES, SERVICES, and HR. SALES and SERVICES have higher priority and get three shares each, compared to one share for HR. The HR PDB is also limited to 70% maximum utilization.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    shares                => 3, 
    memory_min            => 20,
    utilization_limit     => 100);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    shares                => 3, 
    memory_min            => 20,
    memory_limit          => 75);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    shares                => 1, 
    memory_limit          => 50,
    utilization_limit     => 70);

DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

6.5.2.4 Creating a Database Plan

Database resource plans, also known as intradatabase plans, are created using the PL/SQL procedures DBMS_RESOURCE_MANAGER.CREATE_PLAN() and CREATE_PLAN_DIRECTIVE(). You must have the system privilege ADMINISTER_RESOURCE_MANAGER to run the procedures in the DBMS_RESOURCE_MANAGER PL/SQL package. This resource plan manages both CPU resources on database instances and I/O resources on the cells.

One scenario is for multiple applications sharing a database where the I/O resources should be divided across the applications using a particular ratio. For example, there are three applications named SALES, FINANCE, and MARKETING. You would like the I/O resources to be allocated as 60 percent, 25 percent, and 10 percent, respectively, with the remaining 5 percent allocated to any sessions that do not map into these consumer groups. In this scenario, you would create a consumer group for each application, and then create a single-level resource plan and specify the percentage of I/O resources for each consumer group. This allocation is actually the minimum I/O resources that the consumer group can use. If a consumer group does not use its allocation, then it is redistributed to the other consumer groups in the ratio specified by the plan. You can specify the allocations using the MGMT_P1 parameter, as shown in Example 6-5.

Another scenario is to prioritize one workload over another. For example, suppose that you load data into your data warehouse while servicing queries, and you want to always prioritize the queries over the data load. For this scenario, you would create two consumer groups for queries and one consumer group for data load. You would like to share the I/O resources between the two query consumer groups using a 75/25 ratio. In addition, you would like to issue I/Os for data load only if these consumer groups do not use all of their allocation. You can use resource plan levels to specify the allocation priorities as shown in Example 6-6.

As shown by the preceding examples, you must always begin resource plan creations or updates with the PL/SQL procedure CREATE_PENDING_AREA() and complete them with the PL/SQL procedure SUBMIT_PENDING_AREA(). You must also include a directive for OTHER_GROUPS, which includes all sessions that are not explicitly mapped to a consumer group.

Example 6-5 Sharing Resources Across Applications

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'Resource plan for managing all
 applications between 9 am and 5 pm');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('SALES', 'Sales App');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('FINANCE', 'Finance App');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MARKETING', 'Marketing App');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'SALES', 'Allocation
for SALES', MGMT_P1 => 60);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'FINANCE', 'Allocation
for FINANCE', MGMT_P1 => 25);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'MARKETING',
'Allocation for MARKETING', MGMT_P1 => 10);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'OTHER_GROUPS',
'Allocation for default group', MGMT_P1 => 5);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Example 6-6 Sharing Resources Across Workloads

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'Resource plan for prioritizing
queries between 9 am and 5 pm');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORT_QUERIES', 'Report Queries');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('AD-HOC_QUERIES', 'Ad-Hoc Queries');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('DATA_LOAD', 'Data Load');
 
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'REPORT_QUERIES',
'Allocation for REPORT_QUERIES', MGMT_P1 => 75);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'AD-HOC_QUERIES',
'Allocation for AD-HOC_QUERIES', MGMT_P1 => 25);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'DATA_LOAD',
'Allocation for DATA_LOAD', MGMT_P2 => 100);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'OTHER_GROUPS',
'Allocation for default group', MGMT_P3 => 100);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

6.5.2.5 Enabling a Database Resource Plan

You can manually enable database resource plans by setting the RESOURCE_MANAGER_PLAN parameter. You can automatically enable resource plans by defining a scheduler window with a resource plan. When the scheduler window opens, the resource plan is enabled. When the scheduler window closes, the resource plan is disabled.

When a resource plan is enabled, the database alerts all cells about this event and provides the resource plan. The database also alerts all cells when a resource plan is disabled. Because only one resource plan can be active for any database, you are required to enable the same resource plan on all instances of a database. If no database resource plan is enabled for a database, then all I/O requests are treated equally.
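For example, assuming the DAYTIME_PLAN resource plan from Example 6-5 has been created, the following command enables it on all instances of the database; setting the parameter to an empty string disables the plan:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN' SID='*';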

6.5.2.6 Managing Fast File Creation

Oracle Exadata System Software features fast file creation, allowing accelerated initialization of data files.

This feature automatically runs whenever you create a new tablespace, add a data file to an existing tablespace, or autoextend an existing tablespace. Oracle Exadata System Software can initialize files very quickly because it issues many concurrent I/O requests. However, these concurrent I/O requests create a heavy load that can interfere with performance-critical queries.

Using I/O Resource Management (IORM), you can control the priority of fast file creation when creating a new tablespace or adding a data file to an existing tablespace. These operations run under the FASTFILECRE function. By default, the FASTFILECRE function is mapped to a hidden consumer group that has lower priority than all consumer group and background I/Os. If you choose to increase the priority, and thereby the performance, of file creation, add a mapping rule based on the mapping attribute DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION and mapping value FASTFILECRE.

Because autoextending an existing tablespace is a brief and often time-critical operation, you cannot modify its priority using IORM.

Example 6-7 Managing Fast File Creation

This example shows how to run fast file creations in the MAINTENANCE_GROUP consumer group.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MAINTENANCE_GROUP',
  'Maintenance activity');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'FASTFILECRE', 'MAINTENANCE_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

6.5.2.7 Managing Data Import

Data import, or ETL, is an important part of maintaining a data warehouse. In some cases, ETL is extremely critical to performance because reports or queries cannot be run until the data has been loaded. In these cases, ETL should be prioritized above all other queries. In other cases, ETL is a low-priority background activity that only needs to be prioritized in the rare event that it does not complete by a certain time. You can control the priority of ETL as well as the amount of I/O resources that ETL consumes using IORM.

To manage ETL, map the ETL sessions to the ETL_GROUP consumer group and include ETL_GROUP in your resource plans. The mapping rules for ETL are typically based on user name or client program name. Oracle Data Pump runs under the DATALOAD function. By default, the DATALOAD function is mapped to the ETL_GROUP consumer group.

Example 6-8 Mapping a Program to the ETL_GROUP Consumer Group

This example shows how to map a program to the ETL_GROUP consumer group.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING  
  (DBMS_RESOURCE_MANAGER.CLIENT_PROGRAM, 'SQLLDR', 'ETL_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
6.5.2.7.1 Importing Non-compressed Data as Compressed Data

Non-compressed data can be imported as compressed data when using the TRANSFORM=SEGMENT_ATTRIBUTES:n option, and the target tablespace has been configured to create new tables as Exadata Hybrid Columnar Compression tables by default.


6.5.2.8 Managing Oracle Recovery Manager Backups and Copies

Backups are an I/O intensive operation. You can control the rate of Oracle Recovery Manager (RMAN) I/Os by configuring the number of channels. In addition, you can use IORM to control the resource consumption and priority of RMAN I/Os to a much greater degree. For example, you can map RMAN to a low-priority consumer group. If Oracle Exadata Storage Server is busy, then the RMAN operations run very slowly and do not interfere with other database operations. However, whenever Oracle Exadata Storage Server is not fully utilized, IORM schedules the RMAN I/Os, allowing them to consume the unutilized bandwidth.

RMAN backups run under the BACKUP function. RMAN copies run under the COPY function. By default, both the BACKUP and COPY functions are mapped to the BATCH_GROUP consumer group. You can remap these functions to any other consumer group, as shown in the following example.

Example 6-9 Using Consumer Groups to Manage Resources

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'BACKUP', 'BATCH_GROUP');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'COPY', 'MAINTENANCE_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

6.5.3 Administering Interdatabase Resource Management

You can configure an interdatabase or category plan for Oracle Exadata System Software using the CellCLI ALTER IORMPLAN command.

The catPlan parameter specifies the category plan. The dbPlan parameter specifies the interdatabase plan. To manage the I/O resources using a category plan, interdatabase plan or database resource plan, set the objective.

The default for the objective option is basic, and I/O Resource Management (IORM) manages I/O resources based on the configured resource plans. The objective option can be set to basic, auto, low_latency, balanced, or high_throughput.

Example 6-10 shows how to configure an interdatabase plan using allocations for a cell.

The plan name is automatically set to cellname_IORMPLAN.

For interdatabase plan directives to be valid, each directive must include a level and allocation, a share, a maximum utilization limit, or a flash cache quota.

A directive without an allocation, share, or maximum utilization limit is invalid. A directive that specifies both an allocation and a share is also invalid.

When configuring an interdatabase plan, catPlan and dbPlan are optional parameters. If catPlan is not specified, then IORM between categories is not enabled. Similarly, if dbPlan is not specified, then IORM between databases is not enabled. When specifying catPlan or dbPlan, a directive with name=other is required. For category plans, the other directive provides an allocation for all active consumer groups whose category is not specified in the category plan. For interdatabase plans, the other directive provides an allocation for all databases that are using Oracle Exadata System Software, but are not explicitly specified in the interdatabase plan. If the other directive is not specified, then the CellCLI utility returns an error.

Example 6-11 shows how to configure an interdatabase plan using shares for a cell.

To reset an interdatabase plan attribute to the default value, set the attribute to an empty string, as shown in Example 6-12. You can reset the entire plan, or separately reset the catPlan or dbPlan.

You can put the ALTER IORMPLAN commands in a text file named alter_iorm, then run the commands using the text file with the START alter_iorm command.

The flashcachemin and flashcachelimit attributes can be used to specify the guaranteed minimum and maximum quotas for flash cache groups. These attributes allow you to control how the flash cache space is distributed across the databases. They can only be specified in an interdatabase plan, and are configured using the CellCLI utility.

The flashcachemin attribute specifies the minimum space in flash cache that is guaranteed for a database.

The flashcachelimit attribute specifies the maximum amount of space in the flash cache that a database can use. The flashcachelimit value is a "soft" maximum, which means that the flash cache group can exceed its quota if the flash cache is not full. This model does not work well for Oracle Cloud and pay-for-performance environments. A different attribute, flashcachesize, alleviates this limitation: flashcachesize partitions the flash cache to guarantee space for a group. Because flashcachesize is designed for Oracle Cloud environments, it is not exceeded even if the flash cache has free space.

If a container database (CDB) plan specifies the memory_min and memory_limit attributes for a pluggable database (PDB), then these values are computed as a percentage of the database's flashcachemin and flashcachelimit or flashcachesize values, or of the total flash cache size when none of these is specified.

The role attribute is used for Oracle Data Guard. It allows a different allocation to be specified based on whether the database is in the primary or standby role. By default, an interdatabase plan allocation applies when the database is in either role. If you want the allocation to apply only when the database is in the primary role, then set role=primary. Similarly, if you want the allocation to apply only when the database is in the standby role, then set role=standby.

Example 6-10 Configuring an Interdatabase Plan Using Allocations

CellCLI> ALTER IORMPLAN                                                -
 catPlan=((name=administrative, level=1, allocation=80),               -
          (name=interactive,    level=2, allocation=90),               -
          (name=batch,          level=3, allocation=80),               -
          (name=maintenance,    level=4, allocation=50),               -
          (name=other,          level=4, allocation=50)                -
          ),                                                           -
 dbplan=((name=sales_prod, share=8, role=primary),                     -
         (name=sales_prod, share=1, limit=50, role=standby),           -
         (name=sales_test, share=1, limit=25),                         -
         (name=default, share=2))

Example 6-11 Configuring an Interdatabase Plan Using Shares

ALTER IORMPLAN                                                 -
 dbplan=((name=dev01, share=1, limit=50, flashLog=off),        -
         (name=dev02, share=1, limit=25, flashCache=off),      -
         (name=default, share=4))

Example 6-12 Resetting Default Values in an Interdatabase Plan

CellCLI> ALTER IORMPLAN dbPlan="", catPlan=""
CellCLI> ALTER IORMPLAN dbPlan=""
CellCLI> ALTER IORMPLAN catPlan=""

Example 6-13 Configuring an Interdatabase Plan for Flash Cache Groups

ALTER IORMPLAN                                                           -
 dbplan=((name=sales, share=8, flashCacheSize=10G),                      -
         (name=finance, share=8, flashCacheLimit=10G, flashCacheMin=2G), -
         (name=dev, share=2, flashCacheLimit=4G, flashCacheMin=1G),      -
         (name=test, share=1, limit=10, flashCacheSize=1G))

Example 6-14 Configuring an Interdatabase Plan for Oracle Data Guard

ALTER IORMPLAN                                          -
dbPlan=((name=prod, share=8, role=primary),             -
        (name=prod, share=1, limit=25, role=standby),   -
        (name=default, share=2))

6.5.4 Listing an I/O Resource Management Plan

You can view the current interdatabase plan for a cell using the CellCLI LIST IORMPLAN command on the cell. The following example shows a detailed list of the interdatabase plan attributes.

Example 6-15 Displaying Interdatabase Plan Details

CellCLI> LIST IORMPLAN DETAIL
   name:                   cell01_IORMPLAN
   status:                 active
   catPlan:                name=administrative,level=1,allocation=80
                           name=interactive,level=2,allocation=90
                           name=batch,level=3,allocation=80
                           name=maintenance,level=4,allocation=50
                           name=other,level=4,allocation=50
   dbplan:                 name=sales_prod, share=8, role=primary
                           name=sales_prod, share=1, limit=50, role=standby
                           name=sales_test, share=1, limit=25
                           name=default, share=2
   objective:              balanced


6.5.5 Verifying the Configuration of I/O Resource Management

The following checklist can be used to verify that IORM is configured properly:

  • The following criteria must be met when using IORM to manage I/O resources within a database:

    • A resource plan has been enabled.

    • The same resource plan has been enabled on all database instances.

      Note:

      • If Database Resource Manager is enabled using a scheduler window, then the same plan is always enabled on all database instances.

      • If Database Resource Manager is enabled using the RESOURCE_MANAGER_PLAN parameter, then use sid='*' to set the parameter for all database instances.

    • The resource plan includes MGMT_P[1-8] directives for each consumer group in the resource plan.

    The following query can be used to verify the preceding criteria have been met:

    SELECT DECODE(count(*), 0, 'Intra-Instance IORM Plan Enabled', 
    'No Intra-Instance IORM Plan Enabled') status FROM gv$instance WHERE 
    inst_id not in (SELECT inst_id FROM gv$rsrc_plan WHERE cpu_managed = 'ON');
    
  • The following command is used to verify that the interdatabase plan has been configured properly when IORM is used to manage I/O resources from multiple databases:

    CellCLI> LIST IORMPLAN DETAIL
    

    If no interdatabase plan has been configured, then use the CellCLI ALTER IORMPLAN command to configure a plan. Each active database should have its own directive in the dbPlan parameter.

  • The following query is used to verify that sessions are mapped to the correct consumer group. The command must be run while a workload is running.

    SELECT r.sid,
           c.consumer_group current_consumer_group
      FROM v$rsrc_session_info r, dba_rsrc_consumer_groups c 
      WHERE r.current_consumer_group_id = c.consumer_group_id
    union
    SELECT sid, 'OTHER_GROUPS' from v$rsrc_session_info 
      WHERE current_consumer_group_id = 0;
    

    A session may not be in the expected consumer group due to the following configuration errors:

    • Missing privilege: In order for a session to switch into a consumer group, its user or role must have permission to switch into that consumer group. The following query shows the permissions for all consumer groups.

      SELECT grantee, granted_group 
      FROM DBA_RSRC_CONSUMER_GROUP_PRIVS
      ORDER BY granted_group;
      

      The following command is an example of the SQL command that grants permission for any session to switch into a consumer group.

      EXEC dbms_resource_manager_privs.grant_switch_consumer_group -
        ('public', 'BATCH_GROUP', FALSE);
      

      In the preceding command, the consumer group was BATCH_GROUP.

    • Inactive consumer group: If a session maps to or is manually switched to a consumer group that is not part of the current resource plan, then the session is switched into the default consumer group, OTHER_GROUPS.

      If sessions are being assigned to consumer groups using mapping rules, then the following query can be used to determine the consumer group that the mapping rules selected, the mapping attribute that was used, and the consumer group that the session started in originally.

      SELECT r.sid,
             r.mapped_consumer_group,
             r.mapping_attribute, 
             c.consumer_group original_consumer_group
      FROM v$rsrc_session_info r, dba_rsrc_consumer_groups c 
      WHERE r.orig_consumer_group_id = c.consumer_group_id;
      

      If the mapped consumer group differs from the original consumer group, then the mapped consumer group was not part of the resource plan.

  • While a workload is running, verify that I/O loads are being managed in the correct consumer groups. The following CellCLI command lists the number of small and large I/O requests that were issued for each consumer group across all databases:

    CellCLI> LIST METRICCURRENT CG_IO_RQ_LG, CG_IO_RQ_SM   ATTRIBUTES name, -
             metricObjectName, metricValue, collectionTime;
    

    Each consumer group that has an active I/O workload should generate small or large I/O requests according to these metrics.

  • While the workload is running, query the actual I/O utilization for each category, database and consumer group. The following CellCLI command lists the small and large I/O utilization for each database running on Oracle Exadata Storage Server:

    CellCLI> LIST METRICCURRENT DB_IO_UTIL_LG, DB_IO_UTIL_SM ATTRIBUTES name, -
             metricObjectName, metricValue, collectionTime;
    

    The output shows the percentage of disk resources utilized by small and large requests from the databases.

6.5.6 Examples of Using the Plans

Consider the example of four databases sharing the same storage. The four databases are:

  • An OLTP production database, named PROD

  • A test database, named PROD_TEST

  • A development database, named PROD_DEV

  • A data warehouse database, named DW

An OLTP production database typically issues small I/O requests, and low latency for these requests is the critical requirement. A data warehouse issues large numbers of large I/O requests and is more sensitive to the I/O throughput than the latency of each individual I/O request. Without any I/O resource management, the large number of I/O requests issued by the DW database could overwhelm the storage subsystem and increase the latency of the I/O requests issued by the OLTP database. Additionally, the I/O requests issued by the test and development databases, PROD_TEST and PROD_DEV, could adversely affect the performance of the OLTP and the DW databases.

The resources can be specified as shown in the following examples:

6.5.6.1 Using Allocations to Specify Resources

You can prioritize the I/O requests from these four databases by specifying an interdatabase plan, as follows:

  • The OLTP database PROD gets 80 percent of the I/O resources with the highest priority level.

  • The DW database gets 80 percent of the I/O resources remaining after the PROD allocation, at the second priority level. This includes the unallocated 20 percent plus any allocation that PROD does not use.

  • The PROD_TEST, PROD_DEV, and OTHER databases share any I/O resources left over at the third priority level, getting 50 percent, 40 percent, and 10 percent, respectively.

This interdatabase plan can be specified at the cell using the CellCLI utility as shown in Example 6-16.

An example of an interdatabase plan is shown in the following table.

Table 6-4 Interdatabase Plan Using Allocations

Database Name    Level 1 (%)    Level 2 (%)    Level 3 (%)

PROD             80             -              -
DW               -              80             -
PROD_TEST        -              -              50
PROD_DEV         -              -              40
OTHER            -              -              10

If the PROD_TEST and PROD_DEV databases generate an inordinately large I/O load, then the performance of the PROD and DW databases is not affected. Similarly, if the DW database issues a large amount of I/O, then the performance of the PROD database is not adversely affected.

Example 6-16 ALTER IORMPLAN Syntax for Allocations

CellCLI> ALTER IORMPLAN                                       -
         dbPlan=(                                             -
                 (name=prod, level=1,allocation=80),          -
                 (name=dw, level=2, allocation=80),           -
                 (name=prod_test,  level=3, allocation=50),   -
                 (name=prod_dev, level=3, allocation=40),     -
                 (name=other, level=3, allocation=10))
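The cascading arithmetic behind the allocation levels can be sketched as follows. This is an illustrative computation only, not part of CellCLI or the IORM implementation: each level receives whatever the previous levels left unused, and databases at that level split the remainder by their percentages.

```python
plan = [
    {"prod": 80},                                    # level 1
    {"dw": 80},                                      # level 2
    {"prod_test": 50, "prod_dev": 40, "other": 10},  # level 3
]

def effective_allocations(plan):
    """Return each database's guaranteed share of total I/O (percent)
    when all databases are actively issuing I/O."""
    remaining = 100.0
    result = {}
    for level in plan:
        allocated = 0.0
        for db, pct in level.items():
            share = remaining * pct / 100.0
            result[db] = share
            allocated += share
        remaining -= allocated
    return result

print(effective_allocations(plan))
# prod: 80.0, dw: 16.0 (80% of the 20% left by level 1),
# prod_test: 2.0, prod_dev: 1.6, other: 0.4
```

Note that these are worst-case guarantees: because unused allocation cascades to lower levels, an idle PROD database lets DW consume far more than 16 percent.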

6.5.6.2 Using Shares to Specify Resources

Because the OLTP database PROD is most critical, you can specify 16 shares for the PROD database. The DW database gets 4 shares, PROD_TEST gets 2 shares, and PROD_DEV gets 1 share. This plan ensures that the PROD database is 4 times more likely than the DW database to issue an I/O. If PROD does not have any I/Os to issue, then the other databases are picked based on their shares.

Example 6-17 ALTER IORMPLAN Syntax for Shares

CellCLI> ALTER IORMPLAN                                   -
         dbPlan=(                                         -
                (name=prod, share=16),                    -
                (name=dw, share=4),                       -
                (name=prod_test, share=2),                -
                (name=prod_dev, share=1))
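The share-based behavior described above can be sketched with a simple probability computation. This is an illustration of the stated selection rule, not IORM code: when several databases have I/O queued, each is picked with probability proportional to its shares, and idle databases drop out of the calculation.

```python
shares = {"prod": 16, "dw": 4, "prod_test": 2, "prod_dev": 1}

def pick_probabilities(shares, active):
    """Probability that each active database issues the next I/O."""
    total = sum(shares[db] for db in active)
    return {db: shares[db] / total for db in active}

# All four databases busy: prod gets 16/23, four times dw's 4/23.
all_active = pick_probabilities(shares, shares.keys())

# prod idle: the remaining databases are picked by their shares alone.
prod_idle = pick_probabilities(shares, ["dw", "prod_test", "prod_dev"])
```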

6.5.6.3 Managing Flash Cache and Flash Log

You may want to use IORM to manage flash cache and flash log instead of I/O resources for the databases. To do this, disable I/O prioritization using the ALTER IORMPLAN command, then disable flash cache and flash log for the PROD_TEST and PROD_DEV databases as shown in the following example.

Note:

Explicitly setting flash cache and flash log to on for the PROD and DW databases is not required because both are on by default. The settings are shown in the example to contrast with the settings for the PROD_TEST and PROD_DEV databases.

Example 6-18 ALTER IORMPLAN Syntax for Managing Flash Cache and Flash Log

CellCLI> ALTER IORMPLAN objective='basic';

CellCLI> ALTER IORMPLAN                                   -
         dbPlan=(                                         -
                (name=prod, flashcache=on, flashLog=on),         -
                (name=dw, flashcache=on, flashLog=on),           -
                (name=prod_test, flashcache=off, flashLog=off),  -
                (name=prod_dev, flashcache=off, flashLog=off),   -
                (name=other, flashcache=on, flashLog=on))

6.5.6.4 Managing Flash Cache Quotas for Databases and PDBs

IORM enables you to control how you want the flash cache to be shared among different databases and PDBs. This can be done using just the CDB resource plan or both the CDB plan and the IORM interdatabase plan.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');
 
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    memory_min            => 20);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    memory_min            => 20,
    memory_limit          => 50);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    memory_limit          => 25);
 
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Consider a CDB resource plan that specifies memory_min and memory_limit for the three PDBs mentioned in the plan. These values are specified as percentages and range from 0 to 100. Because over-provisioning is supported, the sum of the percentages is not restricted to 100%. If the sum is greater than 100%, then the values are normalized down to 100%. If memory_min is not specified, then it defaults to 0. If memory_limit is not specified, then it defaults to 100. For CDB$ROOT, there is a 5% memory_limit value. In the example, the sum of the memory_min values is 40%, and the sum of the memory_limit values is 175%, which must be normalized. If an interdatabase plan is not specified, then these percentages apply to the entire size of the flash cache. If an interdatabase plan is specified, then the quotas for the PDBs are computed as a percentage of the minimum and limit values for the database, as specified in the interdatabase plan directive.

In the preceding example, if no interdatabase plan is specified and the size of the flash cache is 10 GB, then the following table shows the breakdown of the quotas after normalization, because the sum of the memory_limit values (175%) is greater than 100%. If normalization produces a minimum value that is greater than the limit, then the minimum value is reduced to equal the limit.

Table 6-5 Case 1: PDB Limits with No Interdatabase Plan

PDB           Flash Cache Min   FC Soft Limit    Normalized Soft Limit        FC Hard Limit

SALESPDB      20% = 2 GB        100 (default)    100/175 × 10 GB ≈ 5.71 GB    n/a
SERVICESPDB   20% = 2 GB        50               50/175 × 10 GB ≈ 2.86 GB     n/a
HRPDB         0                 25               25/175 × 10 GB ≈ 1.43 GB     n/a
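The normalization described above can be sketched as a short computation. This is an illustration of the stated rules, not IORM code; the PDB names and the 10 GB cache size come from the example:

```python
flash_cache_gb = 10  # flash cache size assumed in the example

pdbs = {
    # pdb: (memory_min %, memory_limit %) — defaults are min 0, limit 100
    "SALESPDB":    (20, 100),
    "SERVICESPDB": (20, 50),
    "HRPDB":       (0, 25),
}

def normalized_quotas(pdbs, cache_gb):
    """Return (min_gb, limit_gb) per PDB after normalizing the limits."""
    total_limit = sum(limit for _, limit in pdbs.values())
    scale = max(total_limit, 100)  # scale down only when the sum exceeds 100
    quotas = {}
    for pdb, (mn, limit) in pdbs.items():
        min_gb = cache_gb * mn / 100
        limit_gb = cache_gb * limit / scale
        # if normalization pushes the limit below the minimum,
        # the minimum is reduced to equal the limit
        quotas[pdb] = (min(min_gb, limit_gb), limit_gb)
    return quotas

for pdb, (mn, lim) in normalized_quotas(pdbs, flash_cache_gb).items():
    print(f"{pdb}: min {mn:.2f} GB, limit {lim:.2f} GB")
```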

Consider an interdatabase plan as shown below running on a system where the flash cache size is 50 GB.

ALTER IORMPLAN dbplan=                                            -
((name=newcdb,  share=8,  flashCacheSize=10G),                    -
 (name=finance, share=8,  flashCacheLimit=10G, flashCacheMin=2G), -
 (name=dev,     share=2,  flashCacheLimit=4G, flashCacheMin=1G),  -
 (name=test,    share=1,  limit=10))

In addition to the "newcdb" CDB, three other databases (finance, dev, and test) share the same storage cells. Flash cache quotas are enforced only if a directive specifies the flashcachesize, flashcachelimit, or flashcachemin attribute. flashcachesize is a guaranteed hard limit and must be specified in isolation; that is, it cannot be specified with flashcachemin or flashcachelimit in the same directive. The "test" database does not specify any flash cache directive, so neither it nor its PDBs (if any) are managed for flash cache quotas.

If the CDB has flashcachesize specified, then the memory_min values from the CDB resource plan are ignored, and the memory_limit values are normalized and used to compute the flash cache sizes for the different PDBs. Because the "newcdb" CDB has flashcachesize specified, the memory_min values are ignored. The flash cache sizes are computed after normalizing the memory_limit values, as shown previously. The only difference is that these are guaranteed hard limits, because the CDB specified the flashcachesize directive.

Table 6-6 Case 2: PDB Limits with an InterDatabase Plan

PDB           Flash Cache Min   FC Hard Limit    Normalized Hard Limit        FC Soft Limit

SALESPDB      0                 100 (default)    100/175 × 10 GB ≈ 5.71 GB    n/a
SERVICESPDB   0                 50               50/175 × 10 GB ≈ 2.86 GB     n/a
HRPDB         0                 25               25/175 × 10 GB ≈ 1.43 GB     n/a

For non-CDB databases, the flashcachesize, flashcachemin, and flashcachelimit values are specified in absolute terms, and no additional normalization is required. Because flashcachesize and flashcachemin are guaranteed reservations, the sum of the flashcachesize and flashcachemin values across all directives must not exceed the total size of the flash cache.
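A quick sanity check of this rule against the earlier interdatabase plan can be sketched as follows. This is an illustrative computation, not a CellCLI validation; the directive values come from the example, where the flash cache size is 50 GB:

```python
directives = [
    # name, flashcachesize GB, flashcachemin GB (from the example plan)
    ("newcdb",  10, 0),
    ("finance", 0,  2),
    ("dev",     0,  1),
    ("test",    0,  0),
]

def reservations_fit(directives, total_flash_gb):
    """Check that guaranteed reservations fit within the flash cache."""
    reserved = sum(size + mn for _, size, mn in directives)
    return reserved <= total_flash_gb, reserved

ok, reserved = reservations_fit(directives, 50)
print(ok, reserved)  # → True 13 (13 GB reserved of a 50 GB flash cache)
```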