5 Managing I/O Resources

Exadata I/O Resource Management (IORM) is a tool for managing how multiple workloads and databases share the I/O resources of Oracle Exadata Database Machine.

To manage workloads within a database, Oracle Database Resource Manager is enhanced to work with Exadata IORM.

5.1 Understanding I/O Resource Management (IORM)

IORM manages the storage server I/O resources on a per-cell basis. Whenever the I/O requests start to saturate a cell's capacity, IORM schedules incoming I/O requests according to the configured resource plans.

5.1.1 About I/O Resource Management (IORM) in Exadata Database Machine

IORM provides many features for managing resource allocations. Each feature can be used independently or in conjunction with other features.

Storage is often shared by multiple types of workloads and databases, which can often lead to performance problems. For example, large parallel queries on a production data warehouse can impact the performance of critical OLTP in another database. You can mitigate these problems by over-provisioning the storage system, but this diminishes the cost-saving benefits of shared storage. You can also schedule non-critical tasks at off-peak hours, but this manual process is laborious.

IORM allows workloads and databases to share Oracle Exadata Storage Servers according to user-defined policies. To manage workloads within a database, you can define database resource plans, using Oracle Database Resource Manager which is enhanced to manage Oracle Exadata Storage Server I/O resources. You can also define a container database (CDB) resource plan that allows management for the pluggable databases (PDBs) that it contains. To manage multiple databases, you can define an interdatabase plan. Or, you can define a cluster plan to perform cluster-based resource management.

When there is contention for I/O resources, IORM schedules I/O by immediately issuing some I/O requests and queuing others. I/O requests are serviced immediately for workloads that are not exceeding their resource allocation, according to the resource plans. I/O requests are queued for workloads exceeding their resource allocation. Queued I/Os are serviced according to the priorities in the resource plans when the workload no longer exceeds its resource allocation. When the cell is operating below capacity and there is no contention for I/O resources, IORM does not queue I/O requests, and lets a workload exceed its resource allocation to consume available I/O resources.

For example, if a production database and test database share Oracle Exadata Storage Server resources, you can configure resource plans that give priority to the production database. In this case, whenever the test database load would affect the production database performance, IORM schedules the I/O requests such that the production database I/O performance is not impacted. This means that the test database I/O requests are queued until they can be issued without disturbing the production database I/O performance.

Flash IORM protects the latency of critical OLTP I/O requests in flash cache. When table scans are running on flash concurrently with OLTP I/O requests, the OLTP latency is impacted significantly. Flash IORM queues and throttles the table scan, and other low priority I/O requests. The critical OLTP I/O requests are never queued. When the flash disks are not busy serving critical OLTP I/O requests, the queued I/O requests are issued based on the resource allocations in the IORM plan.

5.1.1.1 About the IORM Objective

The I/O Resource Management (IORM) objective option specifies the optimization mode for IORM.

The IORM objective is configured individually on every storage server using the CellCLI ALTER IORMPLAN command. To deliver consistent overall system performance, ensure every storage server in the storage cluster uses the same IORM configuration settings.

The valid objective values are:

  • auto - Causes IORM to determine the best mode based on active workloads and resource plans. IORM continuously and dynamically determines the optimization objective based on the observed workloads and enabled resource plans. For most use cases, auto is the recommended value.

    Starting with Oracle Exadata System Software release 21.2.0, auto is the default setting for new deployments.

  • high_throughput - Optimizes critical DSS workloads that require high throughput. This setting improves disk throughput at the cost of I/O latency.

  • low_latency - Optimizes critical OLTP workloads that require extremely good disk latency. This setting provides the lowest possible latency at the cost of throughput by limiting disk utilization.

  • balanced - Balances low disk latency and high throughput, which is useful for a mixture of critical OLTP and DSS workloads. This setting limits disk utilization of large I/Os to a lesser extent than low_latency to achieve a balance between latency and throughput.

  • basic - Use this setting to limit the maximum small I/O latency and otherwise disable I/O prioritization. This is the default setting for new deployments in Oracle Exadata System Software release 20.1.0 and earlier. Specifically, using this setting:

    • IORM guards against extremely high latencies for log writes, buffer cache reads, and other critical I/Os by prioritizing them ahead of smart scans and other disk intensive I/O operations.
    • IORM manages access to flash cache and flash log.
    • IORM manages scans to ensure that resources are shared between different workloads.
    • IORM does not enforce the maximum utilization limits in the IORM plan. Plan allocations are only used to guard against extremely high latencies, and there is no consideration for plan conformance. For stricter plan conformance, and enforcement of maximum utilization limits, the objective option must be set to something other than basic.

On High Capacity (HC) storage servers, flash IORM protects OLTP latency by prioritizing critical I/Os to flash devices ahead of smart scan and low priority I/Os. This occurs by default and regardless of the objective option setting.

On Extreme Flash (EF) storage servers, the IORM objective provides some additional control over flash IORM. On EF storage servers, flash IORM behaves the same as on HC servers when the objective option is set to basic, auto, or balanced. However, on EF storage servers, high_throughput increases the scan throughput at the cost of critical I/O latency, and low_latency provides maximum latency protection at the cost of significant scan throughput degradation when both workloads run concurrently.
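
For illustration, the following CellCLI sketch sets the objective on one storage server and then displays the current value; repeat the commands on every storage server in the cluster (or run them across all cells with a utility such as dcli or exacli):

CellCLI> ALTER IORMPLAN objective=auto

CellCLI> LIST IORMPLAN ATTRIBUTES objective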

5.1.1.2 IORM Plans

Exadata I/O Resource Management (IORM) manages resources using various IORM plans.

Oracle Database Resource Manager enables you to manage resources within a database, also known as intradatabase resource management. A database resource plan specifies how resources are allocated across different workloads, or consumer groups. When a database uses Oracle Exadata Storage Server, the database resource plan is communicated to the storage server and is used by IORM. If a database does not have a database resource plan enabled, then intradatabase resource management is disabled and all I/O requests from the database are treated as a single workload.

Interdatabase resource management enables you to manage resources across multiple databases. Interdatabase resource management is configured by using the CellCLI ALTER IORMPLAN command to specify an interdatabase plan (dbplan). The dbplan contains directives that specify resource allocations for specific databases. You can use this feature if you have multiple databases sharing Oracle Exadata Storage Server resources and you want to control the resource allocation to a specific database.

Cluster resource management enables you to manage resources across multiple Oracle Grid Infrastructure clusters sharing the same storage server resources. Cluster resource management is configured by using the CellCLI ALTER IORMPLAN command to specify a cluster plan (clusterplan). The clusterplan contains directives that specify resource allocations that apply to all databases in a specific cluster.

Note:

The cluster plan is first introduced in Oracle Exadata System Software release 21.2.0.

Category resource management allocates resources according to the category of the work being done. For example, suppose all databases have three categories of workloads: OLTP, reports, and maintenance. To allocate the I/O resources based on these workload categories, you could use category resource management. Category resource management uses Oracle Database Resource Manager to define workload categories in each database, and a category plan (catplan) that is defined in each storage server.

Note:

Starting with Oracle Exadata System Software release 21.2.0, the category plan is deprecated and a warning message is issued when a category plan is set.

Apart from the database resource plan definitions, which are configured within each database, Exadata IORM plans (such as the dbplan and clusterplan) are configured individually on every storage server using the CellCLI ALTER IORMPLAN command. To deliver consistent overall system performance, ensure every storage server in the storage cluster uses the same IORM configuration settings.
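
For a concrete sketch, the following CellCLI command (using hypothetical database names) defines a simple share-based interdatabase plan; the same command must be repeated on every storage server in the storage cluster:

CellCLI> ALTER IORMPLAN dbplan=((name=prod, share=8), -
                                (name=test, share=2), -
                                (name=DEFAULT, share=1))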

5.1.1.3 Resource Assignment Methods

You can use shares or allocations to assign resources in an IORM plan.

A share value represents the relative importance of each entity. With share-based resource allocation, a higher share value implies higher priority and more access to the I/O resources. For example, a database with a share value of 2 gets twice the resource allocation of a database with a share value of 1.

Valid share values are 1 to 32, with 1 being the lowest share, and 32 being the highest share. The sum of all share values in a plan cannot be greater than 32768.

Share-based resource allocation is the recommended method for the interdatabase plan (dbplan). For the cluster plan (clusterplan), share-based resource allocation is the only option.

With allocation-based resource management, an allocation specifies the resource allocation as a percentage (0-100). Each allocation is associated with a level. Valid level values are from 1 to 8, and the sum of allocation values cannot exceed 100 for each level. Resources are allocated to level 1 first, and then remaining resources are allocated to level 2, and so on.

Though not recommended, allocation-based resource management can be used in the interdatabase plan (dbplan). For the category plan (catplan), allocation-based resource management is the only option.
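
The two assignment methods look as follows in an interdatabase plan (hypothetical database names, shown only to contrast the syntax). Share-based, where prod receives four times the resources of test during contention:

CellCLI> ALTER IORMPLAN dbplan=((name=prod, share=4), (name=test, share=1))

Allocation-based, where level 1 is split 70/30 and the other directive at level 2 covers any remaining databases:

CellCLI> ALTER IORMPLAN dbplan=((name=prod, level=1, allocation=70), -
                                (name=test, level=1, allocation=30), -
                                (name=other, level=2, allocation=100))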

5.1.2 About Database Resource Management Within a Database

Oracle Database Resource Manager enables you to manage workloads within a database.

A database often has many types of workloads. These workloads may differ in their performance requirements and the amount of I/O that they issue. Database resource management is configured at the database level, using Oracle Database Resource Manager to create database resource plans. You should use this feature if you have multiple types of workloads within a database. You can define a policy for specifying how these workloads share the database resource allocations. If only one database is using the Oracle Exadata Storage Server resources, then this is the only resource management feature that you need.

For each database, you can use Oracle Database Resource Manager for the following tasks:

  • Create resource consumer groups

    Resource consumer groups provide a way to group sessions that comprise a particular workload. For example, if your database is running four different applications, then you can create four consumer groups, one for each application. If your data warehouse has three types of workloads, such as critical queries, normal queries, and ETL (extraction, transformation, and loading), then you can create a consumer group for each type of workload.

  • Map user sessions to consumer groups

    Once you have created the consumer groups, you must create rules that specify how sessions are mapped to consumer groups. Oracle Database Resource Manager allows you to create mapping rules based on session attributes such as the Oracle user name, the service that the session used to connect to the database, client machine, client program name, client user name, and so on. If you are creating consumer groups for each application and each application has a dedicated service, then you should create mapping rules based on service names. If you want to dedicate a consumer group to a particular set of users, then you should create mapping rules based on their user names. Sessions that are not explicitly assigned to a consumer group are placed in the OTHER_GROUPS consumer group.

  • Create CDB resource plans

    A container database (CDB) resource plan specifies how CPU and I/O resources are allocated among the different pluggable databases (PDBs) that are associated with the same container. The CDB plan is created using Oracle Database Resource Manager. The CDB plan contains a directive for each PDB. The directive defines the number of shares that are allocated to that PDB. The shares define the relative priority of that PDB as compared to other PDBs in the plan.

    A maximum utilization limit can be specified for a PDB.

    A CDB resource plan also lets you specify memory_min and memory_limit values for each PDB. Despite their names, on Exadata these parameters define the storage server cache quotas for each PDB and have no bearing on memory sizing in the database instance.

  • Create resource plans

    The database resource plan, also known as an intradatabase resource plan, specifies how CPU and I/O resources are allocated among consumer groups in its database. The resource plan is created using Oracle Database Resource Manager. It contains a resource allocation directive for each consumer group, which consists of a percentage and a level. You can specify up to eight levels.

    • Consumer groups at level 2 get resources that were not allocated at level 1 or were not consumed by a consumer group at level 1.
    • Consumer groups at level 3 are allocated resources only when some allocation remains from levels 1 and 2.
    • The same rules apply to levels 4 through 8.

    Multiple levels not only provide a way of prioritizing, they also provide a way of explicitly specifying how all primary and leftover resources are to be used. You can construct resource plans that allocate resources across consumer groups using percentages, priorities, or a combination of the two.

    You can also specify a maximum utilization limit for a consumer group. This works in the same way as a maximum utilization limit for a database, and limits the I/O utilization for the consumer group to the specified value.

    In addition to a CDB plan, each PDB can also create a resource plan to manage the workloads running within the PDB. PDBs only support single level plans with a maximum of 8 consumer groups.

  • Enable a resource plan

    A database resource plan can be manually enabled with the RESOURCE_MANAGER_PLAN initialization parameter or automatically enabled by an Oracle Scheduler window.

When you set a database resource plan on the database, a description of the plan is automatically sent to each cell. For Oracle Real Application Clusters (Oracle RAC) database running on Oracle Exadata, all instances in the Oracle RAC cluster must be set to the same resource plan. When a new cell is added or an existing cell is restarted, the current plan of the database is automatically sent to the cell. The resource plan is used to manage resources on both the database server and storage servers (cells).

Background I/Os are scheduled based on their priority relative to the user I/Os. For example, redo writes and control file reads and writes are critical to performance and are always prioritized above all user I/Os. The database writer process (DBWR) writes are scheduled at the same priority level as the user I/Os. If a resource plan is not enabled for a database, then all user I/Os are treated equally, and background I/Os are prioritized as described in this paragraph.

Oracle provides several predefined plans. The most commonly used are mixed_workload_plan, dss_plan, and default_maintenance_plan.

5.1.3 About Interdatabase Resource Management

Interdatabase resource management enables you to manage resources for multiple databases using the same shared storage.

An interdatabase plan specifies how resources are allocated by percentage or share among multiple databases for each storage server. The directives in an interdatabase plan specify allocations to databases, rather than consumer groups. The interdatabase plan is configured and enabled with the CellCLI utility at each storage server.

5.1.3.1 About Interdatabase IORM Plan Directives

Interdatabase plan directives are specified using the DB_UNIQUE_NAME of the database as the identifier.

When using allocation-based resource management, each directive consists of an allocation amount and a level from 1 to 8, which is similar to a database resource plan. For a given plan, the total allocations for any level must be less than or equal to 100 percent. An interdatabase plan differs from a database resource plan in that it cannot contain subplans and only contains I/O resource directives.

Share-based plans use a relative share instead of percentage allocations and levels. These plans are simpler to implement, but just as effective as percentage allocations. Each database is given a share value, which is an integer between 1 and 32. The sum of the shares can be up to 32768. Share-based plans support up to 1024 directives within the interdatabase plan. For example, if a critical database, FINANCE, has 4 shares, and a low-priority database, REPORTING, has 1 share, then during periods of resource contention the FINANCE database receives four times the I/O resources of the REPORTING database.

Oracle Exadata System Software uses the interdatabase plan and database resource plans together to allocate I/O resources.

  • First, the interdatabase plan allocates the I/O resources to individual databases.
  • Next, the database resource plan for each database allocates the I/O resources to consumer groups. If a database does not have an active database resource plan, all user I/Os are treated the same. Background I/Os are automatically prioritized relative to the user I/Os based on their importance.

As a best practice, you should create a directive for each database that shares the storage. This is done automatically for share-based plans, but not for allocation-based plans. For allocation-based plans, create a directive named OTHER; any database without an explicit directive is then managed using the allocation of the OTHER directive.

Each database that is not explicitly named in a share-based plan gets a default share of 1, unless the plan includes a DEFAULT directive that specifies different default values for the IORM attributes.

If you have databases with the same DB_UNIQUE_NAME in different Oracle ASM clusters, then starting with Oracle Exadata System Software release 19.1.0, you can use the asmcluster attribute to uniquely identify each database in the interdatabase plan.
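
For example, the following CellCLI sketch implements the FINANCE and REPORTING scenario described above, with a DEFAULT directive for any unlisted databases (the database names are illustrative):

CellCLI> ALTER IORMPLAN dbplan=((name=finance, share=4),   -
                                (name=reporting, share=1), -
                                (name=DEFAULT, share=1))

If two Oracle ASM clusters each contain a database named finance, the directives could instead be qualified as (name=finance, asmcluster=cluster1, share=4) and (name=finance, asmcluster=cluster2, share=1), where cluster1 and cluster2 are assumed Oracle ASM cluster names.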

5.1.3.2 Using Interdatabase Plans for Consolidation and Isolation

Interdatabase resource management plans help manage resources when consolidating or isolating databases.

Consider the consolidation of four different databases sharing the same Oracle Exadata Storage Servers. If all the databases have equal priority, you can use IORM to equally allocate 25% of the I/O resources to each database. If one database is more important than the others, then you can use IORM to give it a larger share of the I/O resources.

Defining maximum utilization limits may also be useful in consolidation scenarios. By using limits, you can isolate each database in case one sees a sudden workload increase. For example, you could define a maximum utilization limit of 40% for each database, which would guarantee that no database can monopolize the system. However, when a database reaches its assigned limit it cannot take advantage of any spare capacity. Consequently, it is possible for disks to run below full capacity when limits are specified.

Resource management using limits is also ideal for pay-for-performance use cases, where service providers want to guarantee performance to their customers corresponding to the level of service being purchased. For example, customers paying for a lower performance tier should be limited more than customers paying for higher performance.
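
As a sketch of the consolidation scenario above, the following dbplan (hypothetical database names) gives four databases equal shares while capping each at 40% utilization:

CellCLI> ALTER IORMPLAN dbplan=((name=db1, share=1, limit=40), -
                                (name=db2, share=1, limit=40), -
                                (name=db3, share=1, limit=40), -
                                (name=db4, share=1, limit=40))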

5.1.3.3 Using IORM to Control Database Access to Exadata Cache Resources

You can use IORM to manage access to valuable Oracle Exadata Storage Server cache resources.

You can use the flashcache and flashlog attributes in the interdatabase IORM plan to control access to Exadata Smart Flash Cache and flash log. You can set flashcache=off to prevent the specified database from using the flash cache. Likewise, setting flashlog=off prevents the specified database from using the flash log. By default, all databases can use flash memory for caching and logging.

Likewise, on appropriately configured servers, you can use xrmemcache and xrmemlog attributes to control access to Exadata RDMA Memory (XRMEM) resources.

Note:

On Exadata X8M and X9M storage server models only, you can use pmemcache and pmemlog attributes interchangeably with xrmemcache and xrmemlog. However, you cannot have a mixture of PMEM and XRMEM attributes in the IORM plan.

These attributes allow valuable cache resources to be reserved for mission-critical databases, especially in consolidated environments.
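
For example, the following sketch (hypothetical database names) prevents a test database from using the flash cache and flash log while leaving them enabled for all other databases:

CellCLI> ALTER IORMPLAN dbplan=((name=testdb, flashcache=off, flashlog=off), -
                                (name=DEFAULT, flashcache=on, flashlog=on))

On appropriately configured servers, the xrmemcache and xrmemlog attributes accept the same on and off values.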

5.1.3.4 About Flash Cache Management in IORM Plans

I/O Resource Management can provide predictable performance by guaranteeing space in Exadata Smart Flash Cache.

With multiple databases and pluggable databases (PDBs) sharing the storage, flash cache space becomes a critical resource that requires management. IORM can reserve space for critical databases or PDBs while preventing less important or rogue entities from consuming the entire flash cache.

You can use the following attributes in the interdatabase plan to set limits for flash cache resources. A hard limit means that the specified limit cannot be exceeded even when the cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • flashCacheMin — Specifies the minimum amount of flash cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because flashCacheMin is a guaranteed reservation, the sum of flashCacheMin across all directives should be less than the size of the flash cache to ensure that each database gets its respective quota.

  • flashCacheLimit — Specifies the soft maximum amount of flash cache space that is available to the database. If the flash cache is not full, a database can exceed the flashCacheLimit value.
  • flashCacheSize — Specifies the hard maximum amount of flash cache space that is available to the database. The flashCacheSize value cannot be exceeded at any time.

    However, if you set flashCacheSize to a value that is lower than the current space occupied by the database, then starting with Oracle Exadata System Software release 20.1.0 excess data is proactively cleared from the cache. Previously, excess data was only removed when overwritten by other data.

    Starting with Oracle Exadata System Software release 19.2.0, flashCacheSize is not a guaranteed reservation if the sum of the flashCacheSize across all directives is more than the size of the flash cache. In this case, you can also specify flashCacheMin to define a guaranteed minimum quota.

Within each database, you can manage the minimum and maximum quotas for PDBs with Oracle Database Resource Manager, in a container database (CDB) resource plan using the memory_min and memory_limit directives.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies flashcachemin and flashcachesize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the flashcachemin setting, and the PDB-specific memory_limit values represent fractions of the flashcachesize.

However, if the interdatabase IORM plan directive specifies flashcachesize without flashcachemin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the flashcachesize.
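
For example, the following sketch (hypothetical database names and sizes) reserves flash cache space for a critical database and applies a soft limit to a test database:

CellCLI> ALTER IORMPLAN dbplan=((name=sales, flashcachemin=500G, flashcachesize=1000G), -
                                (name=test, flashcachelimit=200G),                      -
                                (name=DEFAULT, flashcachelimit=100G))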

5.1.3.5 About XRMEM Cache Management in IORM Plans

I/O Resource Management (IORM) can provide predictable performance by guaranteeing space in Exadata RDMA Memory Cache (XRMEM cache).

With multiple databases and pluggable databases (PDBs) sharing the storage, XRMEM cache space becomes a critical resource that requires management. IORM can reserve space for critical databases or PDBs while preventing less important or rogue entities from consuming the entire XRMEM cache.

You can use the following attributes in the interdatabase plan to set limits for XRMEM cache resources. A hard limit means that the specified limit cannot be exceeded even when the cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • xrmemCacheMin — Specifies the minimum amount of XRMEM cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because xrmemCacheMin is a guaranteed reservation, the sum of xrmemCacheMin across all directives should be less than the size of the XRMEM cache to ensure that each database gets its respective quota.

  • xrmemCacheLimit — Specifies the soft maximum amount of XRMEM cache space that is available to the database. If the XRMEM cache is not full, a database can exceed the xrmemCacheLimit value.
  • xrmemCacheSize — Specifies the hard maximum amount of XRMEM cache space that is available to the database. The xrmemCacheSize value cannot be exceeded at any time.

    However, if you set xrmemCacheSize to a value that is lower than the current space occupied by the database, then excess data is proactively cleared from the cache.

    xrmemCacheSize is not a guaranteed reservation if the sum of the xrmemCacheSize across all directives is more than the size of the XRMEM cache. In this case, you can also specify xrmemCacheMin to define a guaranteed minimum quota.

    Note:

    On Exadata X8M and X9M storage server models only, you can interchangeably use attribute names starting with pmem instead of xrmem. For example, pmemCacheMin instead of xrmemCacheMin. However, you cannot have a mixture of PMEM and XRMEM attributes in the IORM plan.

Within each database, you can manage the minimum and maximum quotas for PDBs with Oracle Database Resource Manager, in a container database (CDB) resource plan using the memory_min and memory_limit directives.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies xrmemCacheMin and xrmemCacheSize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the xrmemCacheMin setting, and the PDB-specific memory_limit values represent fractions of the xrmemCacheSize.

However, if the interdatabase IORM plan directive specifies xrmemCacheSize without xrmemCacheMin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the xrmemCacheSize.
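
For example, the following sketch (hypothetical database names and sizes) applies the corresponding XRMEM cache attributes in the interdatabase plan:

CellCLI> ALTER IORMPLAN dbplan=((name=sales, xrmemcachemin=4G, xrmemcachesize=8G), -
                                (name=test, xrmemcachelimit=2G))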

5.1.3.6 About PMEM Cache Management in IORM Plans

Note:

This topic applies only to Oracle Exadata System Software releases before 23.1.0. Otherwise, see About XRMEM Cache Management in IORM Plans.

I/O Resource Management (IORM) can provide predictable performance by guaranteeing space in the persistent memory (PMEM) cache, which is available in Oracle Exadata Storage Server X8M and X9M.

With multiple databases and pluggable databases (PDBs) sharing the storage, PMEM cache space becomes a critical resource that requires management. IORM can reserve space for critical databases or PDBs while preventing less important or rogue entities from consuming the entire PMEM cache.

You can use the following attributes in the interdatabase plan to set limits for PMEM cache resources. A hard limit means that the specified limit cannot be exceeded even when the cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • pmemCacheMin — Specifies the minimum amount of PMEM cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because pmemCacheMin is a guaranteed reservation, the sum of pmemCacheMin across all directives should be less than the size of the PMEM cache to ensure that each database gets its respective quota.

  • pmemCacheLimit — Specifies the soft maximum amount of PMEM cache space that is available to the database. If the PMEM cache is not full, a database can exceed the pmemCacheLimit value.
  • pmemCacheSize — Specifies the hard maximum amount of PMEM cache space that is available to the database. The pmemCacheSize value cannot be exceeded at any time.

    However, if you set pmemCacheSize to a value that is lower than the current space occupied by the database, then starting with Oracle Exadata System Software release 20.1.0 excess data is proactively cleared from the cache. Previously, excess data was only removed when overwritten by other data.

    pmemCacheSize is not a guaranteed reservation if the sum of the pmemCacheSize across all directives is more than the size of the PMEM cache. In this case, you can also specify pmemCacheMin to define a guaranteed minimum quota.

Within each database, you can manage the minimum and maximum quotas for PDBs with Oracle Database Resource Manager, in a container database (CDB) resource plan using the memory_min and memory_limit directives.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies pmemCacheMin and pmemCacheSize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the pmemCacheMin setting, and the PDB-specific memory_limit values represent fractions of the pmemCacheSize.

However, if the interdatabase IORM plan directive specifies pmemCacheSize without pmemCacheMin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the pmemCacheSize.

5.1.3.7 Tips for Managing Interdatabase Resource Plans

Note the following information when creating and managing interdatabase resource plans.

  • If Oracle Exadata is only hosting one database, then an interdatabase plan is not needed.
  • If an interdatabase plan is not specified, then all databases receive an equal allocation.
  • If only one database is mapped to the OTHER directive and all other databases have an explicit directive, then Oracle Exadata System Software uses the database resource plan of that database to determine how the allocation of the OTHER database is redistributed among the consumer groups in that database.
  • If multiple databases are mapped to the OTHER directive, then Oracle Exadata System Software does not use Oracle Database Resource Manager for these databases. All of the I/O requests are treated the same.
  • For share-based plans, each database gets its own directive even when it is not explicitly named in the plan. Oracle Exadata System Software uses the database resource plan of the database to determine how the allocation is to be distributed between the consumer groups in the database.
  • Interdatabase plan directives are specified using the DB_UNIQUE_NAME of the database as the identifier.

    By itself, the DB_UNIQUE_NAME value is not case-sensitive. However, IORM may not work correctly when multiple instances of the same Oracle RAC database use inconsistent cases for DB_UNIQUE_NAME (for example, PROD and prod). Therefore, ensure that the DB_UNIQUE_NAME value is case-consistent across each Oracle RAC database.

    If no value is specified, the DB_UNIQUE_NAME value is derived from DB_NAME. In that case, ensure that the DB_NAME value is case-consistent across the Oracle RAC database.

  • If you have databases with the same DB_UNIQUE_NAME but associated with different Oracle ASM clusters, then, starting with Oracle Exadata System Software release 19.1.0, you can use the asmcluster attribute to uniquely identify each database when specifying directives.
  • Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding container database (CDB) plan.

    Consequently, if an interdatabase IORM plan directive specifies flashcachemin and flashcachesize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the flashcachemin setting, and the PDB-specific memory_limit values represent fractions of the flashcachesize.

    However, if the interdatabase IORM plan directive specifies flashcachesize without flashcachemin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the flashcachesize.

    The same applies to the Exadata RDMA Memory Cache (XRMEM cache) regarding the relationship between the PDB-specific quotas in the CDB plan (memory_min and memory_limit) and the cache-specific settings in the interdatabase IORM plan (xrmemcachemin and xrmemcachesize).

5.1.4 About Cluster Resource Management

Cluster resource management enables you to manage resources for multiple clusters using the same shared storage.

Note:

The cluster plan is first introduced in Oracle Exadata System Software release 21.2.0.

An I/O Resource Management (IORM) cluster plan specifies how storage server resources are allocated across multiple clusters. Each directive in a cluster plan specifies an allocation to a cluster, rather than an individual database or consumer group.

For example, consider a system hosting two clusters: clusterA and clusterB. Now, imagine a cluster plan with share-based resource allocation where clusterA has three shares, and clusterB has one share. In that case, and in the absence of any other IORM plans, all of the databases running in clusterA share 75% of the I/O resources, while the databases in clusterB share the remaining 25%.

The cluster plan can work in conjunction with an interdatabase resource plan, but only if the interdatabase resource plan does not use allocation-based resource management (using allocation and level directives). In this case, directives from both plans are applied to determine the share of resources.

So, continuing from the previous example, imagine that the databases in clusterA are in an interdatabase resource plan with share-based resource allocation. In that case, the resources allocated in the cluster plan to clusterA are further divided and shared amongst the databases by using the directives in the interdatabase resource plan.

The cluster plan cannot work in conjunction with a category plan. That is, you cannot set IORM cluster plan directives when category plan directives exist. Likewise, you cannot set category plan directives when cluster plan directives exist.

The cluster plan is configured and enabled on each storage server by using the CellCLI ALTER IORMPLAN command. To operate the cluster plan, ASM-scoped security must also be configured.
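
For example, the 3:1 clusterA and clusterB scenario described above could be configured on each storage server as follows (the cluster names are illustrative):

CellCLI> ALTER IORMPLAN clusterplan=((name=clusterA, share=3), -
                                     (name=clusterB, share=1))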

5.1.5 About Category Resource Management

Categories represent collections of consumer groups across all databases.

Note:

Starting with Oracle Exadata System Software release 21.2.0, the category plan is deprecated and a warning message is issued when a category plan is set.

Oracle Database Resource Manager enables you to specify a category for every consumer group. You can manage I/O resources based on categories by creating a category plan. For example, you can specify precedence to consumer groups in the interactive category over consumer groups in the batch category for all databases sharing Oracle Exadata Storage Server.

You can add any number of categories, or modify the predefined categories. You should map consumer groups to the appropriate category for all databases that use the same cell storage. Any consumer group without an explicitly specified category defaults to the OTHER category.

Category plans are configured and enabled using the CellCLI utility on the cell. Only one category plan can be enabled at a time. The predefined categories provided in Oracle Database are described in the following table, along with sample percentages.

Table 5-1 Sample Category Resource Management Plan

Category Name    Category Description                                 Level 1 (%)   Level 2 (%)   Level 3 (%)
ADMINISTRATIVE   For extremely high-priority work, such as urgent     80            not set       not set
                 administrative tasks. This category is required.
INTERACTIVE      For high-priority, performance-sensitive work,       not set       70            not set
                 such as OLTP transactions.
BATCH            For low-priority work, such as noncritical           not set       not set       70
                 reports and backup.
MAINTENANCE      For low-priority work, such as automated tasks.      not set       not set       10
OTHER            For all consumer groups that do not have a           not set       not set       20
                 category label or reference a category that is
                 not in the current category plan. This category
                 is required.

The sample plan shown in the above table prioritizes administrative activity across all databases. It also prioritizes interactive activity over batch, maintenance, and other activities. In the sample plan, the following are the resource allocations:

  • Level 1 is given 80 percent of the I/O resources. The ADMINISTRATIVE category is the only category in level 1.

  • Level 2 is given all resources that were unallocated or unused by level 1. In this example, level 2 is given 20 percent of the I/O resources and any resources unused by the ADMINISTRATIVE category. The INTERACTIVE category gets 70 percent of the level 2 amount.

  • Level 3 categories are given the remaining resources, including those not used by the INTERACTIVE category. Of the remaining resources, the BATCH category gets 70 percent, the OTHER category gets 20 percent, and the MAINTENANCE category gets 10 percent.

All administrative consumer groups in all databases should be mapped to the ADMINISTRATIVE category. All high-priority user activity, such as consumer groups for important OLTP transactions and time-critical reports, should be mapped to the INTERACTIVE category. All low-priority user activity, such as reports, maintenance, and low-priority OLTP transactions, should be mapped to the BATCH, MAINTENANCE, and OTHER categories.
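
For reference, a category plan matching the sample percentages in Table 5-1 could be configured on each storage server as follows (keeping in mind that the category plan is deprecated starting with release 21.2.0):

CellCLI> ALTER IORMPLAN catplan=((name=ADMINISTRATIVE, level=1, allocation=80), -
                                 (name=INTERACTIVE, level=2, allocation=70),    -
                                 (name=BATCH, level=3, allocation=70),          -
                                 (name=MAINTENANCE, level=3, allocation=10),    -
                                 (name=OTHER, level=3, allocation=20))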

5.1.6 About Consumer Groups and Resource Plans

Oracle Exadata provides out-of-the-box consumer groups and resource plans specifically designed for data warehouses that use Oracle Exadata System Software.

These resource plans can be modified to suit the needs of your environment.

The following consumer groups are for data warehouses:

  • ETL_GROUP: Consumer group for ETL (extract, transform, and load) jobs.
  • DSS_GROUP: Consumer group for non-critical decision support system (DSS) queries.
  • DSS_CRITICAL_GROUP: Consumer group for critical DSS queries.

The following resource plans are for data warehouses:

5.1.6.1 DSS_PLAN Resource Plan

The DSS_PLAN resource plan is designed for data warehouses that prioritize critical DSS queries over non-critical DSS queries and ETL jobs.

In this plan, SYS_GROUP has the highest priority, followed by DSS_CRITICAL_GROUP, DSS_GROUP, and then a combination of ETL_GROUP and BATCH_GROUP. No consumer group is allowed to consume all the bandwidth.

Table 5-2 DSS_PLAN Resource Plan for Data Warehouses

Consumer Group           Level 1 (%)   Level 2 (%)   Level 3 (%)   Level 4 (%)
SYS_GROUP                75            not set       not set       not set
DSS_CRITICAL_GROUP       not set       75            not set       not set
DSS_GROUP                not set       not set       75            not set
ETL_GROUP                not set       not set       not set       45
BATCH_GROUP              not set       not set       not set       45
ORA$DIAGNOSTICS          not set       5             not set       not set
ORA$AUTOTASK_SUB_PLAN    not set       5             not set       not set
OTHER_GROUPS             not set       not set       not set       10

As shown in the previous table, the DSS_CRITICAL_GROUP group is only allocated 75 percent at level 2. Any unused allocation goes to the next level, not to other consumer groups at the same level. That means that if the DSS_CRITICAL_GROUP group does not completely consume its allocation, then the allocation is not given to the ORA$DIAGNOSTICS or ORA$AUTOTASK_SUB_PLAN groups at the same level, but instead the allocation is given to the DSS_GROUP group at level 3.

5.1.6.2 ETL_CRITICAL_PLAN Resource Plan

The ETL_CRITICAL_PLAN prioritizes ETL over DSS queries.

In this plan, the SYS_GROUP group is given 75 percent of the bandwidth. The remaining bandwidth is divided between the other consumer groups in the ratios specified by the level 2 allocations. The ETL_GROUP and DSS_CRITICAL_GROUP groups have a higher allocation (35 percent) than the DSS_GROUP and BATCH_GROUP groups (10 percent).

Table 5-3 ETL_CRITICAL_PLAN Resource Plan for Data Warehouses

Consumer Group           Level 1 (%)   Level 2 (%)   Level 3 (%)   Level 4 (%)
SYS_GROUP                75            not set       not set       not set
DSS_CRITICAL_GROUP       not set       35            not set       not set
DSS_GROUP                not set       10            not set       not set
ETL_GROUP                not set       35            not set       not set
BATCH_GROUP              not set       10            not set       not set
ORA$DIAGNOSTICS          not set       3             not set       not set
ORA$AUTOTASK_SUB_PLAN    not set       3             not set       not set
OTHER_GROUPS             not set       3             not set       not set

5.1.7 About CDB Plans and Pluggable Databases

The container database (CDB) can have multiple workloads within multiple PDBs competing for resources.

The Oracle Multitenant container database (CDB) supports many user-defined pluggable databases (PDBs). In a CDB, resources are managed at the following levels:

  • CDB level: Oracle Database Resource Manager manages the workloads for multiple PDBs that are contending for system and CDB resources. The administrator can specify how resources are allocated to PDBs, and can limit the resource utilization of specific PDBs.

  • PDB level: Oracle Database Resource Manager manages the workloads within each PDB.

The following example outlines a CDB plan for three PDBs named SALES, SERVICES and HR. The PDBs have different shares and max utilization limits in the CDB plan.

PDB Name    Shares   Utilization Limit (%)   memory_min   memory_limit
SALES       3        Unlimited               20           not set
SERVICES    3        Unlimited               20           not set
HR          1        70                      not set      50

5.2 Administering IORM

You can perform various administrative tasks related to I/O Resource Management (IORM).

To perform the tasks, use the DBMS_RESOURCE_MANAGER package to define database resource plans on the database servers, and the CellCLI utility to specify the IORM plan on each cell.

5.2.1 Setting the IORM Objective

Use the CellCLI ALTER IORMPLAN command to set the IORM objective.

For example:

CellCLI> ALTER IORMPLAN objective=auto

Note:

The IORM objective is configured individually on every storage server using the CellCLI ALTER IORMPLAN command. To deliver consistent overall system performance, ensure every storage server in the storage cluster uses the same IORM configuration settings.

5.2.2 Administering Database Resource Management

To set up database resource management, you must use Oracle Database Resource Manager to configure the consumer groups, assign sessions to consumer groups, create a database resource plan, and enable the plan.

5.2.2.1 Setting Up Consumer Groups and Categories

Consumer groups and categories are set up with the procedures in the PL/SQL DBMS_RESOURCE_MANAGER package.

You can create new consumer groups and categories, or use one of the predefined consumer groups or categories. You do not need to set up categories if you are not planning on using a category plan.

Note:

Consumer groups and categories are created in the database and cannot be created explicitly on a cell.

Before running the DBMS_RESOURCE_MANAGER procedures for administering consumer groups and categories, you must first create a pending area. You must have the system privilege ADMINISTER_RESOURCE_MANAGER to run the procedures in the DBMS_RESOURCE_MANAGER PL/SQL package.

The following PL/SQL commands are used with consumer groups and categories:

  • To manage categories: CREATE_CATEGORY(), DELETE_CATEGORY(), and UPDATE_CATEGORY()

  • To manage consumers groups: CREATE_CONSUMER_GROUP() and UPDATE_CONSUMER_GROUP()

  • To assign consumer groups to categories: CREATE_CONSUMER_GROUP() or UPDATE_CONSUMER_GROUP()

In addition to the consumer groups that you set up, the database contains predefined consumer groups. The DBA_RSRC_CONSUMER_GROUPS view displays information about consumer groups, and the DBA_RSRC_CATEGORIES view displays information about categories in the database.

Example 5-1 Setting Up Consumer Groups and Categories with PL/SQL in the Database

This example shows how to set up consumer groups and categories in a database. The MAINTENANCE category is predefined, and is not created in this example.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CATEGORY(
     CATEGORY => 'dss',
     COMMENT => 'DSS consumer groups');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'critical_dss',
     CATEGORY => 'dss',
     COMMENT => 'performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'normal_dss',
     CATEGORY => 'dss',
     COMMENT => 'non performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP( 
     CONSUMER_GROUP => 'etl',
     CATEGORY => 'maintenance',
     COMMENT => 'data import operations');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Example 5-2 Consumer Groups and Categories in an Oracle Database

This example shows a query on the DBA_RSRC_CONSUMER_GROUPS view.

SQL> SELECT consumer_group, category FROM DBA_RSRC_CONSUMER_GROUPS where 
     consumer_group not like 'ORA%' ORDER BY category;

CONSUMER_GROUP                 CATEGORY
------------------------------ ------------------------------
SYS_GROUP                      ADMINISTRATIVE
ETL_GROUP                      BATCH
BATCH_GROUP                    BATCH
DSS_GROUP                      BATCH
CRITICAL_DSS                   DSS
NORMAL_DSS                     DSS
DSS_CRITICAL_GROUP             INTERACTIVE
INTERACTIVE_GROUP              INTERACTIVE
ETL                            MAINTENANCE
LOW_GROUP                      OTHER
OTHER_GROUPS                   OTHER
AUTO_TASK_CONSUMER_GROUP       OTHER
DEFAULT_CONSUMER_GROUP         OTHER
 
13 rows selected.

5.2.2.2 Assigning Sessions to Consumer Groups

You can assign a session to a resource consumer group manually or automatically using consumer group mapping rules.

For both approaches, you must give explicit permission for a user to switch to a consumer group. In order to control which consumer groups a user can switch to, use the PL/SQL procedure DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP().

The consumer group mapping rules are based on session attributes such as the user name, the name of the service that the session used to connect to the database, and the name of the client program. To create a consumer group mapping rule, use the SET_CONSUMER_GROUP_MAPPING procedure, as shown in Example 5-3. Before running the SET_CONSUMER_GROUP_MAPPING procedure, you must first create a pending area.

You can also manually switch a session to a particular consumer group, using the PL/SQL DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER() or SWITCH_CONSUMER_GROUP_FOR_SESS() procedures.

Example 5-3 Creating Consumer Group Mapping Rules, Based on Service and User Name

BEGIN
DBMS_SERVICE.CREATE_SERVICE('SALES', 'SALES');
DBMS_SERVICE.CREATE_SERVICE('AD_HOC', 'AD_HOC');
 
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.ORACLE_USER, 'SYS', 'CRITICAL_DSS');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'SALES', 'CRITICAL_DSS');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
     (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'AD_HOC', 'NORMAL_DSS');
 
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
   GRANTEE_NAME   => 'PUBLIC',
   CONSUMER_GROUP => 'CRITICAL_DSS',
   GRANT_OPTION   =>  FALSE);
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
   GRANTEE_NAME   => 'PUBLIC',
   CONSUMER_GROUP => 'NORMAL_DSS',
   GRANT_OPTION   =>  FALSE);
END;
/

See Also:

Oracle Database Administrator’s Guide for additional information about creating consumer group mapping rules and switching consumer groups.

5.2.2.3 Creating a CDB Plan

The CDB plan manages CPU resources on the database servers, and flash cache space and I/O bandwidth on the Exadata storage servers.

CDB plans are created using the PL/SQL procedures DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN() and CREATE_CDB_PLAN_DIRECTIVE(). The CDB plan can only be configured from the root PDB.

Example 5-4 Using a CDB Plan to Distribute Resources Between PDBs

This example shows how to distribute resources between three PDBs named SALES, SERVICES, and HR. SALES and SERVICES have higher priority and get three shares each, compared to one share for HR. The HR PDB is given a maximum utilization limit of 70%.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    shares                => 3, 
    memory_min            => 20,
    utilization_limit     => 100);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    shares                => 3, 
    memory_min            => 20,
    memory_limit          => 75);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    shares                => 1, 
    memory_limit          => 50,
    utilization_limit     => 70);

DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

5.2.2.4 Creating a Database Plan

Database resource plans, also known as intradatabase plans, are created using the PL/SQL procedures DBMS_RESOURCE_MANAGER.CREATE_PLAN() and CREATE_PLAN_DIRECTIVE().

You must always begin resource plan creations or updates with the PL/SQL procedure CREATE_PENDING_AREA() and complete them with the PL/SQL procedure SUBMIT_PENDING_AREA(). You must also include a directive for OTHER_GROUPS, which includes all sessions that are not explicitly mapped to a consumer group.

You must have the system privilege ADMINISTER_RESOURCE_MANAGER to run the procedures in the DBMS_RESOURCE_MANAGER PL/SQL package. This resource plan manages both CPU resources on database instances and I/O resources on the cells.

Example 5-5 Sharing Resources Across Applications

In this example, assume that multiple applications share a database and that the I/O resources should be divided across the applications in a particular ratio. For example, there are three applications named SALES, FINANCE, and MARKETING. You would like the I/O resources to be allocated as 60 percent, 25 percent, and 10 percent, respectively, with the remaining 5 percent allocated to any sessions that do not map into these consumer groups. In this scenario, you create a consumer group for each application, and then create a single-level resource plan that specifies the percentage of I/O resources for each consumer group. Each percentage is effectively the minimum amount of I/O resources guaranteed to that consumer group. If a consumer group does not use its full allocation, then the unused portion is redistributed to the other consumer groups in the ratio specified by the plan. You specify the allocations using the MGMT_P1 parameter.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'Resource plan for managing all
 applications between 9 am and 5 pm');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('SALES', 'Sales App');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('FINANCE', 'Finance App');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MARKETING', 'Marketing App');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'SALES', 'Allocation
for SALES', MGMT_P1 => 60);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'FINANCE', 'Allocation
for FINANCE', MGMT_P1 => 25);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'MARKETING',
'Allocation for MARKETING', MGMT_P1 => 10);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'OTHER_GROUPS',
'Allocation for default group', MGMT_P1 => 5);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Example 5-6 Sharing Resources Across Workloads

In this example, assume you want to prioritize one workload over another. For example, suppose that you load data into your data warehouse while also servicing queries, and you want to always prioritize the queries over the data load. For this scenario, you would create two consumer groups for queries (reporting and ad-hoc) and one consumer group for data load. You would like to share the I/O resources between the two query consumer groups using a 75/25 ratio. In addition, you would like to issue I/Os for data load only if the query consumer groups do not use all of their allocation. You can use resource plan levels to specify the allocation priorities.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'Resource plan for prioritizing
queries between 9 am and 5 pm');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORT_QUERIES', 'Report Queries');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('AD-HOC_QUERIES', 'Ad-Hoc Queries');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('DATA_LOAD', 'Data Load');
 
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'REPORT_QUERIES',
'Allocation for REPORT_QUERIES', MGMT_P1 => 75);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'AD-HOC_QUERIES',
'Allocation for AD-HOC_QUERIES', MGMT_P1 => 25);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'DATA_LOAD',
'Allocation for DATA_LOAD', MGMT_P2 => 100);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('DAYTIME_PLAN', 'OTHER_GROUPS',
'Allocation for default group', MGMT_P3 => 100);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

5.2.2.5 Enabling a Database Resource Plan

You can manually enable database resource plans by setting the RESOURCE_MANAGER_PLAN parameter. You can automatically enable resource plans by defining an Oracle Scheduler window with a resource plan.

When the Oracle Scheduler window opens, the resource plan is enabled. When the Oracle Scheduler window closes, the resource plan is disabled.

When a resource plan is enabled, the database alerts all cells about this event and provides the resource plan. The database also alerts all cells when a resource plan is disabled. Because only one resource plan can be active for any database, you are required to enable the same resource plan on all instances of a database. If no database resource plan is enabled for a database, then all I/O requests are treated equally.
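
For example, the following sketch enables the DAYTIME_PLAN plan from Example 5-5 manually, and then shows how it could instead be enabled automatically through an Oracle Scheduler window; the window name and schedule are illustrative:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN' SID='*' SCOPE=BOTH;

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'DAYTIME_WINDOW',
    resource_plan   => 'DAYTIME_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=9',
    duration        => NUMTODSINTERVAL(8, 'hour'),
    comments        => 'Enables DAYTIME_PLAN from 9 am to 5 pm');
END;
/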

5.2.2.6 Managing Fast File Creation

Oracle Exadata System Software features fast file creation, allowing accelerated initialization of data files.

This feature automatically runs whenever you create a new tablespace, add a data file to an existing tablespace, or autoextend an existing tablespace. Oracle Exadata System Software can initialize files very quickly because it issues many concurrent I/O requests. However, these concurrent I/O requests create a heavy load that can interfere with performance-critical queries.

Using I/O Resource Management (IORM), you can control the priority of fast file creations when creating a new tablespace or adding a data file to an existing tablespace. These operations run under the FASTFILECRE function. By default, the FASTFILECRE function is mapped to a hidden consumer group that has lower priority than all consumer groups and background I/Os. If you choose to increase the priority, and thereby the performance, of file creations, add a mapping rule based on the mapping attribute DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION and the mapping value FASTFILECRE, as shown in the following example.

Because autoextending an existing tablespace is a brief and often time-critical operation, you cannot modify its priority using IORM.

Example 5-7 Managing Fast File Creation

This example shows how to run fast file creations in the MAINTENANCE_GROUP consumer group.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MAINTENANCE_GROUP',
  'Maintenance activity');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'FASTFILECRE', 'MAINTENANCE_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
5.2.2.7 Managing Data Import

Using I/O Resource Management (IORM), you can control the priority of ETL operations and the amount of I/O resources that they consume.

Data import, or extract, transform, load (ETL), is an important part of maintaining a data warehouse. In some cases, ETL is extremely critical to performance because reports or queries cannot be run until the data has been loaded. In these cases, ETL should be prioritized above all other queries. In other cases, ETL is a low-priority background activity that only needs to be prioritized in the rare event that it does not complete by a certain time.

To manage ETL, do the following:

  • Map the ETL sessions to the ETL_GROUP consumer group.

    The mapping rules for ETL are typically based on the user name or the client program name. Oracle Data Pump runs under the DATALOAD function. By default, the DATALOAD function is mapped to the ETL_GROUP consumer group.

  • Include the ETL_GROUP group in your resource plans.

To import non-compressed data as compressed data, see "Importing Non-compressed Data as Compressed Data" later in this topic.

Example 5-8 Mapping a Program to the ETL_GROUP Consumer Group

This example shows how to map a program to the ETL_GROUP consumer group.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING  
  (DBMS_RESOURCE_MANAGER.CLIENT_PROGRAM, 'SQLLDR', 'ETL_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Importing Non-compressed Data as Compressed Data

Non-compressed data can be imported as compressed data by using the Oracle Data Pump Import TRANSFORM=SEGMENT_ATTRIBUTES:n option when the target tablespace has been configured to create new tables as Exadata Hybrid Columnar Compression tables by default.
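
A minimal sketch of this approach, assuming a target tablespace named DW_DATA (the tablespace name is illustrative): set the tablespace default to Exadata Hybrid Columnar Compression, and then run the Data Pump import with the TRANSFORM=SEGMENT_ATTRIBUTES:n option so that imported tables inherit the tablespace default compression.

SQL> ALTER TABLESPACE dw_data DEFAULT COMPRESS FOR QUERY HIGH;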

5.2.2.8 Managing Oracle Recovery Manager Backups and Copies

You can use I/O Resource Management (IORM) to control the resource consumption and priority of RMAN I/Os.

Backups are an I/O intensive operation. You can control the rate of Oracle Recovery Manager (RMAN) I/O by configuring the number of RMAN channels. For a greater degree of control over the resource consumption and priority of RMAN I/Os, use IORM. For example, you can map RMAN to a low-priority consumer group. If the Oracle Exadata Storage Server is busy, then the RMAN operations run very slowly and do not interfere with the other database operations. However, whenever the Oracle Exadata Storage Server is not fully utilized, IORM schedules the RMAN I/Os, allowing them to consume the unutilized bandwidth.

RMAN backups run under the BACKUP function. RMAN copies run under the COPY function. By default, both the BACKUP and COPY functions are mapped to the BATCH_GROUP consumer group. You can remap these functions to any other consumer group, as shown in the following example.

Example 5-9 Using Consumer Groups to Manage Resources

This example shows how to map the BACKUP function to the BATCH_GROUP consumer group and the COPY function to the MAINTENANCE_GROUP consumer group.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'BACKUP', 'BATCH_GROUP');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
  DBMS_RESOURCE_MANAGER.ORACLE_FUNCTION, 'COPY', 'MAINTENANCE_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

5.2.3 Administering the IORM Plan

You can administer the IORM plan by using the CellCLI ALTER IORMPLAN command.

5.2.3.1 Setting the IORM Plan

Use the CellCLI ALTER IORMPLAN command to set the IORM plan.

For example:

CellCLI> ALTER IORMPLAN                                                 -
         dbplan=((name=sales01, share=4),                               -
                 (name=sales02, share=3),                               -
                 (name=dev01, share=1),                                 -
                 (name=DEFAULT, share=2))

Because of the length and complexity of typical IORM plan definitions, consider putting the ALTER IORMPLAN command in a script file and running it by using the CellCLI START command.
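
For example, if the plan definition is saved in a script file (the file name shown here is illustrative), you can run it as follows:

CellCLI> START iormplan.scl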

Note:

Exadata I/O Resource Management (IORM) is configured individually on every storage server using the CellCLI ALTER IORMPLAN command. To deliver consistent overall system performance, ensure every storage server in the storage cluster uses the same IORM configuration settings.
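
For example, one way to apply the same command to every cell is to use the dcli utility from a database server; the cell_group file and celladmin user shown here are assumptions about your environment:

$ dcli -g cell_group -l celladmin "cellcli -e ALTER IORMPLAN objective=auto"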

5.2.3.2 Using Share-Based Resource Management

A share value represents the relative importance of each entity.

With share-based resource allocation, a higher share value implies higher priority and more access to the I/O resources. For example, a database with a share value of 2 gets twice the resource allocation of a database with a share value of 1.

Valid share values are 1 to 32, with 1 being the lowest share, and 32 being the highest share. The sum of all share values in a plan cannot be greater than 32768.

Share-based resource allocation is the recommended method for the interdatabase plan (dbplan). For the cluster plan (clusterplan), share-based resource allocation is the only option.

The following example illustrates how to use share-based resource management in an interdatabase plan. Consider four databases sharing the same Oracle Exadata Storage Server resources. The four databases are:

  • A critical OLTP production database, named PROD
  • A test database, named PROD_TEST
  • A development database, named PROD_DEV
  • A data warehouse database, named DW

An OLTP production database typically issues small I/O requests, and low latency for these requests is the critical requirement. A data warehouse issues large numbers of large I/O requests and is more sensitive to the I/O throughput than the latency of each individual I/O request. Without any I/O resource management, the large number of I/O requests issued by the DW database could overwhelm the storage subsystem and increase the latency of the I/O requests issued by the PROD database. Additionally, the I/O requests issued by the test and development databases, PROD_TEST and PROD_DEV, could adversely affect the performance of the PROD and the DW databases.

To ensure a reasonable distribution of I/O resources, you can define a share-based interdatabase plan as follows:

CellCLI> ALTER IORMPLAN                                    -
         dbplan=((name=prod, share=16),                    -
                 (name=dw, share=4),                       -
                 (name=prod_test, share=2),                -
                 (name=DEFAULT, share=1))

By using the example interdatabase plan, the critical OLTP database (PROD) gets priority when there is contention for I/O resources. Specifically, the I/O share for PROD is 4 times greater than DW, 8 times greater than PROD_TEST, and 16 times greater than the default share that is assigned to PROD_DEV.

At any time, you can change the share allocations to adjust the relative priorities.

5.2.3.3 Using Allocation-Based Resource Management

In IORM, an allocation value specifies the resource allocation as a percentage (0-100).

With allocation-based resource management, each allocation is associated with a level. Valid level values are from 1 to 8, and the sum of allocation values cannot exceed 100 for each level. Resources are allocated to level 1 first, and then remaining resources are allocated to level 2, and so on.

Though not recommended, allocation-based resource management can be used in the interdatabase plan (dbplan). For the category plan (catplan), allocation-based resource management is the only option.

The following example illustrates how to use allocation-based resource management in an interdatabase plan. Consider four databases sharing the same Oracle Exadata Storage Server resources. The four databases are:

  • A critical OLTP production database, named PROD
  • A test database, named PROD_TEST
  • A development database, named PROD_DEV
  • A data warehouse database, named DW

An OLTP production database typically issues small I/O requests, and low latency for these requests is the critical requirement. A data warehouse issues large numbers of large I/O requests and is more sensitive to the I/O throughput than the latency of each individual I/O request. Without any I/O resource management, the large number of I/O requests issued by the DW database could overwhelm the storage subsystem and increase the latency of the I/O requests issued by the PROD database. Additionally, the I/O requests issued by the test and development databases, PROD_TEST and PROD_DEV, could adversely affect the performance of the PROD and the DW databases.

To ensure a reasonable distribution of I/O resources, you can define an allocation-based interdatabase plan as follows:

CellCLI> ALTER IORMPLAN                                       -
         dbPlan=((name=prod, level=1,allocation=80),          -
                 (name=dw, level=2, allocation=80),           -
                 (name=prod_test,  level=3, allocation=50),   -
                 (name=prod_dev, level=3, allocation=40),     -
                 (name=OTHER, level=3, allocation=10))

By using the example interdatabase plan:

  • The critical OLTP database (PROD) is guaranteed 80 percent of the I/O resources during periods of I/O resource contention.
  • The DW database gets 80 percent of any remaining unused I/O resources.
  • Finally, the PROD_TEST and PROD_DEV databases get 50 percent and 40 percent, respectively, of any I/O resources that remain unused. Also, in this example, a 10% allocation is reserved for OTHER databases that are not explicitly listed in the plan.

At any time, you can change the allocations to adjust the resource assignments.

5.2.3.4 Using the limit Attribute

The limit attribute specifies the maximum flash I/O utilization limit as a percentage of the available resources.

You can use the limit attribute to restrict the flash I/O utilization for an entity in the interdatabase plan or the cluster plan. This attribute ensures that the specified entity never utilizes flash I/O resources beyond the specified limit. The attribute applies only to I/O on flash devices, which includes flash-based grid disks and Exadata Smart Flash Cache.

For example, if a production and test database are sharing Oracle Exadata Storage Server resources, then you could set a maximum flash utilization limit for the test database in the interdatabase plan as follows:

ALTER IORMPLAN dbplan=((name=prod),               -
                       (name=test, limit=20),     -
                       (name=DEFAULT, limit=10))

If maximum utilization limits are specified, then excess capacity might not be used, and flash devices can run below full capacity.

Note:

Specifying low limit values can have a significant performance impact and is generally not advisable.

Resource management using limits is ideal for pay-for-performance use cases but should not be used to implement fairness. Instead, use the share attribute to ensure equitable distribution of I/O resources.

5.2.3.5 Controlling Access to Flash Cache and Flash Log

You can use IORM to manage access to flash cache and flash log.

You can use the flashcache and flashlog attributes in the interdatabase plan to control access to flash cache and flash log. When set in the interdatabase plan, these attributes control access by a specific database.

The following example shows how to disable flash cache and flash log for the prod_test and prod_dev databases. In the example, flash log is also disabled for the dw_test database.

CellCLI> ALTER IORMPLAN                                          -
         dbplan=((name=prod, flashcache=on, flashlog=on),        -
                (name=dw, flashcache=on, flashlog=on),           -
                (name=prod_test, flashcache=off, flashlog=off),  -
                (name=prod_dev, flashcache=off, flashlog=off),   -
                (name=dw_test, flashcache=on, flashlog=off))

You can also use these attributes in conjunction with other attributes. For example:

CellCLI> ALTER IORMPLAN                                                   -
         dbplan=((name=prod, share=8, flashcache=on, flashlog=on),        -
                (name=dw, share=6, flashcache=on, flashlog=on),           -
                (name=prod_test, share=2, flashcache=off, flashlog=off),  -
                (name=prod_dev, share=1, flashcache=off, flashlog=off),   -
                (name=dw_test, share=2, flashcache=on, flashlog=off),     -
                (name=other, share=1))

You do not need to explicitly set flashcache=on or flashlog=on because they are the default settings.

5.2.3.6 Using Flash Cache Attributes

You can use flash cache attributes in the IORM plan to guarantee space allocation in Exadata Smart Flash Cache.

Using flash cache attributes, IORM can reserve space for critical databases while preventing less important or rogue entities from consuming the entire flash cache. These attributes can only be specified in an interdatabase plan, and are configured using the CellCLI utility.

You can use the following attributes to set limits for flash cache resources. A hard limit means that the specified limit cannot be exceeded even when the flash cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • flashCacheMin — Specifies the minimum amount of flash cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because flashCacheMin is a guaranteed reservation, the sum of flashCacheMin across all directives should be less than the size of the flash cache to ensure that each database gets its respective quota.

  • flashCacheLimit — Specifies the soft maximum amount of flash cache space that is available to the database. If the flash cache is not full, a database can exceed the flashCacheLimit value.
  • flashCacheSize — Specifies the hard maximum amount of flash cache space that is available to the database. The flashCacheSize value cannot be exceeded at any time.

    However, if you set flashCacheSize to a value that is lower than the current space occupied by the database, then starting with Oracle Exadata System Software release 20.1.0 excess data is proactively cleared from the cache. Previously, excess data was only removed when overwritten by other data.

    Starting with Oracle Exadata System Software release 19.2.0, flashCacheSize is not a guaranteed reservation if the sum of the flashCacheSize across all directives is more than the size of the flash cache. In this case, you can also specify flashCacheMin to define a guaranteed minimum quota.

Example 5-10 Configuring an Interdatabase Plan with Flash Cache Attributes

This example shows how to create an interdatabase plan with flash cache attributes.

In the example, the sales and test databases are allocated an amount of space in the flash cache by using the flashCacheSize parameter. However, these databases cannot exceed the specified allocation, even if the flash cache has free space.

The finance and dev databases use flashCacheMin for guaranteed minimum quotas. These databases can also exceed the specified flashCacheLimit size when there is free space in the flash cache.

ALTER IORMPLAN                                                           -
 dbplan=((name=sales, share=8, flashCacheSize=10G),                      -
         (name=finance, share=8, flashCacheLimit=10G, flashCacheMin=2G), -
         (name=dev, share=2, flashCacheLimit=4G, flashCacheMin=1G),      -
         (name=test, share=1, limit=10, flashCacheSize=1G))

Example 5-11 Configuring an Interdatabase Plan with Flash Cache Attributes

Starting with Oracle Exadata System Software release 19.3.0, you can specify both flashCacheMin and flashCacheSize for the same target.

ALTER IORMPLAN                                                  -
 dbplan=((name=sales, share=8, flashCacheMin=3G, flashCacheSize=10G),    -
         (name=finance, share=8, flashCacheLimit=10G, flashCacheMin=2G), -
         (name=dev, share=2, flashCacheLimit=4G, flashCacheMin=1G),      -
         (name=test, share=1, limit=10, flashCacheSize=1G))
5.2.3.7 Controlling Access to XRMEM Resources

You can use IORM to manage access to Exadata RDMA Memory (XRMEM) resources.

You can use the xrmemcache and xrmemlog attributes in the interdatabase plan to control access to XRMEM cache and XRMEM log. When set in the interdatabase plan, these attributes control access by a specific database.

Note:

On Exadata X8M and X9M storage server models only, you can use pmemcache and pmemlog attributes interchangeably with xrmemcache and xrmemlog. However, you cannot have a mixture of PMEM and XRMEM attributes in the IORM plan.

The following example shows how to disable XRMEM cache and XRMEM log for the prod_test and prod_dev databases. In the example, XRMEM log is also disabled for the dw_test database.

CellCLI> ALTER IORMPLAN                                        -
         dbplan=((name=prod, xrmemcache=on, xrmemlog=on),        -
                (name=dw, xrmemcache=on, xrmemlog=on),           -
                (name=prod_test, xrmemcache=off, xrmemlog=off),  -
                (name=prod_dev, xrmemcache=off, xrmemlog=off),   -
                (name=dw_test, xrmemcache=on, xrmemlog=off))

You can also use these attributes in conjunction with other attributes. For example:

CellCLI> ALTER IORMPLAN                                                 -
         dbplan=((name=prod, share=8, xrmemcache=on, xrmemlog=on),        -
                (name=dw, share=6, xrmemcache=on, xrmemlog=on),           -
                (name=prod_test, share=2, xrmemcache=off, xrmemlog=off),  -
                (name=prod_dev, share=1, xrmemcache=off, xrmemlog=off),   -
                (name=dw_test, share=2, xrmemcache=on, xrmemlog=off),     -
                (name=other, share=1))

You do not need to explicitly set xrmemcache=on or xrmemlog=on because they are the default settings.

5.2.3.8 Using XRMEM Cache Attributes

You can use XRMEM cache attributes in the IORM plan to guarantee space allocation in the Exadata RDMA Memory Cache (XRMEM cache).

Using XRMEM cache attributes, IORM can reserve space for critical databases while preventing less important or rogue entities from consuming the entire XRMEM cache. These attributes can only be specified in an interdatabase plan, and are configured using the CellCLI utility.

You can use the following attributes to set limits for XRMEM cache resources. A hard limit means the database cannot exceed its quota even when the memory cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • xrmemCacheMin — Specifies the minimum amount of XRMEM cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because xrmemCacheMin is a guaranteed reservation, the sum of xrmemCacheMin across all directives should be less than the size of the XRMEM cache to ensure that each database gets its respective quota.

  • xrmemCacheLimit — Specifies the soft maximum amount of XRMEM cache space that is available to the database. If the XRMEM cache is not full, a database can exceed the xrmemCacheLimit value.
  • xrmemCacheSize — Specifies the hard maximum amount of XRMEM cache space that is available to the database. The xrmemCacheSize value cannot be exceeded at any time.

    However, if you set xrmemCacheSize to a value that is lower than the current space occupied by the database, then excess data is proactively cleared from the cache.

    xrmemCacheSize is not a guaranteed reservation if the sum of the xrmemCacheSize across all directives is more than the size of the XRMEM cache. In this case, you can also specify xrmemCacheMin to define a guaranteed minimum quota.

    Note:

    On Exadata X8M and X9M storage server models only, you can interchangeably use attribute names starting with pmem instead of xrmem. For example, pmemCacheMin instead of xrmemCacheMin. However, you cannot have a mixture of PMEM and XRMEM attributes in the IORM plan.

Example 5-12 Configuring an Interdatabase Plan with XRMEM Cache Attributes

This example shows how to create an interdatabase plan with XRMEM cache attributes.

In the example, the sales and test databases are allocated an amount of space in the XRMEM cache by using the xrmemCacheSize parameter. However, these databases cannot exceed the specified allocation, even if the XRMEM cache has free space.

The finc and dev databases use xrmemCacheMin for guaranteed minimum quotas. These databases can also exceed the specified xrmemCacheLimit size when there is free space in the XRMEM cache.

The example plan also contains various flash cache attributes.

ALTER IORMPLAN dbplan=                                                                             -
((name=sales, share=8, xrmemCacheSize=2G, flashCacheSize=10G),                                     -
(name=finc, share=8, xrmemCacheMin=1G, xrmemCacheLimit=2G, flashCacheLimit=10G, flashCacheMin=2G), -
(name=dev, share=2, xrmemCacheMin=500M, xrmemCacheLimit=1G, flashCacheLimit=4G, flashCacheMin=1G), -
(name=test, share=1, limit=10, xrmemCacheSize=200M))
5.2.3.9 Controlling Access to PMEM Cache and PMEM Log

Note:

This topic applies only to Oracle Exadata System Software releases before 23.1.0. Otherwise, see Controlling Access to XRMEM Resources.

You can use IORM to manage access to persistent memory (PMEM) cache and PMEM log.

You can use the pmemcache and pmemlog attributes in the interdatabase plan to control access to PMEM cache and PMEM log. When set in the interdatabase plan, these attributes control access by a specific database.

The following example shows how to disable PMEM cache and PMEM log for the prod_test and prod_dev databases. In the example, PMEM log is also disabled for the dw_test database.

CellCLI> ALTER IORMPLAN                                        -
         dbplan=((name=prod, pmemcache=on, pmemlog=on),        -
                (name=dw, pmemcache=on, pmemlog=on),           -
                (name=prod_test, pmemcache=off, pmemlog=off),  -
                (name=prod_dev, pmemcache=off, pmemlog=off),   -
                (name=dw_test, pmemcache=on, pmemlog=off))

You can also use these attributes in conjunction with other attributes. For example:

CellCLI> ALTER IORMPLAN                                                 -
         dbplan=((name=prod, share=8, pmemcache=on, pmemlog=on),        -
                (name=dw, share=6, pmemcache=on, pmemlog=on),           -
                (name=prod_test, share=2, pmemcache=off, pmemlog=off),  -
                (name=prod_dev, share=1, pmemcache=off, pmemlog=off),   -
                (name=dw_test, share=2, pmemcache=on, pmemlog=off),     -
                (name=other, share=1))

You do not need to explicitly set pmemcache=on or pmemlog=on because they are the default settings.

5.2.3.10 Using PMEM Cache Attributes

Note:

This topic applies only to Oracle Exadata System Software releases before 23.1.0. Otherwise, see Using XRMEM Cache Attributes.

You can use PMEM cache attributes in the IORM plan to guarantee space allocation in the persistent memory (PMEM) cache.

Using PMEM cache attributes, IORM can reserve space for critical databases while preventing less important or rogue entities from consuming the entire PMEM cache. These attributes can only be specified in an interdatabase plan, and are configured using the CellCLI utility.

You can use the following attributes to set limits for PMEM cache resources. A hard limit means the database cannot exceed its quota even when the memory cache is not full. A soft limit means that the specified limit can be exceeded if there are available resources.

  • pmemCacheMin — Specifies the minimum amount of PMEM cache space that is guaranteed for the specified database, even if the blocks are cold. This is a hard limit.

    Because pmemCacheMin is a guaranteed reservation, the sum of pmemCacheMin across all directives should be less than the size of the PMEM cache to ensure that each database gets its respective quota.

  • pmemCacheLimit — Specifies the soft maximum amount of PMEM cache space that is available to the database. If the PMEM cache is not full, a database can exceed the pmemCacheLimit value.
  • pmemCacheSize — Specifies the hard maximum amount of PMEM cache space that is available to the database. The pmemCacheSize value cannot be exceeded at any time.

    However, if you set pmemCacheSize to a value that is lower than the current space occupied by the database, then starting with Oracle Exadata System Software release 20.1.0 excess data is proactively cleared from the cache. Previously, excess data was only removed when overwritten by other data.

    pmemCacheSize is not a guaranteed reservation if the sum of the pmemCacheSize across all directives is more than the size of the PMEM cache. In this case, you can also specify pmemCacheMin to define a guaranteed minimum quota.

Example 5-13 Configuring an Interdatabase Plan with PMEM Cache Attributes

This example shows how to create an interdatabase plan with PMEM cache attributes. In the example, the sales and test databases are allocated an amount of space in the PMEM cache by using the pmemCacheSize parameter. However, these databases cannot exceed the specified allocation, even if the PMEM cache has free space.

The finc and dev databases use pmemCacheMin for guaranteed minimum quotas. These databases can also exceed the specified pmemCacheLimit size when there is free space in the PMEM cache.

The example plan also contains various flash cache attributes.

ALTER IORMPLAN dbplan=                                                                             -
((name=sales, share=8, pmemCacheSize=2G, flashCacheSize=10G),                                      -
(name=finc, share=8, pmemCacheMin=1G, pmemCacheLimit=2G, flashCacheLimit=10G, flashCacheMin=2G),   -
(name=dev, share=2, pmemCacheMin=500M, pmemCacheLimit=1G, flashCacheLimit=4G, flashCacheMin=1G),   -
(name=test, share=1, limit=10, pmemCacheSize=200M))
5.2.3.11 Using the role Attribute

The role attribute allows a different allocation to be specified, based on whether the database has the Oracle Data Guard primary or standby role.

By default, interdatabase plan directives apply when the database is in either role. If you want the directive to apply only when the database is in the primary role, then include role=primary. Similarly, if you want the directive to apply only when the database is in the standby role, then include role=standby.

For example:

ALTER IORMPLAN                                          -
dbPlan=((name=prod, share=8, role=primary),             -
        (name=prod, share=1, limit=25, role=standby),   -
        (name=default, share=2))
5.2.3.12 Using the asmcluster Attribute

You can use the asmcluster attribute to uniquely identify databases with the same DB_UNIQUE_NAME.

If you have databases with the same DB_UNIQUE_NAME but associated with different Oracle ASM clusters, then, starting with Oracle Exadata System Software release 19.1.0, you can use the asmcluster attribute to uniquely identify the databases.

The databases must be clients of different Oracle ASM clusters, and you must have ASM-scoped security configured to facilitate cluster identification.

For example:

ALTER IORMPLAN                                          -
dbplan=((name=pdb1, share=4, flashcachemin=5G, asmcluster=asm1),  -
        (name=pdb1, share=2, limit=80, asmcluster=asm2),  -
        (name=pdb2, share=2, flashcachelimit=2G, asmcluster=asm1),  -
        (name=default, share=1, flashcachelimit=1G))
5.2.3.13 Resetting Default Values in an IORM Plan

Use an empty string to reset an IORM plan.

You can reset the entire IORM plan, or separately reset the catplan, dbplan, or clusterplan. For example:

CellCLI> ALTER IORMPLAN catplan="", dbplan="", clusterplan=""
CellCLI> ALTER IORMPLAN catplan=""
CellCLI> ALTER IORMPLAN dbplan=""
CellCLI> ALTER IORMPLAN clusterplan=""

5.2.4 Listing an I/O Resource Management Plan

You can view the current interdatabase plan for a storage server using the CellCLI LIST IORMPLAN command on the storage server.

Example 5-14 Displaying Interdatabase Plan Details

This example shows how to get a detailed list of the interdatabase plan attributes.

CellCLI> LIST IORMPLAN DETAIL
   name:                   cell01_IORMPLAN
   status:                 active
   catPlan:                name=administrative,level=1,allocation=80
                           name=interactive,level=2,allocation=90
                           name=batch,level=3,allocation=80
                           name=maintenance,level=4,allocation=50
                           name=other,level=4,allocation=50
   dbplan:                 name=sales_prod, share=8, role=primary
                           name=sales_prod, share=1, limit=50, role=standby
                           name=sales_test, share=1, limit=25
                           name=default, share=2
   objective:              balanced

5.2.5 Managing Flash Cache Quotas for Databases and PDBs

IORM enables you to control how you want the flash cache to be shared among different databases and pluggable databases (PDBs).

This can be done using just the CDB resource plan or in conjunction with the I/O Resource Management (IORM) interdatabase plan.

Consider an example intradatabase resource plan for the NEWCDB CDB. The plan specifies memory_min and memory_limit for the 3 PDBs mentioned in the plan.

Note the following:

  • The memory_min and memory_limit values are specified as percentages and range from 0 to 100. Because over-provisioning is supported, the sum of the percentages is not restricted to 100%. If the sum of these values is greater than 100%, then the values are normalized down proportionally (divided by the sum), as shown in the tables that follow.
  • If memory_min is not specified, then it defaults to 0.
  • If memory_limit is not specified, then it defaults to 100.
  • For CDB$ROOT, there is a 5% memory_limit value.

The following code shows how to create the example intradatabase plan. The sum of the memory_min values is 40%, and the sum of the memory_limit values is 175%, which must be normalized. If an interdatabase plan is not specified, then these percentages apply to the entire size of the flash cache. If an interdatabase plan is specified, then the quotas for the PDBs are computed as a percentage of the flashcachemin and flashcachesize values for the database as specified in the interdatabase plan directive.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');
 
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    memory_min            => 20);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    memory_min            => 20,
    memory_limit          => 50);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    memory_limit          => 25);
 
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

In the example above, if no interdatabase plan is specified and the size of the flash cache is 50 GB, then the following table shows the breakdown of the quotas after normalization of the limit (because the sum of the memory_limit values is greater than 100%). After normalization, if a minimum value is greater than the corresponding limit, then the minimum value is reduced to make it equal to the limit.

Table 5-4 Case 1: PDB Flash Cache Limits with No Interdatabase Plan

PDB           Flash Cache Min   FC Soft Limit   Normalized Soft Limit          FC Hard Limit
SALESPDB      20% = 10 GB       100 (default)   100 / 175 * 50 GB = 28.57 GB   n/a
SERVICESPDB   20% = 10 GB       50              50 / 175 * 50 GB = 14.28 GB    n/a
HRPDB         0                 25              25 / 175 * 50 GB = 7.14 GB     n/a

The next example shows an interdatabase plan with flash cache quotas that includes the NEWCDB CDB.

ALTER IORMPLAN dbplan=                                                                                       -
      ((name=newcdb, share=8, flashCacheSize=20G),                                        -
       (name=finance, share=8, flashCacheLimit=10G, flashCacheMin=2G), -
       (name=dev, share=2, flashCacheLimit=4G, flashCacheMin=1G),    -
       (name=test, share=1))

In addition to the NEWCDB CDB, three other databases (finance, dev, and test) share the same storage servers. Flash cache quotas are only enforced if the directives specify the flashcachesize, flashcachelimit, or flashcachemin attributes. The database test does not specify any flash cache directive, so that database and its PDBs (if any exist) are not subject to any flash cache quotas.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies flashcachemin and flashcachesize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the flashcachemin setting, and the PDB-specific memory_limit values represent fractions of the flashcachesize.

However, if the interdatabase IORM plan directive specifies flashcachesize without flashcachemin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the flashcachesize.

So, because the example interdatabase IORM plan directive for newcdb specifies flashcachesize without flashcachemin, the PDB-specific memory_min quotas in the CDB plan are ignored. The following table lists the effective flash cache limits when applying the example CDB plan together with the example interdatabase IORM plan.

Table 5-5 Case 2: PDB Flash Cache Limits with an InterDatabase Plan

PDB           Flash Cache Min   FC Hard Limit   Normalized Hard Limit          FC Soft Limit
SALESPDB      0                 100 (default)   100 / 175 * 20 GB = 11.43 GB   n/a
SERVICESPDB   0                 50              50 / 175 * 20 GB = 5.71 GB     n/a
HRPDB         0                 25              25 / 175 * 20 GB = 2.86 GB     n/a

For non-CDB databases, the flashcachesize, flashcachemin, and flashcachelimit values are specified in absolute terms and no additional normalization is required. Because flashcachemin is a guaranteed reservation, the sum of flashcachemin across all the directives should be less than the total size of the flash cache.

5.2.6 Managing XRMEM Cache Quotas for Databases and PDBs

I/O Resource Management (IORM) enables you to control how you want the Exadata RDMA Memory Cache (XRMEM cache) to be shared among different databases and pluggable databases (PDBs).

This can be done using just the CDB resource plan or in conjunction with the I/O Resource Management (IORM) interdatabase plan.

Consider an example intradatabase resource plan for the NEWCDB CDB. The plan specifies memory_min and memory_limit for the 3 PDBs mentioned in the plan.

Note the following:

  • The memory_min and memory_limit values are specified as percentages and range from 0 to 100. Because over-provisioning is supported, the sum of the percentages is not restricted to 100%. If the sum of these values is greater than 100%, then the values are normalized down proportionally (divided by the sum), as shown in the tables that follow.
  • If memory_min is not specified, then it defaults to 0.
  • If memory_limit is not specified, then it defaults to 100.
  • For CDB$ROOT, there is a 5% memory_limit value.

The following code shows how to create the example intradatabase plan. The sum of the memory_min values is 40%, and the sum of the memory_limit values is 175%, which must be normalized. If an interdatabase plan is not specified, then these percentages apply to the entire size of the XRMEM cache. If an interdatabase plan is specified, then the quotas for the PDBs are computed as a percentage of the xrmemcachemin and xrmemcachesize values for the database as specified in the interdatabase plan directive.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');
 
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    memory_min            => 20);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    memory_min            => 20,
    memory_limit          => 50);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    memory_limit          => 25);
 
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

In the example above, if no interdatabase plan is specified and the size of the XRMEM cache is 1.25 TB, then the following table shows the breakdown of the quotas after normalization of the limit (because the sum of the memory_limit values is greater than 100%). After normalization, if a minimum value is greater than the corresponding limit, then the minimum value is reduced to make it equal to the limit.

Table 5-6 Case 1: PDB XRMEM Cache Limits with No Interdatabase Plan

PDB           XRMEM Cache Min   XRMEM Soft Limit   Normalized Soft Limit          XRMEM Hard Limit
SALESPDB      20% = 256 GB      100 (default)      100 / 175 * 1.25 TB = 731 GB   n/a
SERVICESPDB   20% = 256 GB      50                 50 / 175 * 1.25 TB = 366 GB    n/a
HRPDB         0                 25                 25 / 175 * 1.25 TB = 183 GB    n/a

The next example shows an interdatabase plan with XRMEM cache quotas that includes the NEWCDB CDB.

ALTER IORMPLAN dbplan=                                                                                          -
      ((name=newcdb, share=8, xrmemCacheSize=200G, flashCacheSize=10G),                                         -
       (name=finance, share=8, xrmemCacheMin=100G, xrmemCacheLimit=200G, flashCacheLimit=10G, flashCacheMin=2G), -
       (name=dev, share=2, xrmemCacheMin=1G, xrmemCacheLimit=10G, flashCacheLimit=4G, flashCacheMin=1G),        -
       (name=test, share=1))

In addition to the NEWCDB CDB, three other databases (finance, dev, and test) share the same storage servers. XRMEM cache quotas are only enforced if the directives specify the xrmemcachesize, xrmemcachelimit, or xrmemcachemin attributes. The database test does not specify any XRMEM cache directive, so that database and its PDBs (if any exist) are not subject to any XRMEM cache quotas.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies xrmemcachemin and xrmemcachesize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the xrmemcachemin setting, and the PDB-specific memory_limit values represent fractions of the xrmemcachesize.

However, if the interdatabase IORM plan directive specifies xrmemcachesize without xrmemcachemin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the xrmemcachesize.

So, because the example interdatabase IORM plan directive for newcdb specifies xrmemcachesize without xrmemcachemin, the PDB-specific memory_min quotas in the CDB plan are ignored. The following table lists the effective XRMEM cache limits when applying the example CDB plan together with the example interdatabase IORM plan.

Table 5-7 Case 2: PDB XRMEM Cache Limits with an InterDatabase Plan

PDB           XRMEM Cache Min   XRMEM Hard Limit   Normalized Hard Limit            XRMEM Soft Limit
SALESPDB      0                 100 (default)      100 / 175 * 200 GB = 114.29 GB   n/a
SERVICESPDB   0                 50                 50 / 175 * 200 GB = 57.14 GB     n/a
HRPDB         0                 25                 25 / 175 * 200 GB = 28.57 GB     n/a

For non-CDB databases, the xrmemcachesize, xrmemcachemin, and xrmemcachelimit values are specified in absolute terms and no additional normalization is required. Because xrmemcachemin is a guaranteed reservation, the sum of xrmemcachemin across all the directives should be less than the total size of the XRMEM cache.

5.2.7 Managing PMEM Cache Quotas for Databases and PDBs

Note:

This topic applies only to Oracle Exadata System Software releases before 23.1.0. Otherwise, see Managing XRMEM Cache Quotas for Databases and PDBs.

I/O Resource Management (IORM) enables you to control how you want the PMEM cache to be shared among different databases and pluggable databases (PDBs).

This can be done using just the CDB resource plan or in conjunction with the I/O Resource Management (IORM) interdatabase plan.

Consider an example intradatabase resource plan for the NEWCDB CDB. The plan specifies memory_min and memory_limit for the 3 PDBs mentioned in the plan.

Note the following:

  • The memory_min and memory_limit values are specified as percentages and range from 0 to 100. Because over-provisioning is supported, the sum of the percentages is not restricted to 100%. If the sum of these values is greater than 100%, then the values are normalized down proportionally (divided by the sum), as shown in the tables that follow.
  • If memory_min is not specified, then it defaults to 0.
  • If memory_limit is not specified, then it defaults to 100.
  • For CDB$ROOT, there is a 5% memory_limit value.

The following code shows how to create the example intradatabase plan. The sum of the memory_min values is 40%, and the sum of the memory_limit values is 175%, which must be normalized. If an interdatabase plan is not specified, then these percentages apply to the entire size of the PMEM cache. If an interdatabase plan is specified, then the quotas for the PDBs are computed as a percentage of the pmemcachemin and pmemcachesize values for the database as specified in the interdatabase plan directive.

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'NEWCDB_PLAN',
    comment => 'CDB resource plan for newcdb');
 
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SALESPDB', 
    memory_min            => 20);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'SERVICESPDB', 
    memory_min            => 20,
    memory_limit          => 50);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'NEWCDB_PLAN', 
    pluggable_database    => 'HRPDB', 
    memory_limit          => 25);
 
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

In the example above, if no interdatabase plan is specified and the size of the PMEM cache is 10 GB, then the following table shows the breakdown of the quotas after normalization of the limit (because the sum of the memory_limit values is greater than 100%). After normalization, if a minimum value is greater than the corresponding limit, then the minimum value is reduced to make it equal to the limit.

Table 5-8 Case 1: PDB PMEM Cache Limits with No Interdatabase Plan

PDB           PMEM Cache Min   PMEM Soft Limit   Normalized Soft Limit         PMEM Hard Limit
SALESPDB      20% = 2 GB       100 (default)     100 / 175 * 10 GB = 5.71 GB   n/a
SERVICESPDB   20% = 2 GB       50                50 / 175 * 10 GB = 2.85 GB    n/a
HRPDB         0                25                25 / 175 * 10 GB = 1.42 GB    n/a

The next example shows an interdatabase plan with PMEM cache quotas that includes the NEWCDB CDB.

ALTER IORMPLAN dbplan=                                                                                      -
      ((name=newcdb, share=8, pmemCacheSize=2G, flashCacheSize=10G),                                        -
       (name=finance, share=8, pmemCacheMin=1G, pmemCacheLimit=2G, flashCacheLimit=10G, flashCacheMin=2G),  -
       (name=dev, share=2, pmemCacheMin=100M, pmemCacheLimit=1G, flashCacheLimit=4G, flashCacheMin=1G),     -
       (name=test, share=1))

In addition to the NEWCDB CDB, three other databases (finance, dev, and test) share the same storage servers. PMEM cache quotas are only enforced if the directives specify the pmemcachesize, pmemcachelimit, or pmemcachemin attributes. The database test does not specify any PMEM cache directive, so that database and its PDBs (if any exist) are not subject to any PMEM cache quotas.

Cache limits in an interdatabase IORM plan directive constrain the settings in any corresponding CDB plan. Consequently, if an interdatabase IORM plan directive specifies pmemcachemin and pmemcachesize settings for a database, then the PDB-specific memory_min quotas in the CDB plan represent fractions of the pmemcachemin setting, and the PDB-specific memory_limit values represent fractions of the pmemcachesize.

However, if the interdatabase IORM plan directive specifies pmemcachesize without pmemcachemin, then the PDB-specific memory_min settings are ignored while the memory_limit settings continue to represent fractions of the pmemcachesize.

So, because the example interdatabase IORM plan directive for newcdb specifies pmemcachesize without pmemcachemin, the PDB-specific memory_min quotas in the CDB plan are ignored. The following table lists the effective PMEM cache limits when applying the example CDB plan together with the example interdatabase IORM plan.

Table 5-9 Case 2: PDB PMEM Cache Limits with an InterDatabase Plan

PDB           PMEM Cache Min   PMEM Hard Limit   Normalized Hard Limit        PMEM Soft Limit
SALESPDB      0                100 (default)     100 / 175 * 2 GB = 1.14 GB   n/a
SERVICESPDB   0                50                50 / 175 * 2 GB = 0.57 GB    n/a
HRPDB         0                25                25 / 175 * 2 GB = 0.28 GB    n/a

For non-CDB databases, the pmemcachesize, pmemcachemin, and pmemcachelimit values are specified in absolute terms and no additional normalization is required. Because pmemcachemin is a guaranteed reservation, the sum of pmemcachemin across all the directives should be less than the total size of the PMEM cache.

5.2.8 Using IORM Profiles

I/O Resource Management (IORM) interdatabase plans support profiles to ease management, and configuration of interdatabase plans for hundreds of databases.

Profiles introduce a way to allocate I/O resources for groups of databases. Profiles are specified as directives for the interdatabase plan, and are configured using the CellCLI utility. A profile directive consists of an identifier (name), and a set of attributes. To differentiate between a database directive and a profile directive, a qualifier attribute called type is used. The type attribute can be set to either database or profile. The following is an example of the type attribute syntax:

CellCLI> ALTER IORMPLAN DBPLAN=((name=gold, share=10, type=profile),  -
                                (name=silver, share=5, type=profile), -
                                (name=bronze, share=1, type=profile))

The preceding example contains three directives for profiles, GOLD, SILVER and BRONZE. All databases with db_performance_profile set to GOLD would automatically get 10 shares of cell I/O resources. Likewise, in the example above, databases with the SILVER profile would get 5 shares, and databases with the BRONZE profile would get 1 share.

After you create your profiles, map new and existing databases to one of the profiles defined in the interdatabase plan. To do this, set the db_performance_profile initialization parameter for each database to the name of the desired profile, and then restart the database. As with Oracle Database Resource Manager plans, the IORM profile information is automatically pushed to all the storage servers (cells). The following SQL commands show how the initialization parameter can be set for a database:

SQL> ALTER SYSTEM SET db_performance_profile=gold SCOPE=spfile;

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP

When adding a new database, you can set the db_performance_profile parameter and restart the database. The database automatically inherits the profile attributes without having to modify the interdatabase plan. You can also create interdatabase plans with a mix of profile directives, and database directives.

You can view existing profiles using the LIST IORMPROFILE command.
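
For example, after the GOLD, SILVER, and BRONZE profile directives shown above are in place, the output might look like the following (the output shown is illustrative only):

CellCLI> LIST IORMPROFILE
         bronze
         gold
         silver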

Note the following when managing interdatabase profile plans:

  • The db_performance_profile parameter is not a dynamic parameter, so profile updates require a database restart.
  • If the type attribute is not specified, then the directive defaults to the database directive.
  • An interdatabase plan can specify a maximum of 8 profile directives and 1024 database directives.
  • Level, allocation, and role cannot be specified with a profile directive.
  • The words OTHER and DEFAULT are reserved. A profile name cannot be OTHER or DEFAULT.
  • The type attribute cannot be specified with category plans.
  • Profiles cannot be specified in conjunction with category plans.
  • If multiple databases are mapped to the OTHER directive, then Oracle Exadata Storage Server does not use Oracle Database Resource Manager for these databases. All of the I/O requests are treated the same.

5.2.9 Verifying the Configuration of I/O Resource Management

Use this checklist to verify that I/O Resource Management (IORM) is configured correctly.

  • Verify that the following criteria are met when using IORM to manage I/O resources within a database:
    • A resource plan has been enabled.
    • The same resource plan has been enabled on all database instances.
      • If Oracle Database Resource Manager is enabled using an Oracle Scheduler window, then the same plan is always enabled on all database instances.
      • If Oracle Database Resource Manager is enabled using the RESOURCE_MANAGER_PLAN parameter, then use sid='*' to set the parameter for all database instances.
    • The resource plan includes MGMT_P[1-8] directives for each consumer group in the resource plan.

    The following query can be used to verify the preceding criteria have been met:

    SELECT DECODE(count(*), 0, 'Intra-Instance IORM Plan Enabled', 
    'No Intra-Instance IORM Plan Enabled') status 
    FROM gv$instance 
    WHERE inst_id NOT IN 
      (SELECT inst_id FROM gv$rsrc_plan WHERE cpu_managed = 'ON');
    
  • Verify that the interdatabase plan has been configured properly when IORM is used to manage I/O resources from multiple databases:
    CellCLI> LIST IORMPLAN DETAIL
    

    If no interdatabase plan has been configured, then use the CellCLI ALTER IORMPLAN command to configure a plan. Each active database should have its own directive in the dbPlan parameter.

  • Verify that sessions are mapped to the correct consumer group.

    Run the following query while a workload is running:

    SELECT r.sid,
           c.consumer_group current_consumer_group
      FROM v$rsrc_session_info r, dba_rsrc_consumer_groups c 
      WHERE r.current_consumer_group_id = c.consumer_group_id
    UNION
    SELECT sid, 'OTHER_GROUPS' from v$rsrc_session_info 
      WHERE current_consumer_group_id = 0;
    

    A session may not be in the expected consumer group due to the following configuration errors:

    • Missing privilege: In order for a session to switch into a consumer group, its user or role must have permission to switch into that consumer group. The following query shows the permissions for all consumer groups.

      SELECT grantee, granted_group 
      FROM DBA_RSRC_CONSUMER_GROUP_PRIVS
      ORDER BY granted_group;
      

      Use the following command to grant permission for any session to switch into the consumer group. This example shows how to grant the permission for BATCH_GROUP.

      EXEC dbms_resource_manager_privs.grant_switch_consumer_group -
        ('public', 'BATCH_GROUP', FALSE);
      
    • Inactive consumer group: If a session maps to or is manually switched to a consumer group that is not part of the current resource plan, then the session is switched into the default consumer group, OTHER_GROUPS.

      If sessions are being assigned to consumer groups using mapping rules, then the following query can be used to determine the consumer group that the mapping rules selected, the mapping attribute that was used, and the consumer group that the session started in originally.

      SELECT r.sid,
             r.mapped_consumer_group,
             r.mapping_attribute, 
             c.consumer_group original_consumer_group
      FROM v$rsrc_session_info r, dba_rsrc_consumer_groups c 
      WHERE r.orig_consumer_group_id = c.consumer_group_id;
      

      If the mapped consumer group differs from the original consumer group, then the mapped consumer group was not part of the resource plan.

  • While a workload is running, verify that I/O loads are being managed in the correct consumer groups.

    The following CellCLI command lists the number of small and large I/O requests that were issued for each consumer group across all databases:

    CellCLI> LIST METRICCURRENT CG_IO_RQ_LG, CG_IO_RQ_SM   ATTRIBUTES name, -
             metricObjectName, metricValue, collectionTime;

    Each consumer group that has an active I/O workload should generate small or large I/O requests according to these metrics.

  • While the workload is running, query the actual I/O utilization for each category, database and consumer group.

    The following CellCLI command lists the small and large I/O utilization for each database running on Oracle Exadata Storage Server:

    CellCLI> LIST METRICCURRENT DB_IO_UTIL_LG, DB_IO_UTIL_SM ATTRIBUTES name, -
             metricObjectName, metricValue, collectionTime;
    

    The output shows the percentage of disk resources utilized by small and large requests from the databases.
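
In addition, you can check how long database I/O requests are being queued by IORM. The following command is a sketch based on the DB_IO_WT_SM_RQ and DB_IO_WT_LG_RQ metrics, which report the average IORM wait time per request for small and large I/O requests from each database:

CellCLI> LIST METRICCURRENT DB_IO_WT_SM_RQ, DB_IO_WT_LG_RQ ATTRIBUTES name, -
         metricObjectName, metricValue, collectionTime;

Databases that are being throttled by the IORM plan show non-zero wait times, while databases operating within their allocations should show little or no IORM wait time.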