7 Managing the OSM Database Schema

This chapter describes how to manage an Oracle Communications Order and Service Management (OSM) database schema.

Implementing a Strategy for OSM Information Lifecycle Management

Information Lifecycle Management (ILM) is a process for managing information through its lifecycle in a manner that optimizes storage and access. This section discusses how to implement an OSM-specific strategy for ILM.

An OSM deployment includes these database schemas:

  • The core schema, which contains order cartridge metadata, order data, configuration, and other data.

  • The rule engine schema, which contains logic for rule processing.

  • The reporting schema, which is used for reporting.

All schemas are installed and upgraded together (although you might have to install the reporting schema manually depending on your release and patch level).

OSM provides tools to help you manage data classes rather than individual tables, so you do not need a detailed understanding of the OSM schemas. The core schema is the only schema that contains data that accumulates, ages, becomes obsolete, and eventually must be purged:

  • Cartridge metadata: Static metadata that is populated in the OSM database when the cartridge is deployed or redeployed. This data does not grow or change when orders are created or processed. For each cartridge there is a Java EE application deployed to the OSM Oracle WebLogic Server domain. Cartridge metadata and the associated Java EE applications consume resources and take time to initialize on server startup.

  • Order data: The bulk of OSM data. Storage sizing depends on order volume, order size, retention policies, and your order purge strategy. OSM supports partitioning, which helps you manage order data efficiently.

The remainder of the data consists of relatively small volumes of configuration and other data that is static, is updated infrequently, or is managed by OSM.

The following sections discuss important aspects of an ILM strategy:

  • "Creating Tablespaces": Presents options and provides recommendations for creating tablespaces for OSM schemas.

  • "Using Partitioning": Provides an overview of partitioning in OSM and discusses the benefits and pitfalls of partitioning. Oracle strongly recommends partitioning in all production deployments or production test environments, particularly those with high order volumes or any volume of large or complex orders. Moreover, partitioning is required if you plan to use active-active Oracle RAC.

  • "Order Purge Strategies": Helps you decide on an order purge strategy. This is one of the most important decisions that you must make, not only before going into production but also before you start performance testing.

  • "Partitioning Realms": Discusses the use of logical partitions in the OSM schema to assist in purging or dropping orders.

  • "Cartridge Management Strategy": Recommends a strategy for managing cartridges.

  • "Sizing Partitions": Discusses how to size partitions for order data. Partition sizing depends on your order purge strategy.

  • "Online vs. Offline Maintenance": Gives a brief overview of which maintenance operations can be performed online.

Creating Tablespaces

The OSM DB Installer expects the permanent database tablespaces to be specified in the project specification, as in the following example:

db:
  datasourcesPrimary:
    # Add one entry in servers per node in the primary RAC cluster.
    # To facilitate proper load balancing, provide the nodes' addresses and not
    # the SCAN addresses.
    servers:
      - host: osmdev-oracle-db.default.svc.cluster.local
        port: 1521
  # Default log level. Valid values are listed below.
  #
  ## The levels in descending order are:
  ## SEVERE (highest value)
  ## WARNING
  ## INFO
  ## CONFIG
  ## FINE
  ## FINER
  ## FINEST (lowest value)
  ##
  logLevel: "WARNING"
  #
  # The remaining parameters must match the values used when the PDB was
  # created. Failure to match will result in dbInstaller errors
  #
  # The default tablespace name of OSM schema
  defaultTablespace: "OSM"
  # The temporary tablespace name of OSM schema
  tempTablespace: "TEMP"
  # The time zone offset in seconds
  timezoneOffsetSeconds: "-28800"
  # The model data tablespace name of OSM schema
  modelDataTablespace: "OSM"
  # The model index tablespace name of OSM schema
  modelIndexTablespace: "OSM"
  # The order data tablespace name of OSM schema
  orderDataTablespace: "OSM"
  # The order index tablespace name of OSM schema
  orderIndexTablespace: "OSM"

You can choose different tablespaces or a single tablespace. Typically, model data and indexes are stored separately from order data and indexes.

If your schema is partitioned, you can also create new table partitions in different tablespaces for increased administration and availability, for example on a rotation basis. If a tablespace is damaged, the impact and restoration effort could be limited to one or just a few partitions. See "Adding Partitions (Online or Offline)" for more information.

Oracle recommends the following:

  • Create tablespaces dedicated to OSM, so that OSM performance and availability are not affected by other applications, for example due to I/O contention or if a tablespace must be taken offline. Store the datafiles of these tablespaces on different disk drives to reduce I/O contention with other applications.

  • Create locally managed tablespaces with automatic segment space management by specifying EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO in the CREATE TABLESPACE statement. Both options are the default for permanent tablespaces because they enhance performance and manageability (see the example after this list).

  • Configure automatic database extent management by using the AUTOALLOCATE clause of the CREATE TABLESPACE statement. This is the default. For production deployments, avoid UNIFORM extent management for OSM order data and indexes because the volume of data varies widely from table to table.

  • If you use smallfile tablespaces, do not create hundreds of small datafiles. These files need to be checkpointed, resulting in unnecessary processing. Note that Oracle Database places a limit on the number of blocks per datafile depending on the platform. The typical limit is 2^22 - 1 (4,194,303) blocks, which limits the datafile size to 32 GB for 8 KB blocks.
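
For example, a dedicated OSM tablespace that follows these recommendations could be created with a statement like the following. This is a minimal sketch: the tablespace name, datafile path, and sizes are placeholders that you would adapt to your environment.

CREATE TABLESPACE osm_order_data
  DATAFILE '/u01/oradata/osmdb/osm_order_data01.dbf' SIZE 4G
    AUTOEXTEND ON NEXT 1G MAXSIZE 32G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;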

Additional considerations if you use bigfile tablespaces:

  • If data is stored in bigfile tablespaces instead of traditional tablespaces, the performance of database opens, checkpoints, and DBWR processes should improve. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile. You can mitigate the risk of corruption by using multiple tablespaces for partitions, for example on a rotating basis.

  • Bigfile tablespaces are intended to be used with Automatic Storage Management (Oracle ASM) or other logical volume managers that support striping or RAID, and dynamically extensible logical volumes (see the example after this list).

  • Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications for parallel query execution and RMAN backup parallelization.

  • Using bigfile tablespaces on platforms that do not support large file sizes is not recommended and can limit tablespace capacity.
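
For illustration only, the following statement creates a bigfile tablespace on storage that is assumed to be striped, such as an Oracle ASM disk group named +DATA; the tablespace name, disk group, and sizes are placeholders.

CREATE BIGFILE TABLESPACE osm_order_data_bf
  DATAFILE '+DATA' SIZE 64G AUTOEXTEND ON NEXT 8G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;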

For more information about managing tablespaces, see Oracle Database Administrator's Guide.

Using Partitioning

OSM database partitioning enhances the performance, manageability, and availability of data in an OSM deployment.

The OSM DB Installer enables partitioning automatically. The OSM tables that accumulate order-related information are partitioned using range partitioning based on OSM order ID ranges.

The OM_ORDER_HEADER table stores a synopsis of each order, such as the order ID, priority, state, and milestone timestamps. This table is range-hash partitioned by order ID (a schematic sketch follows the list below). More precisely:

  • The non-inclusive upper bound of each range partition is an order ID. For example, if the upper bound of the first partition is 1,000,001 and partitions are sized to contain 1,000,000 order IDs each, the first partition contains orders with an order ID between 1 and 1,000,000, the next partition contains orders with an order ID between 1,000,001 and 2,000,000, and so on.

  • Hash sub-partitioning reduces I/O contention. In production deployments, range partitions typically have 16, 32, or 64 sub-partitions.

  • Range partition names are generated by OSM and they include the partition upper bound. For example, the upper bound of partition P_000000000001000001 is 1,000,001. Sub-partition names are generated by Oracle Database (for example, SYS_P2211001).
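
The following DDL is a schematic sketch only, not the actual OSM schema definition. It illustrates the general shape of a range-hash partitioned table keyed on order ID, with placeholder column names, a single initial range partition, and 32 hash sub-partitions:

CREATE TABLE om_order_header_example (
  order_seq_id  NUMBER(18) NOT NULL,
  order_state   NUMBER(9),
  created_date  DATE
)
PARTITION BY RANGE (order_seq_id)
SUBPARTITION BY HASH (order_seq_id) SUBPARTITIONS 32
(
  PARTITION p_000000000001000001 VALUES LESS THAN (1000001)
);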

The rest of the tables that accumulate order data are equipartitioned with OM_ORDER_HEADER. They are either range-hash partitioned or reference partitioned.

You can use the partitioning realms feature to separate orders with different operational characteristics into different partitions. See "Partitioning Realms" for more information.

For more information about the different types of partitioning, see Oracle Database VLDB and Partitioning Guide.

Benefits of Partitioning

Partitioning your OSM schema allows you to subdivide tables and indexes into smaller segments. This provides the following benefits:

Improved Manageability

Improved manageability is the result of partition independence, and because managing smaller segments is easier, faster, and less resource intensive. The benefits increase with the schema size, because partitioning allows you to divide large tables and indexes into smaller, more manageable segments.

You can purge several weeks' worth of order data in minutes by dropping or purging a partition, without affecting other partitions. You can set up routine purges of older partitions containing obsolete and completed orders, while creating and bringing new partitions online as existing partitions fill up.

Data Definition Language (DDL) operations on tables and indexes are less resource intensive and less likely to fail if they are performed one partition at a time. For example, consider a 128 GB non-partitioned index and a partitioned index of the same size with 32 partitions. The partitioned index can be rebuilt one partition at a time (32 transactions), whereas rebuilding the non-partitioned index is a single transaction that requires 32 times more free space. If the rebuild fails, the entire transaction is rolled back.
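
For example, a local partitioned index can be rebuilt one partition at a time with statements like the following; the index and partition names are placeholders, and you would repeat the statement for each partition:

ALTER INDEX my_large_ix REBUILD PARTITION p_000000000001000001 ONLINE;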

Increased Availability

Partitioning increases availability mainly by reducing downtime in the event of error. For example, the time to recover from a corruption in a large table could be reduced significantly if that table was partitioned and the corruption was isolated to a single partition.

Increased Concurrency

Hash sub-partitioning increases concurrency by reducing I/O contention, specifically by spreading DML statements (data modifications) over several physical sub-partitions. Contention is usually manifested as "buffer busy" waits in Automatic Workload Repository (AWR) reports.

Note that range partitioning does not help with contention because new orders are created in the same range partition. The only exception is if you use active-active Oracle RAC, in which case order creation is spread over two or more range partitions.

Support for Active-Active Oracle RAC

If you use active-active Oracle RAC, range partitioning is critical for good performance. Therefore, active-active Oracle RAC is supported only if your OSM schema is partitioned.

If you use a single instance database, new orders are assigned order IDs and are all stored in the same partition. When that partition is exhausted, new orders are created in the next partition, and so on. To avoid conflicts in order ID generation, each OSM server running in a WebLogic cluster is assigned one or more "slots" and generates only order IDs mapped to its slots, based on a proprietary algorithm. For example, in a cluster with two equally weighted managed servers, each server might generate every other order ID. (The order ID generation algorithm is internal and may be different from release to release.)

If you configure OSM to use Oracle RAC in active-passive mode or a single node of Oracle RAC, order IDs are generated as described above. However, if you configure OSM to use Oracle RAC in active-active mode with N nodes, new orders are created concurrently in N partitions. The goals are load balancing and increased scalability and performance. These are achieved by dividing managed servers into N groups, so that each group interacts with a single Oracle RAC instance. Directing interactions from a given managed server to a specific RAC instance is achieved using JDBC multi data sources. OSM does not support an Active GridLink data source because this configuration has a severe impact on OSM performance: with Active GridLink, OSM cannot direct interactions from a given managed server to a specific primary RAC instance.

Figure 7-2 shows an example with four managed servers divided into two groups. Each group is configured with a different multi data source in failover mode. However, the primary data source in each multi data source points to a different Oracle RAC instance in an alternating fashion.

Figure 7-3 shows that each group generates order IDs from a different range, so that the orders created by each group are stored in different partitions, for example, P_00000000000100001 and P_00000000000200001. In this way:

  • All database transactions on an order that take place on a single managed server also take place on a single database instance. This minimizes cluster waits by avoiding buffer cache transfers among database instances.

  • The workload is balanced across Oracle RAC nodes and WebLogic Managed Servers.

Figure 7-2 OSM Data Source Configuration for Active-Active Oracle RAC

Description of Figure 7-2 follows
Description of "Figure 7-2 OSM Data Source Configuration for Active-Active Oracle RAC"

Figure 7-3 OSM Order ID Generation for Active-Active Oracle RAC

Description of Figure 7-3 follows
Description of "Figure 7-3 OSM Order ID Generation for Active-Active Oracle RAC"

The downside of creating orders concurrently in N partitions is that some order IDs are skipped, which causes partitions to be only partially filled when they are considered exhausted. For example, Figure 7-3 shows that MS01 and MS03 skip order IDs 2, 6, 10, 14, and so on. This is because these are mapped to slots owned by MS02 and MS04. However, MS02 and MS04 do not generate those order IDs because they generate order IDs from a different range. As a result, each partition is only half-full when it is exhausted.

The overall size of order ID gaps depends on the number of Oracle RAC nodes, regardless of how the load is balanced across those nodes. For example, when you remove managed server MS04 from the cluster of the previous example, so that each managed server processes 1/3 of the load, the managed servers are still divided into two groups. This means that partition P_00000000000100001 contains 2/3 of the order IDs and P_00000000000200001 contains the remaining 1/3. Thus, when P_00000000000100001 is exhausted, it will be 1/3 empty. Because MS02 skips slots assigned to MS01 and MS03, its partition will be exhausted at about the same time and it will be 2/3 empty. Although the two Oracle RAC nodes are not balanced (they process 2/3 and 1/3 of the load each), on average both partitions are half empty.

In summary, if you switch from a single database to N-node active-active Oracle RAC, the number of partitions increases N-fold, whereas the actual number of order IDs stored in a partition decreases N-fold. Storage consumption is about the same.

For more information, refer to the OSM High-Availability Guidelines and Best Practices in the OSM Installation Guide.

Increased Query Performance

OSM query performance benefits the least from partitioning because queries are designed to use indexes and return quickly. In fact, a very large number of physical partitions could negatively affect query performance, as discussed in "Pitfalls of Partitioning." However, there are the following performance benefits:

  • Many indexes are based on order ID and are therefore right-handed (new key values are always inserted at the right-most end of the index). Partitioning alleviates the resulting imbalance by dividing indexes into smaller segments.

  • Queries that perform index range scans can be run in parallel if the index is partitioned. Parallel query execution could dramatically improve the performance of maintenance operations, reporting, and ad-hoc user searches.

Pitfalls of Partitioning

Tables that store order data are equipartitioned with OM_ORDER_HEADER. The rest of the tables are not partitioned. Therefore, the number of physical partitions in an OSM schema is at least T x R x H, where T is the number of partitioned tables, R is the number of OM_ORDER_HEADER range partitions, and H is the number of hash sub-partitions per range partition (excluding LOB index partitions). For example, an OSM schema with 48 order tables, 10 OM_ORDER_HEADER range partitions, and 32 hash sub-partitions has at least 15,360 physical partitions. If you let the number of physical partitions increase unchecked, you are likely to run into the performance problems discussed below, even if the space used by most partitions is very small. Review the "Sizing Partitions" section for sizing guidelines.
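
To check how many physical table partitions your schema currently has, you can count partitions and sub-partitions in the Oracle data dictionary while connected as the OSM schema owner. This is a simple sketch and does not include index or LOB segments:

select (select count(*) from user_tab_partitions) as table_partitions,
    (select count(*) from user_tab_subpartitions) as table_subpartitions
from dual;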

Order Search Performance

The majority of OSM queries return quickly because they perform index unique scans or range scans on a single partition. Most query access is based on order ID, which is the partitioning key, and relies on a database technique called partition pruning that narrows access to a single partition or a small subset of partitions. Partitioning does not benefit such queries because the performance increase achieved by scanning smaller index partitions is negligible. In contrast, a large number of partitions could have a negative impact on queries that iterate over all or a large subset of local index partitions. This happens when order ID is not part of the query criteria and therefore partition pruning cannot be used. In order search, probing a large number of small index segments is slower than probing a small number of large index segments.

Purge Performance

A very large number of partitions could significantly slow down partition purge. Experience shows that the tipping point is around 300,000 physical partitions, although this varies depending on the specific OSM installation environment.

The time to purge a partition using EXCHANGE PARTITION operations depends on the number of hash sub-partitions. For example, if you decrease the number of sub-partitions from 64 to 32, the duration of the EXCHANGE PARTITION stage of the purge decreases to nearly half.

A partitioned table is represented in the library cache as a single object, regardless of the number of partitions. Partition purge operations use DDL statements, which invalidate the cursors associated with the underlying partitioned tables. When a cursor is re-parsed, all the partition metadata for the underlying tables must be reloaded, and the amount of time increases with the number of partitions. This is less of an issue when you drop a partition, because the DROP PARTITION statement is cascaded. However, partition purge also uses EXCHANGE PARTITION, which is not cascaded in Oracle Database 11g. A partition purge executes several exchange operations per reference-partitioned table, causing repeated metadata reloads that could significantly slow down the purge (for example, from minutes to hours).

Shared Pool

Oracle Database stores partitioning metadata in the data dictionary, which is loaded in the row cache of the shared pool. A very large number of partitions could stress the shared pool. If the shared pool is undersized, you could run into ORA-4031 errors (unable to allocate shared memory), especially while purging partitions.
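
You can get a rough view of shared pool headroom with a query such as the following sketch against the V$SGASTAT view (component names can vary slightly between database releases):

select pool, name, round(bytes/1024/1024) as mb
from v$sgastat
where pool = 'shared pool'
    and name in ('free memory', 'row cache');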

Development and Testing Environments

Starting with Oracle Database 11.2.0.2, the initial extent size for partitioned tables is 8 MB. If you have many hash sub-partitions, partitioned tables can consume a lot of space even if they contain very little data. For example, even with deferred segment allocation, injecting a few orders could quickly expand a table with 64 sub-partitions to 512 MB. Although this is not typically an issue for production environments, which should already have sufficient storage available, it could be an issue in development or testing environments with limited storage capacity. In such environments, you can use a smaller number of sub-partitions (for example 4), or use a tablespace with uniform extent allocation and a small extent size (for example, 64 KB).
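
For example, a small tablespace with uniform 64 KB extents for a development or testing environment could be created as follows; the name, path, and size are placeholders:

CREATE TABLESPACE osm_dev_data
  DATAFILE '/u01/oradata/osmdev/osm_dev_data01.dbf' SIZE 512M AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K
  SEGMENT SPACE MANAGEMENT AUTO;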

Order Purge Strategies

The database size increases as orders enter the system. If left unchecked, the size becomes difficult for database administrators to manage and you might eventually run out of space. A purge strategy is an important decision to make before going into production. If you choose a continuous order purge strategy, you should incorporate it in your order load tests. Partition sizing depends on the purge strategy.

OSM supports these purge strategies:

Partition-Based Order Purge Strategy

The om_part_maintain PL/SQL package allows you to drop and "purge" partitions. These operations are based on efficient DDL statements that modify the table structure, such as DROP PARTITION and EXCHANGE PARTITION. They are designed to reclaim large amounts of space as fast and efficiently as possible. You can purge several weeks' worth of order data in minutes. However, they usually require downtime and a regular maintenance schedule.

When a partition is dropped, all its data is deleted and space is immediately reclaimed. Populated partitions can be dropped offline only. Empty partitions can be dropped either offline or online.

Often a partition cannot be dropped because it contains orders that do not satisfy the purge criteria (for example, closed orders that are still in retention and open orders). In this case, you can use "partition purge". This operation is more versatile than dropping because it allows you to retain some orders and consolidate multiple partitions. In addition, if all orders in a partition are closed and satisfy the purge criteria, you might be able to purge it online.

The following sections provide additional information about this strategy:

Partition Purge Example

The following figures show a partition purge example over a period of 14 weeks with three maintenance windows.

  • The purge frequency is biweekly.

  • Each partition is sized to contain about 2 weeks' worth of orders.

  • The order retention period is 4 weeks (after closure).

  • Purge performance tests show that the period of downtime is acceptable if less than 2% of the orders in the partition are retained (including both open orders and closed orders in retention) but the system can tolerate up to 5%. The statistics (or estimates) suggest that 98% of the orders close within 3 weeks.

For this example, to purge an exhausted partition you must wait for at least 7 weeks (3 weeks for 98% of the orders to close and 4 weeks retention). Because the partition size is equal to 2 weeks of orders, you can schedule the first maintenance window after 9 weeks in production. After that, maintenance is biweekly. Before going into production, you should also add enough partitions to store orders until the first maintenance window.

Figure 7-4 shows the first maintenance window. P1 is purged and less than 2% of its orders are retained, including closed orders that are still in retention. Because a partition contains 2 weeks' worth of orders, you also add at least one partition (only P6 shown).

Partitions P2, P3, P4, and P5 are not purged:

  • It is not cost-effective to purge P2: It contains a high percentage of open orders and closed orders that are still in the retention period (for example 15%).

  • It is not cost-effective to purge P3: The second half of P3 is within the retention period, so it contains a very high percentage of open orders and closed orders that are still in the retention period (for example, 75%).

  • P4 and P5 are ineligible for purge: All orders are either open or within the retention period. In addition, P5 is where new orders are created (shown half full).

In other words, 3.5 partitions are not purged. If you follow an uninterrupted biweekly schedule, the number of partitions that are not purged is always about 3.5.

Figure 7-4 Partition Purge Maintenance, Window 1

Description of Figure 7-4 follows
Description of "Figure 7-4 Partition Purge Maintenance, Window 1"

Figure 7-5 shows the second maintenance window (after 2 weeks). Because a partition contains 2 weeks' worth of orders, P5 is exhausted and P6 is half full. As in the previous maintenance, it is cost-effective to purge only one partition that has not yet been purged, which is P2 in this example. You also add at least one partition (not shown). Notice that the number of partitions that are not purged is 3.5 again.

Figure 7-5 Partition Purge Maintenance, Window 2

Description of Figure 7-5 follows
Description of "Figure 7-5 Partition Purge Maintenance, Window 2"

Figure 7-6 shows the third maintenance window. As in the previous maintenance, it is cost-effective to purge only one partition (P3) that has not yet been purged, and 3.5 partitions are not purged. This time, however, the previously purged partitions P1 and P2 are purged again and consolidated with P3 (3-to-1 consolidation). The number of orders retained with this consolidation is less than 3.2%, which is less than the 5% tolerance limit. Periodic consolidation is recommended to maintain a low number of partitions and maximize reclaimed space.

Figure 7-6 Partition Purge Maintenance, Window 3

Description of Figure 7-6 follows
Description of "Figure 7-6 Partition Purge Maintenance, Window 3"

Advantages and Disadvantages of Partition-Based Purge

Advantages of the partition purge strategy:

  • The ability to reclaim large amounts of space (for millions of orders) in a very short period of time (normally measured in minutes). Partition drop and purge operations are based on efficient DDL statements, such as DROP PARTITION and EXCHANGE PARTITION.

  • If your database and storage are well-tuned for order processing, you do not need extra hardware for purging (CPU, RAM and I/O bandwidth). Because the purge happens when OSM is offline and not processing orders, all the system resources are available for purging.

  • You can place partitions on different tablespaces for increased administration and availability. For example, if a tablespace is damaged, the impact and restoration effort could be limited to one or just a few partitions, possibly an old partition with few open orders.

  • You can choose the correct partition size to facilitate administration and maintenance according to your needs.

Disadvantages of the partition purge strategy:

  • This strategy requires downtime. The amount of downtime depends on the purge frequency, purge performance, and several additional factors discussed in "Sizing Range Partitions for Partition-Based Order Purge." In general, with relatively little extra storage you can reduce the frequency of purge cycles considerably. For example, you might be able to switch from biweekly to monthly maintenance with only a 20% increase in storage. This is discussed in detail in the "Purge Frequency" section. If you require 24x7 operations, consider row-based order purge.

  • If you have very high volumes of orders and you cannot afford frequent downtime, the large storage size could become hard to manage.

  • This strategy does not work well if you have a mix of long-lived orders and a relatively high volume of short-lived orders. Because the high-volume orders reside together with long-lived orders, the latter dictate the purge strategy. Unless long-lived orders are a fraction of the total volume, it might not be cost-effective to purge a partition soon after all short-lived orders are closed. (This is because retaining a large number of long-lived orders would increase the purge time, and therefore the downtime, considerably.) Also, as explained in "Pitfalls of Partitioning," if you let the number of partitions increase significantly, the performance of partition purge operations suffers. In this case, consider a hybrid purge strategy or row-based order purge.

To mitigate the disadvantages of this strategy, choose the partition size carefully and adjust it as conditions change. As a rule of thumb, size your partitions to contain as many orders as will be purged in one purge maintenance window. For sizing guidelines, refer to the "Sizing Range Partitions for Partition-Based Order Purge" section.

Row-Based Order Purge Strategy

If you cannot take periodic downtime to purge orders, consider row-based order purge, which is implemented by the om_new_purge_pkg package. Because you can run order purge online, it allows for continuous business operations. You can also use it as an auxiliary purge strategy, as discussed in "Hybrid Purge Strategy," and for ad-hoc deletes (for example, if you need to purge and resubmit an order).

Because order purge uses deletes instead of DDL statements, you can run order purge online with minimal contention with normal order processing:

  • Order processing can take place concurrently because all indexes remain usable, foreign keys remain enabled and validated, and only the orders that satisfy the purge criteria are affected (as opposed to partition-based purge, which moves around orders that do not satisfy the purge criteria).

  • Contention is minimal because deletes acquire row level locks (as opposed to partition-based purge, which uses DDL operations that acquire object level locks).

Because row-based order purge allows OSM to stay online and process orders normally while purge operations run concurrently, it increases the total workload of the system. The database and storage I/O must be sized to handle the additional workload of deleting orders concurrently with order processing. For sizing guidelines, refer to "Sizing Range Partitions for Row-Based Order Purge."

To use order purge as your main purge strategy (or in a hybrid strategy), schedule it to run as a background activity that continuously frees space to be reused by new orders. Ideally you should free space at the same rate it is consumed. The space freed by deletes in a table or index segment does not lower the high water mark (the boundary between used and unused space) of that segment. It can be reused by inserts into the same segment but it is not released to the tablespace. Therefore the primary purge target should be the partition(s) where new orders are created. (In the case of active-active Oracle RAC with N nodes, orders are created on N partitions concurrently.) For example, you could run it once daily during the lowest volume period with high parallelism, several times a day for shorter periods or continuously throughout the day with low parallelism.
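
As a sketch only, the following DBMS_SCHEDULER job runs a purge window every night at 02:00. The PL/SQL it executes, run_osm_order_purge, is a hypothetical wrapper that you would implement around the om_new_purge_pkg purge procedure appropriate for your OSM release, retention policy, and parallelism:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OSM_NIGHTLY_ORDER_PURGE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN run_osm_order_purge; END;', -- hypothetical wrapper around om_new_purge_pkg
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
    enabled         => TRUE,
    comments        => 'Deletes closed orders that are out of retention during the low-volume period');
END;
/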

Partitions should be sized with a wide range of order IDs to store several months' worth of orders. Partition sizing depends on the retention policy, the order lifetime, and how you reclaim the space occupied by old partitions. This leads to the following two variations of this strategy:

  • Zero downtime: If you have a strict 24x7 requirement, you could delete orders from old partitions until they are empty, which enables you to drop them online. The downside is that it might be a long wait before you can fully reclaim that space by dropping the empty partitions, especially if you have orders that remain open for several months or even years (as mentioned earlier, deletes do not lower the segment high water mark and the space freed by deletes cannot be used for new orders because new orders are created on the latest partition).

  • Infrequent maintenance (for example, once a year): If you have a near 24x7 requirement with occasional maintenance windows, you could use those windows to drop or purge old partitions offline.

The following sections provide additional information about this strategy:

Row-Based Order Purge Example

Figure 7-7 shows a row-based order purge example over a period of several months. The retention period is 2 weeks, and the maximum lifetime of an order is 5 weeks. Closed orders are deleted daily, as they come out of retention. Some of the orders close within a day, and the order aging curve allows order purge to target an additional 25% of the orders each week for any week prior to the past two weeks.

This means that by the end of week 6, all orders created in the first week have been purged. By the end of week 7, all orders created in the first two weeks have been purged. This pattern continues until the partition is exhausted on the Nth week. Then it repeats for the next partition.

Figure 7-7 Partition Lifetime Using Row-Based Order Purge

Description of Figure 7-7 follows
Description of "Figure 7-7 Partition Lifetime Using Row-Based Order Purge"

Advantages and Disadvantages of Row-Based Order Purge

The key advantage of this strategy is that you can run it online and therefore it can support 24x7 operations.

The disadvantages of row-based order purge:

  • Row-based order purge is the least efficient way to purge because delete is an expensive DML statement. Deleting row by row is CPU intensive and consumes I/O bandwidth, so you must plan for spare capacity. Normally this is not an issue because you size your hardware for peak volume. However, if you do not have enough idle or low-volume periods for purge to catch up with new order creation, you will have to add hardware (for example, CPUs and storage links).

  • The space freed by deletes in a table or index segment does not lower the high water mark of that segment. It can be reused by inserts into the same segment but it is not released to the tablespace. Therefore you must run order purge as frequently as possible (at least once a day) to free space at the same rate it is consumed by new orders, with the objective of restraining the growth of the high water mark.

  • If order purge runs continuously, it might be harder to manage. You must be prepared to deal with unexpected load spikes and adjust the order purge schedule if load patterns change. And you might have to disable order purge for occasional maintenance tasks or batch jobs.

  • Row-based order purge makes it difficult to troubleshoot performance issues. If you run into performance issues with normal order processing, you might have to stop order purge to reproduce the problem in isolation. This is not an issue if you configure a purge job class that runs on a dedicated database service (see "purge_job_class").

Hybrid Purge Strategy

You can use both partition purge and order purge, and realize the benefits of both strategies. For example, consider a hybrid strategy if you have a mixed load of long-lived orders and short-lived orders, and you can afford periodic downtime. In this example:

  • Use order purge to delete short-lived orders aggressively from the partition(s) where new orders are created. The space freed by deletes will be reused by new orders. The goal is to restrain the growth of the high water mark (the boundary between used and unused space) in table and index segments. This will allow you to have larger partitions and less frequent purge maintenance.

  • Drop/purge partitions periodically (for example, once a month) to purge long-lived orders and the remaining short-lived orders. Follow the partition sizing guidelines for partition-based order purge, which are discussed in "Sizing Range Partitions for Partition-Based Order Purge."

Partitioning Realms

You can use the partitioning realms feature to separate orders into different partitions when those orders have different operational characteristics, for example, short-lived orders versus long-lived orders. This enables you to group orders into partitions that can be purged or dropped more effectively.

The partitioning realms feature is used for logical partitioning, which is supported even for non-partitioned schemas.

A new order that is received by OSM requires an order_seq_id before it is persisted in the database. Each partitioning realm reserves ranges of order_seq_id values to be used by orders that are assigned to that realm. Mapping rules within each partitioning realm determine which orders are assigned to the partitioning realm. If an order does not map to any partitioning realm it is assigned to a pre-defined default realm. The default realm is called default_order.

Orders automatically map to default realms that are pre-defined in the database. You do not have to configure partitioning realms unless you have a specific requirement to do so. For more information, see "Default Partitioning Realm."

Partitioning Realms Configuration

Partitioning realms are created and maintained in the system using XML configuration files. Some aspects of partitioning realms that are defined in the configuration files are also saved in the database, which is required for database maintenance operations, such as purging.

The configuration files are referenced in the oms-config.xml file using the following parameter:

oracle.communications.ordermanagement.OrderPartitioningRealmConfigFileURLs

Note:

The partitioning realms files must be located in the same file system as the oms-config.xml file.

Sample order partitioning realms configuration files are provided in the OSM SDK in the SDK/Samples/PartitioningRealms directory. An XML Schema Definition (XSD) for partitioning realms is also located in this directory.

Example 7-1 is a sample partitioning realm file. This sample file contains only one order partitioning realm, but files can contain configuration for multiple realms, if required.

Example 7-1 Sample Order Partitioning Realms File

<partitioningRealmModel xmlns="http://xmlns.oracle.com/communications/ordermanagement/partitioningRealms" realmType="ORDER">
  <partitioningRealm name="sample_order_realm" enabled="true">
    <description>Sample for demonstrating realm configuration</description>
    <purgeStrategy>ANY</purgeStrategy>
    <parameters>
      <rangePartitionSize>500000</rangePartitionSize>
    </parameters>
    <mappings>
      <includes>
        <cartridgeNamespace>MyTestCartridge</cartridgeNamespace>
        <orderName>MyOrder</orderName>
      </includes>
    </mappings>
  </partitioningRealm>
</partitioningRealmModel>

Table 7-1 lists and describes the elements and attributes that can be included in the partitioning realms file.

Table 7-1 Partitioning Realms File Elements and Attributes

realmType

The type of realm contained in this configuration file. Valid value is ORDER. This element is mandatory.

name

The name of the realm. The name is not case-sensitive and must be unique in the system. This element is mandatory.

enabled

Indicates whether the realm is enabled or disabled. Disabled realms are never mapped to orders and do not have new partitions added automatically.

The enabled attribute is optional and the default value is true (enabled) if not present.

description

This contains the description of the realm. This element is optional.

purgeStrategy

Indicates the purge strategies supported by this realm. Valid values are PARTITION-BASED, ROW-BASED, and ANY.

The element is optional and the default is ANY if not present. See "Purge Strategy for Partitioning Realms" for more information.

parameters

Contains parameters specific to the partitioning realm. These parameters override the values in the om_parameter table. All the parameters are optional. If they are not specified, the value from the om_parameter table is used. The only currently supported parameter is rangePartitionSize.

rangePartitionSize

The range partition size represents the number of orders to use when adding new partitions for this realm. This element maps to the range_partition_size parameter in the om_partitioning_realm_param table. This element is optional. If not specified, the range_partition_size value from the om_parameter table is used.

mappings

Contains includes rules that map new orders to this partitioning realm. Multiple includes elements can be used to map multiple different cartridges or order types to a single realm. The valid child elements are includes and excludes. See "Mapping Orders to Partitioning Realms" for more information. This element is technically optional, but if it is not included, the realm is not used for any orders.

includes

Contains the elements used to map new orders to this realm. The valid child elements are: cartridgeNamespace, cartridgeVersion, and orderName. This parameter is optional.

For more information about using the includes element, see "Mapping Orders to Partitioning Realms."

excludes

(Not shown in example) Contains the elements used to exclude new orders from this realm. The valid child elements are: cartridgeNamespace, cartridgeVersion, and orderName. This parameter is optional.

cartridgeNamespace

A regular expression used to match the cartridge namespace of a new order. This parameter is optional.

cartridgeVersion

(Not shown in example) A regular expression used to match the cartridge version of a new order. This is valid only when the cartridgeNamespace element is also included.

Oracle does not recommend using cartridgeVersion because the realm mappings would need to be updated when cartridges are upgraded to a new version. This parameter is optional.

orderName

A regular expression used to match the order name (or order type) of a new order. This parameter is optional.

Mapping Orders to Partitioning Realms

The mappings element of the partitioning realms configuration is used to map new orders to a specific partitioning realm. The mappings associate orders to partitioning realms by comparing order data to match criteria (regular expressions) for each partitioning realm. If a match is NOT found for an order, the order is assigned to the default_order partitioning realm. The default_order partitioning realm cannot contain any mappings. See "Default Partitioning Realm" for more information.

For example, the following partitioning realms XML configuration sets the partitioning realm to short_lived_orders for any order with a cartridgeNamespace that starts with "Mobile." Any order that is not in a cartridge with a name that starts with "Mobile" will be assigned the default_order partitioning realm.

<partitioningRealmModel realmType="ORDER">
    <partitioningRealm name="short_lived_orders">
        <mappings>
            <includes>
                <cartridgeNamespace>Mobile.*</cartridgeNamespace>
            </includes>
        </mappings>
    </partitioningRealm>
</partitioningRealmModel>

The following fields can be used as match criteria in the partitioning realms mapping file:

  • Cartridge Namespace

  • Cartridge Version

  • Order Name

Note:

The cartridge namespace and version are the deployed cartridge namespace and version. For standalone cartridges, use the standalone cartridge namespace and version. For composite cartridges, use the composite cartridge namespace and version.

Using the cartridge version in mappings is not recommended because mappings can be broken when cartridges are upgraded and will then require more frequent maintenance.

About the includes Element

If there are multiple match criteria specified within an includes element, ALL the criteria in the includes element must match. In other words, the criteria within an includes element are part of an AND condition. In the following example, only LTEMobileService orders from the Mobile cartridge are mapped to the short_lived_orders partitioning realm. Other orders from the same cartridge are not mapped to this partitioning realm.

<partitioningRealmModel realmType="ORDER">
    <partitioningRealm name="short_lived_orders">
        <mappings>
            <includes>
                <cartridgeNamespace>Mobile</cartridgeNamespace>
                <orderName>LTEMobileService</orderName>
            </includes>
        </mappings>
    </partitioningRealm>
</partitioningRealmModel>

If there are multiple includes elements, only one of the includes criteria must match in order for the realm to map. In other words, multiple includes elements are part of an OR condition.

For example, with the following configuration the short_lived_orders partitioning realm is mapped to orders from both Mobile and Broadband cartridges.

<partitioningRealmModel realmType="ORDER">
    <partitioningRealm name="short_lived_orders">
        <mappings>
            <includes>	
                <cartridgeNamespace>Mobile</cartridgeNamespace>
            </includes>
            <includes>
                <cartridgeNamespace>Broadband</cartridgeNamespace>
            </includes>
        </mappings>
    </partitioningRealm>
</partitioningRealmModel>

When there are multiple partitioning realms defined in a configuration file, they are processed sequentially. The first partitioning realm to match the order data is used. In the following example, the long_lived_orders realm would never be mapped because short_lived_orders realm maps to all cartridges starting with "Mobile".

<partitioningRealmModel realmType="ORDER">
    <partitioningRealm name="short_lived_orders">
        <mappings>
            <includes>
                <cartridgeNamespace>Mobile.*</cartridgeNamespace>
            </includes>
        </mappings>
    </partitioningRealm>
    <partitioningRealm name="long_lived_orders">
        <mappings>
            <includes>
                <cartridgeNamespace>MobileCartridge</cartridgeNamespace>
            </includes>
        </mappings>
    </partitioningRealm>
</partitioningRealmModel>

About the excludes Element

To exclude an order from getting mapped to a realm you can add an excludes element under the mappings. The excludes element takes precedence over the includes element. In other words, if an order matches both includes element criteria and excludes element criteria, the order is not mapped to the partitioning realm. In the following example, all orders that start with "Mobile" are mapped to the short_lived_orders realm, except Mobile4GService orders.

<partitioningRealmModel realmType="ORDER">
    <partitioningRealm name="short_lived_orders">
        <mappings>
            <includes>
                <orderName>Mobile.*</orderName>
            </includes>
            <excludes>
                <orderName>Mobile4GService</orderName>
            </excludes>
        </mappings>
    </partitioningRealm>
</partitioningRealmModel>

Enabling and Disabling Partitioning Realms

New realms that you add are disabled. Typically, you add realms, add partitions to those realms, and then enable them. See "Partitioning Realms" and "partition_auto_creation" for more information.

You can enable and disable realms by changing the enabled attribute in the partitioning realm configuration to true (enabled) or false (disabled). If a partitioning realm is disabled, it is no longer used for mapping incoming orders, therefore any partition or order ID block that is assigned to that partitioning realm is no longer used.

Enabled partitioning realms cannot be removed from the partitioning realm configuration file. If an enabled partitioning realm is removed from the configuration file, a validation error occurs the next time you restart the system or refresh OSM metadata.

You cannot disable the default partitioning realm: default_order.

To remove a partitioning realm from the configuration file:

  1. In the configuration file, set the enabled attribute to false to disable the partitioning realm.

  2. Do one of the following:

    • Restart OSM.

    • Refresh the OSM metadata.

    The partitioning realm is disabled in the database.

  3. Remove or comment out the realm XML configuration that you want to remove.

    Note:

    This removes the XML configuration for the realm, however the realm still exists in the database. You cannot remove partitioning realms from the database.

Renaming a Partitioning Realm

A partitioning realm can be renamed by adding the oldName attribute to the partitioning realm configuration and changing the name attribute to the new name. For example:

<partitioningRealm oldName="MyOldRealmName" name="MyNewRealmName"/>

After you change the name in the XML configuration file, refresh the OSM metadata.

You cannot rename the default partitioning realm: default_order.

Refreshing Partitioning Realms Configuration

Refreshing OSM metadata loads and updates the partitioning realms configuration. When you make changes to the partitioning realms configuration, you must refresh the OSM metadata for the change to take effect. There are several ways to refresh the OSM metadata. See "Refreshing OSM Metadata" for more information.

If there is a validation error in the partitioning realms configuration, the behavior depends on the following factors:

  • If the server is starting up, an invalid partitioning realms configuration causes a CRITICAL health policy error which prevents OSM from starting. You can find the cause of the validation error in the log files. The problem in the partitioning realms configuration must be fixed before you attempt to restart OSM.

  • If the server is already running, an invalid partitioning realms configuration causes errors in the logs and the loading of the partitioning realms configuration is abandoned. The OSM server continues to run using the last known good realm configuration. In other words, the partitioning realm configuration changes are ignored. You can find the cause of the validation error in the log files. The problem in the partitioning realm configuration must be fixed before you attempt to refresh the OSM metadata.

Adding Partitions for New Partitioning Realms

The steps to add a partition for a new partitioning realm depend on whether the partition_auto_creation value is enabled or disabled. In production and performance environments, Oracle recommends that you disable the partition_auto_creation attribute and add partitions manually before they are required.

When you create new partitioning realms, a new partition is needed to hold orders associated with the new realm. When you start OSM or refresh metadata, all enabled partitioning realms are assigned an order ID generator. The order ID generator is responsible for creating new order_seq_ids for new orders in OSM. When order ID generators are initialized they are assigned an om_order_id_block, which represents a range of order_seq_ids reserved for that order ID generator. Each om_order_id_block maps to a partition in the database schema. This is why every enabled partitioning realm requires a partition.

partition_auto_creation Disabled

If partition_auto_creation is disabled, partitioning realms must be created in a disabled state before adding partitions.

To add partitions with the partition_auto_creation attribute disabled:

  1. Create the new partitioning realm by setting the enabled attribute to false in the partitioning realm XML configuration file.

  2. Refresh the OSM metadata or restart OSM.

    This creates the partitioning realm in the system in a disabled state.

  3. Add one or more partitions for the new (disabled) realm using the om_part_maintain.add_partitions procedure. In addition to the name of the new partitioning realm, you must set the a_force argument to true to create a partition for a disabled partitioning realm. If you do not enter true for the a_force argument, an exception is raised because, by default, partitions cannot be added to disabled realms.

    exec om_part_maintain.add_partitions(a_count, a_tablespace, a_realm_mnemonic, true)
    
  4. When you are ready to use the new partitioning realm, set the enabled attribute to true in the partitioning realm XML configuration.

  5. Refresh the OSM metadata or restart OSM.

    This enables the partitioning realm in the system.

partition_auto_creation Enabled

If partition_auto_creation is enabled, you can create partitioning realms in an enabled state. Partitions are created automatically for the partitioning realm.

To add partitions with the partition_auto_creation attribute enabled:

  1. Create the new partitioning realm by setting the enabled attribute to true in the partitioning realm XML configuration file.

  2. Refresh the OSM metadata or restart OSM.

    This creates the partitioning realm in the system in an enabled state.

    A partition is automatically created for the new partitioning realm.

  3. (Optional) Add one or more partitions for the new realm using the om_part_maintain.add_partitions procedure.

    exec om_part_maintain.add_partitions(a_count, a_tablespace, a_realm_mnemonic)
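
    For example, assuming a hypothetical realm named short_lived_orders and a tablespace named OSM, the following call adds two partitions sized according to the realm's rangePartitionSize:

    exec om_part_maintain.add_partitions(2, 'OSM', 'short_lived_orders')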

Purge Strategy for Partitioning Realms

The purge strategy associated with a partitioning realm determines the type of purge supported by the realm. Set the purgeStrategy value in the partitioning realms file to specify the purge strategy. See "Partitioning Realms Configuration" for more information about the partitioning realms file.

Valid purgeStrategy values are the following:

  • ROW-BASED: Only row-based purge operations can be used in this realm. Partition-based purge procedures will ignore partitions with a purge strategy of ROW-BASED, unless they are empty. If a partition is empty, it can be dropped regardless of the configured purge strategy.

  • PARTITION-BASED: Only partition-based purge operations can be used on this realm. Row-based purge procedures will ignore orders in a realm with a purge strategy of PARTITION-BASED. The only exceptions are procedures that let the user specify the orders to purge, such as the delete_order and purge_selected_orders procedures; these procedures will delete the order regardless of the realm's purge strategy.

  • ANY: There are no restrictions on the purge method used. This is the default purge strategy.

Default Partitioning Realm

The default partitioning realm is for all orders that do not map to a user-configured partitioning realm. The following default partitioning realm is created in OSM:

  • default_order

The default partitioning realm is not included in the configuration files, but can be included if you want to change some properties of the default realm.

When working with the default realm, you cannot change the following:

  • The default realm cannot be disabled.

  • The default realm cannot be renamed.

  • The default realm cannot contain any mappings.

  • The realmType of the default realm cannot be changed.

When working with the default realm, you can change the following:

  • The description of the default realm can be changed.

  • The purge strategy can be changed.

  • The rangePartitionSize of the default realm can be changed. By default, the default_order realm does not have a rangePartitionSize and uses the range_partition_size parameter in the om_parameter table.

Non-Partitioned Schemas

You can use partitioning realms in non-partitioned environments to improve the performance of row-based purging. Grouping orders based on how long they take to close allows orders to be purged together. Purging sequential orders reduces database I/O during the purge.

Order ID Blocks

Use the OM_ORDER_ID_BLOCK table to determine which orders belong to which realms. When new partitions are added, a new row is inserted into this table to track the order ID range and associated partitioning realm.

The following example query lists all the block ranges and their associated partitioning realm:

select b.first_order_id,
    b.last_order_id,
    r.mnemonic,
    b.status,
    r.realm_type,
    b.dbinstance
from om_order_id_block b,
    om_partitioning_realm r
where b.realm_id = r.realm_id
order by b.last_order_id;

Where:

  • first_order_id: The first order ID in the block.

  • last_order_id: The last order ID in the block.

  • mnemonic: The name of the partitioning realm associated with the block.

  • status: One of USED, ACTIVE, or AVAILABLE.

    • ACTIVE: A block of IDs that is actively being used for new orders.

    • USED: A block of IDs that is no longer being used to generate new IDs.

    • AVAILABLE: A new, empty block that is available for use.

  • realm_type: Must be ORDER.

  • dbinstance: The number of the Oracle RAC instance. A value of -1 means the block is not assigned to an Oracle RAC node.

To determine which realm a specific order is associated with, run the following query (replacing order_seq_id with the order that you want to query):

select r.mnemonic,
    r.realm_type
from om_order_id_block b,
    om_partitioning_realm r
where order_seq_id between b.first_order_id and b.last_order_id
    and b.realm_id = r.realm_id;

Cartridge Management Strategy

The following are the main components of a deployed cartridge:

  • The cartridge metadata that is populated in the OSM database when the cartridge is deployed or redeployed.

  • The order data populated in the OSM database whenever an order is created and as it is being processed.

Cartridge metadata consumes little space in the database. However, it consumes memory resources, takes time to load into the OSM server on startup, and is re-loaded when cartridges are deployed.

Follow these guidelines to reduce the memory footprint and startup time of OSM, and to deploy and undeploy cartridges quickly online:

  • Undeploy obsolete cartridges from time to time to reduce the memory footprint and startup time of OSM.

  • Use Fast Undeploy to undeploy cartridges instead of conventional undeploy. Fast Undeploy allows you to undeploy cartridges quickly online by undeploying the Java EE application only. Instead of purging cartridge metadata, it sets the cartridge status in the database to UNDEPLOYED. Fast Undeploy also offloads purging of order data to your order purge strategy, which does not need to distinguish between deployed and undeployed cartridges.

  • Consider purging metadata of an undeployed cartridge only if it has no associated orders. If you are purging fast-undeployed cartridges online, you cannot purge any cartridges that have orders.

For more information see "Managing Cartridges."

Sizing Partitions

The following values, which you enter on the Database Schema Partition Information installer screen, specify the size and number of partitions created during and after an OSM installation.

  • Orders per Partition: Specifies the number of orders that the Oracle Database allows in a range partition. This is also referred to as the range partition size.

  • Number of Sub-partitions: Specifies the number of hash sub-partitions in a range partition.

You can change the values that you selected during the installation process by updating the range_partition_size and subpartitions_number OSM database parameters.
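
For example, the following is a minimal sketch of updating both parameters directly. It assumes that the om_parameter table stores settings in MNEMONIC and VALUE columns; verify the column names and accepted values for your release before running it.

-- Sketch only: assumes om_parameter exposes MNEMONIC and VALUE columns.
update om_parameter set value = '500000' where mnemonic = 'range_partition_size';
update om_parameter set value = '32' where mnemonic = 'subpartitions_number';
commit;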

You can configure a different range_partition_size parameter for each partitioning realm. This allows you to tailor the partition size to the partitioning realm's purge strategy. For example, you could use row-based purge with oversized partitions for a realm defined for high-volume, short-lived orders, and partition-based purge with smaller partitions for low-volume, long-lived orders. You configure the realm-specific range_partition_size using partitioning realm configuration files. For more information, refer to the configuration section of "Partitioning Realms."

Updates to these parameters do not affect existing partitions. For more information about these parameters, see "Configuration Parameters."

Sizing of partitions depends on the purge strategy and several other factors, as discussed in the following sections:

Sizing Hash Sub-Partitions

Hash sub-partitioning increases concurrency by reducing I/O contention, specifically by spreading DML statements (data modifications) over several physical sub-partitions. Contention is usually manifested as "buffer busy" waits in AWR reports.

Oracle strongly recommends that you set the number of sub-partitions to the smallest power of 2 that either eliminates "buffer busy" waits or reduces them to low single digits (for example, less than 2%). Typical values in production environments are 16, 32, and 64.

  • Using values other than powers of 2 results in data skew, that is, data unevenly distributed across partitions.

  • Unnecessarily increasing the number of sub-partitions could have a negative impact on purge performance. Test with 16 and 32 before trying 64. For more information, refer to "Pitfalls of Partitioning."

Oracle recommends that you conduct performance tests at the expected peak workload to find the right number of sub-partitions for your environment. Periodically review the "buffer busy" waits in AWR reports from production. Consider increasing the subpartitions_number parameter if there are signs of increased I/O contention. Similarly, if there are no signs of contention and you experience performance issues due to a very large total number of physical partitions, consider decreasing it. The new value is used when you add partitions (existing partitions are not affected).

In development and testing environments with limited storage capacity you should use a small number of hash sub-partitions, such as 2 or 4. For more information, see "Development and Testing Environments."

Sizing Range Partitions for Partition-Based Order Purge

If your purge strategy is partition-based, typical range partition sizes vary between 100 thousand and 1 million. Creating partitions that are overly large makes it more difficult to free space by purging partitions that contain orders that no longer need to be retained. Creating partitions that are too small increases the frequency that new partitions must be created and the duration of maintenance windows, and could cause issues with performance and resource consumption.

As a rule of thumb, each range partition should be sized to contain as many orders as will be purged in one purge maintenance window. For example, if you target to purge partitions every month, you could size your partitions to contain the average expected monthly volume. The feasibility of purge frequency will need to be validated based mainly on the amount of storage available and the duration of the outage that may be required to run the purge maintenance window.

Several factors influence sizing. Because these factors are correlated, it is recommended that you create a few scenarios by adjusting those parameters that are under your control. This document uses the example in the "Partition Purge Example" section to discuss sizing criteria in detail.

Purge Performance

The main factors that affect partition-based purge downtime are purge frequency and purge performance.

There are a number of ways to improve purge performance:

  • If range partitions are undersized, according to the guideline that each range partition should ideally contain as many orders as will be purged in one purge maintenance window, consider increasing the partition size. For example, the time to purge a 200 GB partition is only slightly more than the time to purge a 100 GB partition. This guideline also helps minimize partition consolidations.

  • Decrease the number of hash sub-partitions. The time to purge a 200 GB partition with 64 hash sub-partitions is nearly double the time to purge a 200 GB partition with 32 sub-partitions. For more information refer to "Pitfalls of Partitioning" and "Sizing Hash Sub-Partitions."

  • Decrease the overall number of physical partitions. For more information refer to "Pitfalls of Partitioning."

  • Increase the time-to-close wait to reduce the number of retained orders.

  • Tune purge operations, for example increase the degree of parallelism.

  • Tune the database and operating system.

  • Tune storage. For example, consider enabling more storage ports or converting disks from RAID-5 to RAID-10, which has better write performance.

  • If, after exhausting all of the above options, performance is still inadequate, consider hardware upgrades depending on the nature of the bottleneck (for example, CPU, RAM, I/O bandwidth).

Estimating Storage

To determine the size of partitions, you need to also consider the amount of storage that is allocated to OSM. This is necessary to provision sufficient storage capacity, ensure that you are comfortable managing it, and validate the trade-off between storage and the frequency and duration of maintenance windows (outages).

It is recommended that you conservatively estimate the amount of required storage. Consider possible changes in sizing criteria and purge frequency, such as a volume or order size increase due to a rollout of new services, orders requiring more space due to additional functional requirements introduced during a solution or product upgrade, or a purge embargo during holidays. Add contingency for unforeseen events that might delay purging or lead to increased space consumption.

For the purpose of estimating minimum storage requirements, consider the following partition breakdown:

  • The oldest partitions that have been purged at least once.

  • Partitions that have never been purged, including exhausted partitions and the latest partition(s) where new orders are created. (If you use Oracle RAC with N nodes in active-active mode, orders are created concurrently on N partitions as explained in the Oracle RAC section.)

The oldest partitions that have been purged at least once normally contain a small number of orders. It is recommended that you consolidate these partitions regularly (every few purges). If you do, the total space consumed by those partitions should be a fraction of a single partition.

Partitions that have never been purged consume the bulk of your storage. The number of these partitions depends on the partition size, the order retention period, the time-to-close wait, the purge frequency and whether you use active-active Oracle RAC. At the time of purge, these partitions can be further distinguished as eligible and ineligible for purge. If you follow a regular schedule, you can estimate the space consumed by these partitions as follows:

  • Where P is the partition size (for example, 4 weeks' worth of data), R the retention period, T the time-to-close wait, and F the purge frequency (all using the same units, such as days or weeks).

  • Where N is the number of active-active Oracle RAC nodes. If you use a single instance database, N=1.

  • Where S is the space consumed by a single partition. Refer to the ""All-In" Order Volume" section for estimating S.

  • To estimate the number of partitions that are eligible for purge: F / P x N

  • To estimate the number of partitions that are ineligible for purge: (T + R) / P x N

  • To estimate the total number of partitions that have never been purged: (F + T + R) / P x N

  • To estimate the total space consumed by these partitions: (F + T + R) / P x N x S

  • If you use a single instance database and the partition size is the same as the purge frequency, the above formula can be simplified: (P + T + R) / P x S

Oracle strongly recommends that you increase your estimate by some contingency based on your ability to quickly add storage if necessary, reschedule a failed purge, and other risks (for example, by 25% or the amount of space reclaimed by each purge).

Example:

Figure 7-8 is an example of estimating minimum storage requirements.

  • Partition size (P): 2 weeks' worth of orders

  • Purge frequency (F): Biweekly

  • Time-to-close (T): 3 weeks

  • Retention period (R): 4 weeks

The number of partitions that have never been purged before is:

(F + T + R) / P x N = (2 + 3 + 4) / 2 x 1 = 4.5

Assuming that the space consumed by a single partition is about 200 GB, the total space consumed by those partitions is about 900 GB. Specifically, the four exhausted partitions P3-P6 consume about 800 GB, while the half-full partition P7 consumes about 100 GB. Partitions P1 and P2 have already been purged at least once. Assuming that you do not purge a partition unless at least 98% of its orders are closed, P1 and P2 consume less than 4 GB each (2% of 200 GB). In total, the used space is about 908 GB. The used space should fluctuate between roughly 712 GB after a purge and 908 GB, as long as there are no unexpected events or delays. In addition, you must add some contingency in case you miss a maintenance window, for example, 200 GB.

Figure 7-8 Estimating Minimum Space Consumption

Description of Figure 7-8 follows
Description of "Figure 7-8 Estimating Minimum Space Consumption"
"All-In" Order Volume

To estimate the space consumed by a single partition, you must first estimate the "all-in" order volume. "All-in" means a representative order mix, including SOM orders created by COM orders (if they are stored in the same schema), revision orders, fallouts, technical orders, and so on. Some cartridge implementations might generate a lot of small orders that consume a disproportionate share of order Ids compared to their size (for example, for trouble tickets).

This is how you could estimate the space consumed by a single partition:

  • Estimate the average all-in order volume over a period that accounts for volume fluctuations. One week is a good starting point, because it includes weekends.

  • Populate a partition with a representative mix of orders and states for the same period. If that period is too long due to storage or time constraints, you may use a shorter period. However, it is important that you use a substantial data set to improve the accuracy of estimates - typically at least one day's worth of orders.

  • Use the om_part_maintain.estimate_ptn_purged_space procedure to estimate the space that would be reclaimed if you purged the entire partition, and extrapolate for various partition sizes. For more information, see "Estimating Partition Disk Space (Online or Offline)."

Partition Size Restrictions

What if the estimated space consumed by a range partition is too big to manage? For example, suppose you want to purge monthly and your estimates show that a month's worth of orders will consume about 400 GB. If you do not want to manage partitions as big as 400 GB but you want to stick to a monthly purge frequency, decrease the partition size (for example, to two weeks' worth of orders). The downside is an increase in purge time, normally measured in minutes. Refer to the "Purge Frequency" section for an example.

Retention Policy

The retention policy is one of the most important sizing factors, yet you have the least control over it because normally it is determined by the business. The retention period starts counting after an order is closed. Therefore, in order to determine when an exhausted partition will be both eligible and cost-effective to purge, add the retention period to the "time-to-close" wait period.

Example:

Figure 7-9 shows the impact of the retention period. Decreasing the retention period by 2 weeks requires less storage, equal to the space consumed by a single partition. This is because each partition is sized to contain 2 weeks' worth of orders. Similarly, if you increased the retention period to 6 weeks, you would consume additional space for 2 weeks' worth of orders and you would have to maintain an extra partition.

Figure 7-9 Impact of Retention Policy

Description of Figure 7-9 follows
Description of "Figure 7-9 Impact of Retention Policy"
Time-to-Close Wait

Time-to-close wait is the period until "most" orders in the partition are closed. The objective is to wait until a partition purge is cost-effective. As a starting point, you should wait until at least 98% of the orders are closed. Your final decision is a trade-off between storage and purge performance (duration of outage), as discussed below.

The first concern is the impact to purge performance of the time-to-close wait. When you purge a partition, retained orders are temporarily copied into so-called backup tables, and they are later restored (copied back) into the partition. These copy operations could add significant downtime to the maintenance window depending on the volume of retained data, your hardware, and the degree of parallelism. You can decrease execution time by increasing parallelism. In general, you should aim to maximize resource utilization in order to improve purge performance. However, increased parallelism comes with more overhead. For example, you might find out that if you double the parallelism, the execution time is reduced by only one third. And there is a tipping point where parallelism overhead outweighs gains. Therefore it is recommended that you tune the degree of parallelism and evaluate the performance of purge operations on production quality hardware - ideally of the same caliber as your production hardware. For additional information about tuning, see the "Performance Tuning" section.

It is easier to use percentages in your initial time-to-close calculations (for example, the time to close 98% of orders). Performance tests help to nail it down to absolute numbers. For example, suppose your acceptable range for copying retained orders (backup and restore) is 15-30 minutes, and that according to performance tests this is enough to retain 10000-20000 orders. In order to allow for partition consolidations, you could use 10000 in your calculations, which also provides a safety buffer. For example, if the partition size in one of your scenarios is one million orders, 10000 orders is 1%. In this case, time-to-close is the time it takes to close 99% of the orders.

With regard to storage, a shorter time-to-close wait is better. Decreasing the time-to-close wait alone by X days is the same as decreasing the retention period alone by X days or decreasing both by X days in total.

Example:

Figure 7-10 shows the impact of the time-to-close wait period. Each partition is sized to contain 2 weeks' worth of orders. All things being equal, decreasing this wait by 2 weeks requires less storage, equal to the space consumed by a single partition. On the other hand, the number of retained orders increases about fivefold, to roughly 10%, which might add several minutes to the duration of a maintenance window. You must decide whether these storage savings justify a longer outage (perpetually).

Figure 7-10 Impact of Time-to-Close Wait Period

Description of Figure 7-10 follows
Description of "Figure 7-10 Impact of Time-to-Close Wait Period"
Oracle RAC

As explained in the section "Support for Active-Active Oracle RAC," if you switch from a single database to Oracle RAC with N active-active nodes, the number of partitions increases N-fold whereas the actual number of order Ids stored in a partition decreases N-fold. This means that:

  • The space consumed by N partitions is about the same as that consumed previously by a single partition.

  • You do not necessarily need to change the partition size, storage capacity, the purge frequency, or any other purge-related policies.

  • During a purge window, you must purge N partitions instead of one and consolidate them N-to-1.

Consolidating partitions might sound contrary to the way OSM is designed to use partitions on active-active Oracle RAC. However, it is unlikely that order processing on a consolidated partition will experience cluster waits. The number of retained orders is normally small, the consolidated order Ids are far apart, and there is typically little activity on those orders. If a significant increase in cluster waits proves to be the result of partition consolidation, consider avoiding consolidation when a partition is purged for the first time.

Another concern is that a large number of physical partitions could potentially cause performance issues, as discussed in the "Pitfalls of Partitioning" section.

Using Oracle RAC in active-passive mode is similar to using a single instance database. The only difference is that order creation might be switched to another partition and then back to the original in the event of failover and failback, although a switch might not occur right away or at all. This means that you may end up with a sparsely populated partition, which at some point could be consolidated with another partition.

Example:

Figure 7-11 compares a single instance database to active-active Oracle RAC. Specifically, OSM is configured to use two nodes in active-active mode. The Oracle RAC database may have additional nodes that are either not used by OSM or are used in passive mode. The partition size, time-to-close wait, retention period, and purge frequency are the same. However, OSM uses twice as many partitions on Oracle RAC, which are half-full when they are exhausted (half of the order Ids are skipped). This means that you must purge and consolidate two partitions instead of one to reclaim the same amount of space.

Figure 7-11 Single-Instance Database vs. Two-Node Active-Active Oracle RAC

Description of Figure 7-11 follows
Description of "Figure 7-11 Single-Instance Database vs. Two-Node Active-Active Oracle RAC"
Purge Frequency

As explained in "Estimating Storage," if you follow a regular purge schedule, the number of partitions purged during each maintenance window is F/P for a single instance database, where F is the purge frequency (for example, 30 days) and P is the partition size (for example, 30 days' worth of orders). As a starting point, it is recommended that you size each range partition to contain as many orders as will be purged in one purge maintenance window, that is, F=P. As you evaluate scenarios for different purge frequencies, adjust the partition size accordingly so that F=P. If the partition size is less than the purge frequency, you will have to consolidate partitions N-to-1, where N= F/P. This will add some extra time to purge maintenance (normally measured in minutes). You might do this is if you are uncomfortable using large partitions. In this case, if you like a constant (predictable) consolidation ratio, choose the partition size so that N=F/P is an integral number.

A desire to purge as infrequently as possible is likely limited by the storage capacity and/or the administrative burden of managing a very large schema (whatever your criteria may be for "large"). Fortunately, you can often decrease the purge frequency N-fold with a relatively small increase in storage capacity. For simplicity, consider a single instance database and assume that the purge frequency is the same as the partition size. As explained in "Estimating Storage," in this case you can use the following formula to estimate the storage consumed by partitions that have never been purged, where P is the partition size (for example, in days), T is the time-to-close wait, R is the retention period, and S is the space consumed by a single partition:

(P + T + R) / P x S = (1 + (T + R) / P) x S

Based on this formula, if the period T + R is large compared to P, you could double or triple the partition size and the purge frequency with a relatively small increase in storage. This is demonstrated with the following example.

Example (from biweekly to monthly maintenance):

Suppose that you have a 4-week retention period, a 3-week time-to-close wait and a biweekly purge frequency. The partition size is also 2 weeks. The following formula is used to calculate the storage consumed by never-purged-before partitions, where S is the space consumed by a single partition:

(P + T + R) / P x S = (P + 3/2P + 2P) / P x S = 4.5 x S

Now suppose that you want to reduce the purge frequency to every 4 weeks. You can double the partition size and estimate the storage capacity:

(P + T + R) / P x S = (P + 3/4P + P) / P x 2 x S = 5.5 x S

This means that the extra storage is S. Thus, moving from biweekly to monthly maintenance halves the number of purge windows (roughly halving the cumulative downtime) for a 22% increase in storage capacity, or 2 weeks' worth of orders. This is demonstrated in the figure below.

Note that because there is no change in the time-to-close wait, the larger partition will have more retained orders depending on the order aging curve. If the difference is large enough to have a material impact in the purge time, you may want to consider increasing the time-to-close wait slightly at the cost of a bit more extra storage.

Figure 7-12 Impact of Doubling Partition Size and Reducing Purge Frequency to Half

Description of Figure 7-12 follows
Description of "Figure 7-12 Impact of Doubling Partition Size and Reducing Purge Frequency to Half"
Sizing Range Partitions for Row-Based Order Purge

Sizing range partitions for row-based order purge is different from sizing for partition-based purge. Partitions sized for row-based purge must have a wide range of order Ids to store several months' worth of orders.

The main characteristic of this strategy is that closed orders, when they come out of their retention period, must be deleted from the partition(s) where new orders are created. The objective is to maximize reuse of the free space, to restrain the growth of the segment high water mark. In contrast, the space freed by deletes in an exhausted partition cannot be reused, because new orders are created on a different partition. The space consumed by an exhausted partition can be released to the tablespace by using other operations, for example by dropping or shrinking the partition.

This means that your goal should be to minimize the number of exhausted partitions. The main criteria are the maximum order lifetime (OLT), the retention period, and your availability requirements. Here are some guidelines for sizing partitions for row-based order purge:

  • If you have little or no operational experience using this purge strategy, be conservative by starting with a partition size you are comfortable with, and increase it as you gain experience (if necessary).

  • If you cannot afford outages, the partition size should be large enough to contain the order volume over a period that is greater than or equal to the sum of the maximum order lifetime and the retention period. This approach requires the least amount of storage, as discussed in "Sizing Range Partitions for Zero Downtime."

  • If the maximum order lifetime is too long (years), the above recommendation would result in oversized partitions that could take years to exhaust. If you do not have operational experience using row-based order purge and/or you feel uncomfortable with oversized partitions, and you can afford some occasional downtime (for example, every 6 months or once a year), you can size partitions for infrequent maintenance as discussed in "Sizing Range Partitions for Infrequent Maintenance."

  • The partition sizing for a single instance database and active-active Oracle RAC is the same. As explained in "Support for Active-Active Oracle RAC," if you switch from a single database to Oracle RAC with N active-active nodes, the number of partitions will increase N-fold, whereas the actual number of order Ids stored in a partition will decrease N-fold.

Sizing Range Partitions for Zero Downtime

If you cannot afford outages, size partitions as follows:

  • Size partitions to contain the order volume over a period that is greater than or equal to the sum of the maximum order lifetime and the retention period.

  • Keep purging orders from exhausted partitions until they are empty (using row-based order purge).

  • Drop empty exhausted partitions online.

This approach requires the least amount of storage because it restricts the number of partitions to two at any point in time (2xN if you use Oracle RAC with N nodes in active-active mode). Specifically, the previous partition is purged empty and dropped online before the current partition is exhausted. Figure 7-13 illustrates this by example.

Figure 7-13 Sizing Partitions to Avoid Outages

Description of Figure 7-13 follows
Description of "Figure 7-13 Sizing Partitions to Avoid Outages"

If both the retention period and the maximum order lifetime are relatively short, there is more flexibility in sizing partitions. You do not necessarily need oversized partitions because you can drop them online in a relatively short period after they are exhausted.

If the current partition is exhausted and the previous partition still has long-lived open orders that are expected to remain open for much longer, you might have to schedule a maintenance window to purge that partition using partition-based purge.

Sizing Range Partitions for Infrequent Maintenance

If you have a near 24x7 availability requirement with occasional maintenance windows, you could use those windows to drop or purge old partitions offline (for example, every 6 months or once a year).

If the retention period or the maximum time-to-close wait is long, you should plan the partition size with the next maintenance window in mind, so that eventually you are able to either drop it or purge it cost-effectively (for example, less than 2% of the orders will be retained).

As a rule, the partition size (P) should be less than the period until the next maintenance window (M), minus the time-to-close wait (T), minus the retention period (R), as shown below. In this case, time-to-close is the wait period until "most" orders in the partition are closed, as discussed in "Time-to-Close Wait."

P < M - (T + R)

If there is uncertainty about the date of the next maintenance window, or that date is based on external factors and cannot be moved, it is a good idea to make the partition a bit smaller as a contingency.

The partition size must also be (a lot) greater than the sum of the retention period and the time-to-close wait, otherwise it would be exhausted before there is an opportunity to delete a substantial amount of orders:

P > T + R

Note that the partition size may vary, at least initially, because it depends on a maintenance schedule. Instead of pre-allocating two or more partitions, you may want to wait until the current partition is closer to being exhausted and there is less uncertainty about the next maintenance window.

Example:

Suppose 98% of the orders close within 1 week, the retention period is 4 weeks, and you have a maintenance window every 24 weeks. You want the first partition to be exhausted after 19 weeks or less (24 - 1 - 4). Using 1 week as contingency, 18 weeks is a good size. After that, the partition size is 24 weeks (the same as the purge frequency), everything else being the same.

Online vs. Offline Maintenance

All database maintenance operations that can be performed online, which means OSM is running, can also be performed offline. However, some operations can be performed offline only.

In order to execute a procedure offline you must either stop all WebLogic servers where OSM is deployed (for example, all managed servers) or stop the OSM application and cartridges. Beginning with OSM 7.2.0.9, database management procedures stop and restart OSM database jobs automatically.

Table 7-2 summarizes which operations can be performed online. Offline execution is always faster than online execution. If a procedure supports online execution, running it online is recommended only under low volume. In particular, online execution of partition management operations causes lock contention in the database and waits, such as cursor: pin S wait on X. Under high volume, such contention could result in severe performance degradation, transaction timeouts, and even order failures.

Table 7-2 Summary of Online Versus Offline Maintenance Operations

  • Add a partition: Online: Avoid if possible. Offline: Recommended.

  • Row-based order purge: Online: OK. Offline: OK.

  • Drop a populated partition: Online: Not supported. Offline: OK.

  • Drop an empty partition: Online: OK (see "Dropping Empty Partitions (Online or Offline)"). Offline: Recommended.

  • Partition purge: Online: Purging a partition partially is not supported online; purging an entire partition is supported conditionally (for a list of these conditions, see the om_part_maintain.purge_partitions procedure). Offline: OK.

  • Exchange table maintenance (exchange tables are used by partition purge): Online: OK. Offline: OK.

  • Cartridge Fast Undeploy: Online: OK, but if done using XML Import/Export (rather than Design Studio), you must also refresh the OSM metadata; see "Refreshing OSM Metadata" for more information. Offline: OK.

  • Cartridge conventional undeploy: Online: Not supported. Offline: OK.

Managing Order Data

The most common maintenance operations are adding partitions and purging orders to reclaim storage.

See "Adding Partitions (Online or Offline)" for information about how to add partitions.

As discussed in "Order Purge Strategies," there are two main purge strategies, namely row-based order purge and partition-based order purge. These are discussed in more detail in these sections:

Adding Partitions (Online or Offline)

You can add partitions manually using the following procedures:

  • om_part_maintain.add_partition

  • om_part_maintain.add_partitions

For production and performance environments, Oracle strongly recommends that you add partitions manually either when OSM is offline or during periods of low system activity. This is particularly important if you use Oracle RAC.

  • Adding partitions online causes lock contention in the database and waits, such as cursor: pin S wait on X. Under high volume, such contention could result in severe performance degradation, transaction timeouts, and even order failures.

  • OSM can also add a partition automatically when a new order ID does not map to any partition. This is to prevent order creation failures if all partitions are exhausted. However, it is strongly recommended that you create partitions manually and disable automatic creation for all production and performance environments, especially if you use Oracle RAC, in order to avoid automatic creation under high volume. Automatic creation of partitions can be disabled with the partition_auto_creation parameter (see the sketch after this list).

  • The size of new partitions is specified by the range_partition_size parameter. Specifically, the upper bound of a new partition is the greatest partition upper bound plus the value of range_partition_size.

  • (Optional) You can specify the tablespace of each new partition as the input argument to om_part_maintain.add_partition. Using a different tablespace for each partition (usually a circular list) facilitates administrative tasks, such as backups.

  • (Optional) You can specify the mnemonic of the partitioning realm the partition is associated with as the input argument to om_part_maintain.add_partition. If a realm is not specified, the new partition is assigned to the default_order realm. If the partitioning realm specified in the argument is disabled, an error will occur and the partition will not be added. To override this error and create the partition for the disabled partitioning realm, set the optional a_force argument to true.
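
The following is a minimal sketch of disabling automatic partition creation. It assumes that the partition_auto_creation setting is stored in the om_parameter table with MNEMONIC and VALUE columns and that 'N' disables it; verify the column names and accepted values in "Configuration Parameters" before running it.

-- Sketch only: the column names and the value 'N' are assumptions to verify.
update om_parameter set value = 'N' where mnemonic = 'partition_auto_creation';
commit;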

The installer creates the first partition. Always create enough partitions to handle the orders expected for the period until the next maintenance window, plus a safety buffer in case order volume increases unexpectedly or you skip a maintenance window.

If you have configured OSM to use Oracle RAC with N database nodes in active-active mode, you must add partitions in multiples of N. This is because OSM creates orders on the last N partitions concurrently. For example, if you use a 2-node Oracle RAC database in active-active mode, new orders are created on the last two partitions. If OSM is configured with a single instance database, an Oracle RAC database in active-passive mode or Oracle RAC One Node, new orders are created on the last partition only.

For more information, see "range_partition_size," "om_part_maintain.add_partition (Offline Only)," and "om_part_maintain.add_partitions (Offline Only)."

Example (Add 2nd partition): Consider a new installation with range_partition_size equal to 100,000. The upper bound of the partition created by the installer is 100001 (the upper bound of a partition is non-inclusive). The following statement adds a second partition with upper bound 200001 on tablespace OSMTS.

execute om_part_maintain.add_partition('OSMTS');

Example (Add Nth partition, N > 2): The following statement adds three more partitions on the same tablespace as the most recently added partition with upper bounds 300001, 400001 and 500001.

execute om_part_maintain.add_partitions(3);

Using Row-Based Order Purge

As discussed in "Row-Based Order Purge Strategy," you can use row-based order purge as an online purge strategy. Row-based order purge uses deletes to purge individual orders, whereas partition-based purge uses ALTER TABLE … DROP PARTITION and ALTER TABLE … EXCHANGE PARTITION to change the table structure.

The space freed by deletes in a table or index segment does not lower the high water mark of that segment. That space can be reused by inserts into the same segment but the space is not released to the tablespace. Therefore, you must run order purge as frequently as possible (at least once a day) to free space at the same rate it is consumed by new orders. The objective is to restrain the growth of the high water mark.

Figure 7-14 shows how row-based order purge deletes orders in two stages. The API also provides procedures that allow you to execute each stage separately, as follows:

  1. Selects the order Ids that satisfy given purge criteria into the OM_ORDER_ID_FOR_PURGE staging table.

  2. Deletes the selected orders.

Figure 7-14 How Row-Based Order Purge Works

Description of Figure 7-14 follows
Description of "Figure 7-14 How Row-Based Order Purge Works"

In the first stage, order purge scans OM_ORDER_HEADER and inserts the order Ids of all orders that satisfy the given purge criteria into the OM_ORDER_ID_FOR_PURGE staging table. You can restrict the scope of the search to an order ID range, for example to the latest partition(s) where new orders are created.

In the second stage, the selected orders are purged in parallel using the dbms_parallel_execute package. More precisely:

  • Order purge splits the work into smaller pieces by dividing the data blocks of OM_ORDER_ID_FOR_PURGE into chunks. Then it spawns N database jobs to purge the chunks in parallel, where N is the degree of parallelism (possibly 1). Each job processes one chunk at a time by deleting one order at a time, automatically committing every few deletes. In the event of an error (for example, a deadlock), the job continues with the next chunk.

  • After finishing the processing of all chunks, order purge retries processing of any failed chunks until either all chunks are processed successfully (all orders in the chunk are purged) or a pre-defined retry threshold is reached.

  • At the end of a successful purge, order purge clears OM_ORDER_ID_FOR_PURGE.

This approach ensures that: a) an order is either purged entirely or not at all; b) a purge may succeed partially even in the event of errors; and c) the purge handles recoverable errors, such as deadlocks.

For performance reasons, the staging table is hash partitioned. To minimize contention among purge jobs, OM_ORDER_ID_FOR_PURGE must have the same number of partitions as the number of hash sub-partitions in an OM_ORDER_HEADER range partition. Otherwise, order Ids in different OM_ORDER_ID_FOR_PURGE blocks that are processed by different purge jobs might be stored in the same OM_ORDER_HEADER block.
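
If in doubt, you can compare the two counts in the data dictionary. The following query is a sketch; it assumes that the lexically greatest OM_ORDER_HEADER partition name identifies the latest range partition, which holds because partition names are zero-padded.

select (select count(*)
          from user_tab_partitions
         where table_name = 'OM_ORDER_ID_FOR_PURGE') as purge_staging_partitions,
       (select count(*)
          from user_tab_subpartitions
         where table_name = 'OM_ORDER_HEADER'
           and partition_name = (select max(partition_name)
                                   from user_tab_partitions
                                  where table_name = 'OM_ORDER_HEADER')) as header_subpartitions
  from dual;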

The default degree of parallelism is specified by the degree_of_parallelism configuration parameter. The number of chunks generated depends on a number of factors, including the number of OM_ORDER_ID_FOR_PURGE hash partitions, the volume and distribution of data in the staging table, and the desired chunk size. The latter is specified by parallel_execute_chunk_size, which is an advanced configuration parameter (you rarely need to change the default).

Row-based order purge is implemented by the om_new_purge_pkg package. This package allows you to:

  • Purge a single order by order ID.

  • Purge orders that satisfy given purge criteria.

  • Schedule an order purge.

  • Stop and resume an order purge.

Row-based purge operations are audited. Purge audit views allow you to monitor order purges. For more information, see "Auditing and Monitoring Order Purges."

Purging a Single Order by Order ID

The om_new_purge_pkg.delete_order procedure allows you to delete an order by its order ID. This is convenient when you want to delete only one order or a small number of orders (for example, to resubmit).

Note that this procedure does not issue commit, in contrast to most purge procedures. You must manually issue commit or rollback.
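
For example, the following sketch deletes one order and commits. The order ID 123456 is hypothetical, and the sketch assumes the order ID is the only required argument; check the om_new_purge_pkg.delete_order reference for the full signature.

-- Hypothetical order ID; delete_order does not commit, so commit explicitly.
execute om_new_purge_pkg.delete_order(123456);
commit;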

Purging Orders that Satisfy Given Criteria

If you need to purge large quantities of orders (for example, if row-based order purge is your purge strategy), purging orders one by one serially using delete_order would be very slow. Use one of the following procedures instead, which allow you to purge orders in parallel:

  • The om_new_purge_pkg.purge_orders procedure purges all orders that satisfy given purge criteria, including your retention policy (for example, you can purge all orders that were closed 30 days ago or more). First, it finds the order Ids that satisfy the purge criteria. Then it spawns database jobs to purge the selected orders in parallel, as discussed in "Using Row-Based Order Purge."

  • The om_new_purge_pkg.select_orders and om_new_purge_pkg.purge_selected_orders procedures are equivalent to purge_orders but offer more flexibility because you can execute the order selection and order purge steps separately. For example, you can select the orders to purge piecemeal by executing select_orders several times. You can also update the OM_ORDER_ID_FOR_PURGE staging table manually, especially if you have complex purge criteria that are not supported by purge_orders and select_orders.

If the purge is performed while OSM is online, adjust the degree of parallelism to ensure that the database can handle the additional workload of deleting orders concurrently with order processing.

Both purge_orders and purge_selected_orders provide the a_stop_date parameter, which allows you to specify an end date and time for the purge. This is useful if you want to run order purge during specific periods (for example, during a low-volume period at night).
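
As an illustration, the following sketch purges orders that have been closed for more than 30 days and stops the purge after roughly six hours. The a_delete_before and a_order_states parameter names mirror those shown for purge_partitions later in this chapter and are assumed to apply to purge_orders as well; confirm the exact signature in the om_new_purge_pkg reference.

execute om_new_purge_pkg.purge_orders(
        a_delete_before => trunc(sysdate) - 30,
        a_order_states => om_new_purge_pkg.v_closed_orders,
        a_stop_date => sysdate + 6/24);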

Scheduling Order Purge

The om_new_purge_pkg.schedule_order_purge_job procedure allows you to schedule a one-time execution of order purge. The purge is scheduled using the dbms_job package. If row-based order purge is your main strategy, this procedure is inadequate. In this case it is recommended that you use the Oracle Database scheduler to schedule periodic execution of order purge.
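
For example, the following sketch uses the Oracle Database scheduler to run a purge every night at 1 AM. The job name and purge criteria are illustrative, and the purge_orders parameter names carry the same assumptions as the previous sketch.

begin
  dbms_scheduler.create_job(
    job_name        => 'OSM_NIGHTLY_ORDER_PURGE',  -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin om_new_purge_pkg.purge_orders(
                          a_delete_before => trunc(sysdate) - 30,
                          a_order_states => om_new_purge_pkg.v_closed_orders); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => true);
end;
/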

Stopping and Resuming an Order Purge

The om_new_purge_pkg.stop_purge procedure allows you to stop an order purge. This might be necessary, for example, if the host is too busy or you want to perform other maintenance. This procedure returns immediately. However, the purge will stop after all currently assigned chunks are processed, possibly after several minutes.

Later you can resume the same purge by executing om_new_purge_pkg.resume_purge. You can also restart the purge with different parameters by executing om_new_purge_pkg.purge_selected_orders (for example, if you want to change the time when the purge window ends or the degree of parallelism), or start a new purge.
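
For example (assuming both procedures take no arguments; check the package reference):

execute om_new_purge_pkg.stop_purge;
-- Perform other maintenance, then continue where the purge left off:
execute om_new_purge_pkg.resume_purge;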

Using Partition-Based Order Purge

As discussed in "Partition-Based Order Purge Strategy," partition-based order purge allows you to purge several weeks' worth of order data in minutes by dropping or "purging" partitions. These operations are based on efficient DDL statements that modify the table structure, such as ALTER TABLE ... DROP PARTITION and ALTER TABLE ... EXCHANGE PARTITION.

The following sections discuss partition-based purge in detail:

  • "Differences Between Purging and Dropping Partitions": summarizes the differences between purging and dropping partitions.

  • "Purging Partitions (Online or Offline)": describes how to purge partitions. When you purge a partition, all the data that satisfies given purge criteria is deleted and storage is immediately reclaimed. This operation allows you purge a partition partially, that is, to retain orders that do not satisfy given purge criteria. It also allows you to consolidate multiple partitions into one. The limitation is that the partition must be purged offline. However, if all orders in a partition are closed and satisfy the purge criteria, you might be able to purge the entire partition online.

  • "Dropping Partitions (Offline Only)": describes how to drop partitions. When you drop a partition, all its data is deleted and storage is immediately reclaimed. The limitations are that the partition must be dropped offline and it must have no open orders.

  • "Dropping Empty Partitions (Online or Offline)": describes how to drop empty partitions online.

Differences Between Purging and Dropping Partitions

These are the main differences between purging and dropping partitions:

  • You cannot drop a partition if it has open orders. However, you can purge a partition if the number of retained orders does not exceed a configurable threshold. In this case, OSM copies retained orders to the so-called backup tables prior to the partition exchange, and restores them afterwards.

  • To purge partitions you must create exchange tables first. This must be done once after a new installation and subsequently after schema changes. Exchange tables are not used when dropping partitions or when using row-based order purge.

  • If you have only one partition, you cannot drop it unless you add another partition first. This restriction does not apply to purging.

  • You can only drop partitions when OSM is offline, so you must plan for downtime. If a partition does not have any open orders and all orders satisfy the purge criteria, you might be able to purge the partition online. However, note that purging online is slower than offline, and results in increased contention and lower throughput because DDL operations lock the entire table in exclusive mode. Therefore, you should only purge online during the lowest volume hours and only after you have tested it successfully in your environment.

Purging Partitions (Online or Offline)

When you purge a partition, all the data that satisfies given purge criteria is deleted and storage is immediately reclaimed. This operation allows you to purge a partition partially, that is, to retain orders that do not satisfy the given purge criteria. This is done using ALTER TABLE … EXCHANGE PARTITION statements. The EXCHANGE PARTITION statement exchanges a partition with a table (known as the exchange table). The exchange operation swaps the segments of the partition with the segments of the exchange table but does not delete or move data between segments. When the exchange table is empty, the partition is purged, but the partition remains associated with the partitioned table, for example, it still appears in user_tab_partitions. In general, the EXCHANGE PARTITION statement is a fast operation because it updates metadata only in the data dictionary (unless the table has global indexes and the exchange operation is performed with the UPDATE GLOBAL INDEXES option). However, to reclaim the storage previously used by the partition, you must also purge the exchange tables.

To purge partitions and reclaim storage space, do the following using PL/SQL stored procedures:

  1. Create exchange tables (once after a new installation and subsequently after schema changes). For more information see "Managing Exchange Tables for Partition-Based Order Purge."

  2. During periodic maintenance windows, do the following:

    • Purge partitions and possibly consolidate them. See "Partition-Based Order Purge Strategy" for examples. Typically this is an offline operation because a small percentage of orders must be retained (for example, open orders and closed orders that are still in the retention period). See "Purging Partitions Partially (Offline Only)" for more information. If all orders in a partition are closed and satisfy the purge criteria, you might be able to purge the entire partition online; see "Purging Entire Partitions That Do Not Contain Open Orders (Online or Offline)" for more information.

    • Add partitions. You can add partitions online under low volume, however adding partitions offline is less likely to cause problems. See "Adding Partitions (Online or Offline)" for more information.

    • Purge exchange tables to reclaim storage space. You can purge the exchange tables at any time to reclaim storage (preferably during off-peak hours). If the database has enough spare CPU and IO capacity, this operation does not affect performance, even when it is performed online, because it does not cause contention (the exclusive locks acquired on the exchange tables do not block other processing). For more information see "Managing Exchange Tables for Partition-Based Order Purge."

Figure 7-15 Typical Maintenance Using Partition-Based Order Purge

Description of Figure 7-15 follows
Description of "Figure 7-15 Typical Maintenance Using Partition-Based Order Purge"
Purging Entire Partitions That Do Not Contain Open Orders (Online or Offline)

You can entirely purge partitions that do not contain any open orders with the om_part_maintain.purge_partitions procedure. If purging online, this procedure exchanges each partition you want to purge with empty purge table(s), effectively swapping the data of that partition out of the table. If purging offline, partitions that can be entirely purged are dropped, unless you disallow it. However, the partition where retained orders are consolidated is always exchanged, even if that partition has no retained orders itself.

  • Purging partitions online causes lock contention in the database and waits, such as cursor: pin S wait on X. Under high volume, such contention could result in severe performance degradation, transaction timeouts, and even order failures.

  • You can disallow dropping partitions by passing a_drop_empty_ptns=false. However, this prevents partitions from being consolidated and affects purge performance.

  • The name and upper bound of an exchanged partition do not change.

  • If the exchanged partition and the purge table are on different tablespaces then after the exchange the two tablespaces are swapped (there is no movement of data).

  • If the parameter purge_policy_purge_related_orders_independently is set to 'N' and the partition contains orders that are associated directly or indirectly with orders that do not satisfy the purge criteria (for example, open follow-on orders in a different partition), the partition cannot be purged entirely. For more information, see the purge policy section in "Purging Related Orders Independently."

For more information, see "om_part_maintain.purge_partitions (Online or Offline)."
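
For illustration, the following sketch attempts an online purge of all partitions in which every order was closed more than 30 days ago. The parameter names follow the purge_partitions example shown later in "Purging Related Orders Independently."

execute om_part_maintain.purge_partitions(
        a_online => true,
        a_delete_before => trunc(sysdate) - 30,
        a_order_states => om_new_purge_pkg.v_closed_orders);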

Purging Partitions Partially (Offline Only)

The om_part_maintain.purge_partitions procedure can purge partitions that contain orders that must be excluded from purging (retained), such as open orders and orders that do not satisfy the purge criteria. This is permitted if the OSM application is offline and the number of retained orders in a partition is "small", that is, it does not exceed a pre-defined threshold.

In addition, purge_partitions can move retained orders from multiple partitions into a single partition in order to maximize reclaimed space, reduce the number of partitions and minimize downtime. This is done by purging successive partitions in iterations. The maximum number of partitions purged in each iteration is limited by the purge capacity. More precisely, purge_partitions purges successive partitions that qualify for purging as follows:

  • Copies the orders that do not satisfy the purge criteria from those partitions into the backup tables.

  • Purges each partition entirely by exchanging it with purge tables.

  • Drops N-1 of those partitions.

  • Restores the retained orders from the backup tables into the Nth partition with their order IDs unchanged.

For more information, see "om_part_maintain.purge_partitions (Online or Offline)."

Example: Assume that the purge capacity is 3, and consider these partitions, as shown in Figure 7-16:

  • P_000000000000600001: All orders satisfy the purge criteria. This partition can be purged entirely.

  • P_000000000000700001: This partition can be purged but some orders do not satisfy the purge criteria.

  • P_000000000000800001: This partition can be purged but some orders do not satisfy the purge criteria.

  • P_000000000000900001: This partition cannot be purged because the number of orders that do not satisfy the purge criteria exceeds a configured threshold.

Figure 7-16 Purging Partitions That Contain Orders That Must be Excluded from Purging

Description of Figure 7-16 follows
Description of "Figure 7-16 Purging Partitions That Contain Orders That Must be Excluded from Purging"

Figure 7-17 to Figure 7-21 show how purge_partitions purges these partitions step by step:

  1. It copies the orders that do not satisfy the purge criteria from P_000000000000700001 and P_000000000000800001 into the backup tables.

  2. It purges partitions P_000000000000600001, P_000000000000700001 and P_000000000000800001 by exchanging them with the purge tables.

  3. It drops partitions P_000000000000600001 and P_000000000000700001, which are now empty.

  4. It restores the retained orders from the backup tables into partition P_000000000000800001.

  5. (Optional) It purges the purge tables and continues the same process for any remaining partitions. This is possible only if you allowed it to purge the purge tables. Otherwise, it cannot proceed because the purge capacity is exhausted.

Figure 7-17 Step 1: Back Up Retained Orders

Description of Figure 7-17 follows
Description of "Figure 7-17 Step 1: Back Up Retained Orders"

Figure 7-18 Step 2: Purge the Partitions

Description of Figure 7-18 follows
Description of "Figure 7-18 Step 2: Purge the Partitions"

Figure 7-19 Step 3: Drop the Partitions

Description of Figure 7-19 follows
Description of "Figure 7-19 Step 3: Drop the Partitions"

Figure 7-20 Step 4: Restore the Retained Orders from the Backup Tables

Description of Figure 7-20 follows
Description of "Figure 7-20 Step 4: Restore the Retained Orders from the Backup Tables"

Figure 7-21 Step 5: (Optional) Purge the Purge Tables to Reclaim Space

Description of Figure 7-21 follows
Description of "Figure 7-21 Step 5: (Optional) Purge the Purge Tables to Reclaim Space"
Dropping Partitions (Offline Only)

If most of your orders are short-lived, you may be able to purge orders by dropping old partitions. This is the most efficient way to reclaim space but it involves downtime. During a maintenance window, stop OSM, drop old partitions, add enough partitions to store new orders until your next maintenance window, and restart OSM, as shown in Figure 7-22. Exchange tables are not used when you drop partitions.

Figure 7-22 Dropping and Adding Partitions

Description of Figure 7-22 follows
Description of "Figure 7-22 Dropping and Adding Partitions"

When you drop a partition, all its data is deleted and storage is immediately reclaimed. You can use the stored procedure om_part_maintain.drop_partitions to drop partitions that contain orders with order IDs within a specified range if you no longer require the order data they contain and they have no open orders. If the schema contains only a single partition then Oracle Database does not allow you to drop it.

If the parameter purge_policy_purge_related_orders_independently is set to 'N' and the partition contains orders that are associated directly or indirectly with orders that do not satisfy the purge criteria (for example, open follow-on orders in a different partition), the partition cannot be dropped. For more information, see the purge policy section in "Purging Related Orders Independently."

Because global indexes become unusable when partitions are dropped, this procedure also rebuilds unusable indexes and index partitions. This can be done in parallel.

For more information, see "om_part_maintain.drop_partitions (Offline only)."

Example:

Consider an OSM schema with partitions P_000000000001000001, P_000000000002000001, P_000000000003000001 and so on. If P_000000000003000001 has open orders then the following statement will drop only P_000000000001000001 and P_000000000002000001.

execute om_part_maintain.drop_partitions(4000000);
Dropping Empty Partitions (Online or Offline)

Eventually, you must drop empty purged partitions for performance and operational reasons. Dropping empty partitions is relatively fast because they have few segments. You can use one of the following procedures:

Reclaiming Unused Space in Volatile Tables

Some volatile tables in the OSM database can grow to take up an inappropriate amount of space. To determine whether this is happening in your database, you can use the Segment Advisor in the database to determine the tables with the most free space, to help you decide whether to reclaim that free space. For more information about the Segment Advisor, see Oracle Database Administrator's Guide.

The tables that are most likely to have free space are:

  • OM_ORDER_STATE_PENDING

  • OM_JMS_EVENT

If you find that these tables have free space, you can reclaim the space using the following commands:

alter table OM_ORDER_STATE_PENDING enable row movement;
alter table OM_ORDER_STATE_PENDING shrink space;
 
alter table OM_JMS_EVENT enable row movement;
alter table OM_JMS_EVENT shrink space;

These commands must be run only offline, when OSM is in a maintenance window. The commands may take several minutes to run.

Order Purge Policies

Purge policies are configured in the om_parameter table.

Purging Related Orders Independently

By default, related orders are purged independently. This means that the decision whether an order can be purged is based solely on whether that order satisfies the purge criteria or not, regardless of any dependencies. For example, if the predecessor of an open follow-on order satisfies the purge criteria, the predecessor order can be purged even though the follow-on must be retained.

Beginning with OSM 7.2.0.10 and 7.2.2.3.5, you can disable this policy by setting the purge_policy_purge_related_orders_independently parameter to the value N in the om_parameter table. When this policy is disabled, an order with related orders can be purged only if all directly and indirectly related orders are also ready to purge, that is, they satisfy the purge criteria (for example, a_delete_before and a_order_states). The order IDs of the related orders may be in a different partition, or even outside the given purge range.

Note:

Currently this policy is supported by partition-based purge only.

Note:

Setting purge_policy_purge_related_orders_independently to N may add several minutes to the time it takes to purge or drop a partition.
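
To disable the policy, you can update the parameter directly in the om_parameter table, following the same pattern used for the other om_parameter settings later in this chapter (this sketch assumes the mnemonic matches the parameter name):

update om_parameter
set value = 'N'
where mnemonic = 'purge_policy_purge_related_orders_independently';
commit;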

Example (Purging related orders independently):

Assume that purge_policy_purge_related_orders_independently is set to N and that you want to purge all orders in partition P_000000000000300001 that were closed before midnight on the specified date of February 13, 2014:

execute om_part_maintain.purge_partitions(
        a_online => false,
        a_delete_before => trunc(to_date('02/13/2014', 'MM/DD/YYYY')),
        a_order_states => om_new_purge_pkg.v_closed_orders,
        a_order_id_lt => 300001);

Figure 7-23 shows some of the orders in partition P_000000000000300001 and the related orders:

  • Order 291001 is purged because it satisfies the purge criteria and has no related orders.

  • Order 291002 satisfies the purge criteria and is related to 391002, which is amended by 491002. Both 391002 and 491002 satisfy the purge criteria and are therefore ready to purge, although they are outside the given purge range. Order 291002 is therefore purged, even though 391002 and 491002 themselves are not purged in this run.

  • Order 291003 is retained because it is indirectly related to 491003, which has a completion date that is after the specified date.

  • Order 291004 is retained because it is indirectly related to 491004, which is open.

Figure 7-23 Related Orders that Satisfy the Purge Criteria


The following types of relationships are considered when looking for related orders:

  • Successor Orders

  • Predecessor Orders

  • Amendment Orders

  • Base Orders (for amendments)

Keep the following in mind when purge_policy_purge_related_orders_independently is set to N:

  • Both direct and indirect relationships are considered.

  • It does not matter if the related orders are within the range of order IDs being purged; it matters only whether they match the purge criteria (a_delete_before and a_order_states parameters).

  • When purging partitions online, if any order in a partition has a related order that does not match the purge criteria, the partition cannot be purged. Purging online requires that there are no orders retained in the partition.

  • When dropping partitions, if any order in a partition has a related order that does not match purge criteria, the partition cannot be dropped. Dropping partitions requires that there are no orders retained in the partition.

  • If the total number of orders to be retained exceeds the threshold defined by the xchg_retained_orders_thres parameter (default 10000), the partition is not purged. This includes orders that satisfy the purge criteria but must be retained because they are related to orders that do not satisfy those criteria.

Auditing and Monitoring Order Purges

OSM monitors all row-based order purge operations in the database but not partition-based purge operations. OSM audits the following operations:

  • om_new_purge_pkg.purge_orders

  • om_new_purge_pkg.purge_selected_orders

  • om_new_purge_pkg.delete_order

  • om_new_purge_pkg.purge_cartridge_orders

    Note:

    You do not manually run the om_new_purge_pkg.purge_cartridge_orders procedure. Design Studio runs this procedure when the PURGE_ORDER_ON_UNDEPLOY cartridge management variable is set to true and you undeploy a cartridge. Design Studio does not run this procedure when the FAST_CARTRIDGE_UNDEPLOY cartridge management variable is set to true when you undeploy a cartridge.

OSM assigns each purge operation a unique purge ID that OSM associates with all audit records. You can monitor in-progress purges, review past purges, and analyze purge performance using the purge audit views, such as OM_AUDIT_PURGE_LATEST.

You must set the nls_date_format parameter so that queries return the time portion of DATE datatype columns in the audit views. For example:

alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';

Example (Monitoring an order purge): You can use the OM_AUDIT_PURGE_LATEST view to monitor the latest order purge. If a purge is running, you can use a query like this one to find out the purge rate and estimated completion time.

select status,
    est_or_actual_end_date,
    percent_complete,
    orders_purged_per_minute as purge_rate,
    parallelism
from om_audit_purge_latest;

STATUS  EST_OR_ACTUAL_END_DATE PERCENT_COMPLETE PURGE_RATE PARALLELISM
------  ---------------------- ---------------- ---------- -----------
RUNNING    15-09-2015 08:23:39            95.45       1911          16

Audit Tables

In most cases, the purge audit views provide sufficient data; however, you may need to query the underlying tables. For example, you may need to review the purged orders, the purge criteria, or both for troubleshooting purposes. OSM stores audit records in these tables:

  • "OM_AUDIT_PURGE": This is the main audit table. It stores the operation name, status, critical dates and other data.

  • "OM_AUDIT_PURGE_ORDER": Stores a synopsis of each purged order with a timestamp.

  • "OM_AUDIT_PURGE_PARAM": Stores the purge criteria, the parameters supplied to the purge procedure, and a snapshot of relevant session and configuration parameters when the purge was started.

By default, OSM retains the audit data for at least 90 days. The minimum retention period is specified by the purge_audit_retention_days configuration parameter in the om_parameter table. Order purge procedures purge obsolete audit records automatically before adding new audit records.

If you have partitioned the OSM schema, then OSM partitions the purge audit tables on a monthly basis. OSM manages the audit partitions automatically. The first order purge operation in a month automatically adds a new partition for the month. Order purge procedures drop partitions with obsolete audit records automatically before adding new audit records.
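
If your schema is partitioned, you can review the monthly audit partitions with a dictionary query such as the following; the output depends on your purge history:

select table_name, partition_name, high_value
from user_tab_partitions
where table_name in ('OM_AUDIT_PURGE', 'OM_AUDIT_PURGE_ORDER', 'OM_AUDIT_PURGE_PARAM')
order by table_name, partition_position;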

Managing Exchange Tables for Partition-Based Order Purge

Partition-based purge can purge a partition by exchanging it with tables known as exchange tables. The exchange operation swaps the segments of the partition with the segments of the exchange table but does not delete or move data between segments. In general, the EXCHANGE PARTITION statement is a fast operation because it updates only metadata in the data dictionary (unless the table has global indexes and the exchange operation is performed with the UPDATE GLOBAL INDEXES option). However, to reclaim the storage previously used by the partition, you must also purge the exchange tables.

There are two types of exchange tables used for purging:

  • Purge tables

  • Backup tables

About OSM Purge Tables

Purge tables are used for purging partitions using exchange. For each partition to be purged, purge_partitions decides whether to drop it or exchange it. For example, when purging online, it is possible to drop only empty partitions. When purging offline, the partition where retained orders are consolidated must be exchanged, whereas the remaining partitions can be dropped.

The structure of each purge table is similar to the structure of the corresponding partitioned table. The number of purge tables that OSM creates depends on how many partitions you want to be able to exchange without having to purge the purge tables. This is called the exchange purge capacity.

For example, if the purge capacity is 3 then OSM creates 3 sets of purge tables as follows:

  • If an OSM table is range-hash partitioned, OSM creates 3 hash-partitioned purge tables.

  • If an OSM table is range partitioned, OSM creates 3 non-partitioned purge tables.

  • If an OSM table is reference-partitioned, OSM creates 3 x N non-partitioned purge tables, where by default N is the number of hash sub-partitions of the oldest OM_ORDER_HEADER partition. You can override the default N when you set up the exchange tables.

The format of a purge table name is XCHG_OM_PRG_p$xchg_table_id$r, where:

  • The XCHG_OM_PRG_ prefix allows quick identification of purge tables.

  • p is a sequence number between 1 and the purge capacity, referred to as the logical exchange partition (formatted to 3 digits, 001 to 999). Each partition to be exchanged is mapped to a logical exchange partition. This means that the maximum supported purge capacity is 999. This is the first generated component so that purge table names are grouped by partition when sorted.

  • xchg_table_id is an OSM-generated sequence ID for each partitioned table, called the exchange table ID (formatted to 3 digits). OSM stores the exchange table IDs in the om_xchg_table table when OSM creates exchange tables. OSM purges this table when it drops exchange tables. You do not need to know the exchange table IDs.

  • r is a 3-digit suffix that identifies which reference partition is exchanged when the table is reference partitioned; otherwise this value is omitted. r is referred to as the reference partition position because reference partitions are exchanged in order based on their position.

Example:

Figure 7-24 shows how OSM maps OM_ORDER_HEADER partitions to exchange tables when purging partitions. OSM maps OM_ORDER_HEADER to exchange table ID 001. All range-hash partitioned tables and the corresponding exchange tables have 64 hash partitions. The purge table capacity is 2; that is, there are two purge tables for OM_ORDER_HEADER. Assuming that the purge tables are empty, OSM can purge partitions P_000000000008000000 and P_000000000009000000 by exchanging them with exchange tables having logical exchange partitions 001 and 002, respectively.

Figure 7-24 Partition to Purge Table Mapping Example

About OSM Backup Tables

Backup tables are used for making a backup of orders to be retained before purging a partition. Orders are restored from the backup tables after the partition is purged. The structure of each backup table is similar to the structure of the corresponding partitioned table. Backup tables are always hash-partitioned in order to enable parallel inserts.

OSM creates a single set of backup tables, one for each partitioned table (whereas the number of purge tables depends on the purge capacity). The format of a backup table name is XCHG_OM_BCK_$xchg_table_id$, where:

  • xchg_table_id is the OSM-generated exchange table ID (formatted to 3 digits). This is identical to the exchange table ID used for purge tables.

  • The XCHG_OM_BCK_ prefix allows quick identification of backup tables.
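
Because both naming conventions start with a fixed prefix, you can list the exchange tables that currently exist in the schema with a simple dictionary query, for example:

select table_name
from user_tables
where table_name like 'XCHG_OM_PRG_%'
   or table_name like 'XCHG_OM_BCK_%'
order by table_name;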

Creating Exchange Tables (Online or Offline)

You must manually create exchange tables using the om_part_maintain.setup_xchg_tables procedure after you install the OSM schema. You must re-create them after upgrades, schema import, and after any ad hoc schema changes.

For more information see "om_part_maintain.setup_xchg_tables (Online or Offline)."

Example

If you want to exchange 3 partitions without having to purge exchange tables, run the om_part_maintain.setup_xchg_tables procedure as shown below. The procedure creates one set of backup tables and 3 sets of purge tables.

execute om_part_maintain.setup_xchg_tables(3);
Purging Exchange Tables (Online or Offline)

Eventually, you must reclaim the storage for exchanged partitions. You can do this with the om_part_maintain.purge_xchg_prg_tables procedure, which executes a TRUNCATE TABLE statement for each purge table with the DROP STORAGE option to reclaim the space. For more information see "om_part_maintain.purge_xchg_prg_tables (Online or Offline)."

It is not necessary to purge backup tables. These are purged automatically immediately after all order data is restored with the REUSE STORAGE option to retain the space of deleted rows. If you want to purge the backup tables without restoring orders, or if you want to reclaim their space, you can do this with the "om_part_maintain.purge_xchg_bck_tables (Online or Offline)" procedure.

Dropping Exchange Tables (Online or Offline)

You can drop exchange tables with the om_part_maintain.drop_xchg_tables procedure. By default, OSM automatically drops exchange tables before re-creating them.

For more information, see "om_part_maintain.drop_xchg_tables (Online or Offline)."

Estimating Partition Disk Space (Online or Offline)

Beginning with 7.2.2.4, the function estimate_ptn_purged_space returns the estimated amount of disk space (in bytes) that could be reclaimed by purging or dropping partitions. You can use this function to estimate:

  • The amount of space that could be reclaimed during a maintenance window.

  • The amount of space consumed by a partition, including global indexes.

  • The average order size in a partition.

The estimate_ptn_purged_space function simulates a purge execution and determines the total number of bytes used by successive partitions that qualify for purging. The following valuable information about the purge simulation is available in the DBMS output:

  • The partitions that qualify for purging and whether they are purged entirely or partially.

  • Number of orders purged.

  • Number of orders retained.

  • Average size of an order (bytes).

  • Estimated amount of space used by retained orders (bytes).

  • Estimated amount of space reclaimed (bytes).

Consider the following when executing the estimate_ptn_purged_space function:

  • Exchange tables must be created before calling this function.

  • This function can be executed offline or online.

  • This function assumes that you will execute purge_partitions with arguments a_drop_empty_ptns and a_purge_xchg_prg_tables set to true.

  • The average order size is used to calculate the space used by retained orders. If the retained orders are not typically sized, the estimate returned by this function may not closely match the actual space reclaimed.

Example (Estimate the space that could be reclaimed in a maintenance window):

Consider an OSM schema with partitions P_000000000001000001, P_000000000002000001 and P_000000000003000001. The following statement estimates the space that could be reclaimed if partitions P_000000000001000001 and P_000000000002000001 were purged of all orders that were closed more than 30 days ago.

declare
begin
  dbms_output.put_line('Space Reclaimed(bytes):' ||
  om_part_maintain.estimate_ptn_purged_space(
    a_delete_before=> trunc(sysdate)-30, 
    a_order_states=>om_new_purge_pkg.v_closed_orders, 
    a_order_id_lt=>2000001, 
    a_order_id_ge=>1));
end;

Example (Estimate the space consumed by a partition):

Consider an OSM schema with partitions P_000000000001000001, P_000000000002000001, and P_000000000003000001. The following statement estimates the space consumed by the first two partitions (including their share of global indexes).

declare
begin
  dbms_output.put_line('Partition size:' ||
  om_part_maintain.estimate_ptn_purged_space(
    a_delete_before=> om_const_pkg.v_no_date, 
    a_order_states=>om_new_purge_pkg.v_all_orders, 
    a_order_id_lt=>2000001, 
    a_order_id_ge=>1));
end;

Managing Cartridges

The main components of a deployed cartridge are:

  • The static cartridge metadata that is populated in the OSM database when the cartridge is deployed or redeployed. This data does not grow or change when orders are created or processed. Cartridge metadata is loaded into the OSM server at startup and re-loaded when cartridges are deployed.

  • The dynamic order data that is populated in the OSM database whenever an order is created and as it is being processed.

Note:

OSM does not create ear files for automation plug-ins. The WebLogic server console does not display automation plug-in ear files. Use the console logs to debug issues.

Your primary goals should be to minimize the memory needs and startup time of OSM and to deploy, redeploy, and undeploy cartridges quickly online. Because cartridge metadata consumes relatively little space in the database, purging cartridge metadata is not a major concern.

Cartridge metadata consumes memory resources and takes time to initialize on startup. You can minimize the memory needs and startup time of OSM by undeploying old cartridges that are no longer required from the run-time production environment.

To undeploy and redeploy cartridges quickly online, use Fast Undeploy instead of conventional undeploy.

Using Fast Undeploy

A cartridge can be undeployed when all associated orders are closed. There are two ways to undeploy a cartridge:

  • Using conventional undeploy, which removes from the database both the cartridge metadata and all associated orders. This operation can be extremely expensive if you have a large volume of order data.

  • Beginning with OSM 7.2.0.9, a new Fast Undeploy option is provided to rapidly undeploy a cartridge without removing the cartridge metadata and the associated order data from the database. When an OSM cartridge is undeployed using Fast Undeploy, OSM behaves the same as if the cartridge was undeployed using a conventional undeploy, that is, as if the cartridge and associated orders do not exist. The benefit of Fast Undeploy is that it allows the undeploy operation to complete quickly regardless of the number of orders that may have been created against that cartridge. Fast Undeploy is the default undeploy mode.

Oracle strongly recommends that you use Fast Undeploy instead of conventional undeploy. This enables you to undeploy unwanted cartridges quickly while offloading data purge to regular partitioned-based or row-based order purge, based on your data retention policies and maintenance schedule. This is useful both in development and production environments.

When you redeploy a cartridge, you have the option to undeploy the cartridge first. If you undeploy the cartridge using Fast Undeploy, this operation is called a fast redeploy because the cartridge is fast undeployed before it is redeployed.

Fast Undeploy removes cartridges from the OSM WebLogic domain only. You must later remove undeployed cartridges from the database. For performance reasons, it is recommended that you remove undeployed cartridges only after all associated orders have been purged. Because cartridge metadata consumes relatively little space in the database, this can be an infrequent operation.

Purging Metadata of Undeployed Cartridges

A cartridge that was undeployed using Fast Undeploy has UNDEPLOYED status in the database. Use the following statement to query the database for undeployed cartridges:

select * from om_cartridge where status = 'UNDEPLOYED';

To purge all cartridges that were undeployed using Fast Undeploy, use the following statement:

exec om_cartridge_pkg.drop_obsolete_cartridges;

You can run this procedure when OSM is online or offline. This procedure does not purge any cartridges associated with orders and it does not purge any component cartridges unless all associated solution cartridges are also selected for purge. The DBMS output displays the cartridges that were purged and, for those cartridges with an UNDEPLOYED status but not purged, the reason the procedure did not purge the cartridge. For more information about DBMS output, see "DBMS Output."

To purge a single undeployed cartridge, use the following statement:

exec om_cartridge_pkg.drop_cartridge(cartridge_id);

You can run this procedure only when OSM is offline.

Configuration Parameters

The following configuration parameters affect partition maintenance operations. These parameters are defined in the om_parameter table.

Note:

You can override the parameters marked with (*) for a specific partitioning realm in the partitioning realm configuration file. OSM persists partitioning realm-specific parameters in the om_partitioning_realm_param table. For more information, see "Partitioning Realms."

  • Parameters in the om_parameter table:

    • * range_partition_size

    • subpartitions_number

    • default_xchg_capacity

    • xchg_retained_orders_thres

    • degree_of_parallelism

    • degree_of_parallelism_rebuild_indexes

    • degree_of_parallelism_rebuild_xchg_indexes

    • purge_job_class

    • parallel_execute_chunk_size

    • partition_auto_creation

    • purge_policy_rebuild_unusable_indexes

    • purge_policy_purge_related_orders_independently

    • purge_policy_consolidate_partitions

    • purge_policy_time_to_close_wait

    • purge_audit_retention_days

    • purge_commit_count

  • deferred_segment_creation (Oracle Database initialization parameter)

range_partition_size

This parameter is present in both the om_partitioning_realm_param table and om_parameter table. The value in om_partitioning_realm_param is specific to a partitioning realm and takes precedence over the value in table om_parameter. This parameter specifies the size of new partitions.

The initial value in the om_parameter table for this parameter is specified during installation. You can change it with the following SQL statement, where N is the new value (for example 100000):

update om_parameter 
set value = N
where mnemonic = 'range_partition_size';
commit;

Updates to this parameter do not affect existing partitions. The upper bound of a new partition is the greatest partition upper bound plus the value of this parameter.

The value of this parameter in the om_partitioning_realm_param table is inserted and updated by changes in the partitioning realm configuration XML file. For details, see "Partitioning Realms."

subpartitions_number

Specifies the number of hash sub-partitions. You choose the initial value of this parameter during installation. You can change it with the following SQL statement, where N is the new value (for example, 32).

update om_parameter
set value = N
where mnemonic = 'subpartitions_number';
commit;

Updates to this parameter do not affect existing partitions. If you change this parameter and you use om_part_maintain.purge_partitions for purging, you must re-execute om_part_maintain.setup_xchg_tables when it is time to purge partitions that were added after the change. This is because the number of hash partitions of the purge tables must match the number of hash sub-partitions of the range partitions to be purged.
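
For example, if the partitions that are now due for purging were created with 32 sub-partitions, you could re-create the exchange tables as shown below; the capacity value is illustrative (see "om_part_maintain.setup_xchg_tables (Online or Offline)" for the full parameter list):

execute om_part_maintain.setup_xchg_tables(
        a_xchg_purge_capacity => 3,
        a_subpartition_count_override => 32);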

default_xchg_capacity

Specifies the default purge capacity if om_part_maintain.setup_xchg_tables is called with an unspecified capacity. If it is not configured, the default is 3.

xchg_retained_orders_thres

If the number of orders to be excluded from purging in a partition exceeds this threshold, the partition cannot be purged for performance reasons. The default is 10000. You can override the default in the om_parameter table.

degree_of_parallelism

Specifies the default degree of parallelism for statements that are executed in parallel. It applies to queries, DML, and DDL statements. However, the degree of parallelism for rebuilding indexes is configured by the degree_of_parallelism_rebuild_indexes and degree_of_parallelism_rebuild_xchg_indexes parameters. If this parameter is not specified, the default degree of parallelism is 4.

This parameter is also used for recreating global partitioned indexes when the RECREATE GLOBAL policy is used. However, the degree of parallelism for rebuilding index partitions is configured by the degree_of_parallelism_rebuild_indexes and degree_of_parallelism_rebuild_xchg_indexes parameters. For more information, see "purge_policy_rebuild_unusable_indexes." You can use the "om_part_maintain.set_dop (Online or Offline)" procedure to set this parameter.

For more information, see "Parallel Execution."

degree_of_parallelism_rebuild_indexes

Specifies the default degree of parallelism for rebuilding index partitions of OSM tables except exchange tables. If this parameter is not specified, the default degree of parallelism is 2. This is less than the default value for degree_of_parallelism because you cannot rebuild an entire partitioned index with a single statement. You must rebuild each partition or sub-partition, which contains only a fraction of the data. Therefore the overhead of increased parallelism may have negative impact on rebuild performance. For example, performance tests might show that an optimal value for degree_of_parallelism is 32 whereas the optimal value for degree_of_parallelism_rebuild_indexes is only 4.

You can use the "om_part_maintain.set_dop_rebuild_indexes (Online or Offline)" procedure to set this parameter.

The degree of parallelism for rebuilding indexes of exchange tables is configured with the degree_of_parallelism_rebuild_xchg_indexes parameter.

degree_of_parallelism_rebuild_xchg_indexes

Specifies the default degree of parallelism for rebuilding index partitions of exchange tables. If this parameter is not specified, the default degree of parallelism is 1. This is because you cannot rebuild an entire partitioned index with a single statement. You must rebuild each partition or sub-partition, which contains only a fraction of the data. Because the size of exchange indexes is usually small rebuilding them serially is usually faster.

You can use the "om_part_maintain.set_dop_rebuild_xchg_indexes (Online or Offline)" procedure to set this parameter.

purge_job_class

This parameter in the om_parameter table specifies the class for purge jobs. A database job must be part of exactly one class. The default value is DEFAULT_JOB_CLASS, which is also the default database job class. If your database is Oracle RAC, jobs in the DEFAULT_JOB_CLASS class can run on any node.

If you use a partition purge strategy, restricting purge jobs to a single node significantly improves performance. Specifically, if the jobs that restore retained orders run on all nodes, cluster waits could account for 40% or more of the database time. Cluster waits increase with the degree of parallelism and the number of nodes. You can eliminate cluster waits by restricting job execution on a single node as follows:

  1. Create a database service, for example, OSM_MAINTAIN, with a single preferred node and any number of available nodes. Refer to Oracle Database documentation for instructions about how to create a service using Oracle Enterprise Manager or srvctl.

  2. Create a job class, for example, OSM_MAINTAIN, and associate it with the new service:

    exec dbms_scheduler.create_job_class(
        'OSM_MAINTAIN', service => 'OSM_MAINTAIN');
    
  3. Grant EXECUTE permission on the job class to the OSM user:

    grant execute on sys.OSM_MAINTAIN to <user>;

  4. Set the purge_job_class parameter to the new job class (see the example below).

Purge jobs will be spawned on the preferred node for this database service, if it is running; otherwise on an available node.
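
The following sketch shows the last step, setting purge_job_class in the om_parameter table; it assumes the mnemonic matches the parameter name, consistent with the other om_parameter settings in this chapter:

update om_parameter
set value = 'OSM_MAINTAIN'
where mnemonic = 'purge_job_class';
commit;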

If you use a row-based order purge strategy, running purge jobs on all nodes does not negatively affect performance. In fact, you may want to distribute the purge load on all nodes. However, if you do not want order purge to compete for resources with order processing, this parameter allows you to run order purge on a different node. For example, if you have an Oracle RAC database with 3 nodes, you could use two nodes for order processing and the third node for continuous order purge.

parallel_execute_chunk_size

This is an advanced parameter that specifies the chunk size for parallel execution using jobs. For more information, see "Tuning parallel_execute_chunk_size."

partition_auto_creation

This parameter in the om_parameter table specifies whether OSM is enabled to add a partition automatically when a new order ID does not map to any partition. Valid values are Y (enabled) and N. Oracle strongly recommends that you plan to add partitions manually and disable automatic creation for all production and performance environments, especially if you use Oracle RAC. Adding partitions online causes high contention in the database, resource-busy exceptions, and transaction timeouts that could result in failed orders and instability of OSM (especially during a busy period).

purge_policy_rebuild_unusable_indexes

This parameter in the om_parameter table specifies the default policy for rebuilding unusable indexes. Possible values are:

  • om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL): This means that the preferred method to rebuild a global partitioned index that became unusable after a partition maintenance operation is to drop and recreate the entire index. This is the default, unless the global index is not partitioned, it supports a unique constraint, or OSM is offline. Recreating a global partitioned index scans the table only once and it can be done efficiently with a high degree of parallelism, so it is more efficient and much faster than rebuilding each index partition separately. The default degree of parallelism for recreating global indexes is specified by the degree_of_parallelism parameter.

  • om_part_maintain.c_rebuild_idx_rebuild (REBUILD): This means that the preferred method to rebuild global partitioned indexes is one partition at a time using ALTER INDEX REBUILD PARTITION. The default degree of parallelism for rebuilding index partitions is specified by the degree_of_parallelism_rebuild_indexes parameter.

purge_policy_purge_related_orders_independently

This parameter in the om_parameter table specifies whether orders should be purged independently of any related orders they may have. Valid values are Y (purge independently is enabled) and N (purge independently is disabled). By default, orders are purged independently. For more information, see the purge policy section in "Purging Related Orders Independently."

Note:

Setting purge_policy_purge_related_orders_independently to N may add several minutes to the time it takes to purge or drop a partition.

purge_policy_consolidate_partitions

This parameter in the om_parameter table specifies the number of partitions to consolidate into a single partition when purging. Valid values are between 1 and 10 and the default value is 3. For example, a value of 5 means the purge procedure can combine the retained orders of up to 5 successive partitions into a single partition and drop the other 4 partitions.

In order for partitions to be consolidated, the following conditions must be satisfied:

  • Partitions can be dropped (argument a_drop_empty_ptns is true)

  • Purging is done offline (argument a_online is false)

  • Purge capacity is not exhausted

purge_policy_time_to_close_wait

This purge policy can improve the performance of row-based purges and decrease purge rate fluctuations. The policy specifies a delay before beginning to purge eligible orders, so that the majority of the orders that were created on the same day are closed. The goal is to decrease I/O. For example, if 80% of orders complete in 4 days and the remaining 20% complete slowly over a much longer period, you could set purge_policy_time_to_close_wait to 4.

Example 1 (temporal affinity disabled): In this example, the retention period is 10 days and the row-based order purge runs daily. The purge procedure runs at 10:30 PM with the a_order_states argument set to closed orders only (v_closed_orders) and the a_delete_before argument set to sysdate-10 (the current date/time minus 10 days). This purges all orders that were closed 10 days ago before 10:30 PM. If 60% of orders close within the same day and 30% close on the next day before 10:30 PM then 90% of orders close within 2 days.

If temporal affinity is disabled, closed orders are purged as follows. For simplicity, ignore the 10% of orders that are closed slowly over several days and therefore they are purged at a slower rate.

  • No orders are purged on days 1 to 10 because they are all either open or in the 10 day retention period.

  • Orders purged at 10:30 PM on day 11: All orders that closed before 10:30 PM on day 1 (60% of the orders created on day 1).

  • Orders purged at 10:30 PM on day 12: All orders that closed before 10:30 PM on day 2 (60% of the orders created on day 2 and 30% of orders created on day 1).

  • Orders purged at 10:30 PM on day 13: All orders that closed before 10:30 PM on day 3 (60% of the orders created on day 3 and 30% of orders created on day 2).

  • …

  • Orders purged at 10:30 PM on day N: All orders that closed before 10:30 PM on day N-10 (60% of the orders created on day N-10 and 30% of orders created on day N-11).

Example 2 (temporal affinity enabled and 90% of the orders closed within 2 days): Continuing the previous example, if purge_policy_time_to_close_wait=1 (1 day), purging is delayed by one day: 90% of the orders created on a given day are purged at the same time as the orders that were created and closed on that day. The purge procedure runs at 10:30 PM, and a_order_states and a_delete_before are configured as in example 1. However, this configuration purges all orders that were created 11 days before 10:30 PM and were closed 10 days before 10:30 PM. The creation date criterion is based on the 1 day time-to-close wait plus the 10 day retention period.

  • No orders are purged on days 1 to 10 because they are all either open or in the 10 day retention period.

  • No orders purged on day 11: The orders closed on day 1 are out of retention but they have to wait an extra day.

  • Orders purged at 10:30 PM on day 12: All orders created before 10:30 PM on day 1 and closed before 10:30 PM on day 2 (90% of the orders created on day 1).

  • Orders purged at 10:30 PM on day 13: All orders created before 10:30 PM on day 2 and closed before 10:30 PM on day 3 (90% of the orders created on day 2).

  • …

  • Orders purged at 10:30 PM on day N: All orders created before 10:30 PM on day N-11 and closed before 10:30 PM on day N-10 (90% of the orders created on day N-11).

Example 3 (temporal affinity enabled and 90% of the orders closed within 3 days): This is similar to the previous example, except that we want a 2 day delay instead of 1 (purge_policy_time_to_close_wait=2).

  • No orders are purged on days 1 to 10 because they are all either open or in the 10 day retention period.

  • No orders purged on day 11: The orders closed on day 1 are out of retention but they have to wait.

  • No orders purged on day 12: The orders closed on day 1 and 2 are out of retention but they have to wait.

  • Orders purged at 10:30 PM on day 13: All orders created before 10:30 PM on day 1 and closed before 10:30 PM on day 3 (90% of the orders created on day 1).

  • Orders purged at 10:30 PM on day 14: All orders created before 10:30 PM on day 2 and closed before 10:30 PM on day 4 (90% of the orders created on day 2).

  • Orders purged at 10:30 PM on day N: All orders created before 10:30 PM on day N-12 and closed before 10:30 PM on day N-10 (90% of the orders created on day N-12).

purge_audit_retention_days

This parameter in the om_parameter table specifies the minimum number of days to retain purge audit data. The default is 90 days. OSM automatically purges the audit data after the data exceeds this time limit. For more information, see "Auditing and Monitoring Order Purges."

deferred_segment_creation

Oracle Database introduced deferred segment creation in 11gR2. If the deferred_segment_creation initialization parameter is set to true (the default), it forces the database to wait until the first row is inserted into a table/partition before creating segments for that table/partition and its dependent objects. In general, deferred segment creation saves disk space for unused tables/partitions. The main benefit to OSM is that it minimizes the time it takes to create a partition. However, in high volume deployments, especially on Oracle RAC, deferred segment creation can lead to serious performance issues when the database is forced to create the deferred segments of a partition in order to store new orders. This occurs when the previous partition is exhausted. The result is high "library cache lock" waits that could last for an extended period of time (frequently more than 30 minutes). In high volume deployments, it is strongly recommended that you disable deferred segment creation.

To disable deferred segment creation, log in to the database as the SYS user and execute the following statements:

alter system set deferred_segment_creation=false scope=both sid='*';
execute dbms_space_admin.materialize_deferred_segments('<schema_name>');

purge_commit_count

Specifies how frequently each purge job issues a commit command. For example, the value 10 means that 10 orders are purged before a commit is done. Unless you perform extensive performance purge tests to determine the optimal value for this parameter, Oracle recommends that you leave it at the default value. If not present, the value 10 is used.

About PL/SQL API

This section provides an overview of PL/SQL API.

DBMS Output

It is strongly recommended that you spool DBMS output to a file, especially for partition maintenance operations. The DBMS output includes valuable information for troubleshooting and performance tuning, such as elapsed execution times and error traces.

Note:

The DBMS output is sent to the client at the end of execution. Oracle Database does not provide any mechanism to flush output during the execution of a procedure.

For example, if you use SQL*Plus:

SQL> set serveroutput on
SQL> spool part_maintain.out
SQL> execute om_part_maintain.drop_partitions(4000000);
SQL> execute om_part_maintain.add_partitions(2);
SQL> spool off;

Specifying Purge Criteria

Order purge procedures allow you to specify the following purge criteria:

Table 7-3 Order Purge Criteria

Criteria         Parameters                Partition-Based Purge  Row-Based Order Purge
---------------  ------------------------  ---------------------  ---------------------
Order state      a_order_states            Yes                    Yes
Order timestamp  a_delete_before           Yes                    Yes
Order ID range   a_order_id_lt             Yes                    Yes
Order type       a_order_type_mnemonic     No                     Yes
Order source     a_order_source_mnemonic   No                     Yes
Order cartridge  a_namespace_mnemonic,     No                     Yes
                 a_version_mnemonic

Table 7-4 and Table 7-5 list the order states and pre-defined aggregate order states that can be supplied to the purge procedures. You can supply an order state, a pre-defined aggregate order state, or a custom aggregate state. Order states and pre-defined aggregate order states are defined in the om_new_purge_pkg package.

  • Example of an order state to purge cancelling orders:

    om_new_purge_pkg.v_cancelling_orders

  • Example of a pre-defined aggregate order state to purge all closed and cancelled orders:

    om_new_purge_pkg.v_closed_or_cancelled_orders

  • Example of a custom aggregate order state to purge failed and aborted orders:

    om_new_purge_pkg.v_failed_orders + om_new_purge_pkg.v_aborted_orders

Note:

While forming a custom aggregate state, ensure the following:

  • Use only order states, not the pre-defined aggregate order states.

  • Do not use the same state twice.
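
For example, a custom aggregate state can be passed directly to a purge procedure; the following sketch uses the purge_partitions signature documented later in this chapter, with an illustrative retention period and order ID bound:

execute om_part_maintain.purge_partitions(
        a_online => false,
        a_delete_before => trunc(sysdate) - 30,
        a_order_states => om_new_purge_pkg.v_failed_orders + om_new_purge_pkg.v_aborted_orders,
        a_order_id_lt => 300001);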

Table 7-4 shows the order state constants and their corresponding values.

Table 7-4 Order State Constants and Values

Constant                    Value
--------------------------  -----
v_completed_orders              1
v_aborted_orders                2
v_not_started_orders            4
v_suspended_orders              8
v_cancelled_orders             16
v_wait_for_revision_orders     32
v_failed_orders                64
v_waiting_orders              128
v_in_progress_orders          256
v_amending_orders             512
v_cancelling_orders          1024

Table 7-5 shows the pre-defined aggregate order states and their corresponding values.

Table 7-5 Predefined Aggregate Order States and Values

Constant                      Value
----------------------------  -----------------------------------------------------------
v_closed_orders               3 (v_completed_orders + v_aborted_orders)
v_closed_or_cancelled_orders  19 (v_completed_orders + v_aborted_orders + v_cancelled_orders)
v_not_running_orders          252 (v_waiting_orders + v_failed_orders + v_wait_for_revision_orders + v_cancelled_orders + v_suspended_orders + v_not_started_orders)
v_compensating_orders         1536 (v_cancelling_orders + v_amending_orders)
v_running_orders              1792 (v_compensating_orders + v_in_progress_orders)
v_open_orders                 2044 (v_running_orders + v_not_running_orders)
v_all_orders                  2047 (v_open_orders + v_closed_orders)

The a_delete_before parameter allows you to further narrow the purge criteria based on the order timestamp (for example, you might want to retain closed orders for at least 30 days). Table 7-6 shows which timestamp in the om_order_header table is compared to a_delete_before depending on a_order_states and the order status.

Table 7-6 Order Purge Based on Timestamp, Order State, and Order Status

a_order_states              Order Status                 Timestamp
--------------------------  ---------------------------  -------------------------
v_all_orders                N/A                          a_ord_creation_date
v_closed_orders             7 (complete) or 9 (aborted)  a_ord_completion_date
v_completed_orders          7                            a_ord_completion_date
v_aborted_orders            9                            a_ord_completion_date
v_suspended_orders          2                            a_ord_txn_completion_date
v_cancelled_orders          3                            a_ord_txn_completion_date
v_in_progress_orders        4                            a_ord_txn_completion_date
v_amending_orders           5                            a_ord_txn_completion_date
v_cancelling_orders         6                            a_ord_txn_completion_date
v_wait_for_revision_orders  8                            a_ord_txn_completion_date
v_failed_orders             10                           a_ord_txn_completion_date
v_not_started_orders        1 (not started)              a_ord_creation_date

Parallel Execution

The om_part_maintain API performs many operations in parallel:

  • Parallel queries and most DML and DDL statements are executed in parallel using parallel servers, which apply multiple CPU and I/O resources to the execution of a single database operation. Examples include copying orders into the backup tables and rebuilding unusable indexes.

  • Some operations are executed in parallel using the dbms_parallel_execute package, which divides work into chunks processed in parallel by database jobs. Row-based order purge and the restore stage of purge_partitions are performed this way. If your database is Oracle RAC, it is recommended that you create a database job class to restrict job execution on a single node to eliminate cluster waits. For more information, see "purge_job_class."

Procedures that support parallelism use the a_parallelism parameter, which allows you to specify the desired degree of parallelism for those statements that can be executed in parallel.

The degree of parallelism can be:

  • Greater than 1: Statements that can be executed in parallel are executed with the specified degree of parallelism.

  • 1: All statements are executed serially.

  • 0: The degree of parallelism is computed by the database and it can be 2 or greater. Statements that can be executed in parallel always run in parallel.

  • Less than 0: The degree of parallelism is computed by the database and it can be 1 or greater. If the computed degree of parallelism is 1, the statement runs serially. Indexes are always rebuilt in parallel.

If you leave a_parallelism unspecified, OSM uses the default parallelism configured by these parameters:

  • degree_of_parallelism

  • degree_of_parallelism_rebuild_indexes

  • degree_of_parallelism_rebuild_xchg_indexes
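
For example, to let the database compute the degree of parallelism for a single purge run instead of relying on the configured defaults, you could pass a_parallelism explicitly (the other argument values are illustrative):

execute om_part_maintain.purge_partitions(
        a_online => false,
        a_delete_before => trunc(sysdate) - 30,
        a_order_states => om_new_purge_pkg.v_closed_orders,
        a_parallelism => 0);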

Concurrency Restrictions

Exchange table, partition management, and purge procedures acquire an exclusive user lock to prevent concurrent execution of other management procedures, which could result in unrecoverable errors. Each OSM schema uses a different lock specifically for this package. The lock is released automatically at the end of execution. The database also releases user locks automatically when a session terminates.

Specifically, the following procedures acquire an exclusive lock to prevent concurrent execution:

  • setup_xchg_tables

  • drop_xchg_tables

  • purge_xchg_prg_tables

  • purge_partitions

  • drop_empty_partitions

  • drop_partitions

  • add_partition and add_partitions

  • equipartition

  • purge_orders

  • select_orders

  • purge_selected_orders

  • resume_purge

PL/SQL API Reference

The PL/SQL API provides procedures and functions for:

  • Setup and tuning

  • Maintenance

  • Troubleshooting and recovery

Setup and Tuning Procedures

This section provides information about setup and tuning PL/SQL API procedures.

om_part_maintain.setup_xchg_tables (Online or Offline)

This procedure creates exchange tables for purging partitions with om_part_maintain.purge_partitions.

procedure setup_xchg_tables(
        a_xchg_purge_capacity natural default null,
        a_tablespace varchar2 default null,
        a_force boolean default false,
        a_subpartition_count_override positive default null) ;

If you purge partitions, you must create exchange tables after a new installation and each time you upgrade the schema. If the exchange tables are not up to date, om_part_maintain.purge_partitions reports an error. If you only drop partitions, exchange tables are not required.

This procedure first calls drop_xchg_tables to drop all existing exchange tables and reclaim space. If a_force is false and an exchange table is not empty, it throws an exception. Upon successful completion, it sets the sys$xchg_purge_capacity and sys$xchg_purge_seq system parameters to the purge capacity and 1, respectively (in the om_parameter table).

The parameters are:

  • a_xchg_purge_capacity: Specifies the exchange capacity in the range 0-999. If it is not specified, it uses the value of the default_xchg_capacity parameter configured in the om_parameter table. If default_xchg_capacity is not set, the default capacity is 3. If the specified capacity is 0 then it creates backup tables but not purge tables. If the specified or configured capacity is illegal, it throws an exception.

  • a_tablespace: Specifies the tablespace where you want the exchange tables to be created. If you do not specify it, the database default tablespace is used.

  • a_force: Specifies whether existing exchange tables should be dropped even if they are non-empty. If this is false and an exchange table is not empty, an exception is thrown. In this case, exchange tables are left in an inconsistent state (new exchange tables are not created but existing exchange tables might be partially dropped).

  • a_subpartition_count_override: Specifies the number of hash partitions for exchange tables. Oracle Database does not allow a range-hash partition to be exchanged with the hash-partitioned table if the number of hash partitions of the range partition and the table do not match. By default, the number of hash partitions of the exchange tables for om_order_header is the same as the number of hash sub-partitions of the oldest om_order_header partition. If you need to purge partitions with a different number of hash sub-partitions (because you changed the subpartitions_number parameter), re-execute setup_xchg_tables and supply the right value for this parameter.

om_part_maintain.drop_xchg_tables (Online or Offline)

This procedure drops all exchange tables. It is executed automatically when you execute setup_xchg_tables.

procedure drop_xchg_tables(a_force boolean default false) ;

The implementation first purges exchange metadata. Specifically, it purges the om_xchg_table table, and sets sys$xchg_purge_capacity and sys$xchg_purge_seq in the om_parameter table to 0.

The DROP TABLE statements are executed with the PURGE option, so the space is released immediately (you cannot recover the exchange tables from the recycle bin).

If a_force is false and an exchange table is not empty, it throws an exception. In this case exchange tables are left in an inconsistent state.

om_part_maintain.set_dop (Online or Offline)

This procedure sets the degree_of_parallelism parameter in the om_parameter table to the specified degree of parallelism.

procedure set_dop(a_parallelism binary_integer);
om_part_maintain.set_dop_rebuild_indexes (Online or Offline)

This procedure sets the degree_of_parallelism_rebuild_indexes parameter in the om_parameter table to the specified degree of parallelism.

procedure set_dop_rebuild_indexes(a_parallelism binary_integer);
om_part_maintain.set_dop_rebuild_xchg_indexes (Online or Offline)

This procedure sets the degree_of_parallelism_rebuild_xchg_indexes parameter in the om_parameter table to the specified degree of parallelism.

procedure set_dop_rebuild_xchg_indexes(a_parallelism binary_integer);
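
For example, the following calls set the three parallelism defaults; the values are illustrative and should be based on your own performance tests:

execute om_part_maintain.set_dop(16);
execute om_part_maintain.set_dop_rebuild_indexes(4);
execute om_part_maintain.set_dop_rebuild_xchg_indexes(1);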

Maintenance Procedures and Functions

This section provides information about maintenance procedures and functions.

om_part_maintain.add_partition (Offline Only)

This procedure adds a single partition.

procedure add_partition(a_tablespace varchar2 default null,
                         a_realm_mnemonic varchar2 default null,
                         a_force boolean default false);

The implementation is equivalent to this call:

add_partitions(1, a_tablespace);
om_part_maintain.add_partitions (Offline Only)

This procedure adds one or more partitions. At the end, it also rebuilds any unusable indexes as a precaution (normally indexes should remain usable).

procedure add_partitions(
        a_count positiven,
        a_tablespace varchar2 default null,
        a_realm_mnemonic varchar2 default null,
        a_force boolean default false);

The upper bound of each new partition is the greatest partition upper bound plus the value of the range_partition_size parameter. If found, the range_partition_size for the realm is used (found in table om_partitioning_realm_param). If not found, the range_partition_size in table om_parameter is used. The upper bound is used in the partition name. For example, if the new partition's upper bound is 100,000, the partition name is P_000000000000100000 (always formatted to 18 characters).

This procedure inserts a new row into table om_order_id_block to represent the range of order_seq_ids for the new partition. In addition to the order ID range, the order ID block contains the status (for example, AVAILABLE for the newly added partition), the dbinstance (-1 until the block changes to ACTIVE), and the partitioning realm associated with the block.

You must run this procedure offline. Running this procedure online causes high contention in the database and transaction timeouts that could result in failed orders and instability of OSM.

The parameters are:

  • a_count: The number of partitions to add.

  • a_tablespace: The tablespace for the new partitions. This procedure modifies the default tablespace attribute of partitioned tables with the specified tablespace before adding partitions. If you do not specify the tablespace or the input argument is null, each partition is created on the default tablespace of the partitioned table (for example, on the same tablespace as the most recently added partition).

  • a_realm_mnemonic: The partitioning realm mnemonic (case-insensitive) that the new partition belongs to. If the value is null, the partition is assigned to the default_order realm. If the realm is disabled, an error occurs; you can ignore this error by entering true in the a_force argument.

  • a_force: If the value of this parameter is true, partitions can be added for disabled realms. This is useful because partitioning realms are often created in a disabled state, therefore partitions can be added for the realm before enabling the partitioning realm.
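
For example, the following call adds two partitions on a specific tablespace for the default_order realm; the tablespace name is hypothetical:

execute om_part_maintain.add_partitions(
        a_count => 2,
        a_tablespace => 'OSM_DATA',
        a_realm_mnemonic => 'default_order');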

Dropping newly added partitions: If you want to drop several new partitions, perhaps because you want to re-create them (for example, with a different number of hash sub-partitions and/or on a different tablespace) or because you inadvertently added a large number of partitions, you can drop those partitions that are still empty using drop_empty_partitions.

om_part_maintain.drop_partitions (Offline only)
procedure drop_partitions(
        a_order_id_lt number,
        a_order_id_ge integer default null,
        a_parallelism_rebuild_indexes binary_integer default null) ;

This procedure drops partitions that satisfy the following conditions:

  • The order IDs mapped to this partition are within the specified range.

  • All orders are either closed (complete or aborted) or canceled.

  • If the om_parameter purge_policy_purge_related_orders_independently is set to 'N' and the partition contains orders with related open orders, the partition cannot be dropped. For more information, see the purge policy section in "Purging Related Orders Independently."

  • The partition belongs to a partitioning realm with a purge strategy of PARTITION-BASED or ANY. For more information, see the purge strategy section in "Partitioning Realms."

More precisely:

  • It disables all foreign keys that reference the partitioned tables.

  • It drops partitions that satisfy the aforementioned conditions. However, if all partitions satisfy those conditions, the partition with the greatest upper bound is not dropped. Oracle Database requires that a partitioned table have at least one partition. For example, if you have only one partition, you cannot use drop_partitions to reclaim space. In this case, use om_part_maintain.purge_partitions.

  • It re-enables the disabled foreign keys (with NOVALIDATE, for performance reasons).

  • It rebuilds unusable indexes and index partitions in parallel.

  • It deletes any remaining order data that references orders in the partitions dropped.

Oracle recommends that you back up the OSM schema prior to executing this procedure.

This procedure must be executed offline.

The parameters are:

  • a_order_id_lt: Specifies a non-inclusive upper bound for the range of order IDs mapped to the partitions to be dropped. If it is not null then only partitions with an upper bound less than or equal to this value are considered. (The upper bound of a partition is non-inclusive, that is, the order IDs mapped to that partition are strictly less than its upper bound.)

  • a_order_id_ge: Specifies an inclusive lower bound for the range of order IDs mapped to the partitions to be dropped. If it is not null then only partitions with a lower bound greater than or equal to this value are considered. (The lower bound of a partition is the upper bound of the previous partition, if any; otherwise 1.)

  • a_parallelism_rebuild_indexes: Specifies the degree of parallelism for rebuilding unusable indexes. It is recommended that you leave it null. The implementation will choose the optimal method for each unusable index depending on the index type and configuration parameters. For more information, see "purge_policy_rebuild_unusable_indexes."

om_part_maintain.drop_empty_partitions (Online or Offline)

This procedure drops empty partitions with mapped order IDs within the specified range.

procedure drop_empty_partitions(
        a_order_id_lt integer default null,
        a_order_id_ge integer default null);

This procedure is similar to drop_partitions except that:

  • It ignores non-empty partitions. It exits when it encounters a partition with an upper bound greater than a_order_id_lt.

  • It does not delete data from non-partitioned tables. It assumes it is already deleted.

  • It does not disable foreign keys.

  • It can be executed online. However, in this case you might experience high contention due to exclusive locks acquired by Oracle Database. Oracle recommends that you execute this procedure either offline or off-peak.

If all partitions are empty and within the specified range, the partition with the greatest upper bound is not dropped. This is because Oracle Database requires that each partitioned table have at least one partition.

The parameters are:

  • a_order_id_lt: Specifies a non-inclusive upper bound for the range of order IDs mapped to the partitions to be dropped. If it is not null then only partitions with an upper bound less than or equal to this value can be dropped. (The upper bound of a partition is non-inclusive, that is, the order IDs mapped to that partition are strictly less than its upper bound.)

  • a_order_id_ge: Specifies an inclusive lower bound for the range of order IDs mapped to the partitions to be dropped. If it is not null then only partitions with a lower bound greater than or equal to this value can be dropped. (The lower bound of a partition is the upper bound of the previous partition, if any; otherwise 1.)

Exceptions:

  • ORA-20166: There is another in-progress maintenance operation.

  • ORA-20170: Failed to suspend database jobs.

  • ORA-20171: OSM is running.

Example (dropping empty partitions after a purge): Assume that purge_partitions left some partitions empty as shown in the table below.

select partition_name, high_value 
from user_tab_partitions
where table_name = 'OM_ORDER_HEADER'
order by partition_name;

Table 7-7 Example: Dropping Empty Partitions

PARTITION_NAME          HIGH_VALUE   Empty?
P_000000000000100001    100001       Yes
P_000000000000200001    200001       --
P_000000000000300001    300001       Yes
P_000000000000400001    400001       Yes
P_000000000000500001    500001       --

The following statement drops P_000000000000100001, ignores P_000000000000200001, drops P_000000000000300001, and stops at P_000000000000400001 because the upper bound of this partition is greater than 300001:

execute om_part_maintain.drop_empty_partitions(300001);

Example (dropping newly added partitions): Suppose you want to drop several new partitions, perhaps because you want to re-create them (for example, with a different number of hash sub-partitions and/or on a different tablespace) or because you inadvertently added a large number of partitions. Specifically, assume that you want to drop partitions P_000000000000600001, P_000000000000700001 and P_000000000000800001 shown in the table below.

Table 7-8 Example: Dropping Newly Added Partitions

PARTITION_NAME          HIGH_VALUE   Mapped Order IDs   Empty?
P_000000000000200001    200001       1-200000           --
P_000000000000400001    400001       200001-400000      Yes
P_000000000000500001    500001       400001-500000      --
P_000000000000600001    600001       500001-600000      Yes
P_000000000000700001    700001       600001-700000      Yes
P_000000000000800001    800001       700001-800000      Yes

Execute the following statement:

execute om_part_maintain.drop_empty_partitions(
    a_order_id_ge => 500001);
om_part_maintain.purge_partitions (Online or Offline)
procedure purge_partitions(
        a_online            boolean,
        a_delete_before     date,
        a_order_states      integer default om_new_purge_pkg.v_closed_orders,
        a_order_id_lt       integer default null,
        a_order_id_ge       integer default null,
        a_stop_date         date default om_const_pkg.v_no_date,
        a_drop_empty_ptns   boolean default true,
        a_purge_xchg_prg_tables boolean default false,
        a_parallelism       binary_integer default null) ;

This procedure purges the partitions that satisfy these conditions:

  • If executed online:

    • All of the orders are either closed (complete or aborted) or canceled.

    • All of the contained orders satisfy the purge criteria specified by the a_delete_before and a_order_states arguments.

    • All of the contained order IDs are within the purge range specified by a_order_id_lt and a_order_id_ge. The range of mapped order IDs does not need to be a subset of the specified range. What matters is the range of actual order IDs.

  • If executed offline:

    • Some or all of the contained orders satisfy the purge criteria specified by the a_delete_before and a_order_states arguments.

    • Some or all of the contained order IDs are within the purge range specified by a_order_id_lt and a_order_id_ge. The range of mapped order IDs does not need to be a subset of the specified range. What matters is the range of actual order IDs.

    • The number of orders to be excluded from purging (that is, orders that do not satisfy the previous two conditions) does not exceed the threshold specified by the xchg_retained_orders_thres parameter.

    • The partition belongs to a partitioning realm with a purge strategy of PARTITION-BASED or ANY. For more information, see the purge strategy section in "Partitioning Realms."

Oracle recommends that you back up the OSM schema prior to executing this procedure and that you gather statistics after you finish purging.

If you execute this procedure online, you might experience high contention due to exclusive locks acquired by Oracle Database. Oracle recommends that you execute this procedure either offline or off-peak.

If you execute this procedure offline, you can purge a partition that contains orders that do not satisfy the purge criteria as long as the number of retained orders in that partition does not exceed the threshold specified by the xchg_retained_orders_thres parameter. In this case, the retained orders are copied to the backup tables prior to the exchange operation and they are restored (copied again) into the partitioned tables after the exchange operation. Because these are relatively expensive operations, the threshold ensures that they will complete in a timely fashion. Both backup and restore are executed in parallel as specified by the a_parallelism argument.

If this procedure is executed offline, it disables foreign keys. This is necessary when purging partitions with retained orders. Disabling foreign keys is unsafe while the OSM application is online because it can result in data integrity violations. Therefore, disabling foreign keys requires that OSM remain offline until they are re-enabled.

Partitions are purged one by one end-to-end, that is, from all partitioned tables. For example, if you want to purge partitions P_000000000001000001, P_000000000002000001, and P_000000000003000001 then P_000000000001000001 will be purged first from all partitioned tables, then P_000000000002000001 and so on.

This procedure can consolidate retained orders from multiple partitions into a single partition, to maximize reclaimed space, reduce the number of partitions, and minimize downtime. This is done by purging successive partitions in iterations. The maximum number of partitions consolidated in each iteration is limited by the parameter purge_policy_consolidate_partitions. More precisely, this procedure purges successive partitions that qualify for purging as follows:

  1. Copies the orders that do not satisfy the purge criteria from those partitions into the backup tables. This is a relatively fast operation because it is performed in parallel and the backup tables have few indexes and constraints.

  2. Purges each partition entirely by exchanging it with purge tables. This is a fast operation because EXCHANGE PARTITION only updates metadata in the data dictionary.

  3. Drops N-1 of those partitions. This is a fast operation because the partitions are now empty.

  4. Restores the retained orders from the backup tables into the Nth partition with their order IDs unchanged. This is also performed in parallel using the dbms_parallel_execute package. However, this step is slower than backup because the partitioned tables have more indexes and constraints.

The EXCHANGE PARTITION operation is performed with the following options:

  • INCLUDING INDEXES: This means that local index partitions or subpartitions are also exchanged. This ensures that local indexes remain usable during the exchange, for example, they do not have to be rebuilt.

  • WITHOUT VALIDATION: By default, the exchange operation is performed WITH VALIDATION, which means that Oracle Database returns an error if any rows in the exchange table do not map into partitions or subpartitions being exchanged. This check is unnecessary when the exchange table is empty.

  • If this procedure is executed online and the table has global indexes that enforce unique constraints then the exchange is performed with the following options:

    • UPDATE GLOBAL INDEXES: This means that global indexes are updated during the exchange and therefore remain usable. Otherwise, unusable global indexes that enforce unique constraints would result in ORA-01502 exceptions. (By default, unusable global indexes that do not enforce unique constraints are ignored and therefore are not an issue – this is controlled by the SKIP_UNUSABLE_INDEXES initialization parameter. Therefore, if a table has no such global indexes or if this procedure is executed offline, rebuilding unusable global indexes is deferred for performance reasons.)

    • PARALLEL: This means that global indexes are updated in parallel for performance reasons. It does not alter the global indexes to parallel.

After each partition is purged end-to-end, the sys$xchg_purge_seq counter in the om_parameter table increments to the next logical exchange partition. When the logical exchange partition exceeds the purge capacity, this counter cycles to 1.
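If you want to check the current value of this counter, you can use a query along the following lines. This sketch assumes that the om_parameter table exposes each parameter name and value in columns named mnemonic and value; adjust the column names if your schema differs:

select value
from om_parameter
where mnemonic = 'sys$xchg_purge_seq';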

The procedure exits when:

  • Time expires.

  • It encounters a partition with a lower bound greater than or equal to the upper bound of the specified range.

  • The number of hash sub-partitions of the next om_order_header partition is different than the number of partitions of the corresponding exchange table. The number of hash partitions of each exchange table is the same as the number of hash sub-partitions of the oldest partition of the corresponding range-hash partitioned table. If newer partitions have a different number of hash sub-partitions (because you changed the subpartitions_number parameter) then you will not be able to purge the newer partitions until you drop the older partitions and re-execute setup_xchg_tables.

All disabled constraints are re-enabled at the end (with NOVALIDATE for performance reasons).

The parameters are:

  • a_online: Specifies whether this procedure is being executed online. If it is true, it ignores partitions with open orders and partitions with orders that do not satisfy the purge criteria (only entire partitions can be purged online).

  • a_delete_before: Only orders with a timestamp older than this date and time are eligible for purging. For more information, see "Specifying Purge Criteria."

  • a_order_states: Only orders with one of these states are eligible for purging. By default, only closed orders are eligible for purging. For more information, see "Specifying Purge Criteria."

  • a_order_id_lt and a_order_id_ge: If a_order_id_ge is not null then only orders with order ID greater than or equal to this value are eligible for purging. If a_order_id_lt is not null then only orders with order ID less than this value are eligible for purging. If a_order_id_lt is null, it will be defaulted to the non-inclusive upper bound of the latest used partition. (This ensures that new empty partitions beyond the currently active partition are not dropped accidentally.) If a partition contains order IDs both in this range and outside this range then the partition cannot be purged unless the out-of-range orders can be retained (for example, the purge is done offline and the total number of retained orders in that partition does not exceed the threshold specified by the xchg_retained_orders_thres parameter).

  • a_stop_date: If it is not null then the procedure exits when the date and time are reached. This is done on a best-effort basis, since a premature exit could leave data in an inconsistent state. The time is checked periodically. The elapsed time between checks could be as high as the time it takes to purge as many partitions as the spare purge capacity. Only non-critical deferrable operations are skipped when the time expires, such as purging exchange tables.

  • a_drop_empty_ptns: Specifies whether empty partitions should be dropped. The default is true, since dropping empty partitions is a fast operation. In this case, this procedure can purge as many successive partitions at a time as the spare capacity, which reduces the time it takes to restore orders and therefore downtime. If this argument is false, each partition to be purged must go through the backup-purge-restore process separately.

  • a_purge_xchg_prg_tables: Specifies whether exchange tables should be purged as well. If it is true then it purges exchange tables, as long as time has not expired and at least one partition was purged. This is a relatively slow operation, so the default is false. In this case, the number of partitions that can be purged by a single execution of this procedure is limited by the spare purge capacity.

  • a_parallelism: Specifies the degree of parallelism for backup and restore operations. If it is null, it uses the parallelism configured by the degree_of_parallelism parameter. For more information, see "Parallel Execution."

Exceptions: This procedure performs a number of checks to ensure it can proceed with purge. If a check fails, it throws one of the following exceptions:

  • ORA-20142: The schema is not equi-partitioned. Execute the equi-partition procedure.

  • ORA-20160: The schema is not partitioned. You can only use this procedure if your schema is partitioned.

  • ORA-20162: There are no exchange tables. Execute the setup_xchg_tables procedure.

  • ORA-20163: The exchange tables are not up-to-date. This means that the schema has been upgraded after the exchange tables were created. Re-execute the setup_xchg_tables procedure.

  • ORA-20166: There is another in-progress maintenance operation.

  • ORA-20170: Failed to suspend database jobs.

  • ORA-20171: The procedure was executed with a_online=false and it detected that OSM is running.

Example (purge all orders that were closed at least 180 days ago): Suppose you want to purge all complete or aborted orders that were closed at least 180 days ago. Assuming that most partitions contain some orders that do not satisfy these criteria, you decided to execute purge_partitions offline. You also want to defer dropping empty partitions and purging the exchange tables until the system is restarted. This is how you can do it:

begin
    om_part_maintain.purge_partitions(
        a_online => false,
        a_delete_before => trunc(sysdate) - 180,
        a_order_states => om_new_purge_pkg.v_closed_orders,
        a_drop_empty_ptns => false,
        a_purge_xchg_prg_tables => false,
        a_parallelism => 4) ;
end;

Example (ignore old partitions that contain only a few orders): This example adds to the scenario of the previous example. Assume that old partitions with non-inclusive upper bound up to 5600000 contain a small number of orders that can be purged but cannot be purged entirely (for example, because they still contain open orders). Purging those partitions would be unproductive, since it could exhaust the exchange capacity. Therefore you decided to use the a_order_id_ge parameter to ignore them for now:

begin
    om_part_maintain.purge_partitions(
        a_online => false,
        a_delete_before => trunc(sysdate) - 180,
        a_order_states => om_new_purge_pkg.v_closed_orders,
        a_order_id_ge => 5600000,
        a_drop_empty_ptns => false,
        a_purge_xchg_prg_tables => false,
        a_parallelism => 4) ;
end;
om_part_maintain.purge_entire_partition (Online or Offline)
procedure purge_entire_partition(
        a_online            boolean,
        a_partition_name    varchar2,
        a_purge_xchg_prg_tables boolean default false,
        a_purge_orphan_data     boolean default true) ;

This procedure purges the given partition entirely (all orders). The partition is not dropped. The following two calls are equivalent, assuming that the partition size is 100000:

execute om_part_maintain.purge_entire_partition(
        a_online => true,
        a_partition_name => 'P_000000000000400001');
 
execute om_part_maintain.purge_partitions(
        a_online => true,
        a_delete_before => om_const_pkg.v_no_date,
        a_order_states => om_new_purge_pkg.v_all_orders,
        a_order_id_lt => 400001,
        a_order_id_ge => 300001,
        a_stop_date => null,        
        a_drop_empty_ptns => false);

Parameters:

  • a_online: Specifies whether this procedure is being executed online. If this parameter is true, it ignores partitions with open orders and partitions with orders that do not satisfy the purge criteria (only entire partitions can be purged online).

  • a_partition_name: The name of the partition to purge.

  • a_purge_xchg_prg_tables: Specifies whether exchange tables should be purged as well. If this parameter is true, it purges exchange tables, as long as time has not expired and at least one partition was purged. This is a relatively slow operation, so the default is false. In this case, the number of partitions that can be purged by a single execution of this procedure is limited by the spare purge capacity.

  • a_purge_orphan_data: Specifies whether you want orphan data to be purged after the partition is purged. The default is true. You may want to defer purging of orphan data if you used om_part_maintain.backup_selected_ords to manually back up selected orders, which you plan to restore with om_part_maintain.restore_orders.

om_part_maintain.estimate_ptn_purged_space (Online or Offline)
function estimate_ptn_purged_space(
        a_delete_before date,
        a_order_states  integer default om_new_purge_pkg.v_closed_orders,
        a_order_id_lt   integer default null,
        a_order_id_ge   integer default null)
return number;

This function estimates the amount of disk space (in bytes) that would be reclaimed by purging or dropping partitions.

This function simulates the execution of om_part_maintain.purge_partitions; see "om_part_maintain.purge_partitions (Online or Offline)" for a description of the parameters, exit conditions, and possible exceptions.

Example (estimate the space reclaimed by purging all orders that were closed at least 180 days ago):

begin
    dbms_output.put_line('Space Reclaimed (bytes): ' ||
        om_part_maintain.estimate_ptn_purged_space(
            a_delete_before => trunc(sysdate) - 180,
            a_order_states => om_new_purge_pkg.v_closed_orders));
end;

Example (estimate the space reclaimed by dropping partitions): The following example shows how to estimate the space reclaimed by dropping all partitions with an upper bound less than or equal to 300001. Note that the a_delete_before and a_order_states parameters have been set to values that include all orders in the partition.

begin
    dbms_output.put_line('Space Reclaimed (bytes): ' ||
        om_part_maintain.estimate_ptn_purged_space(
            a_delete_before => om_const_pkg.v_no_date,
            a_order_states => om_new_purge_pkg.v_all_orders,
            a_order_id_lt => 300001));
end;
om_part_maintain.purge_xchg_bck_tables (Online or Offline)
procedure purge_xchg_bck_tables(a_drop_storage boolean default false);

This procedure purges all exchange backup tables. Normally you do not need to execute this procedure because backup tables are purged automatically when all order data is restored. The implementation executes TRUNCATE TABLE, so purged data cannot be restored.

If a_drop_storage is true, backup tables are truncated with the DROP STORAGE option to reclaim space. Otherwise, they are truncated with the REUSE STORAGE option to retain the space from the deleted rows. If you never reclaim the space, its size is limited by the largest volume of order data copied into the backup tables. By default, space is reused for performance reasons and in order to minimize downtime: First, inserts are more efficient if space is already allocated. Second, purging the backup tables is faster if space is reused.
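For example, the following call is a minimal sketch that truncates the backup tables with the DROP STORAGE option so that the allocated space is released:

execute om_part_maintain.purge_xchg_bck_tables(a_drop_storage => true);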

om_part_maintain.purge_xchg_prg_tables (Online or Offline)
procedure purge_xchg_prg_tables;

This procedure purges all exchange purge tables to reclaim space. It does not purge backup tables. The implementation executes TRUNCATE TABLE … DROP STORAGE, so purged data cannot be restored.

om_new_purge_pkg.delete_order (Online or Offline)
procedure delete_order(a_order_seq_id integer);

This procedure unconditionally deletes the given order from the database. Note that this procedure does not issue commit, in contrast to most purge procedures. It is the responsibility of the user to issue commit or rollback.

This operation is audited.
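For example, the following sketch deletes a single order and commits the transaction; the order ID is purely illustrative:

begin
    om_new_purge_pkg.delete_order(a_order_seq_id => 1234567);  -- hypothetical order ID
    commit;  -- delete_order does not commit; the caller must commit or roll back
end;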

om_new_purge_pkg.purge_orders (Online or Offline)
procedure purge_orders(
    a_status                     out integer,
    a_delete_before              date,
    a_order_states               integer,
    a_stop_date                  date     default om_const_pkg.v_no_date,
    a_order_id_lt                integer  default null,
    a_order_id_ge                integer  default null,
    a_order_source_mnemonic      varchar2 default null,
    a_order_type_mnemonic        varchar2 default null,
    a_namespace_mnemonic         varchar2 default null,
    a_version_mnemonic           varchar2 default null,
    a_cartridge_id               integer  default null,
    a_commit_count               integer  default null,
    a_parallelism binary_integer default null);

This procedure purges orders that satisfy the given criteria. It is the main implementation of row-based order purge. Orders are purged by database jobs: the procedure finds the order IDs that satisfy the purge criteria, inserts them into the OM_ORDER_ID_FOR_PURGE staging table, splits them into chunks, and distributes the chunks to database jobs for parallel purge. Each chunk is processed by deleting one order at a time with periodic commits. This approach ensures that a) an order is either purged entirely or not at all and b) a purge may succeed partially even in the event of errors.

This operation is audited.

Executing this procedure is equivalent to executing select_orders and purge_selected_orders. However, purge_orders always starts a new purge by clearing the OM_ORDER_ID_FOR_PURGE staging table, whereas select_orders only adds order IDs to the staging table.

Table 7-9 describes the possible outcomes. The a_status output parameter is set accordingly.

Table 7-9 Possible Outcomes

  • Outcome: No orders satisfy the purge criteria.
    a_status: om_new_purge_pkg.v_status_nopurge
    OM_ORDER_ID_FOR_PURGE: Empty

  • Outcome: The purge finished successfully and all orders that satisfied the purge criteria were purged.
    a_status: om_new_purge_pkg.v_status_finshed
    OM_ORDER_ID_FOR_PURGE: Cleared

  • Outcome: The purge finished with errors (some orders were not purged). This procedure throws an exception. Note that this procedure retries purging of failed chunks a few times. An exception means that the retries also failed (or only partially succeeded). You can run resume_purge to retry.
    a_status: N/A
    OM_ORDER_ID_FOR_PURGE: Contains all order Ids that satisfy the purge criteria.

  • Outcome: The purge finished prematurely because the expiration date specified by a_stop_date was reached.
    a_status: om_new_purge_pkg.v_status_expired
    OM_ORDER_ID_FOR_PURGE: Cleared

  • Outcome: The purge was stopped by the user (using stop_purge), and it can be resumed using resume_purge.
    a_status: om_new_purge_pkg.v_status_stopped
    OM_ORDER_ID_FOR_PURGE: Contains all order Ids that satisfy the purge criteria.

Parameters:

  • a_status: Returns the purge status.

  • a_delete_before: Only orders with a timestamp older than this date and time are eligible for purging. See "Specifying Purge Criteria" for more information.

  • a_order_states: Only orders with one of these states are eligible for purging. See "Specifying Purge Criteria" for more information.

  • a_stop_date: The end of the purge window. If it is not null then the procedure exits when this date and time are reached. The time is checked after each order delete.

  • a_order_id_lt and a_order_id_ge: If a_order_id_ge is not null, only orders with order ID greater than or equal to this value are eligible for purging. If a_order_id_lt is not null then only orders with order ID less than this value are eligible for purging. If either of these is set to om_new_purge_pkg.v_ptn_scope_latest, the purge scope is restricted to the latest partition(s) where new orders are created.

  • a_order_source_mnemonic: If it is not null, only orders with this order source are eligible for purging. Wildcards are not supported.

  • a_order_type_mnemonic: If it is not null, only orders with this order type are eligible for purging. Wildcards are not supported.

  • a_namespace_mnemonic: If it is not null, only orders in this cartridge namespace are eligible for purging. Wildcards are not supported.

  • a_version_mnemonic: This is used in combination with a_namespace_mnemonic. If it is not null, only orders in the specified cartridge namespace and version are eligible for purging. Wildcards are not supported.

  • a_cartridge_id: If it is not null, only orders in the specified cartridge are eligible for purging.

  • a_parallelism: Specifies the degree of parallelism (the number of database jobs performing the purge). If it is null, it uses the parallelism configured by the degree_of_parallelism parameter. If it is 1, the purge is executed serially (with a single database job). See "Parallel Execution" for more information.

  • a_commit_count: Specifies how often each job should issue a commit command. Unless you have performed extensive purge performance tests to determine the optimal value for this parameter, it is recommended that you leave it null. If the value is null, the job uses the commit count that is configured in the purge_commit_count parameter.

Example: The following execution purges orders with a time limit of 15 minutes and a parallelism of 8. The purge criteria specify all orders that were closed 30 days ago or more.

declare
    v_status integer;
begin
    om_new_purge_pkg.purge_orders( 
    a_status=>v_status, 
    a_stop_date => sysdate + 15/24/60, -- 15m
    a_delete_before=>trunc(sysdate) - 30, 
    a_order_states=> om_new_purge_pkg.v_closed_orders, 
    a_parallelism => 8);
end;
om_new_purge_pkg.schedule_order_purge_job (Online or Offline)
procedure schedule_order_purge_job(
    a_start_date                 date,
    a_delete_before              date,
    a_order_states               integer,
    a_stop_date                  date     default om_const_pkg.v_no_date,
    a_order_id_lt                integer  default null,
    a_order_id_ge                integer  default null,
    a_order_source_mnemonic      varchar2 default null,
    a_order_type_mnemonic        varchar2 default null,
    a_namespace_mnemonic         varchar2 default null,
    a_version_mnemonic           varchar2 default null,
    a_cartridge_id               integer  default null,
    a_commit_count               integer  default null,
    a_parallelism binary_integer default null);

This procedure schedules an execution of purge_orders using the dbms_job package. The a_start_date parameter specifies the start date and time.
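For example, the following sketch schedules a purge of orders that have been closed for at least 30 days to start at 2:00 a.m. the next day and to stop by 6:00 a.m.; the start time and purge window are illustrative:

begin
    om_new_purge_pkg.schedule_order_purge_job(
        a_start_date    => trunc(sysdate) + 1 + 2/24,   -- 2:00 a.m. tomorrow
        a_delete_before => trunc(sysdate) - 30,          -- closed at least 30 days ago
        a_order_states  => om_new_purge_pkg.v_closed_orders,
        a_stop_date     => trunc(sysdate) + 1 + 6/24);   -- end of purge window (6:00 a.m.)
end;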

om_new_purge_pkg.select_orders (Online or Offline)
procedure select_orders(
    a_selected_count             out integer,
    a_delete_before              date,
    a_order_states               integer,
    a_order_id_lt                integer  default null,
    a_order_id_ge                integer  default null,
    a_order_source_mnemonic      varchar2 default null,
    a_order_type_mnemonic        varchar2 default null,
    a_namespace_mnemonic         varchar2 default null,
    a_version_mnemonic           varchar2 default null,
    a_cartridge_id               integer  default null);

This procedure inserts into the staging table OM_ORDER_ID_FOR_PURGE the order Ids that satisfy the given purge criteria. This is useful when you cannot identify all orders to be purged in a single execution of purge_orders, and you do not want to execute multiple purges. In this case:

  • You can populate OM_ORDER_ID_FOR_PURGE piecemeal by executing select_orders several times with different purge criteria. You can also insert or delete order Ids from this table manually.

  • After you finish populating this table, execute purge_selected_orders.

Parameters:

  • a_selected_count: Returns the number of order Ids inserted into OM_ORDER_ID_FOR_PURGE by this call. This count ignores order Ids that were already inserted into this table, even if they match the given purge criteria.

  • The rest of the parameters specify the purge criteria and they are the same as in purge_orders.

Example: The following example selects for purge all orders in cartridge namespace X that were closed at least 7 days ago and reside on the latest partition(s) (where new orders are created).

declare
    v_status integer;
    v_selected_count  integer;
begin
    om_new_purge_pkg.select_orders(
    a_selected_count=>v_selected_count, 
    a_delete_before=>trunc(sysdate) - 7, 
    a_order_states=>om_new_purge_pkg.v_closed_orders,
    a_order_id_ge=>om_new_purge_pkg.v_ptn_scope_latest,
    a_namespace_mnemonic => 'X');
end;
om_new_purge_pkg.purge_selected_orders (Online or Offline)
procedure purge_selected_orders(
    a_status       out integer,
    a_purged_count out integer,
    a_stop_date    date default om_const_pkg.v_no_date,
    a_commit_count number default null,
    a_parallelism  binary_integer default null);

This procedure purges the orders specified in the staging table OM_ORDER_ID_FOR_PURGE. It works like purge_orders except that the purge criteria are not supplied (the orders are already selected).

You can also use this procedure to restart a stopped purge with different parameters (for example, if you want to change the time when the purge window ends or the degree of parallelism).

See "om_new_purge_pkg.purge_orders (Online or Offline)" for possible outcomes.

This operation is audited.

Parameters:

  • a_status: Returns the purge status.

  • a_purged_count: The number of orders purged.

  • a_stop_date: The end of the purge window. If it is not null then the procedure exits when this date and time are reached. The time is checked after each order delete.

  • a_commit_count: Specifies how often each job should issue a commit command. Unless you have performed extensive purge performance tests to determine the optimal value, Oracle recommends that you leave it null. If it is null, it uses the commit count value that is configured in the purge_commit_count parameter.

  • a_parallelism: Specifies the degree of parallelism (the number of database jobs performing the purge). If it is null, it uses the parallelism configured by the degree_of_parallelism parameter. If it is 1, the purge is executed serially (with a single database job). See "Parallel Execution" for more information.
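For example, after OM_ORDER_ID_FOR_PURGE has been populated with select_orders, the following sketch purges the selected orders within a two-hour window; the window and parallelism are illustrative:

declare
    v_status       integer;
    v_purged_count integer;
begin
    om_new_purge_pkg.purge_selected_orders(
        a_status       => v_status,
        a_purged_count => v_purged_count,
        a_stop_date    => sysdate + 2/24,   -- two-hour purge window
        a_parallelism  => 4);
end;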

om_new_purge_pkg.stop_purge (Online or Offline)
procedure stop_purge;

This procedure stops the current order purge if one is running. This procedure call returns when the purge stops, which normally takes a few seconds.

Later you can resume the same purge by running resume_purge, restart the purge with different parameters by running purge_selected_orders (for example, if you want to change the time when the purge window ends or the degree of parallelism), or start a new purge.

om_new_purge_pkg.resume_purge (Online or Offline)
procedure resume_purge(
    a_stop_date date default null,
    a_commit_count number default null,
    a_parallelism binary_integer default null);

This procedure resumes a stopped order purge or an order purge that finished with errors.

If you do not supply any arguments or if the given arguments are the same as those of the initial purge operation, this procedure resumes processing of existing chunks that are either unassigned or finished processing with errors, using the same degree of parallelism. You can also supply arguments to change certain parameters of the purge operation. For example:

  • You can resume a stopped purge operation with the a_stop_date parameter if you want to change the end of the purge window.

  • You can resume a stopped purge operation with the a_parallelism parameter if you want to lower the degree of parallelism of an online purge operation (for example, due to an unexpected increase in order volume).

Parameters:

  • a_stop_date: This parameter specifies the end of the purge window. If it is null, the initial value supplied to the purge operation remains in effect.

  • a_commit_count: This parameter specifies how often each job should commit. If it is null, the initial value supplied to the purge operation remains in effect.

  • a_parallelism: This parameter specifies the degree of parallelism (the number of database jobs performing the purge). If it is null, the initial value supplied to the purge operation remains in effect.

Note:

Do not use resume_purge to expand the scope of a purge. resume_purge does not regenerate order ID chunks and any order Ids that fall outside the range of existing unassigned chunks will not be purged. If you want to expand the scope of a purge, add order Ids to OM_ORDER_ID_FOR_PURGE and run purge_selected_orders instead.
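For example, the following sketch, with illustrative values, resumes a stopped purge with a later end of the purge window and a lower degree of parallelism:

begin
    om_new_purge_pkg.resume_purge(
        a_stop_date   => sysdate + 1/24,   -- extend the purge window by one hour
        a_parallelism => 2);               -- reduce the load on the database
end;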

Advanced Procedures

This section provides information about advanced procedures.

om_part_maintain.backup_selected_ords (Offline)
procedure backup_selected_ords(
        a_parallelism binary_integer default null);

The purge_partitions procedure inspects each partition in the given range and inserts into the OM_ORDER_ID_FOR_BACKUP table the order IDs of the orders that do not satisfy the purge criteria. The specified orders are copied into the backup tables and they are restored after the partitions are purged. The backup_selected_ords and restore_orders procedures allow you to do the same for arbitrary order IDs, for example, if you want to retain orders for a particular cartridge.

Note:

This procedure does not modify data in partitioned tables.

Example: The following example shows how to purge partition P_000000000000400001 but retain all orders in the HSI cartridge. Error handling is omitted for simplicity. Note that orphan data is purged after orders are restored, which is the reason why the a_purge_orphan_data argument of purge_entire_partition is false.

declare
    v_jobs_suspended dbms_sql.number_table;
begin
    om_job_pkg.disable_suspend_jobs(60, v_jobs_suspended) ;
    insert into om_order_id_for_backup
        (select h.order_seq_id
            from om_cartridge c, 
                 om_order_header partition (P_000000000000400001) h
            where c.namespace_mnemonic = 'HSI'
                and h.cartridge_id = c.cartridge_id
        );
    om_part_maintain.backup_selected_ords(a_parallelism=>4);
    om_part_maintain.purge_entire_partition(
        a_online => false, 
        a_partition_name => 'P_000000000000400001', 
        a_purge_orphan_data => false);
    om_part_maintain.restore_orders(a_parallelism=>4);
    om_part_maintain.purge_orphan_order_data();
    om_job_pkg.enable_resume_jobs(v_jobs_suspended);
end;
om_part_maintain.restore_orders (Offline)

This procedure restores orders from the backup tables into the partitioned tables, and purges the backup tables.

procedure restore_orders(
        a_parallelism binary_integer default null);

Normally you do not have to use this procedure because purge_partitions restores orders automatically. However, it might be needed for recovery purposes, as discussed in the "Troubleshooting and Error Handling" section. It can also be used in conjunction with backup_selected_ords to exclude arbitrary order IDs from a purge.

Troubleshooting Functions

This section provides information about troubleshooting functions.

om_part_maintain.get_partitions (Online or Offline)

This function returns all om_order_header range partitions as well as any partitions missing from om_order_header (if the schema is not equi-partitioned).

function get_partitions return om_t_order_partitions;

The returned information includes the table name, partition name, number of subpartitions, tablespace name, and partition upper bound. If the table name is not OM_ORDER_HEADER, the specified partition is missing. This function is useful for troubleshooting.

om_part_maintain.is_equipartitioned (Online or Offline)

This function tells whether the schema is equi-partitioned and returns missing partitions.

function is_equipartitioned(
        a_missing_ptns out om_t_order_partitions)
return boolean;

It returns false if the number of range partitions differs from table to table or the schema is not partitioned. The implementation does not compare the number of subpartitions.

If the number of range partitions differs from table to table, this could be the result of interrupted or failed attempts to add or drop partitions. If the schema is not equi-partitioned, EXCHANGE PARTITION cannot be used for purging partitions; therefore om_part_maintain.purge_partitions returns right away. In this case, use om_part_maintain.equipartition to partition your schema.
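The following sketch checks whether the schema is equi-partitioned and, if it is not, repairs it; it assumes that OSM is offline (as required by equipartition) and uses dbms_output only for reporting:

declare
    v_missing_ptns om_t_order_partitions;
begin
    if not om_part_maintain.is_equipartitioned(v_missing_ptns) then
        dbms_output.put_line('Schema is not equi-partitioned; adding missing partitions.');
        om_part_maintain.equipartition(v_missing_ptns);  -- requires OSM to be offline
    end if;
end;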

Recovery Procedures

This section provides information about recovery procedures.

om_part_maintain.equipartition (Offline only)

This procedure equi-partitions the schema by adding the partitions in the specified collection that are missing from the schema.

procedure equipartition(
        a_missing_ptns in om_t_order_partitions default null);

Partitions are added through ALTER TABLE ADD PARTITION and ALTER TABLE SPLIT PARTITION operations. It throws an exception if the schema is not partitioned.

Parameters:

  • a_missing_ptns: The missing partitions to be added. If it is null, the procedure calls is_equipartitioned to find all missing partitions.

Exceptions:

  • ORA-20166: There is another in-progress maintenance operation.

  • ORA-20170: Failed to suspend database jobs.

  • ORA-20171: OSM is running.

Error handling: After you resolve the issue, re-execute this procedure.

om_part_maintain.purge_orphan_order_data (Online or Offline)
procedure purge_orphan_order_data;

This procedure is not part of regular maintenance operations. It purges orphan order data from tables that are not range-partitioned (specifically, order data with an order ID that is less than the minimum order ID in om_order_header). Orphan data could be the result of a failed execution of purge_partitions or drop_partitions.

om_part_maintain.rebuild_unusable_indexes (Online or Offline)
procedure rebuild_unusable_indexes(
        a_table_name_like varchar2 default 'OM_%',
        a_parallelism binary_integer default null,
        a_online boolean default true,
        a_preferred_method varchar2 default null);
 
procedure rebuild_unusable_indexes(
        a_indexes dbms_sql.varchar2s,
        a_parallelism binary_integer default null,
        a_online boolean default true,
        a_preferred_method varchar2 default null);

These procedures rebuild unusable indexes, index partitions, and index sub-partitions. They are called automatically by procedures that may leave indexes (especially global indexes) in an unusable state, such as drop_partitions, purge_partitions, and equipartition.

Parameters:

  • a_table_name_like: Restricts the scope of the operation to indexes of the specified table name(s). You may use wildcards. The default is OM_% (so that, for example, exchange tables are ignored).

  • a_indexes: The names of the indexes to be rebuilt.

  • a_parallelism: Specifies the degree of parallelism. Indexes are altered back to NOPARALLEL after they are rebuilt. It is recommended that you leave it null. The implementation will choose the optimal method for each unusable index depending on the index type and configuration parameters. For more information see purge_policy_rebuild_unusable_indexes.

  • a_online: Tells whether indexes should be rebuilt online in order to avoid failure from contention.

  • a_preferred_method: The preferred rebuild method. Valid values are:

    • om_part_maintain.c_rebuild_idx_rebuild (REBUILD)

    • om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL).

    For more information see purge_policy_rebuild_unusable_indexes.
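For example, the following call is a minimal sketch that rebuilds any unusable indexes on the OSM tables while OSM is offline, letting the implementation choose the degree of parallelism and the rebuild method:

execute om_part_maintain.rebuild_unusable_indexes(a_online => false);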

om_part_maintain.rebuild_index (Online or Offline)
procedure rebuild_index(
        a_index_name varchar2,
        a_parallelism binary_integer default null,
        a_online boolean default true,
        a_preferred_method varchar2 default null,
        a_only_if_unusable boolean default true);

This procedure rebuilds the specified index.

Parameters:

  • a_index_name: The index name.

  • a_parallelism: Specifies the degree of parallelism. The index is altered back to NOPARALLEL after it is rebuilt. It is recommended that you leave it null. The implementation will choose the optimal method depending on the index type and configuration parameters. For more information see purge_policy_rebuild_unusable_indexes.

  • a_online: Tells whether the index should be rebuilt online in order to avoid failure from contention.

  • a_preferred_method: The preferred rebuild method. Valid values are:

    • om_part_maintain.c_rebuild_idx_rebuild (REBUILD)

    • om_part_maintain.c_rebuild_idx_recreate_global (RECREATE GLOBAL).

    For more information see purge_policy_rebuild_unusable_indexes.

om_part_maintain.sys$undo_restore_table (Offline)

procedure sys$undo_restore_table(
        a_table_name          varchar2,
        a_parallelism binary_integer default null);

Note:

This is an internal procedure that should be used strictly for recovery purposes.

If om_part_maintain.purge_partitions fails while restoring retained orders into a partitioned table, the procedure automatically invokes om_part_maintain.sys$undo_restore_table to delete the partially restored rows from that table. This leaves the table in a clean state and prevents unique key violations when another restore attempt is made. The OM_SQL_POINTER table shows the error and points to the line in OM_SQL_LOG where execution failed. That line includes a call to om_part_maintain.sys$restore_table with the name of a partitioned table as input argument. If you have any doubts whether all the partially restored data was deleted from that table successfully, execute om_part_maintain.sys$undo_restore_table manually.

om_part_maintain.sys$undo_restore_orders (Offline)
procedure sys$undo_restore_orders(
        a_parallelism binary_integer default null);

This procedure invokes om_part_maintain.sys$undo_restore_table for each partitioned table to delete all the order data that was restored from the backup tables. This is a slow operation because it uses DELETE statements. But it might be necessary if purge_partitions fails because there is an issue with the data to be restored from the backup tables. In this case you must call om_part_maintain.sys$undo_restore_orders, fix the data in the backup tables, and finally restore the orders. This procedure has no effect if the backup tables were purged. For more information, see "Troubleshooting and Error Handling."

Database Reference

The following sections provide information about database views and database tables.

Database Views

The following sections provide information about database audit views.

OM_AUDIT_PURGE_ALL

The OM_AUDIT_PURGE_ALL view returns information about all order purges in descending order (the latest purge is returned first).

Table 7-10 lists and describes the columns in the OM_AUDIT_PURGE_ALL view.

Table 7-10 OM_AUDIT_PURGE_ALL Table

Column Datatype NULL Description

PURGE_SEQ_ID

NUMBER(18)

NOT NULL

The system-generated unique ID assigned to the purge operation.

OPERATION_NAME

VARCHAR2(64)

NOT NULL

The name of the operation (for example, om_new_purge_pkg.purge_orders)

STATUS

VARCHAR2(20)

NOT NULL

The purge status.

  • STARTED: This short-lived initial status occurs when the purge operation is selecting the orders to be purged.

  • RUNNING: This status occurs when the purge operation is purging the orders.

  • STOPPED: This status occurs when the om_new_purge_pkg.stop_purge stops a purge operation.

  • EXPIRED: This status occurs when the purge exceeds the end of the purge window specified by the a_stop_date argument.

  • FINISHED: This status occurs when the purge finishes successfully. All orders that satisfy the purge criteria are purged.

  • FINISHED_WITH_ERROR: This status occurs when the purge finishes with some errors. Some orders that satisfy the purge criteria are not purged.

  • FAILED: This status occurs when the purge failed. No orders are purged.

  • NO_PURGE: This status occurs when no order satisfies the purge criteria.

START_DATE

DATE

NOT NULL

The start date and time of the purge.

PURGE_DURATION_MINUTES

NUMBER(9)

NOT NULL

The purge run time in minutes. If the purge status is STARTED or RUNNING, this duration includes the current elapsed time, for example, since the purge was started or last resumed. This behavior is unlike the PURGE_DURATION_SECS column of the underlying OM_AUDIT_PURGE table, where the current elapsed time is not included.

EST_OR_ACTUAL_END_DATE

DATE

N/A

The actual or estimated end date and time. If the purge is completed (for example, with status FINISHED, FINISHED_WITH_ERROR or FAILED), the actual end date and time appears. Otherwise the estimated date and time appears based on the purge rate and the purge status:

  • STARTED: The estimated date and time is NULL because the purge rate is unknown.

  • RUNNING: The end date and time is estimated based on the number of orders to be purged and the current purge rate. The estimated date and time may exceed the end of the purge window, if specified.

  • STOPPED: The estimated date and time is calculated as if the status is RUNNING. In other words, it tells you when the purge would finish if you resumed it immediately. (This is useful if you need to find out whether it would finish before a certain time.)

PURGE_WINDOW_END_DATE

DATE

N/A

The end of the purge window as specified by the a_stop_date argument of the purge procedure.

STOPPED_DATE

DATE

N/A

The last date and time when the purge was stopped.

RESUMED_DATE

DATE

N/A

The last date and time when the purge was resumed.

SELECTED_ORDER_COUNT

NUMBER(9)

NOT NULL

The number of orders selected for purge.

SELECTED_ORDER_PURGE_COUNT

NUMBER

NOT NULL

The number of orders selected for purge that have been purged so far (cascaded deletes are excluded).

PERCENT_COMPLETE

NUMBER

NOT NULL

The completion percent based on SELECTED_ORDER_COUNT and SELECTED_ORDER_PURGE_COUNT.

ORDERS_PURGED_PER_MINUTE

NUMBER

NOT NULL

The purge rate per minute.

ORDERS_INJECTED_PER_MINUTE

NUMBER

NOT NULL

The rate per minute that orders are created in OSM while the order purge executes. This value allows you to identify any purge overlaps with high order volume periods.

PARALLELISM

NUMBER

NOT NULL

The effective purge degree of parallelism. You can specify this value for the operation using the optional a_parallelism argument. If the argument is null, the purge procedure uses the parallelism configured by the degree_of_parallelism parameter. If degree_of_parallelism is not set, the default parallelism is 4.

ERROR_MESSAGE

VARCHAR2(4000)

N/A

The reason for the failure if the purge failed or finished with errors.

OM_AUDIT_PURGE_LATEST

The OM_AUDIT_PURGE_LATEST view is identical to the OM_AUDIT_PURGE_ALL view except that it returns information only about the latest purge (see "OM_AUDIT_PURGE_ALL"). This view is useful for monitoring.
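For example, a query such as the following, a minimal sketch using columns listed in Table 7-10, reports the progress and the estimated or actual end time of the latest purge:

select status, percent_complete, orders_purged_per_minute, est_or_actual_end_date
from om_audit_purge_latest;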

Database Tables

The following sections provide information about audit related database tables.

OM_AUDIT_PURGE

The OM_AUDIT_PURGE table describes each order purge. Each audited purge operation adds a record to this table as soon as the purge starts, so that you can monitor the purge operation.

Table 7-11 describes the OM_AUDIT_PURGE table. This table is partitioned by START_DATE. Each partition corresponds to a different month.

Table 7-11 OM_AUDIT_PURGE Table

Column Datatype NULL Description

PURGE_SEQ_ID

NUMBER(18)

NOT NULL

A system-generated unique ID assigned to the purge operation.

OPERATION_NAME

VARCHAR2(64)

NOT NULL

The name of the purge operation (for example, om_new_purge_pkg.purge_orders).

STATUS

VARCHAR2(20)

NOT NULL

The purge status.

  • STARTED: This short-lived initial status occurs when the purge operation is selecting the orders to be purged.

  • RUNNING: This status occurs when the purge operation is purging the orders.

  • STOPPED: This status occurs when the om_new_purge_pkg.stop_purge stops a purge operation.

  • EXPIRED: This status occurs when the purge exceeds the end of the purge window specified by the a_stop_date argument.

  • FINISHED: This status occurs when the purge finishes successfully. All orders that satisfy the purge criteria are purged.

  • FINISHED_WITH_ERROR: This status occurs when the purge finishes with some errors. Some orders that satisfy the purge criteria are not purged.

  • FAILED: This status occurs when the purge failed. No orders are purged.

  • NO_PURGE: This status occurs when no order satisfies the purge criteria.

START_DATE

DATE

NOT NULL

The start date and time of the purge.

PURGE_DURATION_SEC

NUMBER(9)

NOT NULL

The purge execution time in seconds. This value is updated when the purge completes. If the status is RUNNING, it does not include the elapsed time since the purge was started or last resumed. If the purge was stopped and resumed, it includes the total execution time excluding idle periods.

INJECTED_ORDER_COUNT

NUMBER(9)

NOT NULL

The number of orders injected while the purge was running online. This value helps identify any purge overlaps with high order volume periods.

SELECTED_ORDER_COUNT

NUMBER(9)

NOT NULL

The number of orders selected for purge.

END_DATE

DATE

N/A

The date when the purge ended. This is set when the status is FINISHED, FINISHED_WITH_ERROR, EXPIRED or FAILED.

STOP_REQUESTED_DATE

DATE

N/A

The last date and time when the user submitted a purge stop request.

STOPPED_DATE

DATE

N/A

The last date and time when the purge was stopped.

RESUMED_DATE

DATE

N/A

The last date and time when the purge was resumed.

ERROR_MESSAGE

VARCHAR2(4000)

N/A

The reason for the failure if the purge failed or finished with errors.

OM_AUDIT_PURGE_ORDER

The OM_AUDIT_PURGE_ORDER table stores a synopsis for each purged order including the order ID and all attributes that are used to determine whether an order satisfies the purge criteria. Orders are added to this table as they are purged and become visible as transactions commit. This allows you to monitor the purge rate.

Table 7-12 describes the OM_AUDIT_PURGE_ORDER table. This table is reference-partitioned with OM_AUDIT_PURGE as the parent table.

Table 7-12 OM_AUDIT_PURGE_ORDER Table

Column Datatype NULL Description

PURGE_SEQ_ID

NUMBER(18)

NOT NULL

The system-generated unique ID assigned to the purge operation.

ORDER_SEQ_ID

NUMBER(18)

NOT NULL

The order ID.

CASCADED_DELETE

VARCHAR2(1)

N/A

Indicates whether the order was deleted because of a cascaded delete. For example, whether this is an amendment order of a deleted base order.

DELETED_DATE

DATE

NOT NULL

The date and time when the order was deleted.

ORDER_TYPE_ID

NUMBER(9)

NOT NULL

Copied from OM_ORDER_HEADER.

ORDER_SOURCE_ID

NUMBER(9)

NOT NULL

Copied from OM_ORDER_HEADER.

REFERENCE_NUMBER

VARCHAR2(255)

NOT NULL

Copied from OM_ORDER_HEADER.

ORD_STATE_ID

NUMBER(9)

NOT NULL

Copied from OM_ORDER_HEADER.

CARTRIDGE_ID

NUMBER(6)

NOT NULL

Copied from OM_ORDER_HEADER.

ORD_CREATION_DATE

DATE

NOT NULL

Copied from OM_ORDER_HEADER.

ORD_START_DATE

DATE

NOT NULL

Copied from OM_ORDER_HEADER.

ORD_COMPLETION_DATE

DATE

NOT NULL

Copied from OM_ORDER_HEADER.

ORD_TXN_COMPLETION_DATE

TIMESTAMP

N/A

Copied from OM_ORDER_HEADER.

VERSION

NUMBER(9)

N/A

Copied from OM_ORDER_HEADER.

OM_AUDIT_PURGE_PARAM

The OM_AUDIT_PURGE_PARAM table stores the purge arguments and criteria supplied to the purge procedure and a snapshot of relevant session and configuration parameters at the time the purge was started. The following parameters are included:

  • Arguments of the purge procedure that specify purge criteria, such as a_delete_before, a_order_states, a_order_id_lt, a_order_id_ge, a_order_source_mnemonic, a_order_type_mnemonic, a_namespace_mnemonic, a_version_mnemonic, and a_cartridge_id.

  • Arguments of the purge procedure other than purge criteria, such as a_stop_date, a_parallelism and a_commit_count.

  • Database session parameters that identify who executed the purge and where, such as BG_JOB_ID, FG_JOB_ID, HOST, INSTANCE_NAME, OS_USER, SERVICE_NAME, SESSION_USER, and SID.

  • Purge-related configuration parameters in the om_parameter table, such as degree_of_parallelism, parallel_execute_chunk_size, oms_timezone, and purge_job_class.

Table 7-13 describes the OM_AUDIT_PURGE_PARAM table. This table is reference-partitioned with OM_AUDIT_PURGE as the parent table.

Table 7-13 OM_AUDIT_PURGE_PARAM Table

Column Datatype NULL Description

PURGE_SEQ_ID

NUMBER(18)

NOT NULL

The system-generated unique ID assigned to the purge operation.

PARAMETER_NAME

VARCHAR2(254)

NOT NULL

The parameter name.

PARAMETER_TYPE

VARCHAR2(1)

NOT NULL

The parameter type:

  • P: This is a parameter of the purge procedure that specifies a purge criterion.

  • U: This is a parameter of the purge procedure other than a purge criterion.

  • S: This is a database session parameter.

  • C: This is a purge-related configuration parameter.

PARAMETER_VALUE

VARCHAR2(255)

The parameter value.

Troubleshooting and Error Handling

Note:

In the event of failure during a purge operation, Oracle strongly recommends that you stop OSM and perform all troubleshooting and recovery offline.

The PL/SQL API provides functions and procedures to troubleshoot and recover from errors. Most procedures for managing partitions use om_sql_log_pkg, which is an internal package that enables procedures to persist and execute SQL statements so that execution can be resumed in the event of an error. This is particularly useful for DDL statements.

The om_sql_log_pkg package persists SQL statements in the om_sql_log table, which includes the following columns:

  • sid: The session ID. The default value is the current session ID, that is, sys_context('USERENV', 'SID'). This allows concurrent execution.

  • name: This is usually the name of the procedure that generated the SQL statement. It is useful to Oracle Support.

  • line: This is a line number used for ordering the SQL statements to be executed.

  • sql_text: The SQL statement.

SQL statements persisted in om_sql_log are executed by om_sql_log_pkg.exec. This procedure executes all SQL statements with the specified session ID, ordered by line number. If you do not specify the session ID, it uses the current one. During execution, the line number of the current statement is updated in the om_sql_pointer table. This allows you to monitor execution. Upon successful execution, it deletes all statements with that session ID. In the event of failure, however, it inserts in the om_sql_pointer table the error message with the session ID and line number of the failed statement. In this case, when om_sql_log_pkg.exec is re-executed, it resumes execution with the failed statement.

Therefore you can troubleshoot and recover from a failed partition maintenance operation even if it was executed by a scheduled job. The contents of om_sql_log and om_sql_pointer allow for faster assistance from Oracle Support. After you fix the root cause of a failure, in some cases you can resume the operation from the point of failure. This ensures that your data is not left in an inconsistent state (although in some cases you might have to take additional actions if you want to complete that operation).

If you resume a failed operation from a different database session, or you abandon that operation, you must manually delete the rows for the failed session by executing the following statement:

execute om_sql_log_pkg.remove(sid);

where sid is the session ID specified by the error in om_sql_pointer.

Example (Monitoring execution): You can monitor execution by retrieving from om_sql_log the current statement of each session. If the error_code column in om_sql_pointer is populated, the execution failed.

select l.*, p.error_code, p.error_message
from om_sql_log l, om_sql_pointer p
where l.sid = p.sid and l.line = p.line;

Example: You can review the set of SQL statements of a partition maintenance operation that failed in the current session as follows:

select * from om_sql_log
where sid = sys_context('USERENV', 'SID') 
order by line;

Error Handling for add_partitions

The add_partitions procedure logs the DDL statements to be executed in the om_sql_log table. If the reason for the failure is unknown, you can check om_sql_log and om_sql_pointer for clues. If om_sql_pointer contains an error, it is likely because partition creation succeeded only partially and the schema is therefore no longer equi-partitioned. After you resolve the issue, you can finish the operation as follows:

  • If the failure occurred while rebuilding unusable indexes, the new partitions have been created. Execute om_sql_log_pkg.exec(sid) to finish rebuilding, where sid is the session ID specified by the error in om_sql_pointer.

  • If the failure occurred during the partition creation phase, execute om_sql_log_pkg.exec(sid) to finish partition creation. Then execute om_part_maintain.rebuild_unusable_indexes with a_online set to true or false depending on whether OSM is running (see the sketch after this list).

  • When in doubt, execute om_part_maintain.is_equipartitioned to check whether the schema is equi-partitioned. If it is not, you can execute om_part_maintain.equipartition to fix it.
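
For example, if the failure occurred during the partition creation phase and the error in om_sql_pointer reports session ID 123 (a hypothetical value), the recovery might look like the following sketch. It assumes that any other parameters of rebuild_unusable_indexes can keep their defaults; set a_online according to whether OSM is running.

execute om_sql_log_pkg.exec(123);
execute om_part_maintain.rebuild_unusable_indexes(a_online => true);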

Error Handling for drop_partitions

In the event of an error, drop_partitions re-enables disabled foreign keys and re-throws the exception. However, execution failures could result in partitioning inconsistencies, orphan order data, and unusable indexes or index partitions. After you resolve the issue, you can take the following actions:

  • The recommended action is to re-execute the procedure with the same arguments. If this is not possible (for example, because you cannot afford further downtime), do the following (a sketch follows this list):

    • You must at least execute rebuild_unusable_indexes to ensure indexes are usable.

    • (Optional) You can execute purge_orphan_order_data to delete orphan order data. Otherwise orphan data is deleted by the next execution of purge_partitions or drop_partitions.

    • Execute om_job_pkg.resume_jobs to resume database jobs.

  • When in doubt, execute is_equipartitioned to check whether the schema is equi-partitioned. If it is not, you can execute equipartition to fix it.
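
For example, if re-executing drop_partitions is not an option, a minimal recovery sequence might look like the following sketch; the parameterless calls assume that the default arguments are acceptable, so verify the signatures in your release.

execute om_part_maintain.rebuild_unusable_indexes;
execute om_part_maintain.purge_orphan_order_data;
execute om_job_pkg.resume_jobs;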

Error Handling for purge_partitions

In the event of an unexpected error, purge_partitions re-enables any disabled constraints and re-throws the exception. However, execution failures could result in partitioning inconsistencies, orphan order data, and even data loss if you do not follow the error handling procedure to recover.

Troubleshooting

If you spooled the output of the stored procedure to a file (recommended), review the file to determine the reason and point of failure. If the purge capacity is greater than 1, the file also indicates which purge tables were involved. You can identify the point of failure by reviewing the started and finished messages that mark the beginning and end of procedure execution.

A failure may occur in these procedures:

  • sys$bpdr_backup: Copies the orders that do not satisfy the purge criteria into the backup tables.

  • sys$bpdr_purge: Purges one or more partitions entirely by exchanging them with purge tables.

  • sys$bpdr_drop: Drops N-1 empty partitions, where N is the number of purged partitions.

  • sys$bpdr_restore: Restores the retained orders from the backup tables into the Nth partition.

  • rebuild_unusable_indexes: Rebuilds all or specific unusable indexes as required. It is executed:

    • By sys$bpdr_backup prior to copying orders into the backup tables.

    • By sys$bpdr_restore prior to restoring retained orders.

    • At the end, prior to sys$purge_orphan_order_data.

  • sys$purge_orphan_order_data: Purges orphan order data (executed at the end).

  • sys$purge_xchg_prg_tables: If the a_purge_xchg_prg_tables argument is true, it is executed at the end to purge the purge tables. It may also be executed prior to purging a group of successive partitions, if the purge capacity is exhausted.

If the procedure output is not available, inspect the following for clues:

  • Most of the time you can determine the error and the point of failure by reviewing the om_sql_log and om_sql_pointer tables.

    • om_sql_pointer points to the SID (session ID) and line of the failed statement in om_sql_log. If there are several errors in om_sql_pointer, check the error_date column to find the SID of the most recent error.

    • If om_sql_log includes EXCHANGE PARTITION statements, execution failed in sys$bpdr_purge. Partitions are in an inconsistent state.

    • If om_sql_log includes INSERT statements into the backup tables, execution failed in sys$bpdr_backup. Partitions are in a consistent state.

    • If om_sql_log includes calls to sys$restore_table, execution failed in sys$bpdr_restore. Partitions contain partially restored orders.

    • If om_sql_log includes statements to rebuild indexes, execution failed in rebuild_unusable_indexes.

  • Review the backup tables:

    • If the backup tables are empty then there are a number of possibilities, such as a) no orders were retained, b) the failure occurred prior to sys$bpdr_backup, or c) sys$bpdr_restore purged the backup tables after a successful restore.

    • If the backup tables are not empty then the failure occurred either during or after sys$bpdr_backup and possibly during sys$bpdr_restore.

    • If none of the order IDs in XCHG_OM_BCK_$001$ exist in OM_ORDER_HEADER then most likely the failure occurred during or after sys$bpdr_purge. Check the remaining partitioned tables listed in the OM_XCHG_TABLE table. If you cannot find those order IDs in any of those tables then sys$bpdr_purge completed successfully (the data was exchanged into the purge tables). There is also a remote possibility that the failure occurred in sys$bpdr_restore while restoring retained orders into OM_ORDER_HEADER (the first table to be restored). In this case, user_parallel_execute_tasks should include a task with task_name equal to restore:om_order_header.

    • If some but not all of the order IDs in XCHG_OM_BCK_$001$ exist in OM_ORDER_HEADER then the failure occurred in sys$bpdr_restore while restoring retained orders into OM_ORDER_HEADER (the first table to be restored). In this case, user_parallel_execute_tasks should include a task with task_name equal to restore:om_order_header.

    • If all of the order IDs in XCHG_OM_BCK_$001$ exist in OM_ORDER_HEADER, check whether all the data in the remaining backup tables exist in the corresponding partitioned tables (the OM_XCHG_TABLE table specifies the exchange table ID for each partitioned table). If this is not the case then the failure occurred during sys$bpdr_purge or sys$bpdr_restore.

  • Review the purge tables, especially those that correspond to OM_ORDER_HEADER (for example, XCHG_OM_PRG_001$001$). If they do not contain any data, most likely the purge failed before the sys$bpdr_purge procedure. If the purge capacity is greater than 1, check the sys$xchg_purge_seq parameter in the om_parameter table to find out which set of purge tables was used for the latest purge.

  • Review the affected partitions. If a partition in the purge range is empty, most likely it was exchanged with the purge tables (it is also possible that it was previously empty). Check the purge tables to confirm.

  • Review the user_parallel_execute_tasks view in the OSM core schema. If the view contains any tasks with task_name equal to restore:tableName, execution failed in sys$bpdr_restore while restoring data into the tableName table (assuming the previous execution of purge_partitions was successful).
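
For example, the following queries illustrate the backup table and restore task checks. They assume that the backup table and OM_ORDER_HEADER share the order_seq_id column, as used elsewhere in this chapter; adjust the table and column names for your release if necessary.

-- Retained order IDs in the backup table that are missing from OM_ORDER_HEADER.
select count(*)
from xchg_om_bck_$001$ b
where not exists
  (select 1 from om_order_header h where h.order_seq_id = b.order_seq_id);

-- Restore tasks created by sys$bpdr_restore.
select task_name, status
from user_parallel_execute_tasks
where task_name like 'restore:%';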

Error Handling

When you determine the point of failure, as discussed in "Troubleshooting," and you resolve the issue, you can recover and finish the purge operation as follows:

  • If the failure occurred during sys$bpdr_backup, the partitions are in a consistent state. Execute om_part_maintain.purge_xchg_bck_tables to purge the backup tables, and om_sql_log_pkg.remove(SID) to clear the failed session from om_sql_log and om_sql_pointer.

  • If the failure occurred during sys$bpdr_purge, the partitions are in an inconsistent state (partially purged):

    1. Execute om_sql_log_pkg.exec(SID) to finish the purge (exchange), where SID is the session ID of the failed execution (the SID is recorded in om_sql_pointer together with the error message).

    2. If you were consolidating partitions N-to-1, drop the N-1 partitions before the Nth partition, which was exchanged. To drop those partitions, use the following statements instead of drop_partitions:

      ALTER TABLE OM_ORDER_HEADER DROP PARTITION partition_name;
      ALTER TABLE OM_SEQUENCE DROP PARTITION partition_name;
      
    3. If the backup tables are not empty, execute om_part_maintain.restore_orders with the desired degree of parallelism to rebuild unusable indexes and restore the retained orders.

  • If the failure occurred during sys$bpdr_drop while consolidating partitions: When you consolidate partitions N-to-1, purge_partitions copies retained orders into the backup tables, purges (exchanges) the Nth partition, drops N-1 partitions, and restores the retained orders into the Nth partition.

    • If the om_sql_log table contains the DROP PARTITION statements, execute om_sql_log_pkg.exec(SID), where SID is the session ID of the failed execution (the SID is recorded in om_sql_pointer together with the error message). In some releases, the DROP PARTITION statements are not logged in the om_sql_log table. In this case, you can find them in the DBMS output. If you do not have the DBMS output, execute these statements:

      ALTER TABLE OM_ORDER_HEADER DROP PARTITION partition_name;
      ALTER TABLE OM_SEQUENCE DROP PARTITION partition_name;
      
    • If the backup tables are not empty, execute om_part_maintain.restore_orders with the desired degree of parallelism to rebuild unusable indexes and restore the retained orders.

  • If the failure occurred during sys$bpdr_restore and you fixed the root cause, Oracle recommends that you finish the restore operation. The partitions are in an inconsistent state (retained orders are not fully restored). The backup tables are not affected and they contain all retained orders. To resume the restore operation from the point of failure (see the sketch after this list):

    1. Run om_part_maintain.disable_ptned_fks to disable the foreign keys of partitioned tables.

    2. In the event of failure, sys$bpdr_restore automatically deletes the partially restored data from the last partitioned table in order to ensure that a second attempt will not fail due to unique key violations. However, Oracle recommends that you run om_part_maintain.sys$undo_restore_table(t) anyway, where t is the name of the partitioned table that caused the failure. This procedure deletes the data restored into the given table.

    3. Run om_sql_log_pkg.exec(SID) to finish the restore, where SID is the session ID of the failed execution (the SID is recorded in om_sql_pointer together with the error message).

    4. Run om_part_maintain.reenable_ptned_fks to re-enable foreign keys.

  • If the root cause lies with the data to be restored, you must execute om_part_maintain.sys$undo_restore_orders to delete all restored data from the partitioned tables. This is a slow operation because it uses DELETE statements. Then fix the orders retained in the backup tables, and execute om_part_maintain.restore_orders with the desired degree of parallelism to rebuild unusable indexes and restore the retained orders.

  • Oracle recommends that you always execute om_part_maintain.rebuild_unusable_indexes.

  • (Optional) In any case, you can execute om_part_maintain.purge_orphan_order_data to delete orphan order data. Otherwise orphan data will be deleted by the next execution of purge_partitions or drop_partitions.

  • Execute om_job_pkg.resume_jobs to resume database jobs.
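
The following is a minimal sketch of resuming a failed sys$bpdr_restore from the point of failure, as described above. The session ID (123) and the failed table name (OM_ORDER_INSTANCE) are hypothetical values: take the actual values from om_sql_pointer and om_sql_log, and verify the procedure signatures in your release.

execute om_part_maintain.disable_ptned_fks;
execute om_part_maintain.sys$undo_restore_table('OM_ORDER_INSTANCE');
execute om_sql_log_pkg.exec(123);
execute om_part_maintain.reenable_ptned_fks;
execute om_part_maintain.rebuild_unusable_indexes;
execute om_job_pkg.resume_jobs;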

Error Handling for rebuild_unusable_indexes

This procedure logs the DDL statements to be executed in the om_sql_log table. In the event of an error, it is important that you re-execute om_sql_log_pkg.exec to finish the rebuild operation. Otherwise:

  • Unusable indexes are likely to impact performance. Moreover, unusable indexes that enforce unique constraints report ORA-01502 errors. Error reporting is disabled for other unusable indexes, unless SKIP_UNUSABLE_INDEXES is set to false (the default is true).

  • An index may be left with a PARALLEL degree setting, which could result in undesirable behavior (for example, unintended parallel execution plans).

Error Handling for setup_xchg_tables

This procedure logs the DDL statements to be executed in the om_sql_log table. If the reason for the failure is unknown, check om_sql_pointer and om_sql_log for clues. After you resolve the issue, you can take one of the following actions:

  • Re-execute setup_xchg_tables with a_force=true.

  • Execute drop_xchg_tables to drop any partially created exchange tables.
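
For example, to retry the setup after resolving the issue (the a_force parameter is documented above; any other parameters are assumed to keep their defaults):

execute om_part_maintain.setup_xchg_tables(a_force => true);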

Performance Tuning

This section explains how to tune the following:

  • degree_of_parallelism

  • degree_of_parallelism_rebuild_indexes

  • degree_of_parallelism_rebuild_xchg_indexes

  • Parallel job execution

  • Row-based purge

Tuning degree_of_parallelism

This parameter specifies the default DOP for queries, DML, and most DDL operations. In particular, it affects the performance of order backup and restore statements performed by purge_partitions. You can use the following procedure to evaluate the optimal degree_of_parallelism without performing a purge.

To evaluate the optimal degree_of_parallelism:

  1. Clear the om_order_id_for_backup table.

    delete from om_order_id_for_backup;
    
  2. From a single partition, select a representative number of order IDs that does not exceed the value of xchg_retained_orders_thres. For example, if you frequently retain 10,000 orders when you purge partitions:

    insert into om_order_id_for_backup (
      select order_seq_id
      from om_order_header partition (P_000000000003000001)
      where rownum <= 10000);
    commit;
    
  3. Back up the selected order IDs with the desired parallelism (for example, 16) and record the elapsed time:

    exec om_part_maintain.backup_selected_ords(16);
    
  4. Purge the backup tables:

    exec om_part_maintain.purge_xchg_bck_tables;
    
  5. Repeat with a different degree of parallelism and compare the elapsed times until you find the optimal DOP.

Tuning degree_of_parallelism_rebuild_indexes

The best way to determine the optimal DOP for degree_of_parallelism_rebuild_indexes is through trials. Try purging or dropping partitions with different settings for this parameter, and review the DBMS output to compare the elapsed times for rebuilding indexes.
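
For example, between trials you might change the parameter value. The following sketch assumes that the parameter is stored in the om_parameter table (mentioned earlier in this chapter), keyed by a mnemonic column with the setting held in a value column; verify the column names in your schema before using it.

update om_parameter
   set value = '8'
 where mnemonic = 'degree_of_parallelism_rebuild_indexes';
commit;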

Tuning degree_of_parallelism_rebuild_xchg_indexes

The optimal DOP for degree_of_parallelism_rebuild_xchg_indexes is normally 1 (the default) because these indexes are very small and they are rebuilt one partition at a time. There is rarely a reason to increase this value.

Tuning Parallel Job Execution

You can use these parameters to tune parallel job execution:

  • degree_of_parallelism: Specifies the degree of parallelism for the majority of database operations performed by parallel jobs and parallel servers. For more information, see "Tuning degree_of_parallelism."

  • purge_job_class: Specifies the job class for purge operations. If your database is Oracle RAC, it is important that you configure this as described in the purge_job_class section.

  • parallel_execute_chunk_size: This is an advanced parameter that you rarely need to tune, especially beginning with 7.2.0.10.2, 7.2.2.5, and 7.2.4.2. For more information see the following section.

Tuning parallel_execute_chunk_size

The implementation of purge_partitions uses the dbms_parallel_execute package to restore retained orders; this package uses database jobs to run the restore in parallel. Order data is restored one table at a time, and each table is divided into chunks. Each job is assigned a chunk, commits the transaction, gets the next chunk, and so on. The process repeats for the next table. For example, if the degree of parallelism is 32 and 64 chunks are created, 32 chunks are processed concurrently by jobs and committed at about the same time before the remaining 32 chunks are processed.

The number of chunks created depends primarily on the volume of data, the number of sub-partitions, and the specified chunk size. The default value of parallel_execute_chunk_size is 2000 blocks. If the size of the retained order data is small to moderate, this chunk size normally results in as many chunks as sub-partitions (for example, 32 or 64), which works well.

Beginning with 7.2.0.10.2, 7.2.2.5, and 7.2.4.2, each table to be restored is divided separately into chunks. This means that the number of chunks is different for each table. However, the volume of data for each chunk is about the same, regardless of the table. This results in shorter transactions (more frequent commits) that require less UNDO. Therefore, the default parallel_execute_chunk_size (2000 blocks) results in good performance, regardless of the volume of data retained, and there is rarely a need to change it.

Prior to 7.2.0.10.2, 7.2.2.5, and 7.2.4.2, the number of restore chunks is the same for all tables because chunks are generated from the XCHG_OM_BCK_$001$ table. However, the volume of data for each chunk varies from table to table. If the volume of retained order data is very large (for example, tens of thousands of orders), the chunks for large tables such as OM_ORDER_INSTANCE are large transactions that generate a lot of UNDO and therefore require a large UNDO tablespace.

In this case, it might be better to increase the number of chunks in order to increase the frequency of commits and reduce the UNDO size. For example, if your performance tests show that committing every 500 orders is more efficient in terms of elapsed time or UNDO size, and you normally retain about 100,000 orders, the optimal number of chunks would be 200. To increase the number of chunks, decrease parallel_execute_chunk_size.

If you are not sure how chunks are generated at your patch level, review the restore statements. If they are joins, chunks are generated as in 7.2.0.10 or earlier.

Prior to 7.2.0.10.2, 7.2.2.5 and 7.2.4.2, use the following procedure to find out how different parallel_execute_chunk_size settings affect the number of chunks created. If you are using 7.2.0.10.2, 7.2.2.5, 7.2.4.2 or later, there is rarely a need to tune parallel_execute_chunk_size. However, if you want to do so, you can substitute om_order_header and xchg_om_bck_$001$ in the following procedure with any other partitioned table and the corresponding xchg_om_bck_table to find out the number of chunks that will be created for that partitioned table.

To find out how different parallel_execute_chunk_size settings affect the number of chunks created (releases prior to 7.2.0.10.2, 7.2.2.5, and 7.2.4.2):

  1. Ensure the exchange tables are created.

  2. Populate the backup tables with a large number of orders to retain, preferably all in the same partition (substitute x and y, so that the range (x, y) contains the desired number of orders):

    insert into xchg_om_bck_$001$
        (select * from om_order_header where order_seq_id between x and y);
    commit;
    
  3. Repeat the following executions with different values for chunk_size (20, 100, 200, and so on) until the query returns a count close to the desired number of chunks:

    exec dbms_parallel_execute.create_task('CHECK_CHUNK_TASK');
    exec dbms_parallel_execute.drop_chunks('CHECK_CHUNK_TASK');
    exec dbms_parallel_execute.create_chunks_by_rowid( -
        'CHECK_CHUNK_TASK', user, 'XCHG_OM_BCK_$001$', by_row => false, chunk_size => 20);
    select count(*) from user_parallel_execute_chunks
    where task_name = 'CHECK_CHUNK_TASK';
    exec dbms_parallel_execute.drop_task('CHECK_CHUNK_TASK');
    
  4. When you are done testing, purge the backup tables:

    exec om_part_maintain.purge_xchg_bck_tables;

Tuning Row-Based Purge

Row-based purges are I/O intensive. The purge_policy_time_to_close_wait policy can reduce I/O, improve performance, and decrease purge rate fluctuations. For more information see "purge_policy_time_to_close_wait."

Database Terms

This chapter uses the following database terms:

  • Automatic Workload Repository (AWR): A built-in repository in every Oracle database. Oracle Database periodically makes a snapshot of its vital statistics and workload information and stores them in AWR.

  • closed and open orders: An order is closed if it is complete or aborted. Otherwise it is open. Open orders restrict certain operations. For example, if a partition has open orders, you cannot purge it online. Those restrictions are relaxed for cancelled orders: cancelled orders are treated as closed.

  • DDL (Data Definition Language): Includes database statements that define or change a data structure, such as CREATE TABLE or ALTER INDEX.

  • DML (Data Manipulation Language): Includes database statements that manipulate data, such as SELECT, INSERT, UPDATE, and DELETE.

  • exhausted partition: Each OSM partition stores a range of order IDs. The upper bound of the range is specified by the (non-inclusive) partition upper bound. Order IDs increase monotonically. When the next order ID reaches or exceeds the upper bound of the current partition, the partition is said to be exhausted.

  • high water mark: The boundary between used and unused space in a database segment.

  • OLTP (Online Transaction Processing): OLTP systems are optimized for fast and reliable transaction handling. Compared to data warehouse systems, most OLTP interactions involve a relatively small number of rows, but a larger group of tables.

  • Oracle RAC (Real Application Clusters): A database option that allows multiple concurrent database instances to share a single physical database.

  • partitioning: The ability to decompose very large tables and indexes into smaller and more manageable pieces called partitions.

  • retained orders: Orders excluded from purging, for example, because they are still within their retention period or do not satisfy other purge criteria.

  • retention period: The period of time an order should be kept after it is closed. This varies depending on policies, laws, and regulations prescribed by the business or governments. After its retention period, an order is eligible for purge.

  • tablespaces, segments, and extents: A database is divided into one or more logical storage units called tablespaces. Tablespaces are divided into logical units of storage called segments. Each segment consists of a set of extents allocated for a specific database object, such as a table, index, or partition. An extent is a logically contiguous allocation of space. A partitioned table has a segment for each partition. For example, a range-hash partitioned table with two range partitions and 32 hash sub-partitions per range partition has 64 segments.