5 Managing and Maintaining Time-Based Information
Oracle Database provides strategies to manage and maintain data based on time.
This chapter discusses the components of Oracle Database that you can use to build a strategy for managing and maintaining data based on time.
Although most organizations have long regarded their stores of data as one of their most valuable corporate assets, how this data is managed and maintained varies enormously from company to company. Originally, data was used to help achieve operational goals, run the business, and help identify the future direction and success of the company.
However, new government regulations and guidelines are a key driving force in how and why data is being retained. Regulations now require organizations to retain and control information for very long periods of time. Consequently, today there are additional objectives that information technology (IT) managers are trying to satisfy:
- To store vast quantities of data for the lowest possible cost
- To meet the new regulatory requirements for data retention and protection
- To improve business opportunities by better analysis based on an increased amount of data
5.1 Managing Data in Oracle Database With ILM
With Information Lifecycle Management (ILM), you can manage data in Oracle Database using rules and regulations that apply to that data.
Information today comes in a wide variety of types, for example, an e-mail message, a photograph, or an order in an Online Transaction Processing (OLTP) system. After you know the type of data and how it is used, you have an understanding of what its evolution and final disposition is likely to be.
One challenge facing each organization is to understand how its data evolves and grows, monitor how its usage changes over time, and decide how long it should survive, while adhering to all the rules and regulations that now apply to that data. Information Lifecycle Management (ILM) is designed to address these issues, with a combination of processes, policies, software, and hardware so that the appropriate technology can be used for each stage in the lifecycle of the data.
5.1.1 About Oracle Database for ILM
Oracle Database provides the ideal platform for implementing an ILM solution.
The Oracle Database platform offers the following:
- Application Transparency

  Application transparency is very important in ILM because it means that there is no need to customize applications, and it enables various changes to be made to the data without any effect on the applications that are using that data. Data can easily be moved at the different stages of its lifecycle, and access to the data can be optimized within the database. Another important benefit is that application transparency offers the flexibility required to quickly adapt to any new regulatory requirements, again without any impact on the existing applications.

- Fine-Grained Data

  Oracle can view data at a very fine-grained level and group related data, whereas storage devices see only bytes and blocks.

- Low-Cost Storage

  With so much data to retain, using low-cost storage is a key factor in implementing ILM. Because Oracle can take advantage of many types of storage devices, the maximum amount of data can be held for the lowest possible cost.

- Enforceable Compliance Policies

  When information is kept for compliance reasons, it is imperative to show to regulatory bodies that data is being retained and managed in accordance with the regulations. Within Oracle Database, it is possible to define security and audit policies that enforce and log all access to data.
5.1.1.1 Oracle Database Manages All Types of Data
Information Lifecycle Management is concerned with all data in an organization.
This data includes not just structured data, such as orders in an OLTP system or a history of sales in a data warehouse, but also unstructured data, such as e-mail, documents, and images. Oracle Database supports the storage of unstructured data with BLOBs and Oracle SecureFiles, and a sophisticated document management system is available with Oracle Text.
If all of the information in your organization is contained in Oracle Database, then you can take advantage of the features and functionality provided by the database to manage and move the data as it evolves during its lifetime, without having to manage multiple types of data stores.
5.1.1.2 Regulatory Requirements
Many organizations must retain specific data for a specific time period. Failure to follow these regulations could result in organizations having to pay very heavy fines.
Around the world various regulatory requirements, such as Sarbanes-Oxley, HIPAA, DOD5015.2-STD in the US and the European Data Privacy Directive in the European Union, are changing how organizations manage their data. These regulations specify what data must be retained, whether it can be changed, and for how long it must be retained, which could be for a period of 30 years or longer.
These regulations frequently demand that electronic data is secure from unauthorized access and changes, and that there is an audit trail of all changes to data and by whom. Oracle Database can retain huge quantities of data without impacting application performance. It also contains the features required to restrict access and prevent unauthorized changes to data, and can be further enhanced with Oracle Audit Vault and Database Firewall. Oracle Database also provides cryptographic functions that can demonstrate that a highly privileged user has not intentionally modified data. Using Flashback Data Archive, you can store all the versions of a row during its lifetime in a tamper-proof historical archive.
5.1.1.3 The Benefits of an Online Archive
There are multiple benefits of an online archive.
There usually comes a point during the lifecycle of the data when it is no longer being regularly accessed and is considered eligible for archiving. Traditionally, the data would have been removed from the database and stored on tape, where vast quantities of information can be kept for a very low cost. Today, it is no longer necessary to archive that data to tape; instead, it can remain in the database or be transferred to a central online archive database. All this information can be stored using low-cost storage devices whose cost per gigabyte is very close to that of tape.
There are multiple benefits to keeping all of the data in an Oracle Database for archival purposes. The most important benefit is that the data is always instantly available. Therefore, time is not wasted locating the tapes where the data was archived and determining whether the tape is readable and still in a format that can be loaded into the database.
If the data has been archived for many years, then development time may also be needed to write a program to reload the data into the database from the tape archive. This could prove to be expensive and time consuming, especially if the data is extremely old. If the data is retained in the database, then this is not a problem, because it is online, and in the latest database format.
Holding the historical data in the database need not impact the time required to back up the database or the size of the backup. When RMAN is used to back up the database, it includes in the backup only the data that has changed. Because historical data is less likely to change, after that data has been backed up, it is not backed up again.
Another important factor to consider is how the data is to be physically removed from the database, especially if it is to be transferred from a production system to a central database archive. Oracle provides the capability to move this data rapidly between databases by using transportable tablespaces or partitions, which moves the data as a complete unit.
When it is time to remove data from the database, the fastest way is to remove a set of data. This is achieved by keeping the data in its own partition, which can then be dropped, a very fast operation. However, if this approach is not possible because data relationships must be maintained, then a conventional SQL DELETE statement must be issued. Do not underestimate the time required to issue the DELETE statement.
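The difference between the two approaches can be sketched as follows, assuming a hypothetical orders table that is range partitioned by order date, with the Q1 1998 orders in their own partition:

```sql
-- Dropping a partition removes an entire data class in one fast,
-- metadata-only operation (table and partition names are illustrative).
ALTER TABLE orders DROP PARTITION orders_q1_1998;

-- Without partitioning, the same rows must be removed with a
-- conventional DELETE, which processes, and generates undo and redo
-- for, every row removed.
DELETE FROM orders
 WHERE order_date < DATE '1998-04-01';
```

If global indexes exist on the table, the drop can be combined with the UPDATE INDEXES clause to keep them usable.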
If there is a requirement to remove data from the database and there is a possibility that the data may need to be returned to the database in the future, then consider removing the data in a database format such as a transportable tablespace, or use the XML capability of Oracle Database to extract the information in an open format.
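As a sketch, moving a data class out of the database with a transportable tablespace might look like the following, assuming the data to be removed lives in its own tablespace named q1_data (an illustrative name):

```sql
-- Make the tablespace read-only so that its datafiles can be copied
-- consistently to the archive destination.
ALTER TABLESPACE q1_data READ ONLY;

-- The tablespace metadata is then exported with Data Pump, for example:
--   expdp system DIRECTORY=dpump_dir DUMPFILE=q1_meta.dmp
--         TRANSPORT_TABLESPACES=q1_data
-- After the datafiles and dump file are copied, the tablespace can be
-- dropped from the source database or plugged into an archive database.
```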
Consider an online archive of your data into Oracle Database for the following reasons:
- The cost of disk is approaching that of tape, so you can eliminate the time to find the tape that contains the data and the cost of restoring that data
- Data remains online when needed, providing you faster access to meet business requirements
- Data online means immediate access, so fines by a regulatory body for failing to produce data are less likely
- The current application can be used to access the data, so you do not need to waste resources to build a new application
5.1.2 Implementing ILM Using Oracle Database
Building an Information Lifecycle Management solution using Oracle Database is quite straightforward.
An ILM solution can be implemented by following these four steps, although Step 4 is optional if ILM is not being implemented for compliance.
5.1.2.1 Step 1: Define the Data Classes
To make effective use of Information Lifecycle Management, first review all the data in your organization.
After reviewing the data, determine the following:
- What data is important, where it is stored, and what must be retained
- How this data flows within the organization
- What happens to this data over time and whether it is still required
- The degree of data availability and protection that is needed
- Data retention for legal and business requirements
After there is an understanding of how the data is used, the data can then be classified on this basis. The most common type of classification is by age or date, but other types are possible, such as by product or privacy. A hybrid classification could also be used, such as by privacy and age.
To treat the data classes differently, the data must be physically separated. When information is first created, the information is often frequently accessed, but then over time it may be referenced very infrequently. For instance, when a customer places an order, they regularly look at the order to see its status and whether the order has been shipped. After the order arrives, they may never reference that order again. This order would also be included in regular reports that are run to see what goods are being ordered, but, over time, would not figure in any of the reports and may only be referenced in the future if someone does a detailed analysis that involves this data. For example, orders could be classified by the Financial Quarters Q1, Q2, Q3, and Q4, and as Historical Orders.
The advantage of using this approach is that when the data is grouped at the row level by its class, which in this example is the date of the order, all orders for Q1 can be managed as a self-contained unit, whereas the orders for Q2 reside in a different class. This can be achieved by using partitioning. Because partitions are transparent to the application, the data is physically separated but the application still locates all the orders.
5.1.2.1.1 Partitioning for ILM
Partitioning involves physically placing data according to a data value, and a frequently used technique is to partition information by date.
Figure 5-1 illustrates a scenario where the orders for Q1, Q2, Q3, and Q4 are stored in individual partitions and the orders for previous years are stored in other partitions.
Figure 5-1 Allocating Data Classes to a Partition
Oracle offers several different partitioning methods. Range partitioning is one frequently used partitioning method for ILM. Interval and reference partitioning are also particularly suited for use in an ILM environment.
There are multiple benefits to partitioning data. Partitioning provides an easy way to distribute the data across appropriate storage devices depending on its usage, while still keeping the data online and stored on the most cost-effective device. Because partitioning is transparent to anyone accessing the data, no application changes are required, thus partitioning can be implemented at any time. When new partitions are required, they are simply added using the ADD PARTITION clause, or they are created automatically if interval partitioning is being used.
Among other benefits, each partition can have its own local index. When the optimizer uses partition pruning, queries only access the relevant partitions instead of all partitions, thus improving query response times.
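The quarterly data classes described above can be sketched as a range-partitioned table; all object names here are illustrative:

```sql
-- Orders grouped at the row level by order date, one partition per
-- quarter plus a partition for historical orders.
CREATE TABLE orders
( order_id    NUMBER,
  customer_id NUMBER,
  order_date  DATE )
PARTITION BY RANGE (order_date)
( PARTITION orders_historical VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION orders_q1 VALUES LESS THAN (DATE '2024-04-01'),
  PARTITION orders_q2 VALUES LESS THAN (DATE '2024-07-01'),
  PARTITION orders_q3 VALUES LESS THAN (DATE '2024-10-01'),
  PARTITION orders_q4 VALUES LESS THAN (DATE '2025-01-01') );
```

With interval partitioning, an INTERVAL (NUMTOYMINTERVAL(3,'MONTH')) clause could be added instead of the fixed quarterly partitions, so that new partitions are created automatically as orders arrive.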
5.1.2.1.2 The Lifecycle of Data
An analysis of your data is likely to reveal that initially it is accessed and updated very frequently. As the data ages, its access frequency diminishes to almost negligible levels, if it is accessed at all.
Most organizations find themselves in the situation where many users are accessing current data while very few users are accessing older data, as illustrated in Figure 5-2. Data is considered to be: active, less active, historical, or ready to be archived.
With so much data being held, during its lifetime the data should be moved to different physical locations. Depending on where the data is in its lifecycle, it must be located on the most appropriate storage device.
5.1.2.2 Step 2: Create Storage Tiers for the Data Classes
Because Oracle Database can take advantage of many different storage options, the second step in implementing an Information Lifecycle Management solution is to establish the required storage tiers.
Although you can create as many storage tiers as you require, the following tiers are a suggested starting point:
- High Performance

  The high performance storage tier is where all the important and frequently accessed data, such as the partition holding our Q1 orders, is stored. This tier uses smaller, faster disks on high performance storage devices.

- Low Cost

  The low cost storage tier is where the less frequently accessed data is stored, such as the partitions holding the orders for Q2, Q3, and Q4. This tier is built using large capacity disks, such as those found in modular storage arrays or low-cost ATA disks, which offer the maximum amount of inexpensive storage.

- Online Archive

  The online archive storage tier is where all the data that is seldom accessed or modified is stored. This storage tier is likely to be extremely large and to store the maximum quantity of data. You can use various techniques to compress the data. Stored on low cost storage devices, such as ATA drives, the data would still be online and available, for a cost that is only slightly higher than storing this information on tape, without the disadvantages that come with archiving data to tape. If the Online Archive storage tier is identified as read-only, then it would be impossible to change the data, and subsequent backups would not be required after the initial database backup.

- Offline Archive (optional)

  The offline archive storage tier is an optional tier because it is used only when there is a requirement to remove data from the database and store it in some other format, such as XML on tape.
Figure 5-2 illustrates how data is used over a time interval. From this usage pattern, you can determine that several storage tiers are required to retain all of the data, which also has the benefit of significantly reducing total storage costs.
After the storage tiers have been created, the data classes identified in Step 1: Define the Data Classes are physically implemented inside the database using partitions. This approach provides an easy way to distribute the data across the appropriate storage devices depending on its usage, while still keeping the data online and readily available, and stored on the most cost-effective device.
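As a sketch, the storage tiers can be implemented as tablespaces whose datafiles reside on devices of different cost, with each partition placed on the appropriate tier (all names and paths are illustrative):

```sql
-- One tablespace per storage tier; the datafile paths stand in for
-- the high performance, low cost, and archive storage devices.
CREATE TABLESPACE high_perf_ts DATAFILE '/highperf/hp01.dbf' SIZE 10G;
CREATE TABLESPACE low_cost_ts  DATAFILE '/lowcost/lc01.dbf'  SIZE 100G;
CREATE TABLESPACE archive_ts   DATAFILE '/archive/ar01.dbf'  SIZE 500G;

-- Assign each data class (partition) to its tier.
CREATE TABLE orders
( order_id NUMBER, order_date DATE )
PARTITION BY RANGE (order_date)
( PARTITION orders_historical VALUES LESS THAN (DATE '2024-01-01')
    TABLESPACE archive_ts,
  PARTITION orders_q1 VALUES LESS THAN (DATE '2024-04-01')
    TABLESPACE low_cost_ts,
  PARTITION orders_q2 VALUES LESS THAN (DATE '2024-07-01')
    TABLESPACE high_perf_ts );

-- If the archive tier never changes, marking it read-only means it
-- needs to be backed up only once.
ALTER TABLESPACE archive_ts READ ONLY;
```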
You can also use Oracle Automatic Storage Management (Oracle ASM) to manage the data across the storage tiers. Oracle ASM is a high-performance, ease-of-management storage solution for Oracle Database files. Oracle ASM is a volume manager and provides a file system designed exclusively for use by the database. To use Oracle ASM, you allocate partitioned disks for Oracle Database with preferences for striping and mirroring. Oracle ASM manages the disk space, distributing the I/O load across all available resources to optimize performance while removing the need for manual I/O tuning. For example, you can increase the size of the disk for the database or move parts of the database to new devices without having to shut down the database.
5.1.2.2.1 Assigning Classes to Storage Tiers
After the storage tiers have been defined, the data classes (partitions) identified in Step 1 can be assigned to the appropriate storage tiers.
This assignment provides an easy way to distribute the data across the appropriate storage devices depending on its usage, keeping the data online and available, and stored on the most cost-effective device. In Figure 5-3, data identified as active, less active, historical, or ready to be archived is assigned to the high performance tier, the low cost storage tier, the online archive storage tier, and the offline archive, respectively. Using this approach, no application changes are required because the data is still visible.
5.1.2.2.2 The Costs Savings of Using Tiered Storage
One benefit of implementing an ILM strategy is the cost savings that can result from using multiple tiered storage.
Assume that there is 3 TB of data to store, comprising 200 GB on High Performance, 800 GB on Low Cost, and 2 TB on Online Archive. Assume the cost per GB is $72 on the High Performance tier, $14 on the Low Cost tier, and $7 on the Online Archive tier.
Table 5-1 illustrates the possible cost savings using tiered storage, rather than storing all data on one class of storage. As you can see, the cost savings can be quite significant and, if the data is suitable for OLTP and HCC database compression, then even further cost savings are possible.
Table 5-1 Cost Savings Using Tiered Storage
| Storage Tier | Single Tier Using High Performance Disks | Multiple Storage Tiers | Multiple Tiers with Database Compression |
|---|---|---|---|
| High Performance (200 GB) | $14,400 | $14,400 | $14,400 |
| Low Cost (800 GB) | $57,600 | $11,200 | $11,200 |
| Online Archive (2 TB) | $144,000 | $14,000 | $5,600 |
| Total | $216,000 | $39,600 | $31,200 |
5.1.2.3 Step 3: Create Data Access and Migration Policies
The third step in implementing an Information Lifecycle Management solution is to ensure that only authorized users have access to the data and to specify how to move the data during its lifetime.
As the data ages, there are multiple techniques that can migrate the data between the storage tiers.
5.1.2.3.1 Controlling Access to Data
The security of your data is another very important part of Information Lifecycle Management because the access rights to the data may change during its lifetime.
In addition, there may be regulatory requirements that place exacting demands on how the data can be accessed.
The data in Oracle Database can be secured using database features, such as:
- Database Security
- Views
- Virtual Private Database
Virtual Private Database (VPD) defines a very fine-grained level of access to the database. Security policies determine which rows may be viewed and the columns that are visible. Multiple policies can be defined so that different users and applications see different views of the same data. For example, the majority of users could see the information for Q1, Q2, Q3, and Q4, while only authorized users would be able to view the historical data.
A security policy is defined at the database level and is transparently applied to all database users. The benefit of this approach is that it provides a secure and controlled environment for accessing the data, which cannot be overridden and can be implemented without requiring any application changes. In addition, read-only tablespaces can be defined which ensures that the data does not change.
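A minimal sketch of such a policy follows, assuming an ORDERS table in the SH schema and an ARCHIVE_ADMIN user authorized to see historical data (both illustrative names):

```sql
-- Policy function: returns the predicate appended to queries on the
-- protected table; NULL means no restriction.
CREATE OR REPLACE FUNCTION hide_historical
  (schema_name IN VARCHAR2, table_name IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'ARCHIVE_ADMIN' THEN
    RETURN NULL;  -- the authorized user sees all rows
  END IF;
  RETURN 'order_date >= DATE ''2024-01-01''';  -- others see current data only
END;
/

-- Attach the policy; it is then enforced transparently for every session.
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'SH',
    object_name     => 'ORDERS',
    policy_name     => 'orders_privacy',
    function_schema => 'SH',
    policy_function => 'HIDE_HISTORICAL',
    statement_types => 'SELECT');
END;
/
```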
5.1.2.3.2 Moving Data using Partitioning
During its lifetime, data must be moved; partitioning is one technique for accomplishing this.
Moving data may occur for the following reasons:
- For performance, only a limited number of orders are held on high performance disks
- Data is no longer frequently accessed and is using valuable high performance storage, and must be moved to a low-cost storage device
- Legal requirements demand that the information is always available for a given time interval, and it must be held safely for the lowest possible cost
There are multiple ways that data can be physically moved in Oracle Database to take advantage of the different storage tiers. For example, if the data is partitioned, then a partition containing the orders for Q2 could be moved online from the high performance storage tier to the low cost storage tier. Because the data is moved within the database, it can be physically relocated without affecting the applications that require it or causing disruption to regular users.
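A sketch of such an online move follows; the table, partition, and tablespace names are illustrative:

```sql
-- Relocate the Q2 partition to the low cost tier while DML continues;
-- UPDATE INDEXES keeps local and global indexes usable throughout.
ALTER TABLE orders MOVE PARTITION orders_q2
  TABLESPACE low_cost_ts
  UPDATE INDEXES ONLINE;
```

The ONLINE keyword for MOVE PARTITION is available in Oracle Database 12c Release 1 and later; in earlier releases the move briefly blocks DML against the partition.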
Sometimes individual data items, rather than a group of data, must be moved. For example, suppose data was classified according to a level of privacy and a report, which had been secret, is now to be made available to the public. If the classification changed from secret to public and the data was partitioned on its privacy classification, then the row would automatically move to the partition containing public data.
Whenever data is moved from its original source, it is very important to ensure that the process selected adheres to any regulatory requirements: for example, that the data cannot be altered, is secure from unauthorized access, is easily readable, and is stored in an approved location.
5.1.2.4 Step 4: Define and Enforce Compliance Policies
The fourth step in an Information Lifecycle Management solution is the creation of policies for compliance.
When data is decentralized and fragmented, compliance policies have to be defined and enforced in every data location, which could easily result in a compliance policy being overlooked. However, using Oracle Database to provide a central location for storing data means that it is very easy to enforce compliance policies because they are all managed and enforced from one central location.
When defining compliance policies, consider the following areas:
- Data Retention
- Immutability
- Privacy
- Auditing
- Expiration
5.1.2.4.1 Data Retention
The retention policy describes how the data is to be retained, how long it must be kept, and what happens to the data at the end of its life.
An example of a retention policy is a record must be stored in its original form, no modifications are allowed, it must be kept for seven years, and then it may be deleted. Using Oracle Database security, it is possible to ensure that data remains unchanged and that only authorized processes can remove the data at the appropriate time. Retention policies can also be defined through a lifecycle definition in the ILM Assistant.
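A retention policy of this kind can be sketched with Flashback Data Archive, which keeps row versions in a tamper-resistant history table for a fixed period (the archive, tablespace, and table names are illustrative):

```sql
-- Keep every version of each row for seven years; versions older than
-- the retention period are purged automatically.
CREATE FLASHBACK ARCHIVE orders_fda
  TABLESPACE archive_ts
  RETENTION 7 YEAR;

-- Start recording history for the table.
ALTER TABLE orders FLASHBACK ARCHIVE orders_fda;
```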
5.1.2.4.2 Immutability
Immutability is concerned with proving to an external party that data is complete and has not been modified.
Cryptographic or digital signatures can be generated by Oracle Database and retained either inside or outside of the database, to show that data has not been altered.
5.1.2.4.3 Privacy
Oracle Database provides several ways to ensure data privacy.
Access to data can be strictly controlled with security policies defined using Virtual Private Database (VPD). In addition, individual columns can be encrypted so that anyone looking at the raw data cannot see its contents.
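As a sketch, column-level encryption with Transparent Data Encryption might look as follows; the table and column names are illustrative, and a keystore (wallet) must already be configured:

```sql
-- The credit card number is stored encrypted on disk; the database
-- decrypts it transparently for authorized sessions.
CREATE TABLE customers
( customer_id NUMBER,
  name        VARCHAR2(100),
  credit_card VARCHAR2(19) ENCRYPT USING 'AES256' );
```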
5.1.2.4.4 Auditing
Oracle Database can track all access and changes to data.
These auditing capabilities can be defined either at the table level or through fine-grained auditing, which specifies the criteria for when an audit record is generated. Auditing can be further enhanced using Oracle Audit Vault and Database Firewall.
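A fine-grained auditing policy can be sketched with the DBMS_FGA package; here an audit record is generated only when the historical orders are actually queried (the schema, table, and audit condition are illustrative):

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SH',
    object_name     => 'ORDERS',
    policy_name     => 'audit_historical',
    audit_condition => 'ORDER_DATE < DATE ''2024-01-01''',
    audit_column    => 'ORDER_ID',
    statement_types => 'SELECT');
END;
/
```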
See Also:
Oracle Audit Vault and Database Firewall Administrator's Guide for information about Oracle Audit Vault and Database Firewall
5.2 Implementing an ILM Strategy With Heat Map and ADO
To implement an Information Lifecycle Management (ILM) strategy for data movement in your database, you can use Heat Map and Automatic Data Optimization (ADO) features.
Note:
Heat Map and ADO are supported in Oracle Database 12c Release 2 multitenant environments.
See Also:
- Managing ILM Heat Map and ADO with Oracle Enterprise Manager for information about using Oracle Enterprise Manager Cloud Control with Heat Map and ADO
- Oracle Database Vault Administrator's Guide for information about using Information Lifecycle Management (ILM) with Oracle Database Vault realms and command rules, including granting the authorization that enables an ADO administrative user to perform ILM operations on Database Vault-protected objects
5.2.1 Using Heat Map
To implement your ILM strategy, you can use Heat Map in Oracle Database to track data access and modification.
Heat Map provides data access tracking at the segment level and data modification tracking at the segment and row level. You can enable this functionality with the HEAT_MAP initialization parameter.
Heat Map data can assist Automatic Data Optimization (ADO) in managing the contents of the In-Memory column store (IM column store) using ADO policies. Using Heat Map data, which includes column statistics and other relevant statistics, the IM column store can determine when it is almost full (under memory pressure). When that is the case, inactive segments can be evicted if there are more frequently accessed segments that would benefit from population in the IM column store.
See Also:
- Oracle Database In-Memory Guide for information about enabling and sizing the In-Memory Column Store
5.2.1.1 Enabling and Disabling Heat Map
You can enable and disable heat map tracking at the system or session level with the ALTER SYSTEM or ALTER SESSION statement using the HEAT_MAP clause.
For example, the following SQL statement enables Heat Map tracking for the database instance.
ALTER SYSTEM SET HEAT_MAP = ON;
When Heat Map is enabled, all accesses are tracked by the in-memory activity tracking module. Objects in the SYSTEM and SYSAUX tablespaces are not tracked.
The following SQL statement disables heat map tracking.
ALTER SYSTEM SET HEAT_MAP = OFF;
When Heat Map is disabled, accesses are not tracked by the in-memory activity tracking module. The default value for the HEAT_MAP initialization parameter is OFF.
The HEAT_MAP initialization parameter also enables and disables Automatic Data Optimization (ADO). For ADO, Heat Map must be enabled at the system level.
See Also:
- Using Automatic Data Optimization for more information about ADO
- Oracle Database Reference for information about the HEAT_MAP initialization parameter
5.2.1.2 Displaying Heat Map Tracking Data With Views
Heat map tracking data is viewed with the V$*, ALL_*, DBA_*, and USER_* heat map views.
Example 5-1 shows examples of information provided by heat map views. The V$HEAT_MAP_SEGMENT view displays real-time segment access information. The ALL_, DBA_, and USER_HEAT_MAP_SEGMENT views display the latest segment access time for all segments visible to the user. The ALL_, DBA_, and USER_HEAT_MAP_SEG_HISTOGRAM views display segment access information for all segments visible to the user. The DBA_HEATMAP_TOP_OBJECTS view displays heat map information for the top most active objects. The DBA_HEATMAP_TOP_TABLESPACES view displays heat map information for the top most active tablespaces.
See Also:
Oracle Database Reference for information about Heat Map views
Example 5-1 Heat map views
/* enable heat map tracking if necessary */

SELECT SUBSTR(OBJECT_NAME,1,20), SUBSTR(SUBOBJECT_NAME,1,20),
       TRACK_TIME, SEGMENT_WRITE, FULL_SCAN, LOOKUP_SCAN
FROM V$HEAT_MAP_SEGMENT;

SUBSTR(OBJECT_NAME,1 SUBSTR(SUBOBJECT_NAM TRACK_TIM SEG FUL LOO
-------------------- -------------------- --------- --- --- ---
SALES                SALES_Q1_1998        01-NOV-12 NO  NO  NO
SALES                SALES_Q3_1998        01-NOV-12 NO  NO  NO
SALES                SALES_Q2_2000        01-NOV-12 NO  NO  NO
SALES                SALES_Q3_1999        01-NOV-12 NO  NO  NO
SALES                SALES_Q2_1998        01-NOV-12 NO  NO  NO
SALES                SALES_Q2_1999        01-NOV-12 NO  NO  NO
SALES                SALES_Q4_2001        01-NOV-12 NO  NO  NO
SALES                SALES_Q1_1999        01-NOV-12 NO  NO  NO
SALES                SALES_Q4_1998        01-NOV-12 NO  NO  NO
SALES                SALES_Q1_2000        01-NOV-12 NO  NO  NO
SALES                SALES_Q1_2001        01-NOV-12 NO  NO  NO
SALES                SALES_Q2_2001        01-NOV-12 NO  NO  NO
SALES                SALES_Q3_2000        01-NOV-12 NO  NO  NO
SALES                SALES_Q4_2000        01-NOV-12 NO  NO  NO
EMPLOYEES                                 01-NOV-12 NO  NO  NO
...

SELECT SUBSTR(OBJECT_NAME,1,20), SUBSTR(SUBOBJECT_NAME,1,20),
       SEGMENT_WRITE_TIME, SEGMENT_READ_TIME, FULL_SCAN, LOOKUP_SCAN
FROM USER_HEAT_MAP_SEGMENT;

SUBSTR(OBJECT_NAME,1 SUBSTR(SUBOBJECT_NAM SEGMENT_W SEGMENT_R FULL_SCAN LOOKUP_SC
-------------------- -------------------- --------- --------- --------- ---------
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
SALES                SALES_Q1_1998        30-OCT-12 01-NOV-12
...
SELECT SUBSTR(OBJECT_NAME,1,20), SUBSTR(SUBOBJECT_NAME,1,20),
       TRACK_TIME, SEGMENT_WRITE, FULL_SCAN, LOOKUP_SCAN
FROM USER_HEAT_MAP_SEG_HISTOGRAM;

SUBSTR(OBJECT_NAME,1 SUBSTR(SUBOBJECT_NAM TRACK_TIM SEG FUL LOO
-------------------- -------------------- --------- --- --- ---
SALES                SALES_Q1_1998        31-OCT-12 NO  NO  YES
SALES                SALES_Q1_1998        01-NOV-12 NO  NO  YES
SALES                SALES_Q1_1998        30-OCT-12 NO  YES YES
SALES                SALES_Q2_1998        01-NOV-12 NO  NO  YES
SALES                SALES_Q2_1998        31-OCT-12 NO  NO  YES
SALES                SALES_Q2_1998        30-OCT-12 NO  YES YES
SALES                SALES_Q3_1998        01-NOV-12 NO  NO  YES
SALES                SALES_Q3_1998        30-OCT-12 NO  YES YES
SALES                SALES_Q3_1998        31-OCT-12 NO  NO  YES
SALES                SALES_Q4_1998        01-NOV-12 NO  NO  YES
SALES                SALES_Q4_1998        31-OCT-12 NO  NO  YES
SALES                SALES_Q4_1998        30-OCT-12 NO  YES YES
SALES                SALES_Q1_1999        01-NOV-12 NO  NO  YES
SALES                SALES_Q1_1999        31-OCT-12 NO  NO  YES
...

SELECT SUBSTR(OWNER,1,20), SUBSTR(OBJECT_NAME,1,20), OBJECT_TYPE,
       SUBSTR(TABLESPACE_NAME,1,20), SEGMENT_COUNT
FROM DBA_HEATMAP_TOP_OBJECTS
ORDER BY SEGMENT_COUNT DESC;

SUBSTR(OWNER,1,20)   SUBSTR(OBJECT_NAME,1 OBJECT_TYPE        SUBSTR(TABLESPACE_NA SEGMENT_COUNT
-------------------- -------------------- ------------------ -------------------- -------------
SH                   SALES                TABLE              EXAMPLE                         96
SH                   COSTS                TABLE              EXAMPLE                         48
PM                   ONLINE_MEDIA         TABLE              EXAMPLE                         22
OE                   PURCHASEORDER        TABLE              EXAMPLE                         18
PM                   PRINT_MEDIA          TABLE              EXAMPLE                         15
OE                   CUSTOMERS            TABLE              EXAMPLE                         10
OE                   WAREHOUSES           TABLE              EXAMPLE                          9
HR                   EMPLOYEES            TABLE              EXAMPLE                          7
OE                   LINEITEM_TABLE       TABLE              EXAMPLE                          6
IX                   STREAMS_QUEUE_TABLE  TABLE              EXAMPLE                          6
SH                   FWEEK_PSCAT_SALES_MV TABLE              EXAMPLE                          5
SH                   CUSTOMERS            TABLE              EXAMPLE                          5
HR                   LOCATIONS            TABLE              EXAMPLE                          5
HR                   JOB_HISTORY          TABLE              EXAMPLE                          5
SH                   PRODUCTS             TABLE              EXAMPLE                          5
...
SELECT SUBSTR(TABLESPACE_NAME,1,20), SEGMENT_COUNT
FROM DBA_HEATMAP_TOP_TABLESPACES
ORDER BY SEGMENT_COUNT DESC;

SUBSTR(TABLESPACE_NA SEGMENT_COUNT
-------------------- -------------
EXAMPLE                        351
USERS                           11

SELECT COUNT(*) FROM DBA_HEATMAP_TOP_OBJECTS;

  COUNT(*)
----------
        64

SELECT COUNT(*) FROM DBA_HEATMAP_TOP_TABLESPACES;

  COUNT(*)
----------
         2
5.2.1.3 Managing Heat Map Data With DBMS_HEAT_MAP Subprograms
The DBMS_HEAT_MAP package provides additional flexibility for displaying heat map data using DBMS_HEAT_MAP subprograms.

DBMS_HEAT_MAP includes one set of APIs that externalize heat maps at various levels of storage, such as block, extent, segment, object, and tablespace, and a second set of APIs that externalize the heat maps materialized by the background process for the top tablespaces.
Example 5-2 shows examples of the use of DBMS_HEAT_MAP
package subprograms.
See Also:
Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_HEAT_MAP package
Example 5-2 Using DBMS_HEAT_MAP package subprograms
SELECT SUBSTR(segment_name,1,10) Segment, min_writetime, min_ftstime
  FROM TABLE(DBMS_HEAT_MAP.OBJECT_HEAT_MAP('SH','SALES'));

SELECT SUBSTR(tablespace_name,1,16) Tblspace, min_writetime, min_ftstime
  FROM TABLE(DBMS_HEAT_MAP.TABLESPACE_HEAT_MAP('EXAMPLE'));

SELECT relative_fno, block_id, blocks,
       TO_CHAR(min_writetime, 'mm-dd-yy hh-mi-ss') Mintime,
       TO_CHAR(max_writetime, 'mm-dd-yy hh-mi-ss') Maxtime,
       TO_CHAR(avg_writetime, 'mm-dd-yy hh-mi-ss') Avgtime
  FROM TABLE(DBMS_HEAT_MAP.EXTENT_HEAT_MAP('SH','SALES'))
 WHERE ROWNUM < 10;

SELECT SUBSTR(owner,1,10) Owner, SUBSTR(segment_name,1,10) Segment,
       SUBSTR(partition_name,1,16) Partition, SUBSTR(tablespace_name,1,16) Tblspace,
       segment_type, segment_size
  FROM TABLE(DBMS_HEAT_MAP.OBJECT_HEAT_MAP('SH','SALES'));

OWNER      SEGMENT    PARTITION        TBLSPACE         SEGMENT_TYPE         SEGMENT_SIZE
---------- ---------- ---------------- ---------------- -------------------- ------------
SH         SALES      SALES_Q1_1998    EXAMPLE          TABLE PARTITION           8388608
SH         SALES      SALES_Q2_1998    EXAMPLE          TABLE PARTITION           8388608
SH         SALES      SALES_Q3_1998    EXAMPLE          TABLE PARTITION           8388608
SH         SALES      SALES_Q4_1998    EXAMPLE          TABLE PARTITION           8388608
SH         SALES      SALES_Q1_1999    EXAMPLE          TABLE PARTITION           8388608
...
5.2.2 Using Automatic Data Optimization
To implement your ILM strategy, you can use Automatic Data Optimization (ADO) to automate the compression and movement of data between different tiers of storage within the database.
The functionality includes the ability to create policies that specify different compression levels for each tier, and to control when the data movement takes place.
This section contains the following topics:
To use Automatic Data Optimization, you must enable Heat Map at the system level. You enable this functionality with the HEAT_MAP initialization parameter. For information about setting the HEAT_MAP initialization parameter, refer to Enabling and Disabling Heat Map.
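As a reminder of the prerequisite, Heat Map can be switched on and off dynamically at the system level. A minimal sketch:

```sql
-- Enable Heat Map tracking for the instance
ALTER SYSTEM SET HEAT_MAP = ON;

-- Disable it again when tracking is no longer needed
ALTER SYSTEM SET HEAT_MAP = OFF;
```

Heat Map statistics begin accumulating only after the parameter is set to ON, so enable it well before ADO policies are expected to fire.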
5.2.2.1 Managing Policies for Automatic Data Optimization
You can specify policies for ADO at the row, segment, and tablespace granularity level when creating and altering tables with SQL statements. In addition, ADO policies can perform actions on indexes.
By specifying policies for ADO, you can automate data movement between different tiers of storage within the database. These policies also enable you to specify different compression levels for each tier, control when the data movement takes place, and optimize indexes.
ADO Policies for Tables
The ILM clauses of the SQL CREATE TABLE and ALTER TABLE statements enable you to create, delete, enable, or disable a policy for ADO. An ILM policy clause determines the compression or storage tiering policy and contains additional clauses, such as the AFTER and ON clauses, to specify the condition under which a policy action occurs. When you create a table, you can add a new policy for ADO. You can alter the table to add more policies or to enable, disable, or delete existing policies. You can add policies to an entire table or to a partition of a table. You can specify only one condition type for an AFTER clause when adding ADO policies to a table or partition of a table. ILM ADO policies are given a system-generated name, such as P1, P2, and so on to Pn.
A segment level policy executes only one time. After the policy executes successfully, it is disabled and is not evaluated again. However, you can explicitly enable the policy again. A row level policy continues to execute and is not disabled after a successful execution.
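Because a segment level policy is disabled after its first successful execution, re-enabling it is an explicit step. A sketch, with an illustrative table and policy name:

```sql
-- Re-enable a segment level policy that has already executed once
ALTER TABLE sales_ado ILM ENABLE POLICY P1;
```

The policy is then evaluated again during the next ADO evaluation cycle.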
The scope of an ADO policy can be specified for a group of related objects or at the level of a segment or row, using the keywords GROUP, ROW, or SEGMENT.
The default mappings for compression that can be applied to group policies are:

- COMPRESS ADVANCED on a heap table maps to standard compression for indexes and LOW for LOB segments.
- COMPRESS FOR QUERY LOW/QUERY HIGH on a heap table maps to standard compression for indexes and MEDIUM for LOB segments.
- COMPRESS FOR ARCHIVE LOW/ARCHIVE HIGH on a heap table maps to standard compression for indexes and HIGH for LOB segments.
The compression mapping cannot be changed. GROUP can be applied only to segment level policies. Storage tiering policies are applicable only at the segment level and cannot be specified at the row level.
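As a sketch of the GROUP scope (the table name is illustrative; this assumes GROUP occupies the same position in the ILM clause as SEGMENT, so that the table, its indexes, and its LOB segments are compressed together using the default mappings above):

```sql
-- Compress the table together with its dependent index and LOB segments
-- after 90 days without modification
ALTER TABLE orders_ado ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED GROUP
  AFTER 90 DAYS OF NO MODIFICATION;
```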
ADO Policies for Indexes
ADO policies for indexes enable the compression and optimization for indexes using the existing Automatic Data Optimization (ADO) framework.
You can add an ADO index policy with the ILM clause of the ALTER INDEX or CREATE INDEX SQL statement. An ADO index policy is given a system-generated name, such as P1, P2, and so on to Pn.
For example, you can add an ADO policy when the index is created.
CREATE TABLE product_sales
  (PRODUCT_ID    NUMBER NOT NULL,
   CUSTOMER_ID   NUMBER NOT NULL,
   TIME_ID       DATE NOT NULL,
   CHANNEL_ID    NUMBER NOT NULL,
   PROMO_ID      NUMBER,
   QUANTITY_SOLD NUMBER(10,2) NOT NULL);

CREATE INDEX prod_id_idx ON product_sales(product_id)
  ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO MODIFICATION;

SELECT POLICY_NAME, POLICY_TYPE, ENABLED FROM USER_ILMPOLICIES;

POLICY_NAME           POLICY_TYPE   ENA
--------------------- ------------- ---
P21                   DATA MOVEMENT YES
You can add an ADO policy to an existing index.
ALTER INDEX hr.emp_id_idx ILM ADD POLICY SEGMENT TIER TO LOW_COST_TBS;

ALTER INDEX hr.emp_id_idx ILM ADD POLICY OPTIMIZE AFTER 3 DAYS OF NO ACCESS;
The OPTIMIZE clause enables ADO to optimize the index whenever the policy condition is met. The optimization process includes actions such as compressing, shrinking, or rebuilding indexes:
-
Compress: Compresses portions of the key values in an index segment
-
Shrink: Merges the contents of index blocks where possible to free blocks for reuse
-
Rebuild: Rebuilds an index to improve space usage and access speed
When the OPTIMIZE clause is specified, Oracle automatically determines which action is optimal for the index and implements that action as part of the optimization process. You do not have to specify which action is taken.
When administering ADO policies for indexes, you cannot manually disable policies. An ADO policy for indexes executes only one time. After the policy executes successfully, the policy is disabled and is not evaluated again.
You can delete one policy at a time with the ILM clause of the ALTER INDEX SQL statement. For example:
ALTER INDEX prod_id_idx ILM DELETE POLICY p21;
Modifying an ILM ADO policy at the index partition level is not supported. An ADO policy modified at the index level is cascaded to all partitions.
ADO Policies for In-Memory Column Store
Automatic Data Optimization (ADO) supports the In-Memory Column Store (IM column store) with the INMEMORY, INMEMORY MEMCOMPRESS, and NO INMEMORY policy types.
- To enable objects for population in the In-Memory Column Store, include INMEMORY in the ADD POLICY clause.
- To increase the compression level on objects in an IM column store, include INMEMORY MEMCOMPRESS in the ADD POLICY clause.
- To explicitly evict objects that benefit the least from the IM column store, include NO INMEMORY in the ADD POLICY clause.

The following example uses the NO INMEMORY clause to evict an object from the IM column store.
ALTER TABLE sales_2015 ILM ADD POLICY NO INMEMORY AFTER 7 DAYS OF NO ACCESS;
An ADO policy with an In-Memory Column Store clause can only be a segment level policy. The USER/DBA_ILMDATAMOVEMENTPOLICIES and V$HEAT_MAP_SEGMENT views include information about ADO policies for the In-Memory Column Store.
Customizing ADO Policies
You can customize policies with the ON PL/SQL_function option, which determines when the policy should be executed. The ON PL/SQL_function option is available only with segment level policies. For example:
CREATE OR REPLACE FUNCTION my_custom_ado_rules (objn IN NUMBER) RETURN BOOLEAN;

ALTER TABLE sales_custom ILM ADD POLICY
  COMPRESS ADVANCED SEGMENT
  ON my_custom_ado_rules;
See Also:
-
Oracle Database In-Memory Guide for information about In-Memory Column Store and ADO support
-
Oracle Database SQL Language Reference for information about the syntax of the ILM clauses in the SQL CREATE TABLE statement
-
Oracle Database SQL Language Reference for information about the syntax of the ILM clauses in the SQL CREATE INDEX statement
5.2.2.2 Creating a Table With an ILM ADO Policy
Use the ILM ADD POLICY clause of the CREATE TABLE statement to create a table with an ILM ADO policy.
The SQL statement in Example 5-3 creates a table and adds an ILM policy.
Example 5-3 Creating a table with an ILM ADO policy
/* Create an example table with an ILM ADO policy */
CREATE TABLE sales_ado
  (PROD_ID       NUMBER NOT NULL,
   CUST_ID       NUMBER NOT NULL,
   TIME_ID       DATE NOT NULL,
   CHANNEL_ID    NUMBER NOT NULL,
   PROMO_ID      NUMBER NOT NULL,
   QUANTITY_SOLD NUMBER(10,2) NOT NULL,
   AMOUNT_SOLD   NUMBER(10,2) NOT NULL)
 PARTITION BY RANGE (time_id)
  (PARTITION sales_q1_2012 VALUES LESS THAN (TO_DATE('01-APR-2012','dd-MON-yyyy')),
   PARTITION sales_q2_2012 VALUES LESS THAN (TO_DATE('01-JUL-2012','dd-MON-yyyy')),
   PARTITION sales_q3_2012 VALUES LESS THAN (TO_DATE('01-OCT-2012','dd-MON-yyyy')),
   PARTITION sales_q4_2012 VALUES LESS THAN (TO_DATE('01-JAN-2013','dd-MON-yyyy')))
 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
   AFTER 12 MONTHS OF NO ACCESS;

/* View the existing ILM ADO policies */
SELECT SUBSTR(policy_name,1,24) POLICY_NAME, policy_type, enabled
  FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
5.2.2.3 Adding ILM ADO Policies
Use the ILM ADD POLICY clause of the ALTER TABLE statement to add an ILM ADO policy to a table.

The SQL statements in Example 5-4 add ILM policies to partitions of the sales table.
Example 5-4 Adding ILM ADO policies
/* Add a row-level compression policy after 30 days of no modifications */
ALTER TABLE sales MODIFY PARTITION sales_q1_2002
  ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW
  AFTER 30 DAYS OF NO MODIFICATION;

/* Add a segment level compression policy for data after 6 months of no modifications */
ALTER TABLE sales MODIFY PARTITION sales_q1_2001
  ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 6 MONTHS OF NO MODIFICATION;

/* Add a segment level compression policy for data after 12 months of no access */
ALTER TABLE sales MODIFY PARTITION sales_q1_2000
  ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 12 MONTHS OF NO ACCESS;

/* Add a storage tier policy to move old data to a different tablespace */
/* that is on low cost storage media */
ALTER TABLE sales MODIFY PARTITION sales_q1_1999
  ILM ADD POLICY TIER TO my_low_cost_sales_tablespace;

/* View the existing policies */
SELECT SUBSTR(policy_name,1,24) POLICY_NAME, policy_type, enabled
  FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
P2                       DATA MOVEMENT YES
P3                       DATA MOVEMENT YES
P4                       DATA MOVEMENT YES
P5                       DATA MOVEMENT YES
5.2.2.4 Disabling and Deleting ILM ADO Policies
Use the ILM DISABLE POLICY or ILM DELETE POLICY clause of the ALTER TABLE statement to disable or delete an ILM ADO policy.
You can disable or delete ILM policies for ADO as shown in the SQL statements in Example 5-5. At times you may need to remove existing ILM policies if those policies conflict with a new policy that you want to add.
Example 5-5 Disabling and deleting ILM ADO policies
/* You can disable or delete an ADO policy in a table with the following */
ALTER TABLE sales_ado ILM DISABLE POLICY P1;
ALTER TABLE sales_ado ILM DELETE POLICY P1;

/* You can disable or delete all ADO policies in a table with the following */
ALTER TABLE sales_ado ILM DISABLE_ALL;
ALTER TABLE sales_ado ILM DELETE_ALL;

/* You can disable or delete an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_q1_2002 ILM DISABLE POLICY P2;
ALTER TABLE sales MODIFY PARTITION sales_q1_2002 ILM DELETE POLICY P2;

/* You can disable or delete all ADO policies in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_q1_2000 ILM DISABLE_ALL;
ALTER TABLE sales MODIFY PARTITION sales_q1_2000 ILM DELETE_ALL;
5.2.2.5 Specifying Segment-Level Compression and Storage Tiering With ADO
You can specify compression at the segment-level within a table using a segment-level compression tiering policy.
In combination with the row-level compression tiering policy, you have fine-grained control over how the data in your database is stored and managed.
Example 5-6 illustrates how to create policies for ADO to enforce a compression and storage tiering policy on the sales_ado table, reflecting the following business requirements:
-
Bulk Load Data
-
Run OLTP workloads
-
After six months with no updates, compress for Archive High
-
Move to low cost storage
Example 5-6 Using segment-level compression and storage tiering
/* Add a segment level compression policy after 6 months of no changes */
ALTER TABLE sales_ado
  ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 6 MONTHS OF NO MODIFICATION;

/* Add a storage tier policy */
ALTER TABLE sales_ado
  ILM ADD POLICY TIER TO my_low_cost_tablespace;

SELECT SUBSTR(policy_name,1,24) POLICY_NAME, policy_type, enabled
  FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLED
------------------------ ------------- -------
...
P6                       DATA MOVEMENT YES
P7                       DATA MOVEMENT YES
5.2.2.6 Specifying Row-Level Compression Tiering With ADO
Automatic Data Optimization (ADO) policies support Hybrid Columnar Compression (HCC) in addition to basic and advanced compression.
An HCC row level policy can be defined on any table regardless of the compression type of the table. Rows from cold blocks can be compressed with HCC when there is DML activity on other parts of the segment.
Note:
HCC row level policies are supported for ASSM (Automatic Segment Space Management) tablespaces only. With HCC policies on non-HCC tables, there may be row movement during updates if the row is in an HCC compression unit (CU). Also, as with other cases of row movement, index maintenance is necessary to update index entries that referenced the moved row.
Row-level policies are supported in Oracle Database 12c Release 1 (12.1); however, the database compatibility must be 12.2 or greater to use HCC row-level compression policies.
See Also:
Oracle Database Administrator’s Guide for information about table compression
Example 5-7 Creating an ADO policy using row-level Hybrid Columnar Compression
The SQL statement in Example 5-7 creates a policy using HCC on the rows of the table employees_ilm.
ALTER TABLE employees_ilm ILM ADD POLICY COLUMN STORE COMPRESS FOR QUERY ROW AFTER 30 DAYS OF NO MODIFICATION;
Example 5-8 Creating an ADO policy using row-level advanced compression
The SQL statement in Example 5-8 creates a policy using advanced compression on the rows of the table sales_ado.
ALTER TABLE sales_ado
  ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW
  AFTER 60 DAYS OF NO MODIFICATION;

SELECT policy_name, policy_type, enabled FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLED
------------------------ ------------- -------
...
P8                       DATA MOVEMENT YES
5.2.2.7 Managing ILM ADO Parameters
You can customize your ADO environment with ILM ADO parameters that you set with the CUSTOMIZE_ILM procedure in the DBMS_ILM_ADMIN PL/SQL package.
Various ILM ADO parameters are described in Table 5-2.
Table 5-2 ILM ADO Parameters
Name | Description
---|---
ABS_JOBLIMIT | The value for ABS_JOBLIMIT sets the absolute maximum number of concurrent ADO jobs.
DEGREE OF PARALLELISM | The value for DEGREE OF PARALLELISM specifies the degree of parallelism used to execute ADO policy jobs.
ENABLED | The value for ENABLED controls whether ADO background evaluation runs. ADO is enabled by default.
EXECUTION INTERVAL | The value for EXECUTION INTERVAL determines the frequency, in minutes, at which ADO background evaluation occurs.
EXECUTION MODE | The value for EXECUTION MODE determines whether ADO executes its actions online or offline.
JOB LIMIT | The value for JOB_LIMIT controls the maximum number of concurrent ADO jobs. The settings of JOB_LIMIT and ABS_JOBLIMIT together bound how many ADO jobs can run at one time.
POLICY TIME | The value of POLICY TIME determines whether ADO policy time is interpreted in days (the default) or in seconds; interpreting time in seconds is intended for testing policies.
RETENTION TIME | The value of RETENTION TIME specifies how long the history of completed ADO tasks is retained.
TBS_PERCENT_FREE | The value for TBS_PERCENT_FREE sets the targeted percentage free of a tablespace quota when ADO moves data.
TBS_PERCENT_USED | The value for TBS_PERCENT_USED sets the percentage used of a tablespace quota at which ADO begins to move data.

For the complete list of ILM ADO parameters, their exact semantics, and their default values, see Oracle Database PL/SQL Packages and Types Reference.
For the values of the TBS_PERCENT_* parameters, ADO makes a best effort, but not a guarantee. When the percentage of the tablespace quota used reaches the value of TBS_PERCENT_USED, ADO begins to move data so that the percent free of the tablespace quota approaches the value of TBS_PERCENT_FREE. As an example, assume that TBS_PERCENT_USED is set to 85, TBS_PERCENT_FREE is set to 25, and a tablespace becomes 90 percent full. ADO then initiates actions to move data so that the tablespace quota has at least 25 percent free, which can also be interpreted as less than 75 percent used of the tablespace quota.
You can display the parameters with the DBA_ILMPARAMETERS view. For example, the following query displays the values of the ADO-related parameters.

SELECT NAME, VALUE FROM DBA_ILMPARAMETERS;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
ENABLED                                                                   1
RETENTION TIME                                                           30
JOB LIMIT                                                                 2
EXECUTION MODE                                                            2
EXECUTION INTERVAL                                                       15
TBS PERCENT USED                                                         85
TBS PERCENT FREE                                                         25
POLICY TIME                                                               0
ABSOLUTE JOB LIMIT                                                       10
DEGREE OF PARALLELISM                                                     4
...
See Also:
- Example 5-9 for an example showing how to set ILM ADO parameters with the CUSTOMIZE_ILM procedure in the DBMS_ILM_ADMIN PL/SQL package
- Managing ILM Heat Map and ADO with Oracle Enterprise Manager for information about setting ILM ADO parameters with Oracle Enterprise Manager Cloud Control
- Oracle Database PL/SQL Packages and Types Reference for a complete list of ILM ADO parameters
- Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_ILM_ADMIN package
5.2.2.8 Using PL/SQL Functions for Policy Management
You can use the PL/SQL DBMS_ILM and DBMS_ILM_ADMIN packages for advanced policy management and customization, implementing more complex ADO scenarios and controlling when policies actively move and compress data.

With the PL/SQL DBMS_ILM and DBMS_ILM_ADMIN packages, you can manage ILM activities for ADO so that they do not negatively impact important production workloads. Database compatibility must be set to a minimum of 12.0 to use these packages.
The EXECUTE_ILM procedure of the DBMS_ILM package creates and schedules jobs to enforce policies for ADO, regardless of any previously scheduled ILM jobs. All jobs are created and scheduled to run immediately; however, whether they actually run immediately depends on the number of jobs queued with the scheduler.

You can use the EXECUTE_ILM procedure if you want more control over when ILM jobs are performed and do not want to wait until the next maintenance window.
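A minimal sketch of invoking EXECUTE_ILM for a single object (the owner and table name are illustrative; this assumes the EXECUTE_ILM overload that returns a task ID through an OUT parameter):

```sql
SET SERVEROUTPUT ON

DECLARE
  v_task_id NUMBER;
BEGIN
  -- Evaluate and execute ADO policies on SH.SALES immediately,
  -- returning a task ID that can be tracked in DBA_ILMTASKS
  DBMS_ILM.EXECUTE_ILM(owner       => 'SH',
                       object_name => 'SALES',
                       task_id     => v_task_id);
  DBMS_OUTPUT.PUT_LINE('ILM task: ' || v_task_id);
END;
/
```

The returned task ID is the key for the monitoring views described later in this chapter.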
The STOP_ILM procedure of the DBMS_ILM package stops all jobs, all running jobs, jobs based on a task ID, or a specific job.
The CUSTOMIZE_ILM procedure in the DBMS_ILM_ADMIN PL/SQL package enables you to customize settings for ADO, as shown in Example 5-9. For example, you can set the values for the TBS_PERCENT_USED and TBS_PERCENT_FREE ILM parameters, or set the ABS_JOBLIMIT ILM parameter. TBS_PERCENT_USED and TBS_PERCENT_FREE determine when data is moved based on tablespace quotas, and ABS_JOBLIMIT sets the absolute number of concurrent ADO jobs.
You can also re-create objects with their policies using the DBMS_METADATA PL/SQL package.
See Also:
- Managing ILM ADO Parameters for information about ILM ADO parameters
- Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_ILM, DBMS_ILM_ADMIN, and DBMS_METADATA packages
Example 5-9 Using CUSTOMIZE_ILM to customize ADO settings
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
END;
/

BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.ABS_JOBLIMIT, 10);
END;
/
5.2.2.9 Using Views to Monitor Policies for ADO
You can view and monitor the policies for ADO that are associated with your database objects using the DBA_ILM* and USER_ILM* views, making it easier to change policies as needed.
- The DBA/USER_ILMDATAMOVEMENTPOLICIES view displays information specific to the data movement related attributes of an ILM policy for ADO.
- The DBA/USER_ILMTASKS view displays the task IDs of the EXECUTE_ILM procedure. Every time a user invokes EXECUTE_ILM, a task ID is returned to track that particular invocation. A task ID is also generated to track the periodic internal ILM tasks run by the database. This view contains information about all ILM tasks for ADO.
- The DBA/USER_ILMEVALUATIONDETAILS view displays details about the policies considered for a particular task. If a policy was selected for execution, it also shows the name of the job that executes the policy; if a policy was not executed, the view provides a reason.
- The DBA/USER_ILMOBJECTS view displays all the objects and policies for ADO in the database. Many objects inherit policies through their parent objects or because they were created in a particular tablespace. This view provides a mapping between the policies and objects. In the case of an inherited policy, this view also indicates the level from which the policy is inherited.
- The DBA/USER_ILMPOLICIES view displays details about all the policies for ADO in the database.
- The DBA/USER_ILMRESULTS view displays information about data movement related jobs for ADO in the database.
- The DBA_ILMPARAMETERS view displays information about ADO-related parameters.
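As a sketch of correlating these views (column names as documented in Oracle Database Reference; the join simply matches each task with its policy evaluations):

```sql
-- For each recent ILM task, show which policies were evaluated
-- and whether each was selected for execution
SELECT t.task_id,
       d.policy_name,
       d.object_name,
       d.selected_for_execution
  FROM USER_ILMTASKS t
  JOIN USER_ILMEVALUATIONDETAILS d
    ON d.task_id = t.task_id
 ORDER BY t.task_id DESC;
```

For policies that were evaluated but not executed, SELECTED_FOR_EXECUTION states the reason.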
See Also:
Oracle Database Reference for information about the ILM views
5.2.3 Limitations and Restrictions With ADO and Heat Map
The limitations and restrictions associated with ADO and Heat Map are discussed in this topic.
Limitations and restrictions associated with ADO and Heat Map include:
- Partition-level ADO and compression are supported for Temporal Validity, except for row-level ADO policies that would compress rows that are past their valid time (access or modification).
- Partition-level ADO and compression are supported for In-Database Archiving if the table is partitioned on the ORA_ARCHIVE_STATE column.
- Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
- ADO row-level policies support only the OF NO MODIFICATION condition type.
- ADO does not perform checks for storage space in a target tablespace when using storage tiering.
- ADO is not supported on tables with object types or materialized views.
- ADO is not supported with index-organized tables or clusters.
- ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and must be manually enabled later.
- ADO has restrictions related to moving tables and table partitions.
See Also:
- Oracle Database SQL Language Reference for information about restrictions on moving tables
- Oracle Database SQL Language Reference for information about restrictions on moving table partitions
5.3 Controlling the Validity and Visibility of Data in Oracle Database
You can control the validity and visibility of data in Oracle Database with In-Database Archiving and Temporal Validity.
This section contains the following topics:
5.3.1 Using In-Database Archiving
In-Database Archiving enables you to mark rows within a table as active or archived.
How In-Database Archiving Helps both Database and Applications Operations
These are the advantages of In-Database Archiving.
- With In-Database Archiving, you can store large volumes of archival data in the database without compromising application performance.
- Archived data can be compressed to help improve backup performance.
- In-Database Archiving can accelerate data searches by narrowing the scope of a search to what is currently relevant, which may be a minority of the data volume in the targeted tables.
- In-Database Archiving can reduce the downtime caused by application upgrades that involve data changes. Upgrade scripts can be written to process active and archived data separately. For example, an application can be made available again immediately after upgrades and data fixes have been applied to active data, because the archived data is invisible to users. Changes to the archived data can be deferred until later.
Active Versus Archival Data
For the purposes of In-Database Archiving, a row in a table exists in one of two states:
-
Active – Rows subject to frequent querying are usually considered active. For example, in a business environment the sales data for the current and last quarter may be considered active.
-
Archival – Rows containing data that is no longer essential to ongoing activity can implicitly be considered archival. They may for example contain data from transactions completed months or years ago which is stored for records retention and/or compliance purposes only. Such data is seldom read or modified. For example, the sales data for the last five fiscal years could be considered archival data.
Often the majority of the data in a database is inactive and therefore archival. Filtering out this data automatically in queries can significantly speed up query performance. Likewise, bulk updates (such as during application upgrades) also benefit from reduced downtime when irrelevant data is excluded. To set up this data filtering, you can enable the table for row archival storage, which adds the column ORA_ARCHIVE_STATE to the table. Archived rows are not ordinarily visible to queries, so these rows are not processed and are excluded from query results.
What is ORA_ARCHIVE_STATE?
ORA_ARCHIVE_STATE is the system column responsible for designating a given row as active or archived. It is a hidden column that does not have a user column number (usrcol#) in the dictionary. It is not visible when you DESCRIBE a table or issue SELECT * queries on the table. However, the column can be explicitly specified in the select list of a SELECT query, in the modification list of an UPDATE statement, and in predicates for both queries and DML.
Note:
You and the ILM practices of your organization must determine which rows in a row-archival table should be archived. In-Database Archiving does not perform such an analysis. By default, it sets each row in a row-archival table to active.

Enabling a New or Existing Table for Row Archiving
Include the ROW ARCHIVAL clause in a CREATE TABLE statement, or use ALTER TABLE to set ROW ARCHIVAL on an existing table. For example:
CREATE TABLE scott.emp
  (EMPNO    NUMBER(7) NOT NULL,
   FULLNAME VARCHAR2(100),
   JOB      VARCHAR2(9),
   MGR      NUMBER(7))
  ROW ARCHIVAL;
ALTER TABLE scott.emp ROW ARCHIVAL;
You can also remove the ROW ARCHIVAL property from a table, which removes the ORA_ARCHIVE_STATE column from the table:
ALTER TABLE scott.emp NO ROW ARCHIVAL;
Note:
You cannot remove the row-archive attribute of a table during online redefinition.
These are some of the characteristics of the ORA_ARCHIVE_STATE
column:
- It is a hidden column, not implicitly reported by SELECT * queries or the DESCRIBE command.
- You can explicitly update the column.
Querying Row-Archive Enabled Tables
Note how INSERT, UPDATE, and CREATE TABLE AS SELECT work when used against row-archive enabled tables.

INSERT

The default value of ORA_ARCHIVE_STATE in each row is "0", which means that the row is considered active data until you change the column value. Therefore, an INSERT into a row-archival table always sets new rows to active.
UPDATE

Updates can directly set the value of ORA_ARCHIVE_STATE in any row. The convention is to use "0" for active and "1" for archived, but you may update ORA_ARCHIVE_STATE to any valid VARCHAR2(4000) value.
CREATE TABLE AS SELECT

CREATE TABLE AS SELECT behavior depends on whether each table in the selection is row-archival enabled and on the structure of the columns in the SELECT list. (See Table 5-3.)

Note that ORA_ARCHIVE_STATE is not an Oracle reserved word, and is therefore valid as the name of a user column if row archival is not already enabled on the respective table.
INSERT INTO SELECT

An INSERT INTO SELECT where the source and target tables are row-archival enabled does not populate the target table's ORA_ARCHIVE_STATE column with the value of the corresponding column from the source table unless the column is explicitly mapped in the source and target column lists. Instead, the default active row-archival state is set.
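A sketch of the difference, using illustrative table and column names for two row-archival enabled tables with the same visible columns:

```sql
-- Default: every inserted row comes out active ('0'),
-- regardless of its state in the source table
INSERT INTO emp_target SELECT * FROM emp_source;

-- Explicit mapping: carry each row's archival state across
INSERT INTO emp_target (empno, fullname, ORA_ARCHIVE_STATE)
  SELECT empno, fullname, ORA_ARCHIVE_STATE FROM emp_source;
```

Remember that session visibility also applies here: with the default ACTIVE visibility, only active source rows are selected at all.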
ALTER TABLE MOVE

If you move a table with ALTER TABLE MOVE, the row-archival state column of the table is preserved.
How CREATE TABLE AS SELECT Behavior Differs in Data Transfers Between Two Tables, Depending on Whether or Not Both Tables Are Row Archive Enabled

Table 5-3 shows the behavior of CREATE TABLE AS SELECT under different scenarios.
Table 5-3 ORA_ARCHIVE_STATE Attributes
Source Table Row-Archive Enabled? | Target Table Row-Archive Enabled? | Select List Structure | ORA_ARCHIVE_STATE Is a User Column on Target Table | Behavior
---|---|---|---|---
No | No | Either SELECT *, or ORA_ARCHIVE_STATE is explicitly in the SELECT list | Yes | Copies all rows to the target table with the requested columns, including ORA_ARCHIVE_STATE as a user column.
Yes | No | SELECT * | No | Copies the selected rows (depending on row visibility) with all columns except the ORA_ARCHIVE_STATE column.
Yes | No | ORA_ARCHIVE_STATE is explicitly included in the SELECT list | Yes | Copies the selected rows (depending on row visibility) with the requested columns, including ORA_ARCHIVE_STATE as a user column.
Yes | Yes | SELECT * | No | Copies all rows to the target table with the requested columns. Assigns '0' to the ORA_ARCHIVE_STATE column in all new rows in the target table.
Yes | Yes | The ORA_ARCHIVE_STATE column is explicitly included in the SELECT list | No | Copies the selected rows with the requested columns (depending on row visibility). Propagates the existing row archival state of each row from the source table to the target table.
Yes | Yes | ORA_ARCHIVE_STATE is not explicitly included in the SELECT list | No | Copies the selected rows with the selected columns (depending on row visibility). Assigns '0' to the ORA_ARCHIVE_STATE column in all new rows in the target table.
No | Yes | Either SELECT *, or ORA_ARCHIVE_STATE is explicitly included in the SELECT list | Yes | Important: Returns an error, because a row-archival enabled target table cannot also have ORA_ARCHIVE_STATE as a user column.
Setting ROW ARCHIVAL VISIBILITY for Session Level Visibility Control

By default, only active data is visible to users and applications in row-archival enabled tables. However, you can modify this per session and make archived rows visible (or invisible again) within the current session:

ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;

ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
See Also:

- In Oracle Database PL/SQL Packages and Types Reference, the DBMS_ILM package contains constants for the active and archived states of the ORA_ARCHIVE_STATE column, represented by '0' and '1' respectively:

  archive_state_active   constant varchar2(1) := '0';
  archive_state_archived constant varchar2(1) := '1';

  DBMS_ILM also provides the function ARCHIVESTATENAME for returning the archive state name for a given ORA_ARCHIVE_STATE value:

  function archiveStateName(value in varchar2) return varchar2;

  The function returns archive_state_active for "0" and archive_state_archived for all other values.

- Oracle Database SQL Language Reference describes the use of the ROW ARCHIVAL clause in ALTER TABLE and CREATE TABLE statements:

  ALTER TABLE [schema.]table [alter_table_properties | ...] [row_archival_clause]

  CREATE TABLE [schema.]table [(relational_properties)] [table_properties]

  table_properties ::= [column_properties] [table_partitioning_clauses] ... [row_archival_clause]

  row_archival_clause ::= [NO] ROW ARCHIVAL
Example 5-10 Using In-Database Archiving
/* Set visibility to ACTIVE to display only active rows of a table. */
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;

CREATE TABLE employees_indbarch (
   employee_id    NUMBER(6) NOT NULL,
   first_name     VARCHAR2(20),
   last_name      VARCHAR2(25) NOT NULL,
   email          VARCHAR2(25) NOT NULL,
   phone_number   VARCHAR2(20),
   hire_date      DATE NOT NULL,
   job_id         VARCHAR2(10) NOT NULL,
   salary         NUMBER(8,2),
   commission_pct NUMBER(2,2),
   manager_id     NUMBER(6),
   department_id  NUMBER(4))
   ROW ARCHIVAL;

/* Show all the columns in the table, including hidden columns */
SELECT SUBSTR(COLUMN_NAME,1,22) NAME, SUBSTR(DATA_TYPE,1,20) DATA_TYPE,
       COLUMN_ID AS COL_ID, SEGMENT_COLUMN_ID AS SEG_COL_ID,
       INTERNAL_COLUMN_ID AS INT_COL_ID, HIDDEN_COLUMN, CHAR_LENGTH
  FROM USER_TAB_COLS WHERE TABLE_NAME='EMPLOYEES_INDBARCH';

NAME                   DATA_TYPE                COL_ID SEG_COL_ID INT_COL_ID HID CHAR_LENGTH
---------------------- -------------------- ---------- ---------- ---------- --- -----------
ORA_ARCHIVE_STATE      VARCHAR2                                 1          1 YES        4000
EMPLOYEE_ID            NUMBER                        1          2          2 NO            0
FIRST_NAME             VARCHAR2                      2          3          3 NO           20
LAST_NAME              VARCHAR2                      3          4          4 NO           25
EMAIL                  VARCHAR2                      4          5          5 NO           25
PHONE_NUMBER           VARCHAR2                      5          6          6 NO           20
HIRE_DATE              DATE                          6          7          7 NO            0
JOB_ID                 VARCHAR2                      7          8          8 NO           10
SALARY                 NUMBER                        8          9          9 NO            0
COMMISSION_PCT         NUMBER                        9         10         10 NO            0
MANAGER_ID             NUMBER                       10         11         11 NO            0
DEPARTMENT_ID          NUMBER                       11         12         12 NO            0

/* Insert some data into the table */
INSERT INTO employees_indbarch(employee_id, first_name, last_name, email,
  hire_date, job_id, salary, manager_id, department_id)
  VALUES (251, 'Scott', 'Tiger', 'scott.tiger@example.com', '21-MAY-2009',
  'IT_PROG', 50000, 103, 60);

INSERT INTO employees_indbarch(employee_id, first_name, last_name, email,
  hire_date, job_id, salary, manager_id, department_id)
  VALUES (252, 'Jane', 'Lion', 'jane.lion@example.com', '11-JUN-2009',
  'IT_PROG', 50000, 103, 60);

/* Decrease the ORA_ARCHIVE_STATE column size to improve formatting in queries */
COLUMN ORA_ARCHIVE_STATE FORMAT a18;

/* The default value for ORA_ARCHIVE_STATE is '0', which means active */
SELECT employee_id, ORA_ARCHIVE_STATE FROM employees_indbarch;

EMPLOYEE_ID ORA_ARCHIVE_STATE
----------- ------------------
        251 0
        252 0

/* Update ORA_ARCHIVE_STATE to set the record to inactive status */
UPDATE employees_indbarch SET ORA_ARCHIVE_STATE = '1' WHERE employee_id = 252;

/* Only active records appear in the following query */
SELECT employee_id, ORA_ARCHIVE_STATE FROM employees_indbarch;

EMPLOYEE_ID ORA_ARCHIVE_STATE
----------- ------------------
        251 0

/* Set visibility to ALL to display all records */
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

SELECT employee_id, ORA_ARCHIVE_STATE FROM employees_indbarch;

EMPLOYEE_ID ORA_ARCHIVE_STATE
----------- ------------------
        251 0
        252 1
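An archived row can also be returned to active status, and row archival can be removed from a table entirely. The following is a sketch against the example table above; the statements assume the session and data state at the end of Example 5-10:

```sql
-- Restore the archived row to active status; visibility must be ALL
-- for the session to see and update the archived row
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
UPDATE employees_indbarch SET ORA_ARCHIVE_STATE = '0' WHERE employee_id = 252;

-- Remove row archival from the table altogether; this drops the
-- hidden ORA_ARCHIVE_STATE column and makes all rows visible
ALTER TABLE employees_indbarch NO ROW ARCHIVAL;
```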
5.3.2 Using Temporal Validity
Temporal Validity enables you to track time periods for real world validity. Valid times can be set by users and applications for data, and data can be selected by a specified valid time, or a valid time range.
Applications often note the validity (or effectivity) of a fact recorded in a database with dates or timestamps that are relevant to the management of a business. For example, the hire date of an employee in a human resources (HR) application, or the effective date of coverage in the insurance industry, is a valid time. Such a date contrasts with the date or time at which the record was entered in the database. The former temporal attribute (the hire date) is called the valid time (VT), while the latter (the date entered into the database) is called the transaction time (TT). The valid time is usually controlled by the user, while the transaction time is system-managed.
For ILM, the valid time attributes can signify when a fact is valid in the business world and when it is not. Using valid time attributes, a query can show only rows that are currently valid, while omitting rows that contain facts that are not currently valid, such as a closed order or a future hire.
Concepts that are integral to valid time temporal modeling include:
-
Valid time
This is a user-defined representation of time. Examples of a valid time include project start and finish dates, and employee hire and termination dates.
-
Tables with valid-time semantics
These tables have one or more dimensions of user-defined time, each of which has a start and an end.
-
Valid-time flashback queries
This is the ability to do as-of and versions queries using a valid-time dimension.
A valid-time period consists of two date-time columns specified in the table definition. You can add a valid-time period by explicitly adding columns, or the columns can be created automatically. A valid-time period can be added during the create table or alter table process.
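As an illustration of adding a valid-time period to an existing table, the following sketch uses hypothetical table, period, and column names; the second form assumes that two date-time columns already exist in the table:

```sql
-- Add a valid-time period with automatically created columns; Oracle
-- creates the hidden columns user_valid_time_start and user_valid_time_end
ALTER TABLE employees ADD PERIOD FOR user_valid_time;

-- Alternatively, base the period on existing date-time columns
-- (valid_start and valid_end are assumed to already exist)
ALTER TABLE employees ADD PERIOD FOR emp_valid_time (valid_start, valid_end);
```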
To support session-level visibility control for temporal table queries, the DBMS_FLASHBACK_ARCHIVE PL/SQL package provides the ENABLE_AT_VALID_TIME procedure. To execute the procedure, you need the required system and object privileges.
The following PL/SQL procedure sets the valid time visibility as of the given time.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time ('ASOF', '31-DEC-12 12.00.01 PM');
The following PL/SQL procedure sets the visibility of temporal data to currently valid data within the valid time period at the session level.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');
The following procedure sets the visibility of temporal data to the full table, which is the default temporal table visibility.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');
See Also:
- Oracle Database Development Guide for information about Oracle Temporal
- Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_FLASHBACK_ARCHIVE package
- Oracle Database SQL Language Reference for information about using CREATE TABLE or ALTER TABLE to initiate valid-time temporal modeling
- Oracle Database Reference for information about views used to monitor table information
5.3.3 Creating a Table With Temporal Validity
The example in this topic shows how to create a table with temporal validity.
Example 5-11 shows the use of temporal validity.
Live SQL:
View and run a related example on Oracle Live SQL at Oracle Live SQL: Creating a Table with Temporal Validity.
Example 5-11 Creating a table with temporal validity
/* Create a table with an employee tracking time period, */
/* using the specified columns */
CREATE TABLE employees_temp (
   employee_id    NUMBER(6) NOT NULL,
   first_name     VARCHAR2(20),
   last_name      VARCHAR2(25) NOT NULL,
   email          VARCHAR2(25) NOT NULL,
   phone_number   VARCHAR2(20),
   hire_date      DATE NOT NULL,
   job_id         VARCHAR2(10) NOT NULL,
   salary         NUMBER(8,2),
   commission_pct NUMBER(2,2),
   manager_id     NUMBER(6),
   department_id  NUMBER(4),
   PERIOD FOR emp_track_time);

DESCRIBE employees_temp
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 EMPLOYEE_ID                               NOT NULL NUMBER(6)
 FIRST_NAME                                         VARCHAR2(20)
 LAST_NAME                                 NOT NULL VARCHAR2(25)
 EMAIL                                     NOT NULL VARCHAR2(25)
 PHONE_NUMBER                                       VARCHAR2(20)
 HIRE_DATE                                 NOT NULL DATE
 JOB_ID                                    NOT NULL VARCHAR2(10)
 SALARY                                             NUMBER(8,2)
 COMMISSION_PCT                                     NUMBER(2,2)
 MANAGER_ID                                         NUMBER(6)
 DEPARTMENT_ID                                      NUMBER(4)

SQL> SELECT SUBSTR(COLUMN_NAME,1,22) NAME, SUBSTR(DATA_TYPE,1,28) DATA_TYPE,
       COLUMN_ID AS COL_ID, SEGMENT_COLUMN_ID AS SEG_COL_ID,
       INTERNAL_COLUMN_ID AS INT_COL_ID, HIDDEN_COLUMN
  FROM USER_TAB_COLS WHERE TABLE_NAME='EMPLOYEES_TEMP';

NAME                   DATA_TYPE                    COL_ID SEG_COL_ID INT_COL_ID HID
---------------------- ---------------------------- ------ ---------- ---------- ---
EMP_TRACK_TIME_START   TIMESTAMP(6) WITH TIME ZONE                  1          1 YES
EMP_TRACK_TIME_END     TIMESTAMP(6) WITH TIME ZONE                  2          2 YES
EMP_TRACK_TIME         NUMBER                                                  3 YES
EMPLOYEE_ID            NUMBER                            1          3          4 NO
FIRST_NAME             VARCHAR2                          2          4          5 NO
LAST_NAME              VARCHAR2                          3          5          6 NO
EMAIL                  VARCHAR2                          4          6          7 NO
PHONE_NUMBER           VARCHAR2                          5          7          8 NO
HIRE_DATE              DATE                              6          8          9 NO
JOB_ID                 VARCHAR2                          7          9         10 NO
SALARY                 NUMBER                            8         10         11 NO
COMMISSION_PCT         NUMBER                            9         11         12 NO
MANAGER_ID             NUMBER                           10         12         13 NO
DEPARTMENT_ID          NUMBER                           11         13         14 NO

/* Insert/update/delete with specified values for the time columns */
INSERT INTO employees_temp(emp_track_time_start, emp_track_time_end, employee_id,
  first_name, last_name, email, hire_date, job_id, salary, manager_id, department_id)
  VALUES (TIMESTAMP '2009-06-01 12:00:01 Europe/Paris',
          TIMESTAMP '2012-11-30 12:00:01 Europe/Paris',
          251, 'Scott', 'Tiger', 'scott.tiger@example.com', DATE '2009-05-21',
          'IT_PROG', 50000, 103, 60);

INSERT INTO employees_temp(emp_track_time_start, emp_track_time_end, employee_id,
  first_name, last_name, email, hire_date, job_id, salary, manager_id, department_id)
  VALUES (TIMESTAMP '2009-06-01 12:00:01 Europe/Paris',
          TIMESTAMP '2012-12-31 12:00:01 Europe/Paris',
          252, 'Jane', 'Lion', 'jane.lion@example.com', DATE '2009-06-11',
          'IT_PROG', 50000, 103, 60);

UPDATE employees_temp SET salary = salary + salary * .05
  WHERE emp_track_time_start <= TIMESTAMP '2009-06-01 12:00:01 Europe/Paris';

SELECT employee_id, salary FROM employees_temp;

EMPLOYEE_ID     SALARY
----------- ----------
        251      52500
        252      52500

/* No rows are deleted by the following statement because no records */
/* fall in the specified track time. */
DELETE employees_temp
  WHERE emp_track_time_end < TIMESTAMP '2001-12-31 12:00:01 Europe/Paris';

0 rows deleted.

/* Show rows that are in a specified time period */
SELECT employee_id FROM employees_temp
  WHERE emp_track_time_start > TIMESTAMP '2009-05-31 12:00:01 Europe/Paris'
    AND emp_track_time_end < TIMESTAMP '2012-12-01 12:00:01 Europe/Paris';

EMPLOYEE_ID
-----------
        251

/* Show rows that are valid as of a specified time */
SELECT employee_id FROM employees_temp AS OF PERIOD FOR emp_track_time
  TIMESTAMP '2012-12-01 12:00:01 Europe/Paris';

EMPLOYEE_ID
-----------
        252
5.3.4 Limitations and Restrictions With In-Database Archiving and Temporal Validity
This topic lists the limitations and restrictions associated with In-Database Archiving and Temporal Validity.
The limitations and restrictions include:
- ILM is not supported with OLTP table compression for Temporal Validity. Segment-level ILM and compression are supported if the table is partitioned on the end-time columns.
- ILM is not supported with OLTP table compression for In-Database Archiving. Segment-level ILM and compression are supported if the table is partitioned on the ORA_ARCHIVE_STATE column.
5.4 Implementing an ILM System Manually Using Partitioning
You can manually implement an Information Lifecycle Management (ILM) system using partitioning.
Example 5-12 illustrates how to manually create storage tiers, partition a table across those tiers, and then set up a virtual private database (VPD) policy to restrict access to data in the online archive tier.
See Also:
- Oracle Database SQL Language Reference for information about the CREATE TABLE SQL statement
- Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_RLS package
Example 5-12 Manually implementing an ILM system
REM Set up the tablespaces for the data
REM These tablespaces would be placed on a High Performance Tier
CREATE SMALLFILE TABLESPACE q1_orders DATAFILE 'q1_orders'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE q2_orders DATAFILE 'q2_orders'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE q3_orders DATAFILE 'q3_orders'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE q4_orders DATAFILE 'q4_orders'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM These tablespaces would be placed on a Low Cost Tier
CREATE SMALLFILE TABLESPACE "2006_ORDERS" DATAFILE '2006_orders'
  SIZE 5M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE "2005_ORDERS" DATAFILE '2005_orders'
  SIZE 5M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM These tablespaces would be placed on the Online Archive Tier
CREATE SMALLFILE TABLESPACE "2004_ORDERS" DATAFILE '2004_orders'
  SIZE 5M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE old_orders DATAFILE 'old_orders'
  SIZE 15M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM Now create the partitioned table
CREATE TABLE allorders (
  prod_id       NUMBER NOT NULL,
  cust_id       NUMBER NOT NULL,
  time_id       DATE NOT NULL,
  channel_id    NUMBER NOT NULL,
  promo_id      NUMBER NOT NULL,
  quantity_sold NUMBER(10,2) NOT NULL,
  amount_sold   NUMBER(10,2) NOT NULL)
--
-- table-wide physical specs
--
PCTFREE 5 NOLOGGING
--
-- partitions
--
PARTITION BY RANGE (time_id)
 (partition allorders_pre_2004 VALUES LESS THAN
   (TO_DATE('2004-01-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE old_orders,
  partition allorders_2004 VALUES LESS THAN
   (TO_DATE('2005-01-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE "2004_ORDERS",
  partition allorders_2005 VALUES LESS THAN
   (TO_DATE('2006-01-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE "2005_ORDERS",
  partition allorders_2006 VALUES LESS THAN
   (TO_DATE('2007-01-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE "2006_ORDERS",
  partition allorders_q1_2007 VALUES LESS THAN
   (TO_DATE('2007-04-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE q1_orders,
  partition allorders_q2_2007 VALUES LESS THAN
   (TO_DATE('2007-07-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE q2_orders,
  partition allorders_q3_2007 VALUES LESS THAN
   (TO_DATE('2007-10-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE q3_orders,
  partition allorders_q4_2007 VALUES LESS THAN
   (TO_DATE('2008-01-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE q4_orders);

ALTER TABLE allorders ENABLE ROW MOVEMENT;

REM Here is another example using INTERVAL partitioning
REM These tablespaces would be placed on a High Performance Tier
CREATE SMALLFILE TABLESPACE cc_this_month DATAFILE 'cc_this_month'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

CREATE SMALLFILE TABLESPACE cc_prev_month DATAFILE 'cc_prev_month'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM This tablespace would be placed on a Low Cost Tier
CREATE SMALLFILE TABLESPACE cc_prev_12mth DATAFILE 'cc_prev_12'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM This tablespace would be placed on the Online Archive Tier
CREATE SMALLFILE TABLESPACE cc_old_tran DATAFILE 'cc_old_tran'
  SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

REM Credit card transactions where new partitions
REM are automatically placed on the high performance tier
CREATE TABLE cc_tran (
  cc_no       VARCHAR2(16) NOT NULL,
  tran_dt     DATE NOT NULL,
  entry_dt    DATE NOT NULL,
  ref_no      NUMBER NOT NULL,
  description VARCHAR2(30) NOT NULL,
  tran_amt    NUMBER(10,2) NOT NULL)
--
-- table-wide physical specs
--
PCTFREE 5 NOLOGGING
--
-- partitions
--
PARTITION BY RANGE (tran_dt)
INTERVAL (NUMTOYMINTERVAL(1,'month')) STORE IN (cc_this_month)
 (partition very_old_cc_trans VALUES LESS THAN
   (TO_DATE('1999-07-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE cc_old_tran,
  partition old_cc_trans VALUES LESS THAN
   (TO_DATE('2006-07-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE cc_old_tran,
  partition last_12_mths VALUES LESS THAN
   (TO_DATE('2007-06-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE cc_prev_12mth,
  partition recent_cc_trans VALUES LESS THAN
   (TO_DATE('2007-07-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE cc_prev_month,
  partition new_cc_tran VALUES LESS THAN
   (TO_DATE('2007-08-01 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
   TABLESPACE cc_this_month);

REM Create a security policy function so that user SH sees all credit card data,
REM PM sees only this year's data,
REM and all other users cannot see the credit card data
CREATE OR REPLACE FUNCTION ilm_seehist
  (oowner IN VARCHAR2, ojname IN VARCHAR2)
  RETURN VARCHAR2 AS
  con VARCHAR2(200);
BEGIN
  IF SYS_CONTEXT('USERENV','CLIENT_INFO') = 'SH' THEN
    -- sees all data
    con := '1=1';
  ELSIF SYS_CONTEXT('USERENV','CLIENT_INFO') = 'PM' THEN
    -- sees only data for 2007
    con := 'time_id > ''31-Dec-2006''';
  ELSE
    -- others see nothing
    con := '1=2';
  END IF;
  RETURN (con);
END ilm_seehist;
/
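The ilm_seehist function above only generates the predicate; to enforce it, the function must be registered against the table as a VPD policy with the DBMS_RLS package. A sketch of that registration follows; the policy name and statement types are illustrative choices, not part of the original example:

```sql
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'SH',                 -- schema owning the table
    object_name     => 'CC_TRAN',            -- table to protect
    policy_name     => 'ilm_archive_policy', -- hypothetical policy name
    function_schema => 'SH',                 -- schema owning the function
    policy_function => 'ILM_SEEHIST',        -- predicate function above
    statement_types => 'SELECT');            -- apply to queries only
END;
/
```

With the policy in place, every query against SH.CC_TRAN is rewritten with the predicate returned by ilm_seehist for the current session.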
5.5 Managing ILM Heat Map and ADO with Oracle Enterprise Manager
You can manage Heat Map and Automatic Data Optimization with Oracle Enterprise Manager Cloud Control.
This section contains the following topics:
- Viewing Automatic Data Optimization Activity at the Tablespace Level
- Changing Execution Frequency of Background Automatic Data Optimization
See Also:
-
Displaying Heat Map Tracking Data With Views and Using Views to Monitor Policies for ADO for information about views available for displaying Heat Map and ADO policy details
-
Oracle Enterprise Manager Cloud Control Administrator's Guide for information about managing Oracle Enterprise Manager Cloud Control
5.5.1 Accessing the Database Administration Menu
To access the Administration menu for a database:
-
Log in to Oracle Enterprise Manager Cloud Control.
-
Select Databases under the Targets menu.
-
Click the database name in the list.
-
The Administration menu appears on the database home page.
5.5.2 Viewing Automatic Data Optimization Activity at the Tablespace Level
To monitor the activity of the top 100 tablespaces selected by size in the database, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, you can view the Top 100 Tablespace Activity Heat Map based on the last access time, last write time, last full scan time, and last look up scan time.
Each box in the heat map represents a tablespace, and by default the size of the box is determined by the tablespace size. You can then use Information Lifecycle Management to drill down from tablespace-level to object-level to segment-level heat maps.
5.5.3 Viewing the Segment Activity Details of Any Tablespace
To view segment activity details of any tablespace, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Show Additional button.
Enterprise Manager displays a dialog box that you can use to search for the segment activity details of any tablespace.
-
On the dialog box, enter the Tablespace Name and click the Search button.
Enterprise Manager displays the Segment Activity details for the tablespace.
-
Click the Edit Tablespace Policy button to display the Edit Tablespace page with the ADO tab selected. You can create a policy for the tablespace that allows it to be compressed or moved.
5.5.4 Viewing the Segment Activity Details of Any Object
To view segment activity details of any object, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
Move the cursor over any of the boxes within the Database level heat map where each box represents a tablespace. Click the Tablespace to which the object you want to view belongs.
Enterprise Manager displays the Tablespace level heat map for the 100 largest objects belonging to the tablespace. The Segment Activity table displays the Segment Activity details for the 100 largest objects.
-
On the Information Lifecycle Management page, click the Show Additional button.
Enterprise Manager displays a dialog box that you can use to search for the segment activity details of any object belonging to the tablespace.
-
On the dialog box, enter the Schema Name and Object Name and click Search.
Enterprise Manager displays the Segment Activity details for the object.
-
Click the Edit Object Policy button to display the Edit Object page with the ADO tab selected. You can create a policy for the object that allows it to be compressed or moved.
5.5.5 Viewing the Segment Activity History of Any Object
To view the segment activity history of any object, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
Move the cursor over any of the boxes within the Database level heat map where each box represents a tablespace. Click the Tablespace to which the object you want to view belongs.
Enterprise Manager displays the Tablespace level heat map for the 100 largest objects belonging to the tablespace. The Segment Activity table displays the Segment Activity details for the 100 largest objects.
-
Select the object in the Segment Activity details table and click the Activity History button.
Enterprise Manager displays the Edit Object page with the ADO tab selected. The ADO tab displays a list of policies and the Segment Access history.
-
You can select a segment, change the date range to the last 60 days, select the Daily option, and click the Refresh button to display the Segment Access History for the object for the last 60 days.
5.5.6 Searching Segment Activity in Automatic Data Optimization
To search for segment activities during different time durations in Automatic Data Optimization, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click any of the boxes in a heat map. From the initial display which is the database level, click a box in the heat map to display the 100 largest objects in that tablespace. You can then click a heat map box to see the heat map of the 100 largest segments in that object.
-
Enter a timestamp that is one year ago for the Last Access Time and then click Go. A list of segments that have not been accessed or modified in the last year is displayed. The segments are sorted in descending order by segment size.
On the object level heat map, you can search for a specific segment based on Tablespace, Name, Partition, Type, Last Access Time, Last Write Time, Last Full Scan Time, and Last Look Up Scan Time.
You can select a row (segment) and view the row activity for that segment, or click the count in the Policies column to view the policies associated with the segment.
5.5.7 Viewing Policies for a Segment
To view the policies associated with a segment, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click Go. If the segments in the search results have policies associated with them, a non-zero count is displayed in the Policies column. Move your mouse over the count to view the policies associated with the segment, including inherited policies. If the count is zero, then no policies are associated with the segment.
From the Database level heat map, drill down to the Tablespace level heat map, where Enterprise Manager displays the top 100 objects belonging to the tablespace. For each object, Enterprise Manager displays the count of policies in the Policies column.
From the Tablespace level heat map, select an object and drill down to the object level heat map. Enterprise Manager displays the 100 largest segments belonging to the object. For each segment, Enterprise Manager displays a count of policies in the Policies column.
5.5.8 Disabling Background Activity
To disable the Automatic Data Optimization background evaluation and scheduler, follow these steps:
For more information about the ILM ADO ENABLED parameter, refer to Managing ILM ADO Parameters.
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Click Configure in the Policy Execution Settings section.
The Configure Policy Execution Settings dialog box is displayed.
-
Change the Status drop-down to Disabled and click OK.
Enterprise Manager displays the Information Lifecycle Management page where the Status now shows Disabled on the Policy tab in the Policy Execution Settings section.
5.5.9 Changing Execution Frequency of Background Automatic Data Optimization
To change the execution frequency of the Information Lifecycle Management background evaluation and scheduler, follow these steps:
For more information about the ILM ADO EXECUTION INTERVAL parameter, refer to Managing ILM ADO Parameters.
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Click Configure in the Policy Execution Settings section.
The Configure Policy Execution Settings dialog box is displayed.
-
Change the value of the Execution Interval (mins) to a lower or higher number than what is currently displayed and then click OK.
Enterprise Manager displays the Information Lifecycle Management page where the Execution Interval now shows the new value on the Policy tab under Policy Execution Settings.
5.5.10 Viewing Policy Executions In the Last 24 Hours
To view policies that were executed in the last 24 hours and to view what objects were moved or compressed with the execution of the policies, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Click the Policies Completed or the Policies Failed link on the Policy execution summary for last 24 hours row. Clicking either displays the execution history for the past 24 hours.
The Policy Execution Details dialog box displays, showing the execution details for the policies in the last 24 hours.
5.5.11 Viewing Objects Moved In Last 24 Hours
To view which objects were moved in the last 24 hours and which policies and jobs moved those objects, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
On the Policies execution summary for last 24 hours row, click the Objects Moved link.
The Policy Execution History dialog box displays, showing the execution history for the jobs and policies executed and the objects moved in the last 24 hours.
5.5.12 Viewing Policy Details
To view the details of a specific ADO policy, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
To view policy details, click the policy name link in the policy table, or select any row in the policies table and then click the View Policy Details button.
5.5.13 Viewing Objects Associated With a Policy
To view the objects associated with a specific policy, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Click the count in the Objects column.
The objects associated with the policy are displayed.
5.5.14 Evaluating a Policy Before Execution
To evaluate a policy before executing the policy, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
In the Evaluations region, click Evaluate.
A dialog box displays giving you the choice of evaluating all policies in the database, or all policies affecting objects in a particular schema.
-
Enter a Schema Name and click OK to initiate the evaluation.
The evaluation dialog box closes and the evaluation is submitted. You can refresh the page or perform other Enterprise Manager tasks and then revisit the Policy Tab later. When you do, the Completed count in the Evaluations region is increased by 1.
-
When you click the Completed count link in the Evaluations region, a dialog box is displayed that lists all completed evaluations.
-
When you click the Task ID of the most recent evaluation, the Evaluation Details dialog box is displayed listing all objects currently included in the evaluation task that will either be compressed or moved if this evaluation is executed.
-
Click OK to include the list of objects in the execution. The Evaluation Details dialog box closes.
-
Select the row in the Evaluations table containing the most recent evaluation (top most row), then click Execute.
The Execute Evaluation dialog box is displayed, again listing the objects that will be affected during execution.
-
Click OK to begin the execution.
The Execute Evaluation dialog box closes. Eventually, execution results can be seen by clicking the Completed or Failed links in the Jobs or Policies regions under Policy Execution Summary for Last 24 Hours. Also, eventually the Evaluations Completed count is decreased by 1, as the task changes state from INACTIVE to ACTIVE to COMPLETED.
5.5.15 Executing a Single Policy
To execute a policy immediately on the objects associated with the Policy, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Select the policy and click Execute Policy.
The Execute Policy dialog box is displayed, listing all objects that are evaluated by the selected policy. The dialog box also includes a Hide/Show button to display the EXECUTE_ILM commands to be executed. Only objects with this policy enabled are included.
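The same kind of immediate execution can also be initiated from SQL with the DBMS_ILM.EXECUTE_ILM procedure. The following is a sketch, assuming a table SH.SALES that has enabled ADO policies; the schema and table names are illustrative:

```sql
SET SERVEROUTPUT ON
DECLARE
  v_task_id NUMBER;
BEGIN
  -- Evaluate and execute the enabled ADO policies on SH.SALES;
  -- the returned task ID can be used to track the execution
  DBMS_ILM.EXECUTE_ILM(owner       => 'SH',
                       object_name => 'SALES',
                       task_id     => v_task_id);
  DBMS_OUTPUT.PUT_LINE('ILM task submitted: ' || v_task_id);
END;
/
```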
5.5.16 Stopping a Policy Execution
To stop a policy execution, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
In the Jobs region under Policy Execution Summary for Last 24 Hours, click the In Progress link. This step assumes there is at least one task or job in progress.
A dialog box displays listing all currently executing tasks.
-
Click the Jobs link for one of the tasks listed in the table.
A dialog box displays listing details about the job(s) running as part of the task.
-
Click OK.
The Jobs Details dialog box closes.
-
Select a row in the table and click Stop Execution.
A dialog box displays confirmation of the stop execution process.
-
Click OK.
The confirmation dialog box is dismissed.
5.5.17 Viewing Policy Execution History
To view the execution history for a specific policy, follow these steps:
-
From the Administration menu, choose Storage, then select Information Lifecycle Management.
Enterprise Manager displays the Information Lifecycle Management page.
-
On the Information Lifecycle Management page, click the Policy tab.
-
Select the policy and click Execution History.
The Policy Execution History dialog box displays, showing the execution history for the selected policy. The details include the job information and the objects that were moved or compressed.
5.6 Using Automatic Storage Compression
Use Automatic Storage Compression to speed up initial direct loads and DML operations, as well as scans on the final data set.
Sorting Behavior in Automatic Storage Compression
- If the clustering directive comes from the CLUSTERING clause in CREATE TABLE or ALTER TABLE:
  - Each direct load performs a sort at the statement level.
  - Automatic Storage Compression preserves the sort order when it moves the 1 GB chunks.
- If clustering has been enabled using the DBMS_AUTO_CLUSTERING PL/SQL package:
  - The direct load does not sort.
  - Automatic Storage Compression sorts the rows when moving each 1 GB chunk.
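As an illustration of the first case, attribute clustering can be declared directly in the table DDL. The following sketch uses hypothetical table and column names:

```sql
-- Direct loads into this table sort rows by cust_id at the statement
-- level, and Automatic Storage Compression preserves that order when
-- it later moves and compresses the data
CREATE TABLE sales_ac (
  cust_id NUMBER,
  sale_dt DATE,
  amount  NUMBER(10,2))
CLUSTERING BY LINEAR ORDER (cust_id);
```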
Prerequisites for Automatic Storage Compression
- The table must be in a tablespace with these properties:
  - SEGMENT SPACE MANAGEMENT AUTO
  - AUTOALLOCATE
- In the PDB, set HEAT_MAP=ON.
- The table must be an HCC (Hybrid Columnar Compression) table.
To Enable Automatic Storage Compression
- Execute DBMS_ILM_ADMIN.ENABLE_AUTO_OPTIMIZE at the PDB level.
- Direct load the uncompressed data into an HCC table.
- Check that all rows are uncompressed and note the segment size.
- After the time specified in minutes by AUTO_OPTIMIZE_INACTIVITY_THRESHOLD passes with no DML activity, Automatic Storage Compression starts running as a background AutoTask.
- Rows are compressed incrementally. You can monitor the incremental progress by checking the "Auto compression data moved" system statistic, which increases over time.
- Eventually, "Auto compression data moved" matches the initial uncompressed segment size, which indicates that Automatic Storage Compression has completed. Check the segment size against the original to confirm that it has been compressed.
Note:
- If clustering was specified using the attribute clustering DDL:
  - Sorting occurs during the direct load.
  - Automatic Storage Compression preserves the sort order.
- If clustering was specified using the DBMS_AUTO_CLUSTERING PL/SQL package:
  - Sorting does not occur during the direct load.
  - Automatic Storage Compression performs a sort during data movement.
See Also:
- DML and Hybrid Columnar Compression are explained in Oracle Database Concepts.
- The ENABLE_AUTO_OPTIMIZE procedure is documented in Oracle Database PL/SQL Packages and Types Reference.
5.6.1 Example: Using Automatic Storage Compression
- Enable Automatic Storage Compression by executing DBMS_ILM_ADMIN.ENABLE_AUTO_OPTIMIZE at the PDB level:

    EXEC DBMS_ILM_ADMIN.ENABLE_AUTO_OPTIMIZE;
- Create a table without HCC compression. This example uses the table MYTAB, which was created with no compression.
- Verify that the table is not compressed:

    SELECT UNIQUE DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'MYTAB', <ROWID>)
    FROM SCOTT.MYTAB;

    DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','MYTAB',<ROWID>)
    ---------------------------------------------------------------
    1

  Note: DBMS_COMPRESSION.GET_COMPRESSION_TYPE returns constants that identify the compression type. The value "1" indicates that the table is not compressed. See DBMS_COMPRESSION Constants in Oracle Database PL/SQL Packages and Types Reference for more information.
- Set AUTO_OPTIMIZE_INACTIVITY_THRESHOLD. This threshold specifies how long to wait with no DML activity before automatic compression starts. The wait time is set by DBMS_ILM_ADMIN.CUSTOMIZE_ILM. The default is 1440 minutes (one day). You can change the value:

    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.AUTO_OPTIMIZE_INACTIVITY_THRESHOLD, <NEW_VALUE_IN_MINUTES>);

- After the defined wait time, Automatic Storage Compression starts as a background AutoTask, and rows are moved and compressed incrementally. Automatic Storage Compression can be paused and restarted. For example, to pause it so that you can determine the size of the table before compression, use DBMS_ILM_ADMIN.PAUSE_AUTO_OPTIMIZE:

    EXEC DBMS_ILM_ADMIN.PAUSE_AUTO_OPTIMIZE;
- Run ALTER TABLE to add HCC compression, then load data using a direct-path load. Here we add HCC QUERY LOW compression to the table and use INSERT /*+ APPEND */ to perform the direct-path load. For this example, we determine the uncompressed size of the table while auto compression is paused:

    SELECT BYTES/1024/1024 MB
    FROM DBA_SEGMENTS
    WHERE OWNER = 'SCOTT' AND SEGMENT_NAME = 'MYTAB';

       MB
    -----
    5.625

  The query returns the uncompressed table size.
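The ALTER TABLE and direct-path load described in this step might look like the following sketch; the staging table name is an assumption introduced for illustration:

```sql
-- Illustrative only: set HCC QUERY LOW compression on the table,
-- then perform a direct-path load with the APPEND hint.
ALTER TABLE SCOTT.MYTAB COMPRESS FOR QUERY LOW;

INSERT /*+ APPEND */ INTO SCOTT.MYTAB
  SELECT * FROM SCOTT.MYTAB_STAGE;  -- MYTAB_STAGE is hypothetical
COMMIT;
```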
- Monitor the incremental progress of the automatic compression by checking the "Auto compression data moved" system statistic, which increases over time as data is moved and compressed. Because we previously paused compression, the value of "Auto compression data moved" is 0; auto compression has not started:

    SELECT NAME, VALUE FROM V$SYSSTAT
    WHERE NAME LIKE 'Auto compression data%';

    NAME                                          VALUE
    ------------------------------------------- --------
    Auto compression data movement success            0
    Auto compression data movement failure            0
    Auto compression data moved                       0

  Note that V$SYSSTAT shows the sum of values across all tables that are using auto compression. If you are compressing more than one table, then the value of "Auto compression data moved" includes the data moved by all of those tables. Also, due to rounding up, the value may not exactly match the actual size of the uncompressed data over time.
- Resume automatic compression so that it begins when AUTO_OPTIMIZE_INACTIVITY_THRESHOLD is met:

    EXEC DBMS_ILM_ADMIN.RESUME_AUTO_OPTIMIZE;
- Monitor the compression operation. As it continues, the value of "Auto compression data moved" increases:

    SELECT NAME, VALUE FROM V$SYSSTAT
    WHERE NAME LIKE 'Auto compression data%';

    NAME                                          VALUE
    ------------------------------------------- --------
    Auto compression data movement success            1
    Auto compression data movement failure            0
    Auto compression data moved                       6

  The value of "Auto compression data moved" indicates that approximately 6 MB of uncompressed data was moved for compression. Note that when auto compression started, the uncompressed size of the data was 5.625 MB.

  As this example demonstrates, the segment size before compression and the amount of data moved during compression may not match exactly. V$SYSSTAT shows the sum of values across all tables using automatic compression. As noted, if you are loading more than one table, then "Auto compression data moved" includes the data for those tables as well.