10 Setting up Oracle Exadata Storage Snapshots


10.1 Overview of Exadata Snapshots

Traditionally, to clone databases in a production system, you would create test master and snapshot databases on a non-Exadata system (Figure 10-1). In some cases, these databases are full copies that consume as much storage as their source (Figure 10-2).

Figure 10-1 Traditional Clone to a Non-Exadata System


Figure 10-2 Database Clone That Is a Full Copy of Its Source


If the clones are full copies of the production database, they are expensive in terms of both the storage consumed and the time it takes to create them. Imagine creating ten clones of a multi-terabyte database and it is easy to see why this approach does not scale.

Another drawback to this approach is that Oracle Exadata Storage Server Software features such as Smart Scan, Smart Logging, and Smart Flash are not available on non-Exadata systems.

To solve these problems, you can use Exadata snapshots. Exadata snapshots are ideal for creating space-efficient read-only or read-write clones of an Oracle database for development, testing, or other non-production purposes. Because of the disk space and time savings, they are especially useful when multiple clones are required. The following image depicts the space required for an Exadata snapshot.

Note:

Exadata snapshots should be used only for development and testing purposes. They should not be used in production environments.

Figure 10-3 An Exadata Snapshot


An Exadata snapshot is based on a test master, which is a full clone of the source database. The test master is the only full copy of the source database. From a single test master you can create multiple Exadata snapshots with minimal additional storage and minimal effort. Each Exadata snapshot uses a small fraction of the disk space required for the test master and can be created or dropped in seconds. Each Exadata snapshot is a logical copy of the test master.

Before creating Exadata snapshots from the test master, you can modify the data in the test master, if required. For example, you can delete or mask sensitive data in the test master before making it available to non-privileged users.

Creating an Exadata snapshot from the test master is as simple as recording the parent file name in the child file header, an operation that completes in seconds and requires minimal disk space. Additional disk space is consumed only when the user of the snapshot begins to change data: new data is written to data blocks allocated to the snapshot on write, while all requests for data that has not changed are serviced by the data blocks of the test master.

Multiple users can create independent snapshots from the same test master. This enables multiple development and test environments to share space while maintaining independent databases for each user. The following image shows an Exadata environment with three Exadata snapshots that use the same test master.

Figure 10-4 An Exadata Environment with 3 Exadata Snapshots from the Same Test Master


Hierarchical and Read-Write Snapshots

Oracle Exadata release 12.2.1.1.0 introduces hierarchical and read-write snapshots. Hierarchical snapshots enable you to create snapshots from snapshots, which is useful if you are working on a snapshot and want to save a copy before making additional changes. In a hierarchy, you can create a snapshot at any level descended from the test master. Snapshots are writable: a snapshot points to its parent's blocks for unchanged data and allocates new blocks for any data that you change.

If you have created a snapshot from a snapshot and you then edit the parent snapshot, you must first delete all snapshots that depend on it.

10.1.1 Support of Exadata Features

In addition to space and time savings, Exadata snapshots provide cost-effective development, quality assurance, and test environments on Exadata. Exadata snapshots can be used by developers and testers who need to validate functionality and practice maintenance and operational steps in a fully functional Exadata environment, complete with features such as Exadata Smart Flash, Smart Scan, and Hybrid Columnar Compression.

10.1.2 Separate Test/Development and Production Environments

Oracle recommends that test and development environments be hosted on a separate physical Exadata rack from the rack hosting the production database (see Figure 10-5 and Figure 10-6). An Exadata system dedicated to development and test is ideal; test masters and their associated Exadata snapshots would be hosted on this system. Alternatively, it may be an Exadata system that hosts Data Guard standby databases for high availability, disaster recovery, or other purposes, as long as capacity allows. Test masters and their snapshots may reside in either physical or virtual machines on an Exadata system.

10.1.3 Types of Exadata Snapshots

You can create two types of Exadata snapshots, depending on the current setup of your environment:

  • You have a pluggable database (PDB) and want to create a test master from it.

    You can create Exadata snapshot PDBs from individual PDBs within a container database (CDB). You can clone individual PDBs to create the test master PDB, from which Exadata snapshot PDBs are generated.

    You can move Exadata snapshot PDBs from one CDB to another in the same Exadata cluster. You can also create an Exadata snapshot PDB in one container from a test master PDB residing in another container as long as both CDBs are in the same Exadata cluster.

    The test master database and its Exadata snapshots must be in the same ASM cluster environment.

    The following image shows a production CDB with three PDBs. Two of the PDBs (PDB1 and PDB2) have been cloned to create test master PDBs, which were then unplugged and plugged into another CDB on the test Exadata system. In this figure, six Exadata snapshot PDBs have been created from the test master PDBs.

    Figure 10-5 Exadata Snapshot PDBs

  • You have a container database (CDB) and want to create test masters from all its PDBs, or you have a simple non-container database and want to create a test master from it.

    An Exadata snapshot database can be either a non-container database (non-CDB) or a container database (CDB). Creating an Exadata snapshot database of a CDB enables access to all of the pluggable databases in that container.

    The next figure depicts a full clone of the production database. The clone, which is the test master database, is hosted on a separate test/development system, and it is associated with six Exadata snapshots.

    The test master database is created using either a Data Guard standby database (recommended if the test master will be refreshed on a regular basis) or RMAN.

    The test master database and its Exadata snapshots must be in the same ASM cluster environment.

    Figure 10-6 Exadata Snapshot Databases in a Non-Multitenant Environment


10.1.4 Hierarchical Snapshot Databases

Oracle Exadata release 12.2.1.1.0 introduces hierarchical snapshots. Hierarchical snapshots enable you to create snapshot databases from other snapshot databases, which is useful if you are working on a snapshot database and want to save a copy before making additional changes. You can create as many levels of snapshot databases as you want, but for performance reasons, you might want to limit the hierarchy to 10 levels.

Figure 10-7 Hierarchical Snapshot Databases


A snapshot database points to its parent’s blocks for data. If you make a change to a snapshot database, then the snapshot database allocates a new block for the changed data. For the unchanged data, it points to the parent’s blocks. A snapshot that is several levels descended from the original test master will retrieve its data by traversing up the tree starting with the snapshot database from which it was created.

Figure 10-8 Allocation of Blocks in Hierarchical Snapshot Databases


If you have created a snapshot database from another snapshot database and you make a change to the parent snapshot database, then you must drop all snapshot databases that depend on it. A parent snapshot becomes read-only when child snapshots are created from it; to write to the parent snapshot again, you must first drop all of its child snapshots.

10.1.5 Sparse Test Masters

With the introduction of hierarchical snapshots, you can also create sparse test masters.

With a sparse test master, one full copy of the data provides the parent file data for multiple levels of snapshots. When you are ready to create snapshots, you create an additional set of sparse files that receives updates from your production database through a replication technology such as Oracle Data Guard. When you need a new test master with current data, rather than cloning the full production database again as of the new point in time, you mark the previously created sparse files read-only, creating a sparse test master, and create a new set of sparse files to be kept current. You can then create additional snapshots on this sparse test master for use by application developers and testers. You can repeat this process up to 9 times in total, allowing one full copy of the production database to support snapshots from multiple points in time. Creating a new sparse test master is very quick (5 minutes or less), so you can easily keep current with production.

Figure 10-9 Example configuration with Sparse Test Masters


Sparse test masters are supported for both CDBs and non-CDBs.

10.2 Prerequisites for Exadata Snapshot Databases

Before creating Exadata snapshot databases, check that your environment meets the following requirements:

  • Storage servers must be Exadata X3 or later

  • Exadata Storage Server software 12.1.2.1.0 or later for Exadata Storage and Database Servers

    You cannot downgrade to an earlier version with sparse grid disks on a cell.

  • Oracle Grid Infrastructure 12.1.0.2.0 BP5 or later

    The ASM disk group that contains the sparse ASM grid disks must have both COMPATIBLE.RDBMS and COMPATIBLE.ASM set to 12.1.0.2 or later.

    The parent disk group can be 11.2 compatible.

  • Oracle Database version 12.1.0.2.0 BP5 or later

    The parent database and the snapshot database must be 12.1.0.2 compatible.

  • The data files for the snapshot database and the parent database must be on the same ASM cluster.

  • The db_block_size must be at least 4K and be a multiple of 4K.

  • If you are using hierarchical snapshot databases, sparse test master databases, the new “--sparse” option for the ASM cp command, or the new setsparseparent ASM command, then you need Oracle Database and Oracle Grid Infrastructure 12.2.0.1.0 and Oracle Exadata 12.2.1.1.0, or later.
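You can verify most of these prerequisites directly. The following checks are a minimal sketch, assuming a sparse disk group named SPARSE (substitute your own names); run the SQL in an ASM instance and in the database, and the CellCLI command on each storage cell:

SQL> SELECT name, compatibility, database_compatibility
     FROM v$asm_diskgroup WHERE name = 'SPARSE';

SQL> show parameter db_block_size

CellCLI> LIST CELL ATTRIBUTES name, releaseVersion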


10.3 Concepts

This section describes the objects that you need for creating and using Exadata snapshots.

10.3.1 Sparse Database and Sparse Files

A database consists of the following files:

  • control files

  • online redo logs

  • temp files

  • data files

In a sparse database, such as an Exadata snapshot, its data files are sparse files. A sparse file contains only changes made to blocks from the parent file (the parent file remains unchanged) and maintains a pointer to the parent file for access to unchanged data, as illustrated in Figure 10-3.

The Exadata snapshot has its own copy of the other database files (control files, online redo logs, and temp files). These other database files will not be sparse files.

10.3.2 Sparse Grid Disks

A sparse ASM disk group is composed of sparse grid disks. A sparse grid disk has a virtual size attribute as well as physical size.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB. The maximum allowed virtual size is 100 TB.

You create sparse grid disks before creating a sparse ASM disk group. If you are deploying a new system or re-deploying an existing system, set aside some space from each cell disk for use by the disk group just as you would for any other disk group.

If you want to use an existing system without reimaging, see My Oracle Support note 1467056.1 for instructions on resizing existing grid disks.
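
Before setting aside space, you can check how much unallocated space remains on each cell disk. A quick check from CellCLI (the freeSpace attribute shows space on the cell disk not yet allocated to grid disks):

CellCLI> LIST CELLDISK ATTRIBUTES name, freeSpace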

10.3.3 Sparse Disk Groups

Exadata snapshots utilize Oracle ASM sparse disk groups: sparse data files can be created only in Oracle ASM sparse disk groups. The following figure shows a sparse disk group containing three Exadata snapshots.

Figure 10-10 Sparse Disk Group Contains Exadata Snapshots


A sparse ASM disk group can store both sparse and non-sparse files. You can create full database objects in a sparse ASM disk group. For example, you can create your test master database in your sparse ASM disk group along with the Exadata snapshots. Note that there will be no space savings for the test master database because it will be a full copy of the source database.

A sparse disk group has the following attributes:

  • compatible.asm must be set to 12.1.0.2 or higher.

  • compatible.rdbms must be set to 12.1.0.2 or higher.

  • cell.sparse_dg must be set to "allsparse". This attribute identifies the disk group to Oracle ASM as being composed of sparse grid disks.

  • appliance.mode must be set to true.

  • A sparse disk group uses extents that are 16 times the allocation unit (AU) size. With a 4 MB AU, each extent is 64 MB.

  • A sparse disk group uses Virtually Allocated Metadata.

For example, the following SQL command creates a sparse disk group:

SQL> create diskgroup SPARSE
  normal redundancy 
  disk 'o/*/SPARSE_*'
  attribute 
    'compatible.asm'          = '12.1.0.2',
    'compatible.rdbms'        = '12.1.0.2',
    'cell.smart_scan_capable' = 'true',
    'cell.sparse_dg'          = 'allsparse',
    'au_size'                 = '4M';
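
You can then confirm the new disk group from an ASM instance, for example with the query below. Keep in mind that space figures reported by Oracle ASM for a sparse disk group are based on the virtual sizes presented by its sparse grid disks; physical utilization is monitored at the cell level (see "Monitoring Sparse Disk Group Activity").

SQL> SELECT name, type, total_mb, free_mb
     FROM v$asm_diskgroup WHERE name = 'SPARSE';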

10.4 Sparse Disk Sizing and Allocation Methodology

To create Exadata snapshots, you must have sparse grid disks created with an ASM disk group created based on those disks. It is not possible to directly modify an existing disk group and convert it into a sparse disk group; you must create grid disks fresh with the sparse attributes. If you want to use an existing disk group, then the disk group and associated grid disks must be dropped and re-created.

If you want to create a new sparse disk group and all of your space is currently allocated to existing disk groups, then you must resize one or more existing disk groups to free up space. Use the following steps to perform this process.

10.4.1 Sizing Steps

When sizing, remember that you can also put non-sparse databases into the sparse disk group. Non-sparse databases and files will use up the full physical space.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

The maximum virtual size that can be allocated per sparse grid disk is 100 TB. However, to avoid additional performance impact, Oracle does not recommend creating large virtual sizes unless truly required.

  1. Determine the amount of physical space for your snapshots. Use the formula found in "Calculating the Physical Size for Grid Disks".

    Note that a good starting point for the maximum physical space to allocate to sparse grid disks is 15% of your total space. This can be tuned up or down as required based on future utilization patterns and needs.

  2. Determine the amount of virtual space you will require for your snapshots. Use the formula found in "Calculating the Virtual Size for Grid Disks".

    Note that the virtual size can be modified online by just modifying the ASM disk group without making any changes to the underlying grid disks.

  3. Using the number of snapshots you plan on having at any one time and how much change you expect to make, use the process defined in "Determine the Amount of Available Space" to determine which existing disk group(s) you want to shrink to release space for the sparse disk group. Based on your current allocations, you may need to shrink multiple disk groups to get the space you require for the sparse disk group. Leave a minimum of 15% free space in the original disk groups to allow for rebalancing during the resize operation. (A query for checking current free space appears after this list.)

    If there is not enough free space available in your current disk groups to provide desired space for the sparse disk group:

    • Rethink your sparse usage in this environment and downsize accordingly

    • Relocate objects and databases from this environment to another with available free space

    • Add storage

  4. Once you have decided which disk group to resize, follow the steps in "Shrink the Oracle ASM Disks in the Donor Disk Group" and "Shrink the Grid Disks in the Donor Disk Group" to resize the disks.

    This process allows you to resize disks while they are online and requires only a single rebalance operation per disk group. Oracle ASM may run more than one rebalance operation concurrently, depending on the number of Oracle ASM instances available in the cluster.

  5. Create the sparse grid disk from the space just released using the commands outlined in "Creating Sparse Grid Disks".
  6. Create the sparse ASM disk group using the commands outlined in "Creating an ASM Disk Group for the Sparse Grid Disks".
  7. To monitor sparse grid disk activity see "Monitoring Sparse Disk Group Activity".
  8. If you find that you need to resize your sparse grid disks:
    • If more physical space is required, follow steps 1 through 4 in this section to free up space and then follow the steps in "Resizing the Physical Space". The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

    • If more virtual space is required, follow the steps in "Resizing the Virtual Space". The maximum virtual size that can be allocated per sparse grid disk is 100 TB.
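
For step 3, the following query, run from an ASM instance, is a simple way to see how much free space each existing disk group could donate (USABLE_FILE_MB accounts for redundancy):

SQL> SELECT name, total_mb, free_mb, usable_file_mb
     FROM v$asm_diskgroup;

Remember to leave at least 15% free space in any donor disk group to allow for rebalancing during the resize.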

10.5 Refresh Considerations, or Lifecycle of Exadata Snapshots

Refresh cycles can influence how you use and create Exadata snapshots. The test master must remain either in a READ ONLY state (if open) or closed while it has snapshots associated with it. If the test master needs to be refreshed, all snapshots that depend on it must be dropped and re-created.

If different groups of snapshot users have different refresh cycle requirements, you may need to maintain multiple test masters. The following figure shows three test masters, each with its own refresh schedule.

Figure 10-11 Three Developer Groups for PDB1, Each With a Different Refresh Cycle



10.6 Using an Oracle Data Guard Standby Database as the Test Master

In a situation where the test master is either a non-container database or a full CDB that needs to be refreshed regularly (that is, the test master is not a PDB), Oracle recommends creating the test master database as a Data Guard physical standby dedicated to this purpose.

There are multiple benefits when using this approach:

  • Easier refresh

    Oracle Data Guard is a proven solution for synchronizing multiple physical replicas of a production database used for disaster recovery, read-only offload from production, backup offload, and test. This same functionality can be used to maintain a copy of the production database dedicated to serving as a test master database that is more easily refreshed on a periodic basis. The benefit of using a Data Guard physical standby increases as the size of the test master database and the number of times it must be refreshed increases. These benefits far outweigh the small incremental effort required to create a Data Guard replica compared to simply cloning a database from an RMAN backup.

  • Minimal impact to primary

    During the time the Data Guard replica is being used as the test master database, Data Guard redo transport and apply are disabled. There is zero impact to the production database. When it is time to refresh, only the deltas generated since the Data Guard replica was converted to a test master database are taken from the production database and used to resynchronize the test master database.

    NOTE: Since transport and apply for this Data Guard replica will be stopped while it functions as a test master, it should not be used for disaster recovery or for any purpose other than as a test master. If you are already using Data Guard for high availability or disaster protection, Oracle recommends creating one or more additional Data Guard replicas dedicated to use as test masters for Exadata snapshot databases.

  • Easy to scrub prior to creating snapshot clones

    Data Guard makes it easy to modify the test master database before making it available to create Exadata snapshots. For example, a Data Guard replica can be opened read-write and data can be masked or scrubbed prior to creating Exadata snapshots. Later, when testing is complete, the test master database can be converted back into a Data Guard replica, discarding any modifications made to the original copy and refreshing it using only the deltas from the production database (a DGMGRL sketch follows this list). Note that after refreshing the Data Guard replica, you need to re-scrub the database before you can use it as a test master again.

    In contrast, if the test master was created from an RMAN backup and you mask or scrub the data, then each time you need to refresh the test master you must create another backup as the test master and scrub it again to make it current.
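
If the test master is maintained as a Data Guard snapshot standby under Data Guard broker management, the scrub-and-refresh cycle described above reduces to a pair of conversions. The following DGMGRL sketch assumes a broker-managed standby named TESTMASTER (your database name will differ):

DGMGRL> convert database TESTMASTER to snapshot standby;
(open read-write: mask or scrub data, then create Exadata snapshots and test)

DGMGRL> convert database TESTMASTER to physical standby;
(discards all changes made while open and resynchronizes from the primary)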

10.7 Managing Exadata Snapshots

To create and manage Exadata snapshots, perform the procedures described in the following sections.

10.7.1 Creating Sparse Grid Disks

When creating sparse grid disks, you need to specify the physical size and the virtual size.

10.7.1.1 Calculating the Physical Size for Grid Disks

You can use the following formula to get a rough estimate of the total physical space to set aside for a sparse ASM disk group:

Total physical space =
   (SUM(size of all test masters in the sparse ASM disk group) +
    SUM(approximate size of all updates to the snapshot databases))
   * ASM Redundancy

In the formula above, ASM redundancy takes into account ASM mirroring of extents. Exadata requires ASM redundancy set to either normal redundancy (double mirror the extents) or high redundancy (triple mirror the extents). If the sparse ASM disk group will use normal redundancy, expect to double the space used. If using high redundancy, expect to triple the space used.

For example, if you want 2 test masters in the sparse ASM disk group created with normal redundancy with a combined total space of 500 GB (250 GB each) and each test master will have 5 Exadata snapshots with the expectation that each snapshot will modify 20% of the blocks, then the total physical space that will be needed can be calculated as follows:

Space for 2 test masters:   2 * 250 GB =                                500 GB
Space for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB * 20% =                                                 500 GB
Subtotal                                                               1000 GB
Normal redundancy: 2 * 1000 GB =                                       2000 GB

Divide this value by the number of disks to determine the size parameter for each disk. ASM grid disks should be allocated on 16 MB boundaries. If the size parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.

Note that you should set aside additional space to use for multiple projects and through multiple iterations.

Also to accommodate any disk rebalancing operations, you should add a 15% space cushion on top of the space used for snapshots and test masters.
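
For example, to turn the 2000 GB total from the example above into a per-disk size parameter, assume (hypothetically) that the space is spread evenly across 36 grid disks, one per cell disk on three storage cells:

2000 GB = 2048000 MB
2048000 MB / 36 disks ≈ 56889 MB per disk
Rounded up to a 16 MB boundary: 56896 MB (3556 * 16 MB), roughly 56 GB per grid disk

This per-disk value corresponds to the size=56G setting used in the grid disk creation example later in this chapter.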

10.7.1.2 Calculating the Virtual Size for Grid Disks

You can use the following formula to get a rough estimate of the virtual size to assign for a sparse ASM disk group:

Virtual size required for sparse disks =
   (SUM(full virtual size of all Exadata snapshots) + Physical space allocated)
   * ASM Redundancy

To continue with the example from the previous section, you have 10 Exadata snapshots. If they were full copies of the test master, they would be 250 GB each.

The following shows the calculation for the total virtual space:

Full size for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB =                                                      2500 GB
Size of the 2 test masters: 2 * 250 GB =                                500 GB
Subtotal                                                               3000 GB
Normal redundancy: 2 * 3000 GB =                                       6000 GB

Divide this value by the number of disks to determine the virtualSize parameter for each disk. The virtual size for each grid disk should be allocated on 16 MB boundaries. If the virtualSize parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.

Note that you should set aside additional space to use for multiple projects and through multiple iterations.
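
Continuing the hypothetical 36-disk layout from the physical sizing example, the 6000 GB virtual requirement works out as follows:

6000 GB = 6144000 MB
6144000 MB / 36 disks ≈ 170667 MB per disk
Rounded up to a 16 MB boundary: 170672 MB (10667 * 16 MB), roughly 167 GB of virtual size per grid disk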

10.7.1.3 Creating a Sparse Grid Disk

To create the sparse ASM grid disk, log in to each cell (or use dcli) in CellCLI and run a command similar to the following, changing size values where appropriate:

CellCLI> create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G

This creates a grid disk with a physical size of 56 GB that is presented to Oracle ASM as a 560 GB grid disk. The size parameter should match the actual physical size of the ASM grid disk, while the virtualsize parameter should be at least the physical size of the ASM grid disk.

Attributes for the ASM grid disk created above would look like the following from the "LIST GRIDDISK DETAIL" command:

CellCLI> LIST GRIDDISK DETAIL
size:        56G
sparse:      TRUE
virtualSize: 560G

size displays the actual physical size of the grid disk.

sparse has a value of TRUE.

virtualSize displays the virtual size of the grid disk.
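
Because the same grid disks must be created on every cell, it is common to run the command on all cells at once with dcli rather than logging in to each cell. The following is a sketch that assumes a cell_group file listing all storage cells and a user (celladmin here) that is permitted to run CellCLI:

$ dcli -g cell_group -l celladmin "cellcli -e create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G"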

10.7.2 Creating an ASM Disk Group for the Sparse Grid Disks

After you have created the sparse grid disks, you create an ASM disk group to enable those disks to be accessible to a database on the cluster. To create a disk group, log in to an ASM instance using SQL*Plus as sysasm and run a command similar to the following:

SQL> create diskgroup SPARSE high redundancy disk 'o/*/SPARSE_*' attribute
'compatible.asm'='12.1.0.2', 
'compatible.rdbms'='12.1.0.2', 
'au_size'='4M', 
'cell.smart_scan_capable'='true', 
'cell.sparse_dg'='allsparse', 
'appliance.mode' = 'TRUE';

compatible.asm must be set to 12.1.0.2 or higher.

compatible.rdbms must be set to 12.1.0.2 or higher.

cell.sparse_dg must be set to "allsparse". This identifies the disk group to ASM as being made up of sparse grid disks.

appliance.mode must be set to true.
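
After creating the disk group, you can confirm these attribute settings from the ASM instance, for example:

SQL> SELECT a.name, a.value
     FROM v$asm_attribute a, v$asm_diskgroup dg
     WHERE dg.name = 'SPARSE'
     AND a.group_number = dg.group_number
     AND a.name IN ('compatible.asm', 'compatible.rdbms',
                    'cell.sparse_dg', 'appliance.mode');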


10.7.3 Setting Up the Test Master

You can create the test master using either of the methods described in the following sections.

Then perform the tasks described in Setting the Ownership of the Test Master Data Files.

10.7.3.1 Create a New Test Master - Full Clone on a Disk Group with ASM ACL Enabled

You can create a full clone of your database using RMAN backup/restore, data pump, or any of the other methods typically used to create a full clone of a database.
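
For example, one common approach is RMAN active database duplication. The following is a minimal sketch, not the only supported method; it assumes Oracle Net aliases SOURCEDB (the source database) and TESTMASTER (the clone instance, already started in NOMOUNT with an appropriate init.ora) are in place:

$ rman target sys@SOURCEDB auxiliary sys@TESTMASTER

RMAN> DUPLICATE TARGET DATABASE TO TESTMASTER FROM ACTIVE DATABASE;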

After creating the full clone, remove write permissions on all the data files to help prevent accidental overwrite.

SQL commands in the ASM instance only allow you to set file permissions to read-only; there is no separate command to remove write permissions.

SQL> ALTER DISKGROUP DATA set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';

10.7.3.2 Converting an Existing Full Clone or Standby Database to a Test Master

If you already have a full clone or a standby database that you want to repurpose as a test master, then you can convert that database to a test master.

Standby databases cannot be running redo apply while serving as a test master.
  1. If you are using an Oracle Data Guard standby database, perform the following steps:
    1. For initial creation of the Oracle Data Guard replica, use the steps outlined in My Oracle Support note 1617946.1.

      The Data Guard copy must have enough redo applied that it can be opened in a READ ONLY state.

      Note:

      An Oracle Data Guard snapshot standby is different from an Oracle Exadata snapshot. An Oracle Data Guard snapshot standby is a complete copy of the source database that is open read-write. Conversion to an Oracle Data Guard snapshot standby is a simple operation using a single command. Oracle Data Guard snapshot standby facilitates making modifications to the test master and refreshing it for subsequent rounds of testing. See "Managing a Snapshot Standby Database" in Oracle Data Guard Concepts and Administration for more information on Oracle Data Guard snapshot standby databases.

    2. When the standby database is at a consistent state and can be opened in READ ONLY mode, stop log transport to the standby and disable redo apply on the standby.
      DGMGRL> edit database TESTMASTER set property logshipping=OFF;
      Property "logshipping" updated
      DGMGRL> edit database TESTMASTER set state=APPLY-OFF;
      Succeeded
      
    3. Make any modifications to the test master, for example, deleting sensitive data, if necessary.
  2. Check that both compatible.ASM and compatible.RDBMS are 11.2.0.0 or newer.
  3. If access control is not already enabled on the disk group that contains the test master's data files, then enable access control on the disk group.

    The disk group must be on Oracle Exadata.

    SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';
    
  4. Grant ownership to all data files.
  5. Remove write permissions on all the data files to help prevent accidental overwrite.

    SQL commands in the ASM instance only allow you to set file permissions to read-only; there is no separate command to remove write permissions.

    SQL> ALTER DISKGROUP DATA set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';
    

    This allows snapshots to be created and owned by users other than the owner of the base files.

10.7.3.3 Setting the Ownership of the Test Master Data Files

After you have cloned the database to create a test master database, you add an operating system user as an owner of the disk group and make that operating system user the owner of the test master's data files.

You can do this by running SQL commands manually in SQL*Plus, or by running a script:

10.7.3.3.1 Running Commands Manually

You can use SQL*Plus to manually run the commands to set the ownership of the test master data files.

The following commands are run in SQL*Plus while connected to the Oracle ASM instance.
  1. If the operating system user you are granting access to is not added as a user on the disk group, then add the user.

    Note:

    When enabling access control, all software owners that are running databases must be added as a user to the disk group.

    For example, to add the user SCOTT as an owner of the DATA disk group, use the following command:

    SQL> ALTER DISKGROUP DATA ADD USER 'scott';
    
  2. Make the operating system user the owner of the test master's data files:
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
      '+DATA/TESTMASTER/DATAFILE/system.257.865863315';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
      '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
      '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
      '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315';
    
10.7.3.3.2 Running from a Script

The following procedure is equivalent to the commands above, but it queries v$datafile for the filenames:

  1. Add an operating system user as owner of the disk group.
    SQL> ALTER DISKGROUP DATA ADD USER 'scott';
    
  2. Generate a script called set_owner.sql to set the owner of the test master's data files.
    • If the test master is a full database, run the following in the test master database:

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' from v$datafile;
      exit
      
    • If the test master is a PDB, run the following in the CDB$ROOT of the test master PDB:

      In the select statement below, the example assumes the test master PDB has a con_id of 10.

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' from v$datafile where con_id=10;
      exit
      
  3. Remove extra lines in set_owner.sql.
    sed -i '/SQL/d' set_owner.sql
    
  4. Run the script in the ASM instance.
    SQL> @set_owner
    

10.7.4 Creating the Snapshot Databases from a (Read-Only) Test Master Database

You can create an Exadata snapshot database of a pluggable database (PDB) or of a full database. When you create the snapshot database from a test master database, the test master database becomes read-only.

The first example below addresses Exadata snapshot PDBs; the second addresses snapshot databases (non-CDB or CDB).

10.7.4.1 Creating a Snapshot of a Pluggable Database

Creating an Exadata snapshot of a pluggable database (PDB) is the simplest method for creating a snapshot because it requires no additional manual steps. Two clauses for the CREATE PLUGGABLE DATABASE statement identify the PDB as an Exadata snapshot. The snapshot creation process alters the permissions of the test master PDB's files, marking them READONLY to ensure that the test master is not modified.

Creating individual Exadata snapshot PDBs is best used when creating snapshots for a smaller number of PDBs within a given CDB. The following figure shows a high-level example of a typical lifecycle for a PDB with two PDB snapshots.

Figure 10-12 Lifecycle of Exadata Snapshots Using PDBs


One of the benefits of Oracle Multitenant and PDBs is the ability to easily clone an existing PDB to create a test master and move it from one CDB to another. Oracle recommends that you clone your source PDB to create the test master and then migrate it to your test environment where you can perform any data scrubbing that may be needed. Once complete, the test master PDB can be used to create any number of Exadata snapshot PDBs.

NOTE: An Exadata snapshot PDB is created as a PDB in the CDB. You are limited to a total of 252 PDBs in a single CDB. All PDBs in that CDB, whether they are test masters, snapshots or PDBs that are not being used for snapshots, are included in that limit. If you need to create more PDBs than can fit in a single CDB, you can create the Exadata snapshot PDB in another CDB on the same cluster.

After creating the test master PDB, perform the following steps to create an Exadata snapshot PDB:

  1. In SQL*Plus connect to the cdb$root.
  2. Close the test master PDB in all instances.
    SQL> alter pluggable database PDB1TM1 close instances=all;
    
  3. Open the test master PDB in the local instance in read only mode.
    SQL> alter pluggable database PDB1TM1 open read only;
    
  4. Create an Exadata snapshot PDB of the test master.
    SQL> create pluggable database PDB1S1 from PDB1TM1 tempfile reuse 
    create_file_dest='+SPARSE' snapshot copy;
    

    create_file_dest must specify the name of a sparse disk group; this ensures the files for the Exadata snapshot are created in the correct disk group. The "snapshot copy" clause creates the PDB as a snapshot rather than a full PDB clone.

    NOTE: When an Exadata snapshot PDB is created, the command changes file permissions on the test master PDB, marking the files as READONLY in ASM.
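
    After the statement completes, you can open the new snapshot PDB and confirm the state of both PDBs, for example:

    SQL> alter pluggable database PDB1S1 open instances=all;
    SQL> select name, open_mode from v$pdbs where name in ('PDB1TM1', 'PDB1S1');

    The test master PDB should report READ ONLY, and the snapshot PDB READ WRITE.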

10.7.4.2 Creating a Snapshot of a Full Database

You create an Exadata snapshot of a full database if your test master database is a non-CDB, or if it is a CDB and virtually all of its PDBs are to be used for testing.

Note that the test master database cannot be used as a failover target to provide high availability or disaster recovery (a Data Guard configuration may have multiple replicas that can each serve different purposes). Similar to test master PDBs, test master databases cannot be modified while Exadata snapshots exist against them.

The test master database cannot be a read-only physical standby database that is in recovery mode (for example, Active Data Guard in Real Time Apply).

The test master database and its Exadata snapshots must be in the same Oracle ASM cluster environment.

The following figure shows the lifecycle of an Exadata test master database and its snapshot databases.

Figure 10-13 Lifecycle of Test Master and Snapshot Databases

  1. In the test master database, create a sample control file script to use for your Exadata snapshot databases by backing up the existing control file to trace.

    Connect to the test master database via SQL*Plus as SYSDBA and do the following:

    1. Determine name and location of any trace file to be generated by your session:
      SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
      
      VALUE
      ---------------------------------------------------------------------------------
      /u01/app/oracle/diag/rdbms/TESTMASTER/TESTMASTER1/trace/TESTMASTER1_ora_26756.trc
      
    2. Run the BACKUP CONTROLFILE TO TRACE command to place the CREATE CONTROLFILE command into the trace file.
      SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
      
    3. Retrieve the file shown for the Default Trace File.
  2. In the test master database, determine the existing file names for the rename that will happen in step 10.

    Log into SQL*Plus as SYSDBA and run the following:

    SET newpage 0
    SET linesize 999
    SET pagesize 0
    SET feedback off
    SET heading off
    SET echo off
    SET space 0
    SET tab off
    SET trimspool on
    SPOOL rename_files.sql
    SELECT 'EXECUTE dbms_dnfs.clonedb_renamefile -
    ('||''''||name||''''||','||''''||REPLACE(REPLACE(REPLACE(name,'.','_'),-
    'TESTMASTER','JOHNTEST'),'+DATA','+SPARSE')||''''||');' FROM v$datafile;
    EXIT 
    

    The previous query builds a file called rename_files.sql that contains statements for each data file similar to the following:

    EXECUTE dbms_dnfs.clonedb_renamefile (
    '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
    '+SPARSE/JOHNTEST/DATAFILE/system_257_865863315');
    

    The REPLACE function in the query:

    • Replaces periods in the original filename with underscores

    • Replaces the original database name of TESTMASTER with JOHNTEST

    • Replaces the original disk group name of +DATA with +SPARSE

  3. Shut down the test master.
    SQL> shutdown;
    
  4. Create an init.ora file for the Exadata snapshot database.

    You can use the init.ora file of the test master as a template, but make sure to change the db_name and control_files entries. For this procedure, the init.ora file for the Exadata snapshot database is referenced as snap_init.ora in commands and examples.

    $ cp init_TestMaster.ora snap_init.ora
    

    Modify snap_init.ora with the new database name, new control file name, and audit file destination, for example:

    db_name = JohnTest
    control_files = '+DATA/JOHNTEST/control1.f'
    audit_file_dest=/u01/app/oracle/admin/johntest/adump
    
  5. Edit the trace file generated in step 1.

    Modify the trace file to create a control file for the Exadata snapshot database and create a SQL file called crt_ctlfile.sql. You will use this script later in step 9.

    The control file should be created with the Exadata snapshot database name, new log file names, and the data file names of the test master.

    The example below shows a control file creation script. JohnTest is the Exadata snapshot database. The LOGFILE lines specify the new log file locations, and the DATAFILE lines specify the data file locations for the test master database. The new log files can be in any disk group that has enough space, but they should not be created in the sparse Oracle ASM disk group.

    SQL> CREATE CONTROLFILE REUSE SET DATABASE JohnTest RESETLOGS ARCHIVELOG
          MAXLOGFILES 32 
          MAXLOGMEMBERS 2 
          MAXINSTANCES 1 
          MAXLOGHISTORY 908 
      LOGFILE 
          GROUP 1 '+DATA/JOHNTEST/t_log1.f' SIZE 100M BLOCKSIZE 512,
          GROUP 2 '+DATA/JOHNTEST/t_log2.f' SIZE 100M BLOCKSIZE 512
      DATAFILE
          '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
          '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317',
          '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317',
          '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315'
      CHARACTER SET WE8DEC; 
    
  6. Create the audit_file_dest directory on all nodes on which the snapshot will be running.
    $ mkdir -p /u01/app/oracle/admin/johntest/adump
    
  7. Create the directories in Oracle ASM for the snapshot data files.
    Use ASMCMD to execute the following commands:
    $ asmcmd -p 
    ASMCMD > cd +SPARSE 
    ASMCMD [+sparse] > mkdir JOHNTEST 
    ASMCMD [+sparse] > cd JOHNTEST 
    ASMCMD [+sparse/johntest] > mkdir DATAFILE
    
  8. Start a database instance pointing to the Exadata snapshot database init.ora file using the following commands:
    $ sqlplus / as sysdba
    SQL> startup nomount pfile=snap_init.ora
    
  9. Create the Exadata snapshot control file using the script created in step 5.

    In the following example the script is named crt_ctlfile.sql.

    SQL> @crt_ctlfile
    
  10. Run the script you modified in step 2.

    All the files must be renamed prior to opening the Exadata snapshot database.

    Connect using SQL*Plus as SYSDBA to the Exadata snapshot database and run the following command:

    SQL> @rename_files
    

    This script modifies the permissions of the test master database files stored in Oracle ASM, marking them as READONLY.

    The dbms_dnfs.clonedb_renamefile procedure, which is called by rename_files.sql, sets up the parent-child relationship between the test master database and the snapshot database, and renames the filenames in the snapshot database's control file.

  11. Open the Exadata snapshot database with the RESETLOGS option:
    SQL> ALTER DATABASE OPEN RESETLOGS;
    
  12. Confirm that the Exadata snapshot files are child files of the test master database. Connect using SQL*Plus as SYSASM to the Exadata snapshot, and run the following command:
    SQL> SELECT filenumber num, clonefilename child, snapshotfilename parent
    FROM x$ksfdsscloneinfo;
    

    The following is an example of the output from the query:

    NUM  CHILD                                           PARENT
    ---- ----------------------------------------------- ----------------------------------------------
    1    +SPARSE/JOHNTEST/DATAFILE/system_257_865863315  +DATA/TESTMASTER/DATAFILE/system.257.865863315
    2    +SPARSE/JOHNTEST/DATAFILE/sysaux_258_865863317  +DATA/TESTMASTER/DATAFILE/sysaux.258.865863317
    3    +SPARSE/JOHNTEST/DATAFILE/sysext_259_865863317  +DATA/TESTMASTER/DATAFILE/sysext.259.865863317
    4    +SPARSE/JOHNTEST/DATAFILE/tbs_1_256_865863315   +DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315
    
  13. Log in using SQL*Plus to the Exadata snapshot database, and add temp files to the TEMP tablespace. This is a full size temp file, not a sparse temp file.
    SQL> ALTER TABLESPACE temp ADD TEMPFILE '+DATA' SIZE 10G;
    

10.7.5 Refreshing the (Read-Only) Test Master Database

To refresh the (read-only) test master database, follow the main steps described in the following sections.

10.7.5.1 Step 1: Drop the Snapshot Databases

Delete the Exadata snapshot databases that are children of the test master database you want to refresh.

You can delete them using RMAN. For example, connect to each Exadata snapshot database using RMAN with the Exadata snapshot as the target and run the following commands:

RMAN> startup mount force;
RMAN> delete database;

NOTE: Failing to drop an Exadata snapshot database has no impact on the state of its test master database. However, an Exadata snapshot can behave unpredictably if its test master database is dropped or refreshed while the snapshot still exists.

10.7.5.2 Step 2: Change the Permissions on the Test Master to Read-Write

  1. When all Exadata snapshot databases have been deleted, start up the test master database in mount mode.

    Connect to the test master using SQL*Plus and run the following script to create a new SQL script containing the commands to reset permissions on the data files belonging to the test master database.

    set newpage 0 
    set linesize 999 
    set pagesize 0 
    set feedback off 
    set heading off 
    set echo off 
    set space 0 
    set tab off 
    set trimspool on 
    spool change_perm.sql 
    select 'ALTER DISKGROUP DATA set permission owner=read write, 
    group=read write, other=none for file '||''''||name||''''||';' from v$datafile; 
    exit 
    
  2. Run the following sed command to remove extra lines from the change_perm.sql script created in the previous step. You run this command from a shell, in the directory that contains the change_perm.sql script.
    $ sed -i '/SQL/d' change_perm.sql
    
  3. Connect via SQL*Plus to an ASM instance as sysasm and run the change_perm.sql script. This script changes the permissions of the test master's data files to make them writeable.
    SQL> @change_perm
    

10.7.5.3 Step 3: Convert the Test Master Database Back to a Data Guard Replica

If you had originally prepared the test master database using Data Guard snapshot standby, convert it back to its original state as a Data Guard replica using the Data Guard convert command. This discards any changes previously made to the replica to prepare it to be the test master. It also makes it possible to refresh the test master using just incremental changes from the source database instead of a complete restore from a current backup.
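
For example, if the test master is a broker-managed snapshot standby named TESTMASTER (a sketch; your database name will differ), the conversion is a single DGMGRL command:

DGMGRL> convert database TESTMASTER to physical standby;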

10.7.5.4 Step 4: Update the Test Master Database

You have two options for refreshing the test master database:

  • Allow Data Guard to refresh the test master database

    If the Data Guard replica has been used as a test master database for only a short period of time and you have all the redo generated during this time in archive logs on disk at the source database, you can enable redo shipping and start redo apply. The test master database will use regular Data Guard protocols to retrieve archive logs and apply the logs until it is caught up with the primary database. Once the Data Guard replica is as current as you need it to be, disable redo shipping, stop redo apply and repeat the test master/snapshot creation cycle described earlier.

    This option has the benefit of being able to stop redo apply at some intermediate point rather than bringing the test master database totally current.

    To let Data Guard refresh the standby, enable log shipping to the standby and redo apply on the standby:

    DGMGRL> edit database TESTMASTER set property logshipping=ON; 
    Property "logshipping" updated 
    DGMGRL> edit database TESTMASTER set state=apply-on; 
    Succeeded
    
  • Use RMAN RECOVER...FROM SERVICE to roll forward the test master database

    If the Data Guard replica has been used as a test master database for a long period of time or if you no longer have the redo available on disk to enable Data Guard to automatically refresh the test master database, use RMAN to perform live incremental apply over the network.

    A major advantage to using this method is that no additional disk space is required. RMAN will bring changed blocks to the standby from the primary over the network and apply them directly. Also RMAN greatly simplifies the process by determining which blocks need to be retrieved based on the SCN of the datafiles on the test master. With this method you cannot recover to an intermediate point in time; the refresh will bring the test master database current with the primary. For more information on this method see the Oracle Database Backup and Recovery User's Guide section on Performing RMAN Recovery: Advanced Scenarios.

    To refresh the test master database using RMAN Network Incrementals:

    1. Prepare SQL*Net for the RMAN connections. These steps only need to be performed the first time through the process.

      1. Create a listener.ora entry for the test master database (the Data Guard replica) to allow RMAN to connect to the SID since the service will not be started when the database is only opened in NOMOUNT mode. The following is an example of the entry:

        SID_LIST_LISTENER = 
          (SID_LIST =   
            (SID_DESC = 
              (SID_NAME = TESTMASTER1)
              (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
            )
          )
        
      2. Reload the listener to pick up the changes to the listener.ora.

         $ lsnrctl reload listener 
        
      3. Create a TNS entry on the test master environment pointing to the SID of the local test master instance. The entry should use the local host name rather than the SCAN name to ensure the connection request goes to the correct host.

        TESTMASTER1 = 
          (DESCRIPTION = 
            (ADDRESS = (PROTOCOL = TCP)(HOST = standbydb01.us.oracle.com)(PORT = 1521))
            (CONNECT_DATA = 
              (SERVER = DEDICATED) 
              (SID = TESTMASTER1) 
              (UR=A) 
            ) 
          ) 
        
    2. Connect via RMAN to the test master database and save the CURRENT_SCN for later. This will be used to determine if newly created files since the last refresh need to be restored from the source database.

      RMAN> select current_scn from v$database;
      CURRENT_SCN
      ------------------ 
                17081990 
      
    3. Note the names of the online redo log files and standby redo log files of the Data Guard replica. You may require these names in a later step.

      The following command lists the names and group identifiers of the redo log files:

      RMAN> SELECT type, group#, member FROM v$logfile;
      
    4. Refresh the standby control file of the Data Guard replica from the source database to make the control file current.

      1. Reconnect to the Data Guard replica as the RMAN target.

      2. Restart it in NOMOUNT mode.

        RMAN> startup nomount force;
        
      3. Restore the standby control file by using the control file on the source database.

        The following command example restores the control file on the Data Guard replica by using the database control file from SOURCEMASTER, the source database.

        RMAN> RESTORE STANDBY CONTROLFILE FROM SERVICE SOURCEMASTER;
        
      4. Mount the Data Guard replica using the following command:

        RMAN> ALTER DATABASE MOUNT;
        
    5. If you are not using an RMAN catalog, the names of files in the standby control file are the names that were used in the source database. You must update the names of the data files and the temp files in the standby control file.

      Use the CATALOG command and the SWITCH command to update all the data file names. The SWITCH command will be used after restoring any newly created files from the source database in step 7.

      RMAN> CATALOG START WITH '+DATA/TESTMASTER/DATAFILE/';
      

      In the example above, "+DATA/TESTMASTER/DATAFILE/" is the location of the data files on the Data Guard replica. All data files must be stored in this location.

    6. Use the CURRENT_SCN from step 2 to determine if new files were added that need to be restored from the source database.

      RMAN> select file# from v$datafile where creation_change# >= 17081990;
       FILE#
      ---------- 
               9 
              10 
      
    7. If there are files returned by the previous query, restore those datafiles from the source database. Execute an RMAN RUN block similar to the following using the list of FILE#s returned by the previous step. If no FILE#s were returned, skip this step.

      RMAN> run{ 
      2> set newname for database to '+DATA';
      3> restore datafile 9,10 from service SOURCEMASTER;
      4> } 
      
    8. If not using an RMAN catalog, rename the datafiles in the standby control file by switching to the copies cataloged in step 5.

      RMAN> SWITCH DATABASE TO COPY;
      
    9. Update the names of the online redo logs and standby redo logs in the standby control file using one of the following methods:

      • Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the Data Guard replica. RMAN then recreates all the standby redo logs and the online redo log files.

        NOTE: Clearing log files is recommended only if the Data Guard replica does not have access to the online redo log files and standby redo log files of the source database. If the Data Guard replica has access to the redo log files of the source database and the redo log file names of the source database are OMF names, then the ALTER DATABASE command will delete log files on the source database. Also, clearing the log files creates new log files rather than reusing existing ones, because the control file is not aware of those existing files. To conserve space, delete the existing log files from ASM prior to running the ALTER DATABASE CLEAR commands.

        The GROUP# column of the V$LOGFILE view queried in step 3 provides the redo log group identifiers of the log groups that must be cleared. Use separate ALTER DATABASE CLEAR commands to clear each redo log group.

        The following command clears the redo log group with identifier 2.

        SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
        
      • Use the ALTER DATABASE RENAME FILE command to rename the redo log files. Use a separate command to rename each log file listed in step 3.

        To rename log files, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL. Renaming log files is recommended when the number of online redo logs files and standby redo log files is the same in the source database and the Data Guard replica.

    10. Use RMAN RECOVER...FROM SERVICE to roll forward the data files to the current state. No additional space is required for this operation. Note that this process can only bring the files fully current; it cannot bring the files to a previous point in time. Connect via RMAN to the Data Guard replica as target using the TNS entry created in step 1 and issue the following command. The service specified should point to the primary.

      RMAN> recover database noredo from service SOURCEMASTER;
      
    11. Enable redo shipping to the Data Guard replica and start redo apply. This is necessary to update the control file with the blocks applied as part of step 10.

      DGMGRL> edit database TESTMASTER set property logshipping=ON;
      Property "logshipping" updated
      DGMGRL> edit database TESTMASTER set state=apply-on;
      Succeeded.
      
    12. After redo has been applied, repeat the process you used to convert the Data Guard replica into a test master database and then create Exadata database snapshots. Remember to once again disable log shipping and redo apply at the standby.
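
      For example, a minimal sketch that mirrors the DGMGRL commands from step 11 with the opposite settings:

      DGMGRL> edit database TESTMASTER set property logshipping=OFF;
      DGMGRL> edit database TESTMASTER set state=apply-off;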

10.7.5.5 Step 5: Close the Test Master and Make All Test Master Data Files Read-Only

Complete one of the tasks described in "Setting the Ownership of the Test Master Data Files".

10.7.5.6 Step 6: Re-Create All Snapshots

You can create an Exadata snapshot database of a pluggable database (PDB) or of a full database as described in "Creating the Snapshot Databases from a (Read-Only) Test Master Database."

10.7.6 Creating a Snapshot Database from Another Snapshot Database

To create a snapshot from a snapshot:

  1. Create a first level snapshot. In the following example, the snapshot is called PDB1S1.

    create pluggable database PDB1S1
      from PDB1TM1
      create_file_dest='+SPARSE'
      snapshot copy;
    
  2. Open and close the PDB, so you can re-open it as read-only in the next step.

    alter pluggable database PDB1S1 open;
    alter pluggable database PDB1S1 close;
    
  3. Open the PDB in read-only mode so it can serve as a test master.

    alter pluggable database PDB1S1 open read only;
    
  4. Create a snapshot from the snapshot created in step 1. In the following example, the second level snapshot is called PDB1S1_A.

    create pluggable database PDB1S1_A
      from PDB1S1
      create_file_dest='+SPARSE'
      snapshot copy;
    
    alter pluggable database PDB1S1_A open;
    

10.7.7 Creating Sparse Test Masters from a Single Full Database Copy

You can create multiple sparse test masters from a single copy of a full database.

The source for the test master is a full copy of a Data Guard physical standby database. This standby database should not be used as a target for switchover or failover; it should only be used as the source for the test masters defined in this process.

This process takes advantage of the hierarchical snapshot functionality to allow redo to be shipped and applied, keeping the standby database current with production while also providing files to be used as the source for test masters used by Exadata storage snapshots. The physical standby database begins as a full copy of the primary database. When you are ready to create storage snapshots, sparse data files are created that point to the full database files, and the standby database instance applies redo shipped from the primary to these sparse files. You can also open the sparse data files in Active Data Guard mode to provide read-only access to current data.

When additional snapshots are required at different points in time, you repeat the process of creating new sparse files on top of the previously created sparse files to apply redo and keep the data current. This allows a single full copy of the data files to serve as multiple test masters from different points in time. Also, you can create a new test master in a matter of minutes, because you do not have to drop the existing snapshots.

Restrictions

This process can be used for either non-container databases or container databases; however, there are restrictions for pluggable databases (PDBs) when used with Oracle Data Guard:

  • New PDBs created from the seed at the primary database are automatically created at the standby database when the CREATE PLUGGABLE DATABASE statement redo is applied, regardless of whether Active Data Guard is enabled at the standby database.

  • For clones performed at the primary database where the source PDB is in the primary database, the standby database must be in Active Data Guard mode to automatically instantiate the cloned PDB at the standby database.

  • For clones performed at the primary database where the source PDB resides in another database, the PDB creation must use a method to defer recovery at this standby database. These cloned PDBs will not be included in the standby until the standby is fully refreshed.

  • For PDBs plugged into the primary database, the files must either be pre-copied to the standby database or the PDB plug-in must use a method to defer recovery at this standby database.

If redo apply fails at the standby due to PDB migration operations or you need to enable recovery of a PDB, then you should refresh the full standby database from the primary.
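
A full refresh can be performed over the network with RMAN. The following is a minimal sketch, assuming a hypothetical TNS alias, PRIMARY_DB, that points to the primary database; afterward, redo apply is resumed through the Data Guard broker:

RMAN> restore database from service PRIMARY_DB;

DGMGRL> edit database tm_standby set state='APPLY-ON';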

The following tasks assume that a physical standby database has already been created to be used as the source for the test masters.

Figure 10-14 Starting Configuration


Description of Figure 10-14 follows
Description of "Figure 10-14 Starting Configuration"

See Also:

"Creating a PDB in a Primary Database" in Oracle Data Guard Concepts and Administration

10.7.7.1 Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master

The existing files for the standby database are used to support snapshots. You create a series of sparse files pointing to the existing files of the standby, and redo received from the primary database is applied to these sparse files. These sparse files allow the standby database to be used as a sparse test master while also being kept current with the primary database.

  1. Stop redo apply at the standby.

    To ensure that the structure of the database is at a quiesced state for creating the supporting files to build snapshots, redo apply should be turned off at the standby database.

    DGMGRL> edit database tm_standby set state='APPLY-OFF';
    
  2. Prepare the current standby database data files to be used as a test master.
    Data files must meet the following prerequisites to be used as a test master:
    1. The disk group the files reside on must have the access_control.enabled attribute set to TRUE.

      This example sets the attribute on the +DATA disk group. This step only needs to be done one time for the disk group.

      As SYSASM, log into an Oracle ASM instance using SQL*Plus and issue the following command:

      SQL> alter diskgroup DATA set attribute 'ACCESS_CONTROL.ENABLED'='TRUE';
      
    2. The database owner user must be added as an explicit user of the disk group the files reside on.

      This example grants access to the +DATA disk group to the user SCOTT. This step only needs to be done one time per user per disk group.

      Using SQL*Plus while connected as the SYSASM user, issue the following command:

      SQL> alter diskgroup DATA add user 'scott';
      
    3. The files that are going to be used must have explicit permissions granted to the database owner user.

      You must perform this step for all users that will be creating snapshots using these files and for all files that will be referenced by the snapshot. The following script can be used to build SQL statements that set the ownership. Run the script while connected to the standby database using SQL*Plus. If the standby is a container database (CDB), you must be connected to the cdb$root container:

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' 
       for file '||''''||name||''''||';' from v$datafile;
      exit
      

      After building the statements, log in to an Oracle ASM instance using SQL*Plus as the SYSASM user, and execute the set_owner.sql script.

      SQL> @set_owner
      
  3. Create a backup of the controlfile.

    All snapshots are created using the current state of the standby database, so they need to know all of the files that make up the standby. Create a binary backup of the control file to allow future creation of the CREATE CONTROLFILE script required for additional snapshots.

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/home/oracle/snap_tm/control_monday_tm.ctl';
    
  4. Create the rename_files.sql script to create the sparse data files for the snapshot.

    This script builds a series of RENAME statements to create the sparse data files to be used by the snapshot to apply redo received from the primary. Use a SQL statement similar to the following. Note that this statement keeps the same directory structure as the original files, but the files are created in the SPARSE disk group, with new file names formed by replacing '.' (periods) with '_' (underscores).

    set newpage 0
    set linesize 999
    set pagesize 0
    set feedback off
    set heading off
    set echo off
    set space 0
    set tab off
    set trimspool on
    spool rename_files.sql
    select 'EXECUTE dbms_dnfs.clonedb_renamefile ('||''''||name||''''||
    ','||''''||replace(replace(name,'.','_'),'DATA/','SPARSE/')||''''||
    ');' from v$datafile;
    exit
    

    This script produces output similar to the following:

    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/system.515.9304
    75939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/system.567.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE053
    82C8E50A1154/DATAFILE/system_567_930475945');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/sysaux.571.9304
    75939','+SPARSE/TM_STANDBY/DATAFILE/sysaux_571_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/sysaux.516.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE053
    82C8E50A1154/DATAFILE/sysaux_516_930475945');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/undotbs1.497.93
    0475939','+SPARSE/TM_STANDBY/DATAFILE/undotbs1_497_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/undotbs1.564.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE0
    5382C8E50A1154/DATAFILE/undotbs1_564_930475945');
    
  5. Using ASMCMD, create directories for all of the directories identified in the script rename_files.sql.

    When the dbms_dnfs.clonedb_renamefile procedure is executed, it requires that all directory structures used for the files already exist in ASM. Use the output from the previous step to determine the required structures, and then create them as needed. You can use ASMCMD to create the directories, as in the following example:

    ASMCMD [+] > cd sparse
    ASMCMD [+sparse] > ls
    ASMCMD [+sparse] > mkdir tm_standby
    ASMCMD [+sparse] > cd tm_standby
    ASMCMD [+sparse/tm_standby] > mkdir datafile
    ASMCMD [+sparse/tm_standby] > mkdir 429DC0E1BCBD1B90E05382C8E50A8E80
    ASMCMD [+sparse/tm_standby] > mkdir 429CE0836E0166ACE05382C8E50A1154
    ASMCMD [+sparse/tm_standby] > cd 429DC0E1BCBD1B90E05382C8E50A8E80
    ASMCMD [+sparse/tm_standby/429DC0E1BCBD1B90E05382C8E50A8E80] > mkdir datafile
    ASMCMD [+sparse/tm_standby/429DC0E1BCBD1B90E05382C8E50A8E80] > cd ../429CE0836E0166ACE05382C8E50A1154
    ASMCMD [+sparse/tm_standby/429CE0836E0166ACE05382C8E50A1154] > mkdir datafile
    

10.7.7.2 Task 2: Configure the Sparse Test Master and Sparse Files on the Standby Site

In this task you convert the standby into a test master and create sparse files to receive and apply redo from the primary database.

Note:

During this process you will not be creating a full snapshot database. Instead, you will use portions of the existing standby database and add sparse data files to the standby; the standby database control file will be modified to use the sparse files that are added. Going forward, the same standby instance is used, but redo apply will use the sparse files to store changes, leaving the original standby data files to serve as sparse test master files for snapshots.

The existing data files are used to support full database snapshots with data as of the point in time the process was run.

  1. Shut down all instances of the TM_STANDBY database.
    $ srvctl stop db -d tm_standby -o abort
    
  2. Using SQL*Plus, start one of the TM_STANDBY instances in mount mode.
    SQL> startup mount
    
  3. Change the DB_CREATE_FILE_DEST setting in the standby instance to point to the SPARSE disk group.
    This ensures all new data files that are created will reside in the SPARSE disk group. To perform this step you must disable standby file management.
    SQL> alter system set standby_file_management='MANUAL';
    SQL> alter system set db_create_file_dest='+SPARSE';
    
  4. Run the rename_files.sql script created in Task 1 against the standby database.
    Running the script renames the data files in the TM_STANDBY controlfile and creates the sparse files for the snapshot.
    SQL> @rename_files
    
  5. Re-enable standby file management.
    Completing this step ensures that all new data files added to the primary will automatically be created by the standby when it receives the redo to create the data file.
    SQL> alter system set standby_file_management='AUTO';
    
  6. Enable redo apply on TM_STANDBY.
    Completing this step applies redo to the snapshot, keeping it current and preparing for the next round of snapshot creation.
    DGMGRL> edit database tm_standby set state='APPLY-ON';
    
  7. Restart the remaining instances in mount mode.
    $ srvctl start db -d tm_standby -o mount
    

    Note:

    If you plan to clone local PDBs at the primary database, then enable Active Data Guard mode at the standby. This requires an Active Data Guard license.

Figure 10-15 Configuration with Test Master Files and Sparse Files for Redo Apply

Description of Figure 10-15 follows
Description of "Figure 10-15 Configuration with Test Master Files and Sparse Files for Redo Apply"

10.7.7.3 Task 3: Create Full Database Snapshots Using the New Sparse Test Master

Create full snapshots using the new sparse test master.

At this time you can create full snapshots against the original files of the standby database as described in "Creating a Snapshot of a Full Database".

You must use the backup control file created in step 3 of "Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master" to build the CREATE CONTROLFILE statement. To use the file, you can create a temporary database instance to mount the control file and execute the BACKUP CONTROLFILE TO TRACE command.

  1. Create a small PFILE file for the instance to use.

    At a minimum, the PFILE file should contain the following parameters:

    control_files='/home/oracle/snap_tm/control_monday_tm.ctl'  # This should be the control file name created above
    db_name=primary             # This should be the db_name used in the Data Guard configuration
    db_unique_name=temp         # This should be a unique name for a database instance on this host
    sga_target=5g               # Provide enough memory to start the instance
    enable_pluggable_database=true   # This is required if the database is a container database. Do not specify for non-CDBs.
    
  2. Set your environment to point to a unique ORACLE_SID.
    $ export ORACLE_SID=temp
    
  3. Using SQL*Plus, start the instance in mount mode using the PFILE created in Step 1.
    SQL> startup mount pfile='/home/oracle/snap_tm/pfile.ora'
    
  4. Build the create controlfile statement and the rename files script.
    Use steps 1 and 2 in "Creating a Snapshot of a Full Database" to build the CREATE CONTROLFILE statement and the rename files script. The rename files script created in step 4 of "Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master" can be used, but you must modify the directory structure of the sparse files to be created.
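
    For example, after mounting the temporary instance, a statement like the following (the output path is hypothetical) writes a trace script from which the CREATE CONTROLFILE statement can be extracted:

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/home/oracle/snap_tm/cr_controlfile.sql';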

Figure 10-16 Configuration After Creating Snapshots

Description of Figure 10-16 follows
Description of "Figure 10-16 Configuration After Creating Snapshots"

10.7.7.4 Task 4: Create a New Sparse Test Master Using a Previously Created Sparse Test Master

Create a new set of snapshots to provide a new test master and new redo apply files.

It is possible that at periodic intervals you will want to use the standby database to create additional snapshots without having to build a complete copy of the test master. You can repeat the process performed in the previous three tasks to do just that, taking advantage of the hierarchical snapshot functionality. The new test master is built on top of the latest existing snapshot that is applying redo. This snapshot becomes read-only and a new snapshot is built to continue the redo apply processing.

Do the following to configure the standby for the new test master time line:

  1. Repeat the steps from "Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master " with the following changes:

    This process is the same as before, except now you are using files that are sparse files for the new snapshots being created.

    1. In Step 2, change all the commands that alter the +DATA disk group to instead alter the +SPARSE disk group.
      The commands for setting access control and adding the database owner user to the disk group only need to be performed one time.
    2. In Step 4, supply a different name for the snapshot.

      You are creating a new snapshot, so the files need unique names from what was previously used. As a suggestion, you can append an identifier to the end of the file name to help identify it with the snapshot to be built. For example, if this was the original command:

      EXECUTE dbms_dnfs.clonedb_renamefile ('+SPARSE/TM_STANDBY/DATAFILE/system_515
      _930475939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939');
      

      You can add an identifier to the end of the file name to create a unique file name, as shown here:

      EXECUTE dbms_dnfs.clonedb_renamefile ('+SPARSE/TM_STANDBY/DATAFILE/system_515
      _930475939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939_Dec_15_16');
      

      This step must be repeated for each statement in the rename_files.sql script.

  2. Repeat the steps from "Task 2: Configure the Sparse Test Master and Sparse Files on the Standby Site."

    There should be no changes required to the steps.

Figure 10-17 Configuration After Repeating the Process


Description of Figure 10-17 follows
Description of "Figure 10-17 Configuration After Repeating the Process"

The process can be repeated up to 9 times, which would create an environment 10 levels deep (the original standby data files and 9 hierarchical snapshots). When repeating the process for the 9th time, do not create a new snapshot to receive the redo from the primary database.

Figure 10-18 Potential Future Configuration


Description of Figure 10-18 follows
Description of "Figure 10-18 Potential Future Configuration"

If you have reached the maximum 10 levels, you have multiple options:

  • If you have enough space to maintain multiple copies of the standby database and snapshots, refresh the standby database and create a new tree of hierarchical snapshots. The original complete files and snapshots can remain as long as required.

  • If you do not have enough space to maintain multiple copies of the standby database and snapshots, then delete all data files and snapshots, refresh the standby and create a new tree of hierarchical snapshots.

  • Create a new standby database on a different environment and create a new tree of hierarchical snapshots.

10.7.8 Creating Sparse Test Masters for PDBs

This procedure manually creates a hierarchical snapshot tree of sparse test masters for a pluggable database (PDB) in an Oracle Multitenant Database.

The test master must be closed while making the daily reference snapshot. The downtime is very short (less than 5 minutes). You can use a replication mechanism, such as Oracle GoldenGate, to keep the sparse test master current with the production PDB. For more information about configuring Oracle GoldenGate with PDBs, see "Configuring Oracle GoldenGate in a Multitenant Container Database" in Oracle GoldenGate Oracle Installation and Setup Guide. The following example assumes you are using Oracle GoldenGate.

STEP 1: Create First Test Master PDB From the PROD PDB

This is a traditional PDB clone operation to instantiate the test master PDB. Once the clone completes, you configure Oracle GoldenGate to extract changes from the PRODPDB1 PDB in production and replicate these changes to the test master TMPDB1 PDB.

  1. Run the following commands on the PROD container database (CDB) root:

    PRODCDB> alter pluggable database prodpdb1 close;
    
    PRODCDB> alter pluggable database prodpdb1 open read only;
    
  2. Run the following commands from the test master CDB root:

    TMCDB> create database link PROD_DBLINK
    connect to system identified by password using 'PROD_CDB';
    
    TMCDB> create pluggable database TMPDB1
    from PRODPDB1@PROD_DBLINK;
    
    TMCDB> alter pluggable database TMPDB1 open;
    
  3. Configure Oracle GoldenGate so that the changes made at the PRODPDB1 PDB will be extracted, replicated, and applied to the TMPDB1 PDB. After configuring the extract and replicat processes and starting the extract process, open the PRODPDB1 PDB in read/write mode. An illustrative GGSCI sketch follows these steps.

    Note:

    The PRODPDB1 PDB cannot be opened for changes until after Oracle GoldenGate has been configured and the extract process started.
    PRODCDB> alter pluggable database PRODPDB1 close;
    PRODCDB> alter pluggable database PRODPDB1 open;
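
    The GoldenGate configuration itself is beyond the scope of this chapter. The following GGSCI sketch shows one possible integrated extract and replicat setup; the credential alias (ggadmin), process names (EXT1, REP1), and trail path (./dirdat/et) are hypothetical:

    # On the source (production) Oracle GoldenGate installation:
    GGSCI> dblogin useridalias ggadmin
    GGSCI> register extract EXT1 database container (PRODPDB1)
    GGSCI> add extract EXT1, integrated tranlog, begin now
    GGSCI> add exttrail ./dirdat/et, extract EXT1

    # On the target (test master) Oracle GoldenGate installation:
    GGSCI> dblogin useridalias ggadmin
    GGSCI> add replicat REP1, integrated, exttrail ./dirdat/et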
    

At this point, you have a full copy of PRODPDB1 on test master as TMPDB1 receiving all data changes made at PRODPDB1.

Note:

Oracle GoldenGate does not replicate data dictionary changes such as CREATE TABLESPACE or ADD DATAFILE. Only schema changes are replicated from PRODPDB1 to TMPDB1.

Figure 10-19 TMPDB1 Created From the PRODPDB1 Pluggable Database

Description of Figure 10-19 follows
Description of "Figure 10-19 TMPDB1 Created From the PRODPDB1 Pluggable Database"

Although TMPDB1 can be opened in read/write mode, you should leave it in read-only mode because the only changes it should receive are from PRODPDB1 through Oracle GoldenGate.

To create a snapshot from TMPDB1, the test master PDB must be opened in read-only mode. To provide a test master PDB from which you can create snapshots, and a test master PDB that is kept current with its source, you need two PDBs. The next step shows how you can accomplish this.

STEP 2: Create Daily Read-Only Snapshot and Move TMPDB1 PDB to a New Sparse Test Master PDB

This step creates a read-only snapshot PDB that acts as a test master. You can then create read/write snapshot PDBs from this read-only snapshot PDB every day. The main steps are:

  • Create a (daily) read-only snapshot PDB that you can make available to private read/write clients.

  • Create a new sparse TMPDB1 PDB pointing back to the read-only daily snapshot PDB. The new TMPDB1 PDB also accepts and applies changes from PRODPDB1.

Connect to the TMPDB1 PDB, then run the following commands:

TMCDB> alter session set container = CDB$ROOT;

# Stop the Oracle GoldenGate replicat process at the Test Master database.  This allows 
# all changes made at PRODPDB1 to continue to be extracted and then applied to  
# TMPDB1 when the replicat process is restarted.

# Close the test master PDB.
TMCDB> alter pluggable database TMPDB1 close;

# Write the test master PDB metadata to an XML file.
TMCDB> alter pluggable database TMPDB1 unplug into 
       '/home/oracle/snapshot/TMPDB1_monday.XML';

# Drop the test master PDB, but keep the data files.
TMCDB> drop pluggable database TMPDB1 keep datafiles;

# Create a TMPDB1_MONDAY PDB using the XML file you just created.
# Use the NOCOPY clause to reuse the original data files.

TMCDB> create pluggable database TMPDB1_MONDAY using 
       '/home/oracle/snapshot/TMPDB1_monday.XML' nocopy;

# Open the new TMPDB1_MONDAY PDB. The PDB must be opened
# once in read/write mode to complete the creation process.

TMCDB> alter pluggable database TMPDB1_MONDAY open;
TMCDB> alter pluggable database TMPDB1_MONDAY close;
TMCDB> alter pluggable database TMPDB1_MONDAY open read only;

# Create the new TMPDB1 PDB to receive changes from PRODPDB1. This PDB
# must have the same name as the original test master PDB to ensure no
# changes are required to the Oracle GoldenGate configuration.

TMCDB> create pluggable database TMPDB1 from TMPDB1_MONDAY
        create_file_dest='+SPARSE'
        snapshot copy;

# Open the new TMPDB1 PDB. The PDB must be opened once in read/write 
# mode to complete the PDB creation process.

TMCDB> alter pluggable database TMPDB1 open;
TMCDB> alter pluggable database TMPDB1 close;
TMCDB> alter pluggable database TMPDB1 open read only;


# Restart the Oracle GoldenGate replicat process to the new TMPDB1
# PDB. The Oracle GoldenGate replicat process now applies changes from  
# PRODPDB1 to the TMPDB1 snapshot and all changes are written to 
# sparse files.

The following figure shows the TMPDB1 created from TMPDB1_MONDAY. The original TMPDB1 has been renamed to TMPDB1_MONDAY as part of the DROP PLUGGABLE DATABASE/CREATE PLUGGABLE DATABASE steps listed above. The new TMPDB1 is a sparse snapshot pluggable database that, until any changes are made to TMPDB1, looks exactly like TMPDB1_MONDAY. Oracle GoldenGate applies redo to the new TMPDB1 snapshot without requiring any changes to the replicat configuration.

Figure 10-20 TMPDB1 Created From TMPDB1_MONDAY

Description of Figure 10-20 follows
Description of "Figure 10-20 TMPDB1 Created From TMPDB1_MONDAY"

STEP 3: Create Read/Write Snapshot From TMPDB1_MONDAY

You create the snapshots from TMPDB1_MONDAY, not from TMPDB1. This allows TMPDB1 to continue receiving and applying changes from PRODPDB1.

Connect to the TMPDB1_MONDAY PDB, then run the following commands:

TMCDB> alter session set container = CDB$ROOT;

TMCDB> create pluggable database TEST_MONDAY_JIM from TMPDB1_MONDAY
   create_file_dest='+SPARSE'
   snapshot copy;

TMCDB> alter pluggable database TEST_MONDAY_JIM open;

The following figure shows the TEST_MONDAY_JIM snapshot PDB created from TMPDB1_MONDAY. TEST_MONDAY_JIM uses TMPDB1_MONDAY as its parent, so all data in TEST_MONDAY_JIM is the same as the data in TMPDB1_MONDAY until Jim begins making changes to his snapshot PDB. Oracle GoldenGate continues to receive and apply redo to TMPDB1.

Figure 10-21 TEST_MONDAY_JIM Created From TMPDB1_MONDAY

Description of Figure 10-21 follows
Description of "Figure 10-21 TEST_MONDAY_JIM Created From TMPDB1_MONDAY"

When you need to create another test master and snapshot, you just need to repeat Step 2. For example, to create a test master on Tuesday, you can do the following:

Start a SQL*Plus session for the TMPDB1 PDB.

TMCDB> alter session set container = CDB$ROOT;

# Stop the Oracle GoldenGate replicat process from applying changes to
# TMPDB1

# Close the test master PDB
TMCDB> alter pluggable database TMPDB1 close;

# Write the test master PDB metadata to an XML file
TMCDB> alter pluggable database TMPDB1 unplug into 
'/home/oracle/snapshot/TMPDB1_tuesday.XML';

# Drop the test master PDB, but keep the data files
TMCDB> drop pluggable database TMPDB1 keep datafiles;

# Create a TMPDB1_TUESDAY PDB from the XML file
TMCDB> create pluggable database TMPDB1_TUESDAY using 
'/home/oracle/snapshot/TMPDB1_tuesday.XML' nocopy;

# Open the new TMPDB1_TUESDAY PDB
TMCDB> alter pluggable database TMPDB1_TUESDAY open;
TMCDB> alter pluggable database TMPDB1_TUESDAY close;
TMCDB> alter pluggable database TMPDB1_TUESDAY open read only;

# Create the new TMPDB1 PDB as a snapshot PDB
TMCDB> create pluggable database TMPDB1 from TMPDB1_TUESDAY
   create_file_dest='+SPARSE'
   snapshot copy;

# Open the TMPDB1 PDB
TMCDB> alter pluggable database TMPDB1 open;
TMCDB> alter pluggable database TMPDB1 close;
TMCDB> alter pluggable database TMPDB1 open read only;

# Restart the Oracle GoldenGate replicat process to apply changes to
# the new TMPDB1

You can now create read/write snapshot PDBs from TMPDB1_TUESDAY, similar to Step 3 above. As with full database sparse test masters, you can repeat this process up to 9 times in total before needing to either create a new TMPDB1 test master or drop and re-create the original TMPDB1 to begin building a new hierarchical snapshot tree.

10.7.9 Doing a Sparse Copy

The ASM cp command copies a sparse file to a new destination. However, this operation copies the sparse file with all of its blocks instantiated from the parent. The sparse copy feature enables you to copy a sparse file as a sparse file, without instantiating the blocks it inherits from its parent.

You can have multiple ASM instances running at the same time. If an operation involves a source or destination on an ASM instance other than the one on which the operation is executing, that instance is treated as a remote ASM instance. You can do a sparse copy on a local ASM instance, or between a local and a remote ASM instance. However, sparse copy does not work between two remote ASM instances.

To do a sparse copy, you use the new --sparse option in the existing ASM cp command. The syntax looks like the following:

ASMCMD> cp --sparse <src_sparse_file> <tgt_file>

A new ASM command called setsparseparent enables you to set the parent of a sparse file. If you do a sparse copy of a file to a sparse destination on a local ASM instance, its parent is set as part of the sparse copy operation. However, if the destination is on a remote ASM instance, you have to set its parent explicitly using the setsparseparent command.

The setsparseparent command requires sparse child file and parent file as parameters. It sets the parent of the sparse child file to the new parent file. The syntax looks like the following:

ASMCMD> setsparseparent <sparse_file> <parent_file>

The cp ASM command performs the following validations before doing a sparse copy operation. The operation is allowed only if it satisfies the following criteria:

  • The source file must exist and must be a sparse file.

  • If you specify multiple source sparse files, all of them must be on the same ASM instance.

  • Copying multiple sparse files on a remote ASM instance to a destination on a local ASM instance and vice versa is allowed provided all source files are on the same ASM instance.

  • The destination file should be backed by a sparse disk group. However, it can be a non-sparse file if the event KFTST_KFPKG_CP_SPARSE is set. This event is required to validate the sparse copy operation by merging and copying the files to a non-sparse destination.

  • Both source and destination cannot be on a remote ASM instance. However, either source or destination can be on a remote ASM instance.

  • If the destination is on a remote ASM instance, its file type cannot be validated and you have to ensure that it is backed by a sparse disk group. You also have to set the parent explicitly using the ASM setsparseparent command.

  • If the destination is a non-sparse file and you run the setsparseparent command, the command will fail because the child file should be sparse. This is a second-level validation if the destination is a non-sparse file.

The setsparseparent ASM command performs the following validations before it sets the parent. The operation is allowed only if it satisfies the following criteria:

  • The child file must exist and must be a sparse file.

  • The parent file must exist. It can be a sparse or a non-sparse file.

  • Parent and child files must be present on the same ASM instance.

Note:

You have to ensure that the files you specify in the setsparseparent ASM command have a valid parent-child relationship. The command cannot perform this check for files on remote ASM instances. If the files do not have a valid parent-child relationship, then data integrity and corruption issues are likely to occur.

Example 1: The following ASM command copies sparse file “TBS_1.264.908376549” to the destination “+SPARSEDG/child_1”.

ASMCMD> cp --sparse +SPARSEDG/MERGE/DATAFILE/TBS_1.264.908376549 +SPARSEDG/child_1

Example 2: The following ASM command sets parent “tbs_1.269.908374993” for the sparse file “remote_child_10”.

ASMCMD> setsparseparent +SPARSEDG/remote_child_10 +DATAFILE/DATAFILE/tbs_1.269.908374993

Example 3: The following command copies sparse child files child_1, child_2 and child_3 to the destination directory +SPARSEDG.

ASMCMD> cp --sparse +SPARSEDG/DATAFILE/child_1 +SPARSEDG/DATAFILE/child_2 +SPARSEDG/DATAFILE/child_3 +SPARSEDG/

10.8 Managing Sparse Griddisks

You can resize, recreate, or monitor the activity of sparse griddisks.

10.8.1 Resizing the Virtual Space

When V$ASM_DISKGROUP.FREE_MB or V$ASM_DISK.FREE_MB is running low, you need to increase the virtual address space.
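
For example, you can check the remaining virtual free space with a query like the following (the disk group name SPARSE matches the examples in this section):

SQL> select name, free_mb, total_mb from v$asm_diskgroup where name = 'SPARSE';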

  1. To increase the size of the virtual space:
    1. Run the following command on the cells, specifying all the grid disks for the SPARSE disk group:
      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,....,SPARSE_CD_11_CELL01 virtualSize=newSize
      

      For example, on the first cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,SPARSE_CD_02_
      CELL01,SPARSE_CD_03_CELL01,SPARSE_CD_04_CELL01,SPARSE_CD_05_CELL01,SPARSE_CD_
      06_CELL01,SPARSE_CD_07_CELL01,SPARSE_CD_08_CELL01,SPARSE_CD_09_CELL01,SPARSE_
      CD_10_CELL01,SPARSE_CD_11_CELL01 virtualSize=12000G
      
      GridDisk SPARSE_CD_00_CELL01 successfully altered
      GridDisk SPARSE_CD_01_CELL01 successfully altered
      ...
      

      For example, on the next cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL02,SPARSE_CD_01_CELL02,SPARSE_CD_02_
      CELL02,SPARSE_CD_03_CELL02,SPARSE_CD_04_CELL02,SPARSE_CD_05_CELL02,SPARSE_CD_
      06_CELL02,SPARSE_CD_07_CELL02,SPARSE_CD_08_CELL02,SPARSE_CD_09_CELL02,SPARSE_
      CD_10_CELL02,SPARSE_CD_11_CELL02 virtualSize=12000G
      
      GridDisk SPARSE_CD_00_CELL02 successfully altered
      GridDisk SPARSE_CD_01_CELL02 successfully altered
      ...
      

      Note:

      You must change the size of the grid disks on all cells before making any changes in Oracle ASM.
    2. On an ASM instance, resize the disk group to this new size:
      SQL> alter diskgroup SPARSE resize all size newSize;
      

      For example:

      SQL> alter diskgroup SPARSE resize all size 12000G;
      
  2. To decrease the size of the virtual space:
    1. On an ASM instance, resize the disk group to this new size:
      SQL> alter diskgroup SPARSE resize all size newSize;
      

      For example:

      SQL> alter diskgroup SPARSE resize all size 8000G;
      
    2. Run the following command on the cells, specifying all the grid disks for the SPARSE disk group:
      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,....,SPARSE_CD_11_CELL01 virtualSize=newSize
      

      For example, on the first cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,SPARSE_CD_02_
      CELL01,SPARSE_CD_03_CELL01,SPARSE_CD_04_CELL01,SPARSE_CD_05_CELL01,SPARSE_CD_
      06_CELL01,SPARSE_CD_07_CELL01,SPARSE_CD_08_CELL01,SPARSE_CD_09_CELL01,SPARSE_
      CD_10_CELL01,SPARSE_CD_11_CELL01 virtualSize=8000G
      
      GridDisk SPARSE_CD_00_CELL01 successfully altered
      GridDisk SPARSE_CD_01_CELL01 successfully altered
      ...
      

      For example, on the next cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL02,SPARSE_CD_01_CELL02,SPARSE_CD_02_
      CELL02,SPARSE_CD_03_CELL02,SPARSE_CD_04_CELL02,SPARSE_CD_05_CELL02,SPARSE_CD_
      06_CELL02,SPARSE_CD_07_CELL02,SPARSE_CD_08_CELL02,SPARSE_CD_09_CELL02,SPARSE_
      CD_10_CELL02,SPARSE_CD_11_CELL02 virtualSize=8000G
      
      GridDisk SPARSE_CD_00_CELL02 successfully altered
      GridDisk SPARSE_CD_01_CELL02 successfully altered
      ...
      

10.8.2 Resizing the Physical Space

When the grid disks are running out of physical space, you need to increase the physical size of the grid disks.

You can determine the amount of physical space left by comparing the values of the TOTAL_MAT_MB and ALLOCATED_MAT_MB columns in V$ASM_DISK_SPARSE. When the values in these two columns are close in size, then you need to increase the physical size of the grid disks.
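
A minimal sketch of this check follows; the group number 5 is illustrative:

SQL> select disk_number, allocated_mat_mb, total_mat_mb,
            total_mat_mb - allocated_mat_mb free_mat_mb
     from v$asm_disk_sparse
     where group_number = 5;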

  1. To increase the size of the physical disk space:
    1. Before increasing the physical size of the grid disks, ensure there is free space available on the respective celldisks.
      [root@exa01adm01 tmp]# dcli -g ~/cell_group -l root "cellcli -e list celldisk \
        attributes name,freespace" 
      exa01celadm01: CD_00_exa01celadm01 0 
      exa01celadm01: CD_01_exa01celadm01 0 
      exa01celadm01: CD_02_exa01celadm01 0 
      exa01celadm01: CD_03_exa01celadm01 0 
      exa01celadm01: CD_04_exa01celadm01 0 
      exa01celadm01: CD_05_exa01celadm01 0 
      exa01celadm01: CD_06_exa01celadm01 0 
      exa01celadm01: CD_07_exa01celadm01 0 
      exa01celadm01: CD_08_exa01celadm01 0 
      exa01celadm01: CD_09_exa01celadm01 0 
      exa01celadm01: CD_10_exa01celadm01 0 
      exa01celadm01: CD_11_exa01celadm01 0 
      ...
      

      If there is no available free space, then you have to free up disk space being used by the Oracle ASM disk groups.

    2. Run the following command on the cells, specifying the grid disk to resize:
      CellCLI> alter griddisk CD_00_exa01celadm01,CD_01_exa01celadm01,...,CD_11_exa01celadm01 size=newPhysicalSize
      

      For example, on the first cell:

      CellCLI> alter griddisk CD_00_exa01celadm01,CD_01_exa01celadm01,CD_02_exa01celadm01,CD_03_exa01celadm01,CD_04_exa01celadm01,CD_05_exa01celadm01,CD_06_exa01celadm01,CD_07_exa01celadm01,CD_08_exa01celadm01,CD_09_exa01celadm01,CD_10_exa01celadm01,CD_11_exa01celadm01 size=12000G
      

      And then on the next cell:

      CellCLI> alter griddisk CD_00_exa01celadm02,CD_01_exa01celadm02,CD_02_exa01celadm02,CD_03_exa01celadm02,CD_04_exa01celadm02,CD_05_exa01celadm02,CD_06_exa01celadm02,CD_07_exa01celadm02,CD_08_exa01celadm02,CD_09_exa01celadm02,CD_10_exa01celadm02,CD_11_exa01celadm02 size=12000G
      
    3. You do not need to do anything on the Oracle ASM instance.
  2. To decrease the size of the physical disk space:
    1. You do not need to do anything on the Oracle ASM instance.
    2. Make sure you are not using more space in the Oracle ASM disk group than the size to which you are shrinking the physical disk space.
      SQL> SELECT sum(allocated_mat_mb) FROM v$asm_disk_sparse
            WHERE group_number = group_number_of_diskgrp_to_shrink;
      

      If the amount of used physical space is more than the size you plan to shrink the disk down to, then you must drop objects from the sparse disk group until the used space is below the new physical space threshold.

    3. Shrink the size of the disks by running the following command on the cells, specifying the grid disks to resize:
      CellCLI> alter griddisk CD_00_exa01celadm01,CD_01_exa01celadm01,...,CD_11_exa01celadm01 size=newPhysicalSize
      

      For example, on the first cell:

      CellCLI> alter griddisk CD_00_exa01celadm01,CD_01_exa01celadm01,CD_02_exa01celadm01,CD_03_exa01celadm01,CD_04_exa01celadm01,CD_05_exa01celadm01,CD_06_exa01celadm01,CD_07_exa01celadm01,CD_08_exa01celadm01,CD_09_exa01celadm01,CD_10_exa01celadm01,CD_11_exa01celadm01 size=4000G
      

      And then on the next cell:

      CellCLI> alter griddisk CD_00_exa01celadm02,CD_01_exa01celadm02,CD_02_exa01celadm02,CD_03_exa01celadm02,CD_04_exa01celadm02,CD_05_exa01celadm02,CD_06_exa01celadm02,CD_07_exa01celadm02,CD_08_exa01celadm02,CD_09_exa01celadm02,CD_10_exa01celadm02,CD_11_exa01celadm02 size=4000G
      


10.8.3 Monitoring Sparse Disk Group Activity

The V$ASM_DISK and V$ASM_DISKGROUP views contain information about the virtual size and utilization of the sparse ASM disk group. New views V$ASM_DISK_SPARSE and V$ASM_DISKGROUP_SPARSE contain information about the actual size and utilization of the sparse ASM disk group. V$ASM_DISK_SPARSE also contains performance and usage metrics.
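
For example, the following sketch joins the two sets of views to compare the virtual size of each sparse disk group with its physical size and utilization (both views share GROUP_NUMBER):

SQL> select g.name, g.total_mb virtual_mb,
            gs.total_mat_mb physical_mb, gs.allocated_mat_mb used_physical_mb
     from v$asm_diskgroup g, v$asm_diskgroup_sparse gs
     where g.group_number = gs.group_number;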

The following table describes the columns in V$ASM_DISK_SPARSE:

Table 10-1 V$ASM_DISK_SPARSE

Column             Description
-----------------  -------------------------------------------------------
GROUP_NUMBER       Number of the disk group containing the disk
DISK_NUMBER        Number assigned to the disk within this disk group
INCARNATION        Incarnation number for the disk
ALLOCATED_MAT_MB   Total used physical capacity of this disk
TOTAL_MAT_MB       Total physical capacity of this disk
SPARSE_READS       Number of read requests on sparse regions of this disk
SPARSE_BYTES_READ  Bytes read from sparse regions of this disk
SPARSE_READ_TIME   Time taken by sparse read I/Os

The following table describes the columns in V$ASM_DISKGROUP_SPARSE:

Table 10-2 V$ASM_DISKGROUP_SPARSE

Column             Description
-----------------  -------------------------------------------------------
GROUP_NUMBER       Cluster-wide number assigned to the disk group
ALLOCATED_MAT_MB   Total used physical capacity of this disk group
TOTAL_MAT_MB       Total physical capacity of this disk group

The following example shows the used space and the total space for some disks.

SQL> select 
     DISK_NUMBER          dsk_num,
     ALLOCATED_MAT_MB     alloc,
     TOTAL_MAT_MB         total
from V$ASM_DISK_SPARSE
where GROUP_NUMBER = 5;
 
DSK_NUM    ALLOC      TOTAL
---------- ---------- ----------
         0       5536      57336
         1       5424      57336
         2       5532      57336
         3       5424      57336
         4       5424      57336

In the following example, sparse ASM grid disks were created with an actual size of 56 GB and a virtual size of 560 GB. When you query V$ASM_DISK's OS_MB and TOTAL_MB columns, you can see the virtual size of 573440 MB (573440 MB / 1024 = 560 GB).

SQL> select os_mb, total_mb from v$asm_disk where group_number=4;

     OS_MB   TOTAL_MB 
---------- ---------- 
    573440     573440 
    573440     573440 
    573440     573440 

Querying V$ASM_DISK_SPARSE for TOTAL_MB shows the actual size of the ASM grid disk available for use. Note that each ASM grid disk stores metadata of approximately 2 MB per 16 GB of space allocated to the sparse ASM grid disk. For the 56 GB allocated per grid disk in this example, 8 MB of space is reserved for sparse disk metadata (57336 MB usable + 8 MB metadata = 57344 MB, and 57344 MB / 1024 = 56 GB).

SQL> select total_mb from v$asm_disk_sparse where group_number=4; 

 TOTAL_MB 
--------- 
 57336 
 57336 
 57336 

10.8.4 Repurposing Sparse Griddisks

You can change sparse griddisks back to normal griddisks.

If you previously created sparse griddisks, but now want to use them as normal griddisks, you can drop and recreate the disks.
  1. Drop the snapshot database that is currently using the sparse griddisks.
    RMAN> startup mount force;
    RMAN> delete database;
    
  2. Using SQL*Plus or ASMCMD, drop the Oracle ASM disk group that contains the sparse griddisks.
    SQL> DROP DISKGROUP sparse FORCE INCLUDING CONTENTS;
    
  3. Using CellCLI, drop the griddisks in the storage cells.
    cellcli -e drop griddisk all harddisk prefix=SPARSEC1
    
  4. Recreate the griddisks.

    When you create the griddisks, use a size similar to that of the other disks, and add them back to the disk group of your choice. Do not specify the sparse attribute. (See the CellCLI sketch after these steps.)

  5. Add the recreated griddisks to an Oracle ASM disk group.

    Use the SQL ALTER DISKGROUP command with the ADD DISK clause to add the disk to the Oracle ASM disk group using syntax similar to the following:

    SQL> ALTER DISKGROUP disk_group_name ADD DISK 'o/cell_IPaddress/data*';
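
For step 4, the following is a minimal CellCLI sketch of re-creating the grid disks without the sparse attribute; the prefix (DATAC1) and size shown are hypothetical and must match your configuration:

cellcli -e create griddisk all harddisk prefix=DATAC1, size=423G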