Oracle® Exadata Storage Server Software User's Guide
12c Release 1 (12.1)

E50471-21

12 Setting up Oracle Exadata Storage Snapshots

This chapter contains the following topics:

  • Overview of Exadata Snapshots

  • Prerequisites for Exadata Snapshots

  • Concepts

  • Managing Exadata Snapshots

  • Resizing Grid Disks

  • Monitoring Sparse Disk Group Activity

Overview of Exadata Snapshots

Traditionally, to clone databases in a production system, you would create test master and snapshot databases on a non-Exadata system (Figure 12-1). In some cases, each of these databases is a full copy that consumes as much storage as its source (Figure 12-2).

Figure 12-1 Traditional Clone to a Non-Exadata System


Figure 12-2 Database Clone That Is a Full Copy of Its Source


If each clone is a full copy of the production database, this approach is expensive in terms of both the storage consumed and the time it takes to create the clones. Imagine creating ten clones of a multi-terabyte database and it is easy to see why this approach does not scale.

Another drawback to this approach is that Oracle Exadata Storage Server Software features such as Smart Scan, Smart Logging, and Smart Flash are not available on non-Exadata systems.

To solve these problems, you can use Exadata snapshots. Exadata snapshots are ideal for creating space-efficient, read-only or read-write clones of an Oracle database for development, testing, or other non-production purposes. The disk space and time savings are especially significant when multiple clones are required. Figure 12-3 depicts the space required for an Exadata snapshot.

Figure 12-3 An Exadata Snapshot


An Exadata snapshot is based on a test master, which is a full clone of the source database. The test master is the only full copy of the source database. From a single test master you can create multiple Exadata snapshots with minimal additional storage and minimal effort. Each Exadata snapshot uses a small fraction of the disk space required for the test master and can be created or dropped in seconds. Each Exadata snapshot is a logical copy of the test master.

Before creating Exadata snapshots from the test master, you can modify the data in the test master, if required. For example, you can delete or mask sensitive data in the test master before making it available to non-privileged users.

Creating an Exadata snapshot from the test master is as simple as recording the parent file name in the child file header, an operation that completes in seconds and requires minimal disk space. Additional disk space is consumed only when the user of the snapshot begins to change data. Only new data is written to data blocks that are allocated to the snapshot on write. All requests for data that has not changed are serviced by the data blocks of the test master.

Multiple users can create independent snapshots from the same test master. This enables multiple development and test environments to share space while maintaining independent databases for each user. Figure 12-4 shows an Exadata environment with three Exadata snapshots that use the same test master. Currently you cannot create a second-level snapshot based on an existing Exadata snapshot.

Figure 12-4 An Exadata Environment with 3 Exadata Snapshots from the Same Test Master


Support of Exadata Features

In addition to space and time savings, Exadata snapshots provide cost-effective development, quality assurance, and test environments on Exadata. Exadata snapshots can be used by developers and testers who need to validate functionality and practice maintenance and operational steps in a fully functional Exadata environment that includes features such as Exadata Smart Flash Cache, Smart Scan, and Hybrid Columnar Compression.

Separate Test/Development and Production Environments

Oracle recommends that test and development environments be hosted on a separate physical Exadata rack from the rack hosting the production database (see Figure 12-5 and Figure 12-6). An Exadata system dedicated to development and test is ideal. Test masters and their associated Exadata snapshots would be hosted on this system. Alternatively it may be an Exadata system that hosts Data Guard standby databases for high availability, disaster recovery, or other purposes as long as capacity allows. Test masters and their snapshots may reside in either physical or virtual machines on an Exadata system.

Types of Exadata Snapshots

You can create two types of Exadata snapshots, depending on the current setup of your environment:

  • You have a pluggable database (PDB) and want to create a test master from it.

    You can create Exadata snapshot PDBs from individual PDBs within a container database (CDB). You can clone individual PDBs to create the test master PDB, from which Exadata snapshot PDBs are generated.

    You can move Exadata snapshot PDBs from one CDB to another in the same Exadata cluster. You can also create an Exadata snapshot PDB in one container from a test master PDB residing in another container as long as both CDBs are in the same Exadata cluster.

    The test master and its Exadata snapshots must be in the same ASM cluster environment.

    Figure 12-5 shows a production CDB with three PDBs. Two of the PDBs (PDB1 and PDB2) have been cloned to create test master PDBs, which were then unplugged and plugged into another CDB on the test Exadata system. In this figure, six Exadata snapshot PDBs have been created from the test master PDBs.

    Figure 12-5 Exadata Snapshot PDBs


  • You have a container database (CDB) and want to create test masters from all its PDBs, or you have a simple non-container database and want to create a test master from it.

    An Exadata snapshot database can be either a non-container database (non-CDB) or a container database (CDB). Creating an Exadata snapshot database of a CDB enables access to all of the pluggable databases in that container.

    Figure 12-6 depicts a full clone of the production database. The clone, which is the test master database, is hosted on a separate test/development system, and it is associated with six Exadata snapshots.

    The test master database is created using either a Data Guard standby database (recommended if the test master will be refreshed on a regular basis) or RMAN.

    The test master database and its Exadata snapshots must be in the same ASM cluster environment.

    Figure 12-6 Exadata Snapshot Databases in a Non-Multitenant Environment


Prerequisites for Exadata Snapshots

Before creating Exadata snapshots, check that your environment meets the following requirements:

  • Storage servers must be Exadata X3 or later

  • Exadata Storage Server software 12.1.2.1.0 or later for Exadata Storage and Database Servers

    You cannot downgrade to an earlier version with sparse grid disks on a cell.

  • Oracle Grid Infrastructure 12.1.0.2.0 BP5 or later

    The ASM disk group that contains the sparse ASM grid disks must have both COMPATIBLE.RDBMS and COMPATIBLE.ASM set to 12.1.0.2 or later.

    The parent disk group can be 11.2 compatible.

  • Oracle Database version 12.1.0.2.0 BP5 or later

    The parent database and the snapshot database must be 12.1.0.2 compatible.

  • The data files for the snapshot database and the parent database must be on the same ASM cluster.

  • The db_block_size must be at least 4K and be a multiple of 4K.
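
As a quick check of some of these prerequisites, you can query the compatibility attributes of your existing disk groups from the ASM instance and the block size from the database instance. This is a minimal sketch; adapt it to your own disk group and database names:

SQL> -- In the ASM instance: compatibility settings of each disk group
SQL> select name, compatibility, database_compatibility
     from v$asm_diskgroup;

SQL> -- In the database instance: block size
SQL> show parameter db_block_size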

 

Concepts

This section describes the objects that you need for creating and using Exadata snapshots. It contains the following sections:

  • Sparse Database and Sparse Files

  • Sparse Grid Disks

  • Sparse Disk Groups

  • Sparse Disk Sizing and Allocation Methodology

  • Refresh Considerations, or Lifecycle of Exadata Snapshots

  • Using an Oracle Data Guard Standby Database as the Test Master

Sparse Database and Sparse Files

A database consists of the following files:

  • control files

  • online redo logs

  • temp files

  • data files

In a sparse database, such as an Exadata snapshot, its data files are sparse files. A sparse file contains only changes made to blocks from the parent file (the parent file remains unchanged) and maintains a pointer to the parent file for access to unchanged data. See Figure 12-3.

The Exadata snapshot has its own copy of the other database files (control files, online redo logs, and temp files). These other database files will not be sparse files.

Sparse Grid Disks

A sparse ASM disk group is composed of sparse grid disks. A sparse grid disk has a virtual size attribute as well as a physical size.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

The maximum allowed virtual size is 100 TB.

You create sparse grid disks before creating a sparse ASM disk group. If you are deploying a new system or re-deploying an existing system, set aside some space from each cell disk for use by the disk group just as you would for any other disk group.

If you want to use an existing system without reimaging, see My Oracle Support note 1467056.1, "Resizing Grid Disks in Exadata: Examples", for instructions on resizing existing grid disks.

See "Creating Sparse Grid Disks" for more details.

Sparse Disk Groups

Exadata snapshots utilize Oracle ASM sparse disk groups: sparse data files can be created only in Oracle ASM sparse disk groups. Figure 12-7 shows a sparse disk group containing three Exadata snapshots.

Figure 12-7 Sparse Disk Group Contains Exadata Snapshots


A sparse ASM disk group can store both sparse and non-sparse files. You can create full database objects in a sparse ASM disk group. For example, you can create your test master database in your sparse ASM disk group along with the Exadata snapshots. Note that there will be no space savings for the test master database because it will be a full copy of the source database.

A sparse disk group has the following attributes:

  • compatible.asm must be set to 12.1.0.2 or higher.

  • compatible.rdbms must be set to 12.1.0.2 or higher.

  • cell.sparse_dg must be set to "allsparse". This is a new attribute that identifies the disk group to ASM as being made up of sparse grid disks.

  • appliance.mode must be set to true.

  • A sparse disk group uses an extent size of 16 times the allocation unit (AU) size. With a 4 MB AU, each extent is 64 MB.

  • A sparse disk group uses Virtually Allocated Metadata.

For example, the following SQL command creates a sparse disk group:

SQL> create diskgroup SPARSE
  normal redundancy 
  disk 'o/*/SPARSE_*'
  attribute 
    'compatible.asm'          = '12.1.0.2',
    'compatible.rdbms'        = '12.1.0.2',
    'cell.smart_scan_capable' = 'true',
    'cell.sparse_dg'          = 'allsparse',
    'au_size'                 = '4M';

Sparse Disk Sizing and Allocation Methodology

To create Exadata snapshots, you must have sparse grid disks created with an ASM disk group created based on those disks. It is not possible to directly modify an existing disk group and convert it into a sparse disk group; you must create grid disks fresh with the sparse attributes. If you want to use an existing disk group, then the disk group and associated grid disks must be dropped and re-created.

If you want to create a new sparse disk group and all of your space is currently allocated to existing disk groups, then you must resize one or more existing disk groups to free up space. Use the following steps to perform this process.

Sizing Steps

When sizing, remember that you can also put non-sparse databases into the sparse disk group. Non-sparse databases and files will use up full physical space.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

The maximum virtual size that can be allocated per sparse grid disk is 100 TB. However, to avoid additional performance impact, Oracle recommends against creating a larger virtual size than you truly require.

  1. Determine the amount of physical space for your snapshots. Use the formula found in "Calculating the Physical Size for Grid Disks".

    Note that a good starting point for the physical space allocated to sparse grid disks is a maximum of 15% of your total space. This can be tuned up or down as required based on future utilization patterns and needs.

  2. Determine the amount of virtual space you will require for your snapshots. Use the formula found in "Calculating the Virtual Size for Grid Disks".

    Note that the virtual size can be modified online by just modifying the ASM disk group without making any changes to the underlying grid disks.

  3. Using the number of snapshots you plan to have at any one time and how much change you expect them to make, use the script from My Oracle Support note 1464809.1 to determine which existing disk group(s) you want to shrink to release space for the sparse disk group. Based on the results of this script, you may need to shrink multiple disk groups to get the space you require for the sparse disk group. Leave a minimum of 15% free space in the original disk groups to allow for rebalancing during the resize operation.

    If there is not enough free space available in your current disk groups to provide the desired space for the sparse disk group, do one of the following:

    • Rethink your sparse usage in this environment and downsize accordingly

    • Relocate objects and databases from this environment to another with available free space

    • Add storage

  4. Once you have decided which disk group to resize, follow the steps in My Oracle Support note 1465230.1 to resize the disks. This process allows you to do the resize in a rolling manner. The main prerequisite is that you have enough free space in the disk group you are resizing to be able to rebalance one cell out of the group.

  5. Create the sparse grid disk from the space just released using the commands outlined in "Creating Sparse Grid Disks".

  6. Create the sparse ASM disk group using the commands outlined in "Creating an ASM Disk Group for the Sparse Grid Disks".

  7. To monitor sparse grid disk activity see "Monitoring Sparse Disk Group Activity".

  8. If you find that you need to resize your sparse grid disks:

    • If more physical space is required, follow steps 1 through 4 in this section to free up space and then follow the steps in "Resizing Grid Disks". The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

    • If more virtual space is required, follow the steps in "Resizing the Virtual Space". The maximum virtual size that can be allocated to sparse grid disks from a single cell disk is 100 TB.

Refresh Considerations, or Lifecycle of Exadata Snapshots

Refresh cycles can influence how you use and create Exadata snapshots. The test master must remain in either READONLY state (if open) or closed while it has snapshots associated with it. If the test master needs to be refreshed, all snapshots dependent on the test master must be dropped and re-created.
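
For example, a test master database that is kept open read-only while snapshots exist would typically be started as follows (a generic sketch; adjust to your own startup procedure):

SQL> startup mount;
SQL> alter database open read only;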

If different groups of snapshot users have different refresh cycle requirements, you may need to maintain multiple test masters. Figure 12-8 shows three test masters, each with its own refresh schedule.

Figure 12-8 Three Developer Groups for PDB1, Each With a Different Refresh Cycle


 

Using an Oracle Data Guard Standby Database as the Test Master

In a situation where the test master is either a non-container database or a full CDB that needs to be refreshed regularly (that is, the test master is not a PDB), Oracle recommends creating the test master database as a Data Guard physical standby dedicated to this purpose. This provides multiple benefits:

  • Easier refresh

    Oracle Data Guard is a proven solution for synchronizing multiple physical replicas of a production database used for disaster recovery, read-only offload from production, backup offload, and test. This same functionality can be used to maintain a copy of the production database dedicated to serving as a test master database that is more easily refreshed on a periodic basis. The benefit of using a Data Guard physical standby increases as the size of the test master database and the number of times it must be refreshed increases. These benefits far outweigh the small incremental effort required to create a Data Guard replica compared to simply cloning a database from an RMAN backup.

  • Minimal impact to primary

    During the time the Data Guard replica is being used as the test master database, Data Guard redo transport and apply are disabled. There is zero impact to the production database. When it is time to refresh, only the deltas generated since the Data Guard replica was converted to a test master database are taken from the production database and used to resynchronize the test master database. See "Refreshing the Test Master Database" for details.

    NOTE: Since transport and apply for this Data Guard replica will be stopped while it functions as a test master, it should not be used for disaster recovery or for any purpose other than as a test master. If you are already using Data Guard for high availability or disaster protection, Oracle recommends creating one or more separate Data Guard replicas to use as test masters for Exadata snapshot databases.

  • Easy to scrub prior to creating snapshot clones

    Data Guard makes it easy to modify the test master database before making it available to create Exadata snapshots. For example, a Data Guard replica can be opened read-write and data can be masked or scrubbed prior to creating Exadata snapshots. Later, when testing is complete, the test master database can be converted back into a Data Guard replica discarding any modifications made to the original copy and refreshing it using only the deltas from the production database.

    If you instead create the test master from an RMAN backup and then mask or scrub the data, then each time you need to refresh the test master you must restore another backup and scrub it again to make it current.

 

Managing Exadata Snapshots

To create and manage Exadata snapshots, you perform these procedures:

  • Creating Sparse Grid Disks

  • Creating an ASM Disk Group for the Sparse Grid Disks

  • Setting Up the Test Master

  • Creating the Snapshot Databases

  • Refreshing the Test Master Database

Creating Sparse Grid Disks

When creating sparse grid disks, you need to specify the physical size and the virtual size.

Calculating the Physical Size for Grid Disks

You can use the following formula to get a rough estimate of the total physical space to set aside for a sparse ASM disk group:

Total physical space =
   (SUM(size of all test masters in the sparse ASM disk group) +
    SUM(approximate size of all updates to the snapshot databases))
   * ASM Redundancy

In the formula above, ASM redundancy takes into account ASM mirroring of extents. Exadata requires ASM redundancy set to either normal redundancy (double mirror the extents) or high redundancy (triple mirror the extents). If the sparse ASM disk group will use normal redundancy, expect to double the space used. If using high redundancy, expect to triple the space used.

For example, suppose you want two test masters in a sparse ASM disk group created with normal redundancy, with a combined total space of 500 GB (250 GB each), and each test master will have 5 Exadata snapshots, each expected to modify 20% of the blocks. The total physical space needed can be calculated as follows:

Space for 2 test masters:   2 * 250 GB =                                500 GB
Space for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB * 20% =                                                 500 GB
Subtotal                                                               1000 GB
Normal redundancy: 2 * 1000 GB =                                       2000 GB

Divide this value by the number of disks to determine the size parameter for each disk. ASM grid disks should be allocated on 16 MB boundaries. If the size parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.
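
For example, on a hypothetical configuration with 36 cell disks (three storage servers with 12 cell disks each), the 2000 GB calculated above works out to roughly 2000 GB / 36 ≈ 55.6 GB per cell disk; rounding up to a 16 MB boundary gives a size of 56 GB (57,344 MB, which is evenly divisible by 16) for each grid disk.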

Note that you should set aside additional space to use for multiple projects and through multiple iterations.

Calculating the Virtual Size for Grid Disks

You can use the following formula to get a rough estimate of the virtual size to assign for a sparse ASM disk group:

Virtual size required for sparse disks =
   (SUM(full virtual size of all Exadata snapshots) + Physical space allocated)
   * ASM Redundancy

To continue with the example from the previous section, you have 10 Exadata snapshots. If they were full copies of the test master, they would be 250 GB each.

The following shows the calculation for the total virtual space:

Full size for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB =                                                      2500 GB
Size of the 2 test masters: 2 * 250 GB =                                500 GB
Subtotal                                                               3000 GB
Normal redundancy: 2 * 3000 GB =                                       6000 GB

Divide this value by the number of disks to determine the virtualsize parameter for each disk. The virtual size for each grid disk should be allocated on 16 MB boundaries. If the virtualSize parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.
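
Continuing the hypothetical 36-disk configuration, the 6000 GB of virtual space works out to roughly 6000 GB / 36 ≈ 166.7 GB per cell disk; rounding up to a 16 MB boundary gives, for example, a virtualsize of 167 GB (171,008 MB, which is evenly divisible by 16) for each grid disk.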

Note that you should set aside additional space to use for multiple projects and through multiple iterations.

Creating a Sparse Grid Disk

To create the sparse ASM grid disks, log in to CellCLI on each cell (or use dcli) and run a command similar to the following, changing size values where appropriate:

CellCLI> create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G

This creates a grid disk of physical size 56 GB but presents to ASM as a 560 GB grid disk. The size parameter should match the actual physical size of the ASM grid disk while the virtualsize parameter should be at least the physical size of the ASM grid disk.
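
If you prefer to issue the command once for all cells rather than logging in to each cell individually, you can wrap it in dcli. The following sketch assumes a cell_group file that lists your storage servers and SSH user equivalence for the celladmin user:

$ dcli -g cell_group -l celladmin "cellcli -e create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G"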

Attributes for the ASM grid disk created above would look like the following from the "LIST GRIDDISK DETAIL" command:

CellCLI> LIST GRIDDISK DETAIL
size:        56G
sparse:      TRUE
virtualSize: 560G

size displays the actual physical size of the grid disk.

sparse has a value of TRUE.

virtualSize displays the virtual size of the grid disk.

Creating an ASM Disk Group for the Sparse Grid Disks

After you have created the sparse grid disks, you create an ASM disk group to enable those disks to be accessible to a database on the cluster. To create a disk group, log in to an ASM instance using SQL*Plus as sysasm and run a command similar to the following:

SQL> create diskgroup SPARSE high redundancy disk 'o/*/SPARSE_*' attribute
'compatible.asm'='12.1.0.2', 
'compatible.rdbms'='12.1.0.2', 
'au_size'='4M', 
'cell.smart_scan_capable'='true', 
'cell.sparse_dg'='allsparse', 
'appliance.mode' = 'TRUE';

compatible.asm must be set to 12.1.0.2 or higher.

compatible.rdbms must be set to 12.1.0.2 or higher.

cell.sparse_dg must be set to "allsparse". This identifies the disk group to ASM as being made up of sparse grid disks.

appliance.mode must be set to true.
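
After creating the disk group, you can confirm its attributes from the ASM instance with a query similar to the following. This is a minimal sketch that assumes the disk group is named SPARSE:

SQL> select g.name disk_group, a.name attribute, a.value
     from v$asm_attribute a, v$asm_diskgroup g
     where a.group_number = g.group_number
       and g.name = 'SPARSE'
       and a.name in ('compatible.asm', 'compatible.rdbms',
                      'cell.sparse_dg', 'appliance.mode', 'au_size');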

 

Setting Up the Test Master

You can create the test master using either of these methods:

Create a New Test Master - Full Clone on a Disk Group with ASM ACL Enabled

You can create a full clone of your database using RMAN backup/restore, data pump, or any of the other methods typically used to create a full clone of a database.

After creating the full clone, remove write permissions on all the data files to help prevent accidental overwrite.

SQL commands in the ASM instance only allow you to set file permissions to read-only; there is no separate command to remove just the write permission.

SQL> ALTER DISKGROUP DATA set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';

Converting an Existing Full Clone or Standby Database to a Test Master

If you already have a full clone or a standby database that you want to repurpose as a test master, perform the following steps:

Note that standby databases cannot be running redo apply while serving as test master.

  1. If you are using a standby database:

    1. For initial creation of the Data Guard replica, use the steps outlined in My Oracle Support note 1617946.1, "Creating a Standby using RMAN Duplicate (RAC or Non-RAC)".

      The Data Guard copy must have enough redo applied that it can be opened in a READONLY state.

      Note:

      A Data Guard snapshot standby is different from an Exadata snapshot. A Data Guard snapshot standby is a complete copy of the source database that is open read-write. Conversion to a Data Guard snapshot standby is a simple operation using a single command. Data Guard snapshot standby facilitates making modifications to the test master and refreshing it for subsequent rounds of testing. See "Managing a Snapshot Standby Database" in Oracle Data Guard Concepts and Administration for more information on Data Guard snapshot standby databases.
    2. When the standby database is at a consistent state and can be opened READONLY, stop log transport to the standby and disable redo apply on it:

      DGMGRL> edit database TESTMASTER set property logshipping=OFF;
      Property "logshipping" updated
      DGMGRL> edit database TESTMASTER set state=APPLY-OFF;
      Succeeded
      
    3. Make any modifications to the test master, for example, deleting sensitive data, if necessary.

  2. Check that both compatible.ASM and compatible.RDBMS are 11.2.0.0 or newer.

  3. Enable access control on the disk group that contains the test master's data files. The disk group must be on Exadata.

    SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';
    
  4. Grant ownership to all data files. See "Setting the Ownership of the Test Master Data Files" for details.

  5. Remove write permissions on all the data files to help prevent accidental overwrite.

    SQL commands in the ASM instance only allow you to set file permissions to read-only; there is no separate command to remove just the write permission.

    SQL> ALTER DISKGROUP DATA set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';
    

    This allows snapshots to be created and owned by users other than the owner of the base files.

Setting the Ownership of the Test Master Data Files

After you have cloned the database to create a test master database, you add an operating system user to the disk group and make that user the owner of the test master's data files.

You can do this by running SQL commands manually in SQL*Plus, or by running a script:

Running Commands Manually

The following commands are run in SQL*Plus.

Specify an operating system user as the owner of the disk group:

SQL> ALTER DISKGROUP DATA ADD USER 'scott';

Make the operating system user the owner of the test master's data files:

SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
  '+DATA/TESTMASTER/DATAFILE/system.257.865863315';

SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
  '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317';

SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
  '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317';

SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
  '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315';
Running from a Script

The following procedure is equivalent to the commands above, but it queries v$datafile for the filenames:

  1. Add an operating system user as owner of the disk group.

    SQL> ALTER DISKGROUP DATA ADD USER 'scott';
    
  2. Generate a script called set_owner.sql to set the owner of the test master's data files.

    • If the test master is a full database, run the following in the test master database:

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' from v$datafile;
      exit
      
    • If the test master is a PDB, run the following in the CDB$ROOT of the test master PDB:

      In the select statement below, the example assumes the test master PDB has a con_id of 10.

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' from v$datafile where con_id=10;
      exit
      
  3. Remove extra lines in set_owner.sql.

    sed -i '/SQL/d' set_owner.sql
    
  4. Run the script in the ASM instance.

    SQL> @set_owner
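
    After the script completes, you can verify the resulting ownership and permissions with ASMCMD. This is a sketch that assumes the test master's data files are under +DATA/TESTMASTER/DATAFILE:

    ASMCMD> ls --permission +DATA/TESTMASTER/DATAFILE/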
    

Creating the Snapshot Databases

You can create an Exadata snapshot database of a pluggable database (PDB) or of a full database.

The first example addresses Exadata snapshot PDBs; the second addresses snapshot databases (non-CDB or CDB).

Creating a Snapshot of a Pluggable Database

Creating an Exadata snapshot of a pluggable database (PDB) is the simplest method for creating a snapshot because it requires no additional manual steps. Two clauses of the CREATE PLUGGABLE DATABASE statement identify the PDB as an Exadata snapshot. The snapshot creation process alters the permissions on the test master PDB's files, marking them READONLY to ensure that the test master is not modified.

Creating individual Exadata snapshot PDBs is best used when creating snapshots for a smaller number of PDBs within a given CDB. Figure 12-9 shows a high-level example of a typical lifecycle for a PDB with two PDB snapshots.

Figure 12-9 Lifecycle of Exadata Snapshots Using PDBs


One of the benefits of Oracle Multitenant and PDBs is the ability to easily clone an existing PDB to create a test master and move it from one CDB to another. Oracle recommends that you clone your source PDB to create the test master and then migrate it to your test environment where you can perform any data scrubbing that may be needed. Once complete, the test master PDB can be used to create any number of Exadata snapshot PDBs.

NOTE: An Exadata snapshot PDB is created as a PDB in the CDB. You are limited to a total of 252 PDBs in a single CDB. All PDBs in that CDB, whether they are test masters, snapshots or PDBs that are not being used for snapshots, are included in that limit. If you need to create more PDBs than can fit in a single CDB, you can create the Exadata snapshot PDB in another CDB on the same cluster.

After creating the test master PDB, perform the following steps to create an Exadata snapshot PDB:

  1. In SQL*Plus connect to the cdb$root.

  2. Close the test master PDB in all instances.

    SQL> alter pluggable database PDB1TM1 close instances=all;
    
  3. Open the test master PDB in the local instance in read only mode.

    SQL> alter pluggable database PDB1TM1 open read only;
    
  4. Create an Exadata snapshot PDB of the test master.

    SQL> create pluggable database PDB1S1 from PDB1TM1 tempfile reuse 
    create_file_dest='+SPARSE' snapshot copy;
    

    create_file_dest must specify the name of a sparse disk group; this ensures the files for the Exadata snapshot are created in the correct disk group. The "snapshot copy" clause creates the PDB as a snapshot rather than a full PDB clone.

    NOTE: When an Exadata snapshot PDB is created, the command changes file permissions on the test master PDB, marking the files as READONLY in ASM.
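
    After creation, the Exadata snapshot PDB is opened like any other PDB before use, for example (using the snapshot name from the example above):

    SQL> alter pluggable database PDB1S1 open;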

Creating a Snapshot of a Full Database

You create an Exadata snapshot of a full database if your test master database is a non-CDB, or if it is a CDB and virtually all of its PDBs are to be used for testing.

Note that the test master database cannot be used as a failover target to provide high availability or disaster recovery (a Data Guard configuration may have multiple replicas that can each serve different purposes). Similar to test master PDBs, test master databases cannot be modified while Exadata snapshots exist against them.

The test master database cannot be a read-only physical standby database that is in recovery mode (for example, Active Data Guard in Real Time Apply).

The test master database and its Exadata snapshots must be in the same ASM cluster environment.

Figure 12-10 shows the lifecycle of an Exadata test master database and its snapshot databases.

Figure 12-10 Lifecycle of Test Master and Snapshot Databases


To create the Exadata snapshot databases:

  1. In the test master database, create a sample control file script to use for your Exadata snapshot databases by backing up the existing control file to trace.

    Connect to the test master database via SQL*Plus as sysdba and run the following:

    1. Determine the name and location of the trace file that will be generated by your session:

      SQL> select value from v$diag_info where name = 'Default Trace File';
      VALUE
      --------------------------------------------------------------------------
      /u01/app/oracle/diag/rdbms/TESTMASTER/TESTMASTER1/trace/TESTMASTER1_ora_26756.trc
      
    2. Run the backup control file to trace command to place the "create controlfile" command into the trace file.

      SQL> alter database backup controlfile to trace;
      
    3. Retrieve the file shown for the Default Trace File.

  2. In the test master database, determine the existing file names for the rename step (step 9). Log into SQL*Plus as sysdba and run the following:

    set newpage 0
    set linesize 999
    set pagesize 0
    set feedback off
    set heading off
    set echo off
    set space 0
    set tab off
    set trimspool on
    spool rename_files.sql
    select 'EXECUTE dbms_dnfs.clonedb_renamefile 
    ('||''''||name||''''||','||''''||replace(replace(replace(name,'.','_'),'TESTMASTER','JOHNTEST'),'DATA','SPARSE')||''''||');' from v$datafile;
    exit 
    

    The query above builds a file called rename_files.sql that contains statements for each data file similar to the following:

    EXECUTE dbms_dnfs.clonedb_renamefile (
    '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
    '+SPARSE/JOHNTEST/DATAFILE/system_257_865863315');
    

    The REPLACE function in the query:

    • replaces periods in the original filename with underscores

    • replaces the original database name of TESTMASTER with JOHNTEST

    • replaces the original disk group name of DATA with SPARSE

  3. Shut down the test master.

    SQL> shutdown;
    
  4. Create an init.ora file for the Exadata snapshot database. You can use the test master's init.ora file as a template. If so, make sure to change the db_name and control file entries. For this procedure, the init.ora file for the Exadata snapshot database is referenced as snap_init.ora in commands and examples.

    $ cp init_TestMaster.ora snap_init.ora
    

    Modify snap_init.ora with the new database name and new control files, for example:

    db_name = JohnTest
    control_files = '+DATA/JOHNTEST/control1.f'
    
  5. Edit the trace file generated in step 1 to create a control file for the Exadata snapshot database and create a .sql file called crt_ctlfile.sql that you will run later in step 8. The control file should be created with Exadata snapshot database name, new log file names, and the test master's data file names.

    The example below shows a control file creation script. "JohnTest" is the Exadata snapshot database. The LOGFILE lines specify the new log file locations, and the DATAFILE lines specify the test master database's data file locations. The new log files can be in any disk group that has enough space, but they should not be created in the sparse ASM disk group.

    SQL> CREATE CONTROLFILE REUSE SET DATABASE JohnTest RESETLOGS
          MAXLOGFILES 32 
          MAXLOGMEMBERS 2 
          MAXINSTANCES 1 
          MAXLOGHISTORY 908 
      LOGFILE 
          GROUP 1 '+DATA/JOHNTEST/t_log1.f' SIZE 100M BLOCKSIZE 512,
          GROUP 2 '+DATA/JOHNTEST/t_log2.f' SIZE 100M BLOCKSIZE 512
      DATAFILE
          '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
          '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317',
          '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317',
          '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315'
      CHARACTER SET WE8DEC; 
    
  6. Create the audit_file_dest directory.

    $ mkdir /u01/app/oracle/admin/johntest
    
  7. Start a database instance pointing to the Exadata snapshot database init.ora file using the following commands:

    SQL> sqlplus / as sysdba
    SQL> startup nomount pfile=snap_init.ora
    
  8. Create the Exadata snapshot control file using the script created in step 5. In the following example the script is named crt_ctlfile.sql.

    SQL> @crt_ctlfile
    
  9. Run the rename_files.sql script you created in step 2. All the files must be renamed prior to opening the Exadata snapshot database. Connect using SQL*Plus as sysdba to the Exadata snapshot database and run the following:

    SQL> @rename_files
    

    This script modifies the permissions of the test master database files in ASM, marking them as READONLY.

    The dbms_dnfs.clonedb_renamefile procedure, which is called by rename_files.sql, sets up the parent-child relationship between the test master database and the snapshot database, and renames the filenames in the snapshot database's control file.

  10. Open the Exadata snapshot database with the RESETLOGS option:

    SQL> ALTER DATABASE OPEN RESETLOGS;
    
  11. Confirm that the Exadata snapshot files are child files of the test master database. Connect using SQL*Plus as sysdba to the Exadata snapshot database, and run the following command:

    SQL> select filenumber num, CLONEFILENAME child, SNAPSHOTFILENAME parent
    from x$ksfdsscloneinfo;
    

    The following is an example of the output from the query:

    NUM CHILD                                        PARENT 
    ------------------------------------------------------------------
    1   +SPARSE/JOHNTEST/DATAFILE/system_257_865863315
    +DATA/TESTMASTER/DATAFILE/system.257.865863315
    
    2   +SPARSE/JOHNTEST/DATAFILE/sysaux_258_865863317
    +DATA/TESTMASTER/DATAFILE/sysaux.258.865863317
    
    3   +SPARSE/JOHNTEST/DATAFILE/sysext_259_865863317
    +DATA/TESTMASTER/DATAFILE/sysext.259.865863317
    
    4   +SPARSE/JOHNTEST/DATAFILE/tbs_1_256_865863315
     +DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315
     
     
    
  12. Log in using SQL*Plus to the Exadata snapshot database, and add temp files to the TEMP tablespace. This will be a full size temp file, not sparse.

    SQL> alter tablespace TEMP add tempfile '+DATA' size 10g;
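
    To confirm that the temp file was added at its full size, you can query V$TEMPFILE from the snapshot database; a minimal sketch:

    SQL> select name, bytes/1024/1024/1024 size_gb from v$tempfile;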
    

Refreshing the Test Master Database

To refresh the test master database, the main steps are:

Step 1   Drop the Snapshot Databases

Delete the Exadata snapshot databases that are children of the test master database you want to refresh.

You can delete them using RMAN. For example, connect to each Exadata snapshot database using RMAN with the Exadata snapshot as the target and run the following commands:

RMAN> startup mount force;
RMAN> delete database;

NOTE: Failure to drop an Exadata snapshot database will have no impact on the state of a test master database. However, an Exadata snapshot can behave unpredictably if its test master database is dropped or refreshed.

Step 2   Change the Permissions on the Test Master to Read-Write
  1. When all Exadata snapshot databases have been deleted, start up the test master database in mount mode.

    Connect to the test master using SQL*Plus and run the following script to create a new SQL script containing the commands to reset permissions on the data files belonging to the test master database.

    set newpage 0 
    set linesize 999 
    set pagesize 0 
    set feedback off 
    set heading off 
    set echo off 
    set space 0 
    set tab off 
    set trimspool on 
    spool change_perm.sql 
    select 'ALTER DISKGROUP DATA set permission owner=read write, 
    group=read write, other=none for file '||''''||name||''''||';' from v$datafile; 
    exit 
    
  2. Run the following sed command to remove extra lines from the change_perm.sql script created in the previous step. You run this command from a shell, in the directory that contains the change_perm.sql script.

    $ sed -i '/SQL/d' change_perm.sql
    
  3. Connect via SQL*Plus to an ASM instance as sysasm and run the change_perm.sql script. This script changes the permissions of the test master's data files to make them writeable.

    SQL> @change_perm
    
Step 3   Convert the Test Master Database Back to a Data Guard Replica

If you had originally prepared the test master database using Data Guard snapshot standby, convert it back to its original state as a Data Guard replica using the Data Guard convert command. This discards any changes previously made to the replica to prepare it to be the test master. It also makes it possible to refresh the test master using just incremental changes from the source database instead of a complete restore from a current backup.

Step 4   Update the Test Master Database

You have two options for refreshing the test master database:

  • Allow Data Guard to refresh the test master database

    If the Data Guard replica has been used as a test master database for only a short period of time and you have all the redo generated during this time in archive logs on disk at the source database, you can enable redo shipping and start redo apply. The test master database will use regular Data Guard protocols to retrieve archive logs and apply the logs until it is caught up with the primary database. Once the Data Guard replica is as current as you need it to be, disable redo shipping, stop redo apply and repeat the test master/snapshot creation cycle described earlier.

    This option has the benefit of being able to stop redo apply at some intermediate point rather than bringing the test master database totally current.

    To let Data Guard refresh the standby, enable log shipping to the standby and redo apply on the standby:

    DGMGRL> edit database TESTMASTER set property logshipping=ON; 
    Property "logshipping" updated 
    DGMGRL> edit database TESTMASTER set state=apply-on; 
    Succeeded
    
  • Use RMAN RECOVER...FROM SERVICE to roll forward the test master database

    If the Data Guard replica has been used as a test master database for a long period of time or if you no longer have the redo available on disk to enable Data Guard to automatically refresh the test master database, use RMAN to perform live incremental apply over the network.

    A major advantage to using this method is that no additional disk space is required. RMAN will bring changed blocks to the standby from the primary over the network and apply them directly. Also RMAN greatly simplifies the process by determining which blocks need to be retrieved based on the SCN of the datafiles on the test master. With this method you cannot recover to an intermediate point in time; the refresh will bring the test master database current with the primary. For more information on this method see the Oracle Database Backup and Recovery User's Guide section on Performing RMAN Recovery: Advanced Scenarios.

    To refresh the test master database using RMAN Network Incrementals:

    1. Prepare SQL*Net for the RMAN connections. These steps only need to be performed the first time through the process.

      1. Create a listener.ora entry for the test master database (the Data Guard replica) to allow RMAN to connect to the SID since the service will not be started when the database is only opened in NOMOUNT mode. The following is an example of the entry:

        SID_LIST_LISTENER = 
          (SID_LIST =   
            (SID_DESC = 
              (SID_NAME = TESTMASTER1)
              (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
            )
          )
        
      2. Reload the listener to pick up the changes to the listener.ora.

         $ lsnrctl reload listener 
        
      3. Create a TNS entry on the test master environment pointing to the SID of the local test master instance. The entry should use the local host name rather than the SCAN name to ensure the connection request goes to the correct host.

        TESTMASTER1 = 
          (DESCRIPTION = 
            (ADDRESS = (PROTOCOL = TCP)(HOST = standbydb01.us.oracle.com)(PORT = 1521))
            (CONNECT_DATA = 
              (SERVER = DEDICATED) 
              (SID = TESTMASTER1) 
              (UR=A) 
            ) 
          ) 
        
    2. Connect via RMAN to the test master database and save the CURRENT_SCN for later. This value is used to determine whether files created since the last refresh need to be restored from the source database.

      RMAN> select current_scn from v$database;
      CURRENT_SCN# 
      ------------------ 
                17081990 
      
    3. Note the names of the online redo log files and standby redo log files of the Data Guard replica. You may require these names in a later step.

      The following command lists the names and group identifiers of the redo log files:

      RMAN> SELECT type, group#, member FROM v$logfile;
      
    4. Refresh the standby control file of the Data Guard replica from the source database to make the control file current.

      1. Reconnect to the Data Guard replica as the RMAN target.

      2. Restart it in NOMOUNT mode.

        RMAN> startup nomount force;
        
      3. Restore the standby control file by using the control file on the source database.

        The following command example restores the control file on the Data Guard replica by using the database control file from SOURCEMASTER, the source database.

        RMAN> RESTORE STANDBY CONTROLFILE FROM SERVICE SOURCEMASTER;
        
      4. Mount the Data Guard replica using the following command:

        RMAN> ALTER DATABASE MOUNT;
        
    5. If you are not using an RMAN catalog, the names of files in the standby control file are the names that were used in the source database. You must update the names of the data files and the temp files in the standby control file.

      Use the CATALOG command and the SWITCH command to update all the data file names. The SWITCH command will be used after restoring any newly created files from the source database in step 7.

      RMAN> CATALOG START WITH '+DATA/TESTMASTER/DATAFILE/';
      

      In the example above, "+DATA/TESTMASTER/DATAFILE/" is the location of the data files on the Data Guard replica. All data files must be stored in this location.

    6. Use the CURRENT_SCN from step 2 to determine if new files were added that need to be restored from the source database.

      RMAN> select file# from v$datafile where creation_change# >= 17081990;
       FILE#
      ---------- 
               9 
              10 
      
    7. If there are files returned by the previous query, restore those datafiles from the source database. Execute an RMAN RUN block similar to the following using the list of FILE#s returned by the previous step. If no FILE#s were returned, skip this step.

      RMAN> run{ 
      2> set newname for database to '+DATA';
      3> restore datafile 9,10 from service SOURCEMASTER;
      4> } 
      
    8. If not using an RMAN catalog, rename the datafiles in the standby control file by switching to the copies cataloged in step 5.

      RMAN> SWITCH DATABASE TO COPY;
      
    9. Update the names of the online redo logs and standby redo logs in the standby control file using one of the following methods:

      • Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the Data Guard replica. RMAN then recreates all the standby redo logs and the online redo log files.

        NOTE: Clearing log files is recommended only if the Data Guard replica does not have access to the online redo log files and standby redo log files of the source database. If the Data Guard replica has access to the redo log files of the source database and the redo log file names of the source database are OMF names, then the ALTER DATABASE command will delete log files on the source database. Also, clearing the log files creates new log files rather than reusing the existing ones, because the restored control file is not aware of the existing files. To conserve space, delete the existing log files from ASM before running the ALTER DATABASE CLEAR commands.

        The GROUP# column of the V$LOGFILE view queried in step 3 provides the redo log group identifiers of the log groups that must be cleared. Use separate ALTER DATABASE CLEAR commands to clear each redo log group.

        The following command clears the redo log group with identifier 2.

        SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
        
      • Use the ALTER DATABASE RENAME FILE command to rename the redo log files. Use a separate command to rename each log file listed in step 3.

        To rename log files, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL. Renaming log files is recommended when the number of online redo logs files and standby redo log files is the same in the source database and the Data Guard replica.

    10. Use RMAN RECOVER ... FROM SERVICE to roll forward the data files to the current state. No additional space is required for this operation. Note that this process can only bring the files fully current; it cannot bring the files to a previous point in time. Connect via RMAN to the Data Guard replica as target, using the TNS entry created in step 1, and issue the following command. The service specified should point to the primary.

      RMAN> recover database noredo from service SOURCEMASTER;
      
    11. Enable redo shipping to the Data Guard replica and start redo apply. This is necessary to update the control file with the blocks applied as part of step 10.

      DGMGRL> edit database TESTMASTER set property logshipping=ON;
      Property "logshipping" updated
      DGMGRL> edit database TESTMASTER set state=apply-on;
      Succeeded.
      
    12. After redo has been applied, repeat the process you used to convert the Data Guard replica into a test master database and then create Exadata database snapshots. Remember to once again disable log shipping and redo apply at the standby.

Step 5   Close the Test Master and Make All Test Master Data Files Read-Only

See the "Setting Up the Test Master" section.

Step 6   Re-Create All Snapshots

See the "Creating the Snapshot Databases" section.

 

Resizing Grid Disks

You can increase or decrease the virtual space and/or the physical space, as necessary. You can resize both physical and virtual space at the same time.

Resizing the Virtual Space

When V$ASM_DISKGROUP.FREE_MB or V$ASM_DISK.FREE_MB is running low, you need to increase the virtual address space:

  1. Run the following command on the cells:

    CellCLI> alter griddisk prefix=SPARSE, virtualSize=newSize
    

    For example:

    CellCLI> alter griddisk prefix=SPARSE, virtualSize=12000G
    
  2. On an ASM instance, resize the disk group to this new size:

    SQL> alter diskgroup SPARSE resize disks in failgroup cel1 size newSize
    

    For example:

    SQL> alter diskgroup SPARSE resize disks in failgroup cel1 size 12000G
    

See http://docs.oracle.com/cd/B28359_01/server.111/b31107/asmdiskgrps.htm#CHDFIHHC for details.

You can perform the same steps to decrease the virtual address space.
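
Before resizing, you can check how much space remains in the sparse disk group, for example with a query like the following (a sketch that assumes the disk group is named SPARSE):

SQL> select name, free_mb, total_mb,
            round(100 * free_mb / total_mb, 1) pct_free
     from v$asm_diskgroup
     where name = 'SPARSE';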

Resizing the Physical Space

When the grid disks are running out of physical space, that is, when the values in the TOTAL_MAT_MB and ALLOCATED_MAT_MB columns in V$ASM_DISK_SPARSE are close, you need to increase the physical size of the grid disks.

Run the following command on the cells:

CellCLI> alter griddisk prefix=SPARSE, size=newPhysicalSize

For example:

CellCLI> alter griddisk prefix=SPARSE, size=4000G

You do not need to do anything on the ASM instance.

You can perform the same step to decrease the physical size.

 

Monitoring Sparse Disk Group Activity

The V$ASM_DISK and V$ASM_DISKGROUP views contain information about the virtual size and utilization of the sparse ASM disk group. New views V$ASM_DISK_SPARSE and V$ASM_DISKGROUP_SPARSE contain information about the actual size and utilization of the sparse ASM disk group. V$ASM_DISK_SPARSE also contains performance and usage metrics.

Table 12-1 describes the columns in V$ASM_DISK_SPARSE:

Table 12-1 V$ASM_DISK_SPARSE

Column               Description
-------------------  -------------------------------------------------------
GROUP_NUMBER         Number of the disk group containing the disk
DISK_NUMBER          Number assigned to the disk within this disk group
INCARNATION          Incarnation number for the disk
ALLOCATED_MAT_MB     Total used physical capacity of this disk
TOTAL_MAT_MB         Total physical capacity of this disk
SPARSE_READS         Number of read requests on sparse regions of this disk
SPARSE_BYTES_READ    Bytes read from sparse regions of this disk
SPARSE_READ_TIME     Time taken by sparse read I/Os


Table 12-2 describes the columns in V$ASM_DISKGROUP_SPARSE:

Table 12-2 V$ASM_DISKGROUP_SPARSE

Column               Description
-------------------  -------------------------------------------------------
GROUP_NUMBER         Cluster-wide number assigned to the disk group
ALLOCATED_MAT_MB     Total used physical capacity of this disk group
TOTAL_MAT_MB         Total physical capacity of this disk group


The following example shows the used space and the total space for some disks.

SQL> select 
     DISK_NUMBER          dsk_num,
     ALLOCATED_MAT_MB     alloc,
     TOTAL_MAT_MB         total
from V$ASM_DISK_SPARSE
where GROUP_NUMBER = 5;
 
DSK_NUM    ALLOC      TOTAL
---------- ---------- ----------
         0       5536      57336
         1       5424      57336
         2       5532      57336
         3       5424      57336
         4       5424      57336

In the following example, sparse ASM grid disks were created with an actual size of 56 GB and a virtual size of 560 GB. When you query V$ASM_DISK's OS_MB and TOTAL_MB columns, you can see the virtual size of 573440 MB (573440 MB / 1024 = 560 GB).

SQL> select os_mb, total_mb from v$asm_disk where group_number=4;

     OS_MB   TOTAL_MB 
---------- ---------- 
    573440     573440 
    573440     573440 
    573440     573440 

Querying V$ASM_DISK_SPARSE for TOTAL_MB, you can see the actual size of the ASM grid disk available for use. Note that each ASM grid disk stores approximately 2 MB of metadata for every 16 GB of space allocated to the sparse ASM grid disk. For the 56 GB allocated per grid disk in this example, 8 MB of space is reserved for sparse disk metadata (57,336 MB + 8 MB = 57,344 MB, and 57,344 MB / 1024 = 56 GB).

SQL> select total_mb from v$asm_disk_sparse where group_number=4; 

 TOTAL_MB 
--------- 
 57336 
 57336 
 57336
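
Similarly, you can compute physical utilization at the disk group level from V$ASM_DISKGROUP_SPARSE, using the columns described in Table 12-2; a minimal sketch:

SQL> select GROUP_NUMBER     grp,
            ALLOCATED_MAT_MB alloc,
            TOTAL_MAT_MB     total,
            round(100 * ALLOCATED_MAT_MB / TOTAL_MAT_MB, 1) pct_used
from V$ASM_DISKGROUP_SPARSE;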