19 Enabling the DBaaS Cloud

This chapter covers the initial configuration required to enable a DBaaS Cloud.

It contains the following sections:

  • Getting Started

  • Setting Up Credentials for Provisioning

  • Provisioning Database Software for Database as a Service

  • Provision the Database for Schema as a Service

  • Provision a Container Database for PDB as a Service

  • Configuring the Oracle Listener

  • Registering and Managing Storage Servers

Getting Started

This section lists the steps that must be performed to set up a private database cloud. Before you set up the database cloud, you must have completed the common setup tasks described in Common Setup Tasks.

Table 19-1 Getting Started with DBaaS

  1. Configure Privilege Delegation Settings on your managed hosts. See Configuring Privilege Delegation Settings. (Role: Super Administrator)

  2. Set up provisioning credentials. See Setting Up Credentials for Provisioning. (Role: Self Service Administrator)

  3. Provision the database software. See Provisioning Database Software for Database as a Service. (Role: Self Service Administrator)

  4. Configure the Listener. See Configuring the Oracle Listener. (Role: Self Service Administrator)

  5. If you are using the Snap Clone profile, register the storage servers. See Registering and Managing Storage Servers. (Role: Self Service Administrator)

Setting Up Credentials for Provisioning

Before you perform any operations on the Managed Servers or databases, you must define the credentials that will be used by Enterprise Manager to connect to the targets.

You need to set up the following types of credentials:

  • Normal credentials are the host operating system credentials used to provision the database software and create databases. For example, oracle/<login password>. These credentials are saved when the Database Pool is created and are used when the EM_SSA_USER requests a database or a schema.

  • Privileged credentials are the host operating system credentials used to perform privileged actions such as executing root scripts. These credentials are used when deploying software (for running root.sh during deployment), when mounting and unmounting storage volumes (for databases created with snapshots), and so on. These credentials are saved along with the Database Pool if the pool is used for creating databases using snapshots.

  • Database SYSDBA credentials are used and saved for schema as a service database pool. These credentials are required only for schema as a service.

Note:

It is recommended that the database be created by the same OS user who owns the Oracle Home on the host.

To create named credentials, follow these steps:

  1. Log in to Enterprise Manager as an administrator with the EM_SSA_ADMINISTRATOR role.
  2. From the Setup menu, select Security, then select Named Credentials.
  3. Click Create in the Named Credentials page.
  4. Enter the Credential Name and Credential Description. Set the Authenticating Target Type field to Host and Scope field to Global. Enter the user name and password in the Credential Properties section. If you need to set privileged credentials, select Sudo or PowerBroker in the Run Privilege field and enter values in the Run As and Profile fields.
  5. Click Test and Save.
  6. Verify these credentials against a host target and click OK.
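
If you prefer the command line, named credentials can also be created with EM CLI. The following is a minimal sketch, assuming an oracle host user; the credential name is hypothetical, and the verb options should be verified against your EM CLI release:

    emcli login -username=SYSMAN
    emcli create_named_credential \
        -cred_name=NC_HOST_ORACLE \
        -auth_target_type=host \
        -cred_type=HostCreds \
        -cred_scope=global \
        -attributes="HostUserName:oracle;HostPassword:<password>"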

Provisioning Database Software for Database as a Service

Before you can enable database as a service, the database software must already be provisioned on all hosts. Database software can be provisioned by an administrator with the EM_SSA_ADMINISTRATOR role in the following ways:

  • Provisioning Profile

    • Capture a gold image of an existing database using a Provisioning Profile. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

    • Use the Provisioning Profile to provision the Clusterware/ASM or Grid Infrastructure (for Real Application Cluster databases), and Database Oracle Home. This method ensures that the necessary database plug-in (monitoring part of the database plug-in) is deployed onto the Management Agent as part of the database provisioning Oracle Home installation.

      To create a provisioning profile, from the Enterprise menu, select Provisioning and Patching, then select Database Provisioning, and select the database provisioning deployment procedure to be used. You can select either the Provision Oracle Database or the Provision Oracle RAC Database deployment procedure.

      Note: Do not create a new database as part of this deployment procedure.

  • Using the Database Installer

    • From the Setup menu, select Extensibility, then select Plug-ins, and deploy the complete SSA (Enterprise Manager for Oracle Cloud) plug-in on all the Management Agents in a PaaS Infrastructure Zone.

    • Run the Clusterware/ASM or Grid Infrastructure installer to set up the cluster and ASM (for RAC databases).

    • Run the Database Installer and ensure you select the Install Database Software Only option on all hosts (see the sketch after this list).

    • Discover the cluster. From the Setup menu, select Add Target, then Add Targets Manually, and then select Add Non-Host Targets Using Guided Process (Also Adds Related Targets).

      Select:
      • Oracle Cluster and High Availability Service to discover the cluster.

      • Oracle Database, Listener and Automatic Storage Management to discover ASM and listeners.

    • From the Enterprise menu, you can also select Job, then select Library and submit the Discover Promote Oracle Home Target job to add the Oracle Home.
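
For the Install Database Software Only step above, a silent invocation of the database installer might look like the following sketch; the staging path and response file are placeholders, and you should use the response file shipped with your installer version:

    # run as the Oracle software owner on each host
    /stage/database/runInstaller -silent -waitforcompletion \
        -responseFile /stage/database/response/db_install.rsp \
        oracle.install.option=INSTALL_DB_SWONLY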

For more details on provisioning the database software, see the Enterprise Manager Lifecycle Management Administrator's Guide.

Provision the Database for Schema as a Service

For schema as a service, you must deploy a single instance or RAC database. To deploy a database, you must use the Provision Oracle Database deployment procedure. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

Provision a Container Database for PDB as a Service

Note:

If pluggable databases (PDBs) need to be provisioned, you must first create container databases.

An Oracle Database can contain a portable collection of schemas, schema objects, and nonschema objects that appear to an Oracle Net client as a separate database. This self-contained collection is called a pluggable database (PDB). A multitenant container database (CDB) is a database that includes one or more PDBs.

You can create a CDB either by using the Database Configuration Assistant (DBCA) or the CREATE DATABASE SQL statement. See the Oracle Database Administrator's Guide for details. After the CDB is created, it consists of the root and the seed. The root contains minimal or no user data, and the seed contains no user data.
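
As a sketch, a CDB suitable for PDB as a Service could be created with DBCA in silent mode. The database name and passwords below are placeholders, and the flags shown are from the Oracle Database 12c DBCA, so they may vary by release:

    dbca -silent -createDatabase \
        -templateName General_Purpose.dbc \
        -gdbName cdb01 -sid cdb01 \
        -createAsContainerDatabase true \
        -numberOfPDBs 0 \
        -sysPassword <password> -systemPassword <password>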

PDBs contain user data. After the CDB has been created, you can add PDBs to the CDB by using either of the following options:

  • Create a new PDB. See the Enterprise Manager Lifecycle Management Administrator’s Guide for details.

  • Plug in an unplugged PDB into a CDB. See the Enterprise Manager Lifecycle Management Administrator’s Guide for details.

Configuring the Oracle Listener

You need to configure an Oracle Home and the Oracle Listener before you can add them as Enterprise Manager targets.

To set up the Oracle Listener (Listener) for the database hosts, follow these steps:

  1. Log in as a user with the EM_SSA_ADMINISTRATOR role and perform mass deployment of database homes on the newly added hosts as described in Adding Hosts.

  2. To configure a Listener running from the same Oracle Home on which the database instance is to be created, launch a Bash shell and enter the following commands (a consolidated sketch follows these steps):

    1. <AGENT_BASE>/agent_inst/bin/emctl stop agent

    2. export TNS_ADMIN=<DB_HOME_LOCATION>/network/admin

    3. export ORACLE_HOME=<DB_HOME_LOCATION>

    4. Run $ORACLE_HOME/bin/netca and create the listener. Make sure you have the same Listener name and Listener port on all the hosts.

  3. To configure a Listener running from the Single Instance High Availability (SIHA) Oracle Home, launch a Bash shell and enter the following commands:

    1. export ORACLE_HOME=<SIHA_HOME_LOCATION>

    2. Run $ORACLE_HOME/bin/netca and create the listener. Make sure you have the same listener name and listener port on all the hosts.

  4. Log in as the user with the DBAAS_ADMIN_ROLE and discover the newly added Listener target on all the hosts. From the Setup menu, select Add Target, then select Add Target Manually.

  5. Select the Add Non-Host Targets Using Guided Process option, set Target Type to Oracle Database, Listener, and Automatic Storage Management, click Add Guided Discovery, and follow the steps in the wizard. Before you add the new Listener target, ensure that the ORACLE_HOME for the Listener points to the correct ORACLE_HOME location. This process adds the Oracle Home target, which is used when a database pool is created.
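
The listener setup in step 2 can be summarized in the following sketch. The agent base and Oracle Home locations are placeholders, and the silent netca response file shown in the comment is an assumption to be checked against your installation:

    # stop the agent before reconfiguring the network files
    <AGENT_BASE>/agent_inst/bin/emctl stop agent

    export ORACLE_HOME=<DB_HOME_LOCATION>
    export TNS_ADMIN=$ORACLE_HOME/network/admin

    # create the listener; for a silent run, pass a response file,
    # for example: netca -silent -responsefile <path to netca.rsp>
    $ORACLE_HOME/bin/netca

    # restart the agent when done
    <AGENT_BASE>/agent_inst/bin/emctl start agent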

Registering and Managing Storage Servers

Note:

If you are creating thin clones from a snap clone based profile, you must register and manage the storage servers such as NetApp, Sun ZFS, or EMC. See Creating Snap Clones for details.

This section describes the following:

Overview of Registering Storage Servers

Registering a storage server, such as NetApp storage server, Sun ZFS storage server, or EMC storage server in Enterprise Manager enables you to provision databases using the snapshot and cloning features provided by the storage.

The registration process validates the storage, and discovers the Enterprise Manager managed database targets on this storage. Once the databases are discovered, you can enable them for Snap Clone. Snap Clone is the process of creating database clones using the Storage Snapshot technology.

Note:

Databases on Windows operating systems are not supported.

Before You Begin

Before you begin, note the following:

  • Windows databases are not discovered as part of storage discovery, because NFS collection is not performed on Windows. NFS collection is also not supported on certain OS releases, so databases on those OS releases cannot be Snap Cloned. For details, refer to My Oracle Support note 465472.1. Also, NAS volumes cannot be used on Windows to support Oracle databases.

  • Snap Clone is supported on Sun ZFS Storage 7120, 7320, 7410, 7420 and ZS3 models, NetApp 8 hardware in 7-mode and c-mode, EMC VMAX 10K and VNX 5300, and Solaris ZFS Filesystem.

  • Snap Clone supports Sun ZFS storage on HP-UX hosts only if the OS version is B.11.31 or higher. On lower OS versions, the Sun Storage may not function properly, and Snap Clone may give unexpected results.

  • By default, the maximum number of NFS file systems that Enterprise Manager discovers on a target host is 100. However, this threshold is configurable. You can also choose a list of file systems to be monitored if you do not want all the extra file systems to be monitored.

    The configuration file $agent_inst/sysman/emd/emagent_storage.config for each host agent contains various storage monitoring related parameters.

    To configure the threshold for the NFS file systems, you need to edit the following parameters:

    Collection Size:START
    Disks=1000
    FileSystems=1000
    Volumes=1000
    Collection Size:END 
    

    If you choose to provide a list of file systems to be monitored, it can be provided between the following lines:

    FileSystems:START

    FileSystems:END
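
    For example, to monitor only two mount points, the section might look as follows. The mount points are illustrative, and the exact entry format should be verified against your Management Agent version:

    FileSystems:START
    /u01
    /oradata
    FileSystems:END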

    Restart the Management Agent and refresh the host configuration for the changes to this configuration file to be effective.

  • If the OMS Repository is running on RDBMS with 11.1.0.7.0 and AL32UTF8 character set, you need to apply patch 11893621.

Prerequisites for Registering Storage Servers

Before you register a storage server, follow the prerequisites outlined in the following sections:

Configuring Storage Servers

Before you register a storage server, ensure that you have the privileges and licenses described in the following sections to successfully use Snap Clone:

Note:

Enterprise Manager Cloud Control 13c supports NetApp, Sun ZFS, Solaris File System (ZFS) and EMC storage servers.

Configuring NetApp Hardware

This section consists of the following:

Obtaining NetApp Hardware Privileges

Privileges is a generic term. NetApp refers to privileges as Capabilities.

For NetApp storage server, to use Snap Clone, assign the following privileges or capabilities to the NetApp storage credentials:

Note:

You can assign these capabilities individually or by using wildcard notations. For example:

'api-volume-*', 'api-*', 'cli-*'

  • api-aggr-list-info

  • api-aggr-options-list-info

  • api-file-delete-file

  • api-file-get-file-info

  • api-file-read-file

  • api-license-list-info

  • api-nfs-exportfs-append-rules

  • api-nfs-exportfs-delete-rules

  • api-nfs-exportfs-list-rules

  • api-nfs-exportfs-modify-rule

  • api-snapshot-create

  • api-snapshot-delete

  • api-snapshot-list-info

  • api-snapshot-reclaimable-info

  • api-snapshot-restore-volume

  • api-snapshot-set-reserve

  • api-system-api-get-elements

  • api-system-api-list

  • api-snapshot-set-schedule

  • api-system-cli

  • api-system-get-info

  • api-system-get-ontapi-version

  • api-system-get-version

  • api-useradmin-group-list

  • api-useradmin-user-list

  • api-volume-clone-create

  • api-volume-clone-split-estimate

  • api-volume-create

  • api-volume-destroy

  • api-volume-get-root-name

  • api-volume-list-info

  • api-volume-list-info-iter-end

  • api-volume-list-info-iter-next

  • api-volume-list-info-iter-start

  • api-volume-offline

  • api-volume-online

  • api-volume-restrict

  • api-volume-set-option

  • api-volume-size

  • cli-filestats

  • login-http-admin

Obtaining NetApp Hardware Licenses

Snap Clone on a NetApp storage server requires a valid license for the following services:

  • flex_clone

  • nfs

  • snaprestore

Creating NetApp Storage Credentials

To create the NetApp storage credentials, follow these steps:

Note:

Snap Clone is supported only on NetApp Data ONTAP® 7.2.1.1P1D18 or higher, and ONTAP® 8.x (7-mode, c-mode, and v-server mode).

  1. Create ROLE em_smf_admin_role with all the recommended capabilities, such as api-aggr-list-info, api-file-delete-file, and the like (see the sketch after these steps).

  2. Create GROUP em_smf_admin_group with the ROLE em_smf_admin_role.

  3. Create USER em_smf_admin with GROUP em_smf_admin_group and a secure password.

Note:

The user em_smf_admin must be a dedicated user to be used by Oracle Enterprise Manager. Oracle does not recommend sharing this account for any other purposes.
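
On Data ONTAP operating in 7-mode, these steps might be performed from the storage CLI roughly as follows. This is a sketch: the capability list is abbreviated, and the syntax should be checked against your ONTAP release:

    useradmin role add em_smf_admin_role -a api-aggr-list-info,api-snapshot-create,api-volume-clone-create,login-http-admin
    useradmin group add em_smf_admin_group -r em_smf_admin_role
    useradmin user add em_smf_admin -g em_smf_admin_group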

Configuring NetApp 8 Cluster Mode Hardware

This topic discusses how to set up NetApp 8 cluster mode (c-mode) hardware to support Snap Clone in Enterprise Manager Cloud Control 13c.

NetApp 8 hardware in 7-mode is already supported for Snap Clone.

To configure NetApp 8 c-mode hardware, refer to the following sections:

NetApp 8 Configuration Supported with Cluster Mode

The configuration supported with c-mode is as follows:

  • Snap Clone features are supported only with SVM (Vserver).

  • Registration of a physical cluster node is not supported.

  • Multiple SVMs can be registered with Enterprise Manager Cloud Control 13c. All the registered SVMs are managed independently.

Preparing the NetApp 8 Storage and SVM

To prepare the NetApp 8 storage and SVM, ensure that the following requirements are met:

  • The NetApp 8 c-mode hardware should have an SVM created. If not, create an SVM to be registered with Enterprise Manager.

  • The SVM should have a network interface (LIF) with both Management and Data access. The domain name and IP address of this interface should be provided on the Storage Registration page in Enterprise Manager.

  • There should be at least one aggregate assigned to the SVM. Aggregates should not be shared between SVMs.

  • The SVM should have a user account that has the vsadmin-volume role assigned for ontapi access.

    The user credentials should be supplied on the Storage Registration page in the Enterprise Manager.

  • The root volume of the SVM should have an export policy with a rule that allows Read Only access to all hosts; see the example at the end of this section. If you are using NFS v4, then Superuser access needs to be granted from the Modify Export Rule dialog box.

    Note:

    • A directory named em_volumes is created with permissions 0444 inside the root volume of SVM. This directory will be used as an Enterprise Manager name space for the junction point.

    • All the storage volumes created will use the junction point /em_volumes in the name space.

    Note:

    When you register an SVM in Enterprise Manager Cloud Control, the details of all the aggregates assigned to it are fetched. The total size of an aggregate is required to set the quota, perform space computation, and for reporting.

    Presently, NetApp does not provide a Data ONTAP API to fetch the aggregate total size from an SVM. As a workaround, the available size of an aggregate is considered as the total size and is set as the Storage Ceiling during the first Synchronize run. Storage Ceiling is the maximum amount of space that Enterprise Manager can use in an aggregate.

    If the total space of an aggregate is increased on the storage, you can increase the Storage Ceiling up to the available space in that aggregate.
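
For example, on clustered Data ONTAP, a rule that grants Read Only access to all hosts (with Superuser access for NFS v4) might be added to the root volume's export policy as follows; the SVM and policy names are placeholders:

    vserver export-policy rule create -vserver svm1 -policyname root_policy \
        -clientmatch 0.0.0.0/0 -rorule any -rwrule never -superuser any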

Configuring Sun ZFS and ZS3 Hardware

This section consists of the following:

Obtaining Sun ZFS Hardware Privileges

Privileges is a generic term. Sun ZFS, for example, refers to privileges as Permissions.

For Sun ZFS storage server, to use Snap Clone, assign the following privileges or permissions to the Sun ZFS storage credentials:

Note:

All the permissions listed must be set to true. The scope must be 'nas' and there must not be any further filters.

  • changeProtocolProps

  • changeSpaceProps

  • clone

  • createShare

  • destroy

  • rollback

  • takeSnap

Obtaining Sun ZFS Hardware Licenses

Snap Clone on Sun ZFS Storage Appliance requires a license for the Clones feature. A restricted-use license for this feature is included with Enterprise Manager Snap Clone.

Creating Sun ZFS Storage Credentials

To create the Sun ZFS storage credentials, follow these steps:

  1. Create ROLE em_smf_admin_role.

  2. Create AUTHORIZATIONS for the ROLE em_smf_admin_role.

  3. Set SCOPE as nas.

  4. Set the recommended permissions, such as allow_changeProtocolProps, allow_changeSpaceProps, and the like, to true.

  5. Create USER em_smf_admin and set its ROLE property as em_smf_admin_role.

    Note:

    The user em_smf_admin must be a dedicated user to be used by Oracle Enterprise Manager. Oracle does not recommend sharing this account for any other purposes.

Configuring Solaris File System (ZFS) Storage Servers

This section consists of the following:

Obtaining Solaris File System (ZFS) Privileges

Solaris File System (ZFS) refers to privileges as Permissions. For Solaris File System (ZFS) storage server, to use Snap Clone, grant the following permissions on the pool for the Solaris File System (ZFS) user:

  • clone

  • create

  • destroy

  • mount

  • rename

  • rollback

  • share

  • snapshot

  • quota

  • reservation

  • sharenfs

  • canmount

  • recordsize

Obtaining Solaris File System (ZFS) Licenses

Solaris File System (ZFS) does not require any special hardware license. Only Oracle Solaris OS version 11.1 is supported.

Setting Up Solaris File System (ZFS) Storage Servers

Solaris File System (ZFS) storage servers can work with any storage hardware. You do not need to buy additional storage hardware; instead, you can attach your in-house storage hardware to acquire the Oracle Snap Clone functionality. For example, you can attach LUNs from EMC VMAX or VNX systems, a Hitachi VSP, or an Oracle Pillar Axiom FC array.

The following storage topology figure explains how this works:

Note:

This figure assumes that you have a SAN storage device with 4 x 1TB logical unit devices exposed to the Solaris File System (ZFS) storage server.


Storage Topology

This section contains the following:

Prerequisites for Setting Up Solaris File System (ZFS) Storage Servers

Before you configure a Solaris File System (ZFS) storage server, ensure that you meet the following requirements:

  • Ensure that zfs_arc_max is not set in /etc/system. If it must be set, ensure that it is set to a high value, such as 80% of RAM.

  • Ensure that the storage server is configured with multiple LUNs. Each LUN should be a maximum of 1TB, and a minimum of two 1TB LUNs is recommended for Snap Clone. Each LUN should have a mirror LUN that is mounted on the host over a different controller to isolate failover. LUNs can be attached to the Solaris host over Fibre Channel for better performance.

    Note:

    If Fibre Channel is not available, any direct attached storage or iSCSI based LUNs are sufficient.

  • All LUNs used in a pool should be equal in size. It is preferable to use less than 12 LUNs in a pool.

  • Apart from LUNs, the storage needs cache and log devices to improve zpool performance. Both should ideally be individual flash/SSD devices. If it is difficult to procure individual devices, you can use slices cut from a single device. The log device needs to be about 32GB in size and should have redundancy and battery backup to prevent data loss. The cache device can be about 128GB in size and does not need redundancy.

Requirements for Storage Area Network Storage

The requirements for Storage Area Network (SAN) storage are as follows:

  • It is recommended to create fewer, larger LUNs. The maximum recommended size for a LUN is 3TB.

  • LUNs should come from different SAN storage pools or an entirely different SAN storage device.

    These LUNs are needed for mirroring, to maintain the pool level redundancy. If your SAN storage maintains a hardware level redundancy, then you can skip this requirement.

  • The LUNs should be exposed over Fibre Channel.

Recommendations for Solaris File System (ZFS) Pools

The recommendations for Solaris File system (ZFS) pools are as follows:

  • Create the Storage pool with multiple LUNs of the same size. You can add more disks to the storage pool to increase the size based on your usage.

  • The storage pool created on the Solaris File System (ZFS) storage server should use the LUNs coming from a different SAN storage pool or an entirely different SAN storage device. You can skip this if your SAN storage maintains hardware level redundancy.

  • Use ZFS redundancy such as mirror, RAIDZ, RAIDZ-2, or RAIDZ-3 to repair data inconsistencies, regardless of whether RAIDZ is implemented at the underlying storage device.

  • For better throughput and performance, use cache and log devices. Both should ideally be individual flash/SSD devices. If it is difficult to procure individual devices, you can use slices cut from a single device.

    It is recommended that the log device be about 50% of RAM in size and have redundancy and battery backup to prevent data loss. The cache device size can be based on the size of the workload and the pool.

    Cache devices do not support redundancy. This is optional.

  • While creating the pool, size it to accommodate the test master database along with the cloned databases. A clone co-exists with its parent database in the same storage pool, so plan for test master and clone capacity well ahead.

    For example, if the size of the test master is 1TB and you expect to create 10 clones, each expected to differ from the test master by 100GB, then the storage pool should be a minimum of 2.5TB in size.

  • Maintain the storage pool with at least 20% free space. If the free space falls below this level, then the performance of the pool degrades.

Configuring Solaris File System (ZFS) Users and Pools

You need to create a user who will be able to administer the storage from Enterprise Manager. To do this, run the following commands as the root user:

# /sbin/useradd -d /home/emzfsadm -s /bin/bash emzfsadm
# passwd emzfsadm

Note:

The username should be less than or equal to 8 characters.

You need to configure the ZFS pool that is used to host volumes, and grant privileges on this pool to the user you created. The emzfsadm user should have privileges on all the zpools and their mount points in the system.

To configure the ZFS pool, refer to the following table and run the following commands:

Note:

The table displays a reference implementation, and you can choose to change this as required.

Pool Name: lunpool

Disks (SAN exposed LUNs over FC/iSCSI): lun1=c9t5006016E3DE0340Ed0, lun2=c9t5006016E3DE0340Ed1

Disks Mirror (SAN exposed LUNs over FC/iSCSI): mir1=c10t5006016E3DE0340Ed2, mir2=c10t5006016E3DE0340Ed3

Flash/SSD disk (log): ssd1=c4t0d0s0

Flash/SSD disk (cache): ssd2=c4t0d1s0

# zpool create lunpool mirror c9t5006016E3DE0340Ed0 c10t5006016E3DE0340Ed2 mirror c9t5006016E3DE0340Ed1 c10t5006016E3DE0340Ed3 log c4t0d0s0 cache c4t0d1s0
 

Example format output is as follows:

bash-4.1# /usr/sbin/format
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c9t5006016E3DE0340Ed0 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,0
       1. c9t5006016E3DE0340Ed1 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,1
       2. c10t5006016E3DE0340Ed2 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,2
       3. c10t5006016E3DE0340Ed3 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,3


[ We need to find the size of the pool that was created ]
# df -k /lunpool
Filesystem           1024-blocks        Used   Available Capacity  Mounted on
lunpool              1434746880          31  1434746784     1%    /lunpool
 
[ We use the Available size shown here to set quota as shown below ]
 
# zfs set quota=1434746784 lunpool
 
# zfs allow emzfsadm clone,create,destroy,mount,rename,rollback,share,snapshot,quota,reservation,sharenfs,canmount,recordsize,logbias lunpool
 
# chmod A+user:emzfsadm:add_subdirectory:fd:allow /lunpool
 
# chmod A+user:emzfsadm:delete_child:fd:allow /lunpool
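
To verify the delegation, you can display the permissions on the pool and try a create/destroy cycle as the new user; this is a quick sanity check, not a required setup step, and the file system name emtest is arbitrary:

[ Display the delegated permissions on the pool ]
# zfs allow lunpool

[ Confirm that emzfsadm can create and destroy file systems ]
# su - emzfsadm -c "zfs create lunpool/emtest && zfs destroy lunpool/emtest"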


Configuring EMC Storage Servers

Before you use an EMC Symmetrix VMAX Family or an EMC VNX storage server, you must first set up the EMC storage hardware to support Snap Clone in Oracle Enterprise Manager 13c. Ensure that all the requirements in the following sections are met:

Supported Configuration for EMC Storage Servers

Before you configure the EMC Symmetrix VMAX Family or the EMC VNX storage server, review the following list of components that are supported and not supported for EMC VMAX and EMC VNX storage.

  • EMC VMAX 10K and VNX 5300 are certified for use. Higher models in the same series are expected to work.

  • Only Linux and Solaris operating systems are supported. Other operating systems are not yet supported.

  • Multi-pathing is mandatory.

  • Only EMC PowerPath, and Solaris MPxIO are supported.

  • Switched fabric is supported. Arbitrated loop is not supported.

  • Emulex (LPe12002-E) host bus adapters are certified for use. Other adapters are expected to work.

  • SCSI over Fibre Channel is supported. iSCSI and NAS are not yet supported.

  • Oracle Grid Infrastructure 11.2 is supported.

  • Oracle Grid Infrastructure 12.1 with Local ASM Storage option is supported. Flex ASM is not supported.

  • ASM Filter Driver is not supported.

  • ASM support is only for raw devices. File System is not supported.

  • Thin Volumes (TDEV) are supported on VMAX.

  • Only LUNs are supported on VNX. NAS is not yet supported.

Requirements for EMC Symmetrix VMAX Family and Database Servers

The operating system and software version requirements for the EMC Symmetrix VMAX Family are:

  • EMC VMAX Enginuity Version: 5876.251.161 and above

  • SMI-S Provider Version: V4.6.1.6 and above

  • Solutions Enabler Version: V7.6-1755 and above

Note:

The EMC VMAX Enginuity version is the Operating System version of the storage.

The SMI-S Provider and Solutions Enabler are installed on a host in a SAN.

The requirements for the database servers are as follows:

Oracle Database Requirements

  • Oracle Database 10.2.0.5 and higher

Operating System Requirements

  • Oracle Linux 5 update 8 (compatible with RHEL 5 update 8) and above

  • Oracle Linux 6 (compatible with RHEL 6) and above

  • Oracle Solaris 10 and 11

Multipathing Requirements

  • EMC PowerPath Version 5.6 or above, as available for your Linux operating system release and kernel version

  • EMC PowerPath Version 5.5 or above, as available for the Oracle Solaris 11.1 release

  • EMC PowerPath Version 6 is not supported.

  • Solaris MPxIO, as available in the latest update

Oracle Grid Infrastructure Requirements

  • Oracle Grid Infrastructure 11.2

  • Oracle Grid Infrastructure 12.1. Flex ASM is not supported.

Preparing the Storage Area Network

To prepare the storage area network, follow the configuration steps outlined in each section.

SAN Fabric Configuration

Configure your SAN fabric with multipathing by ensuring the following:

  • You must have redundancy at storage, switch and server level.

  • Perform the zoning such that multiple paths are configured from the storage to the server.

  • Configure the paths such that a failure at a target port, a switch, or a host bus adapter will not cause unavailability of storage LUNs.

  • Configure gatekeepers on the host where the EMC SMI-S provider is installed. To configure gatekeepers, refer to the documentation available on the EMC website.

SMI-S Provider

You should install the SMI-S provider and Solutions Enabler on one of the servers in the fabric where the storage is configured. To install and configure the SMI-S provider, refer to the documentation available on the EMC website.

The SMI-S provider URL and login credentials are needed to interact with the storage. An example of an SMI-S Service Provider URL is https://rstx4100smis:5989.

These details are needed when you register a storage server. You are required to do the following:

  • Ensure that the VMAX or VNX storage is discovered by the SMI-S provider.

  • Add the VNX storages to the SMI-S provider.

  • Create a user account with administrator privileges in the SMI-S provider to access the VMAX or VNX storage.

  • Set a sync interval of 1 hour.


Setting up a Storage Area Network environment

Understanding VMAX Terminology

The following table outlines VMAX terms that are used in this section. Refer to these terms to gain a better conceptual understanding before you prepare the EMC VMAX storage.

Table 19-2 VMAX terminologies

Term Definition

Logical Unit

An I/O device is referred to as a Logical Unit.

Logical Unit Number

A unique address associated with a Logical Unit.

Initiator

Any Logical Unit that starts a service request to another Logical Unit is referred to as an Initiator.

Initiator Group

An initiator group is a logical grouping of up to 32 Fibre Channel initiators (HBA ports), eight iSCSI names, or a combination of both. An initiator group may also contain the name of another initiator group to allow the groups to be cascaded to a depth of one.

Port Group

A port group is a logical grouping of Fibre Channel and/or iSCSI front-end director ports. The only limit on the number of ports in a port group is the number of ports in the Symmetrix system. A port group can also contain a subset of the available ports, in order to isolate workloads to specific ports.

Note: As a prerequisite, Enterprise Manager expects a port group named ORACLE_EM_PORT_GROUP that contains the required target ports.

Storage Group

A storage group is a logical grouping of up to 4,096 Symmetrix devices.

Target

Any Logical Unit to which a service request is targeted is referred to as a Target.

Masking View

A masking view defines an association between one initiator group, one port group, and one storage group. When a masking view is created, the devices in the storage group are mapped to the ports in the port group and masked to the initiators in the initiator group.

SCSI Command

A service request is referred to as a SCSI command.

Host Bus Adapter (HBA)

The term host bus adapter (HBA) is most often used to refer to a Fibre Channel interface card. Each HBA has a unique World Wide Name (WWN), which is similar to an Ethernet MAC address in that it uses an OUI assigned by the IEEE. However, WWNs are longer (8 bytes). There are two types of WWNs on an HBA: a node WWN (WWNN), which is shared by all ports on a host bus adapter, and a port WWN (WWPN), which is unique to each port. HBA models come in different speeds: 1Gbit/s, 2Gbit/s, 4Gbit/s, 8Gbit/s, 10Gbit/s, 16Gbit/s, and 20Gbit/s.

For more information on VMAX storage and terminologies, refer to the document EMC Symmetrix VMAX Family with Enginuity available in the EMC website.

Preparing the EMC VMAX Storage

Configure your EMC VMAX appliance such that it is zoned with all the required nodes where you need to provision databases. To prepare the EMC VMAX storage, do the following on the storage server:

  • Ensure that all the Host Initiator ports are available from the storage side.

  • It is recommended to create one initiator group per host, with the corresponding initiators, to increase security. The 'Consistent LUNs' property of the immediate parent Initiator Group of the initiators should be set to 'No'.

  • Create a Port Group called ORACLE_EM_PORT_GROUP to be used by Oracle Enterprise Manager for creating Masking Views. This port group should contain all the target ports that will be viewed collectively by all the hosts registered in the Enterprise Manager Cloud Control system.

    For example, host1 views storage ports P1 and P2, and host2 views storage ports P3 and P4. Then, the ORACLE_EM_PORT_GROUP should include all ports P1, P2, P3 and P4. Include only the necessary target ports as needed by the development infrastructure.

  • Create a separate Virtual Provisioning Pool also known as Thin Pool, and dedicate it for Oracle Enterprise Manager.

  • Ensure that the TimeFinder license is enabled to perform VP Snap.

Preparing the EMC VNX Storage

Configure your EMC VNX appliance such that it is zoned with all the required nodes where you need to provision databases. To prepare the EMC VNX storage, do the following on the storage server:

Note:

EMC VNX Storage supports only LUN creation, cloning, and deletion. It does not support NAS.

  • Ensure that all host initiator ports are available from the storage side.

  • Ensure that the initiators belonging to one host are grouped and named after the Host on the EMC VNX storage.

  • Create one storage group with one host for each of the hosts registered in Enterprise Manager.

    For example, if initiators i1 and i2 belong to host1, register the initiators under the name Host1. Create a new storage group SG1 and connect Host1 to it. Similarly, create one storage group for each of the hosts that are to be added to Enterprise Manager.

Preparing Database Servers

To prepare your server for Enterprise Manager Snap Clone, ensure the following:

  • Servers should be physical and equipped with Host Bus Adapters. NPIV and VMs are not supported.

  • Configure your servers with recommended and supported multipath software. If you use EMC PowerPath, then enable the PowerPath license. To enable the PowerPath license, use the following command:

    emcpreg -install
    
  • If you need the servers to support Oracle Real Application Clusters, then install Oracle Clusterware.

    Note:

    ASM and Clusterware have to be installed and those components have to be discovered in Enterprise Manager as a target. Once ASM and Clusterware are installed, additional ASM disk groups can be created from Enterprise Manager.

  • Enable Privileged Host Monitoring credentials for all the servers. If the server is part of a cluster, then you should enable privileged host monitoring credential for that cluster.

    For more details on enabling privileged host monitoring credentials, refer to Oracle Enterprise Manager Framework, Host, and Services Metric Reference Manual.

  • If you are using Linux, you should configure Oracle ASMLib and set the asm_diskstring parameter to a valid ASM path (see the sketch at the end of this section). For example:

    /dev/oracleasm/disks/

    Update the boot sequence such that the ASMLib service is run first, and then the multipath service.

    To install Oracle ASMLib, refer to the following website:

    http://www.oracle.com/technetwork/server-storage/linux/install-082632.html

    To configure Oracle ASMLib on multipath disks, refer to the following website:

    http://www.oracle.com/technetwork/server-storage/linux/multipath-097959.html
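
A minimal ASMLib setup on Linux might look like the following sketch; the disk label and multipath device path are hypothetical, and all commands are run as root:

    # configure ASMLib (sets the owning user/group and on-boot behavior)
    /usr/sbin/oracleasm configure -i
    /usr/sbin/oracleasm init

    # stamp a multipath partition as an ASM disk, then rescan and list
    /usr/sbin/oracleasm createdisk DATA1 /dev/mapper/mpatha1
    /usr/sbin/oracleasm scandisks
    /usr/sbin/oracleasm listdisks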

Setting Privileged Host Monitoring Credentials

You should set the privilege delegation settings before setting Host monitoring credentials. Do the following:

Note:

This is required only on hosts that are used for snap cloning databases on EMC storage.

  1. From the Setup menu, click on Security and then select Monitoring Credentials.

  2. On the Monitoring Credentials page, select Cluster or Host according to your requirement, and then click Manage Monitoring Credentials.


    monitoring credentials page

  3. On the Cluster Monitoring Credentials page, select Privileged Host Monitoring Credentials set for the cluster or host and click Set Credentials.


    Cluster Monitoring Credentials page

  4. In the dialog box that appears, specify the credentials, and click Save.

  5. After the host monitoring credentials are set for the cluster, refresh the cluster metrics and verify that the Storage Area Network metrics are collected for the hosts.


    SAN host metric

Customizing Storage Proxy Agents

A Proxy Agent is required when you register a NetApp, Sun ZFS Storage Appliance, or Solaris File System (ZFS) storage server.

Before you register a NetApp storage server, meet the following prerequisites:

Note:

Storage Proxy Agent is supported only on Linux Intel x64 platform.

Acquiring Third Party Licenses

The Storage Management Framework is shipped by default for Linux x86-64 bit platform, and is dependent on the following third party modules:

  • Source CPAN - CPAN licensing applies

    • IO::Tty (version 1.10)

    • XML::Simple (version 2.20)

    • Net::SSLeay (version 1.52)

  • Open Source - Owner licensing applies

    • OpenSSL (version 1.0.1e)

Uploading Storage Vendor SDK

Before you register a NetApp storage server, do the following:

  1. Download the NetApp Manageability SDK version 5.0R1 for all the platforms from the following NetApp support site: http://support.netapp.com/NOW/cgi-bin/software

  2. Unzip the SDK and package the Perl NetApp Data ONTAP Client SDK as a tar file (see the sketch after these steps). Generally, you will find the SDK in the lib/perl/NetApp folder. The tar file, when extracted, should look as follows:

    NetApp.tar
    - netapp
      - NaElement.pm
      - NaServer.pm
      - NaErrno.pm
    

    For example, the Software Library entity Storage Management Framework Third Party/Storage/NetApp/default should have a single file entry that contains NetApp.tar with the above tar structure.

    Note:

    Ensure that there is no extra space in any file path name or software library name.

  3. Once the tar file is ready, create the following folder hierarchy in the Software Library: Storage Management Framework Third Party/Storage/NetApp

  4. Upload the tar file as a Generic Component named default.

    Note:

    To upload the tar file, you must use the OMS shared filesystem for the software library.

    The tar file should be uploaded to this default software library entity as a Main File.
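
Assuming the SDK's Perl modules were extracted into a local netapp directory, packaging the tar file mentioned in Step 2 might look like this:

    # netapp/ contains NaServer.pm, NaElement.pm, and NaErrno.pm
    tar -cvf NetApp.tar netapp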

Overriding the Default SDK

The default SDK is used for all the NetApp storage servers. However, the storage server may work with only a certain SDK. In such a case, you can override the SDK per storage server, by uploading an SDK and using it only for this particular storage server.

To override the existing SDK for a storage server, upload the tar file to the Software Library entity. The tar file should have the structure shown in Step 2 of the previous section.

The Software Library entity name should be the same as the storage server name.

For example, if the storage server name is mynetapp.example.com, then the Software Library entity must be as follows:

Storage Management Framework Third Party/Storage/NetApp/mynetapp.example.com

Note:

A storage specific SDK is given higher preference than the default SDK.

Overriding Third Party Server Components

By default, all the required third party components are shipped for the Linux Intel 64-bit platform. If you need to override them, package the tar file as follows:

Note:

The tar file should contain a thirdparty folder whose structure should be as mentioned below:

thirdparty
|-- lib
|   |-- engines
|   |   |-- lib4758cca.so
|   |   |-- libaep.so
|   |   |-- libatalla.so
|   |   |-- libcapi.so
|   |   |-- libchil.so
|   |   |-- libcswift.so
|   |   |-- libgmp.so
|   |   |-- libgost.so
|   |   |-- libnuron.so
|   |   |-- libpadlock.so
|   |   |-- libsureware.so
|   |   `-- libubsec.so
|   |-- libcrypto.a
|   |-- libcrypto.so
|   |-- libcrypto.so.1.0.0
|   |-- libssl.a
|   |-- libssl.so
|   `-- libssl.so.1.0.0
`-- pm
    |-- CPAN
    |   |-- IO
    |   |   |-- Pty.pm
    |   |   |-- Tty
    |   |   |   `-- Constant.pm
    |   |   `-- Tty.pm
    |   |-- Net
    |   |   |-- SSLeay
    |   |   |   `-- Handle.pm
    |   |   `-- SSLeay.pm
    |   |-- XML
    |   |   `-- Simple.pm
    |   `-- auto
    |       |-- IO
    |       |   `-- Tty
    |       |       |-- Tty.bs
    |       |       `-- Tty.so
    |       `-- Net
    |           `-- SSLeay
    |               |-- SSLeay.bs
    |               `-- SSLeay.so
Ensure that the tar file is uploaded to the Software Library entity which is named after the platform name, x86_64. The Software Library entity must be under the following:

Storage Management Framework Third Party/Server

The x86_64 entity, when uploaded, is copied to all the storage proxy hosts irrespective of which storage server it processes. To use an entity on a specific storage proxy agent only, name the entity after the host name.

For example, Storage Management Framework Third Party/Server/x86_64 will be copied to any storage proxy host that is on an x86_64 platform. Similarly, Storage Management Framework Third Party/Server/myhost.example.com is copied only to myhost.example.com, if it is used as a storage proxy host.

The host name is given higher preference than the platform.

Registering Storage Servers

To register a particular storage server, follow the procedure outlined in the respective section:

Registering a NetApp or a Sun ZFS Storage Server

To register the storage server, follow these steps:

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.
  2. On the Storage Registration page, in the Storage section, click on Register, and then select either NetApp Storage Appliance or Sun ZFS Storage Appliance, based on which storage server you want to register.

    Note:

    If you see a No named credentials available message, it means that no credentials are registered or the credentials are owned by another user.


    Storage Registration page

    Note:

    You need the EM_STORAGE_ADMINISTRATOR role to complete the storage registration.

  3. On the NetApp or Sun ZFS Storage Registration page, in the Storage section, do the following:

    NetApp Storage Registration

    • Enter the storage server name in the Name field. Ensure that the name is a valid host name and contains no spaces or invalid characters.

    • Select the protocol.

      Note:

      For NetApp storage, the connection is over http or https. For Sun ZFS storage, the connection is over ssh.

    • Select the Storage Credentials, or click on the green plus sign to add.

      Note:

      These credentials will be used by the Management Agent to execute storage (NetApp or Sun ZFS) APIs.

      Only credentials owned by the user are displayed here.

      In the dialog box that appears, enter the storage user name and password. Confirm the password and click OK.

    • Enter storage name aliases (optional).

      The storage name alias should be in lowercase.

      Note:

      A storage name alias is any name that may have been used when mounting a volume from the storage.

      For example: IP address, FQDN, DNS alias, and the like.

      A storage alias is necessary to identify the database targets on the storage. The database targets are identified by mapping the mount points to the files used by the database. For example, if the storage mystorage.com has an alias mystorage.net, and a database uses a data file mounted as mystorage.net:/u01, then mystorage.net must be added as an alias for the discovery to work.

      When you register the storage, use the admin interface as the storage name and list the data interfaces in the storage alias section. The registered storage name is used to perform registration operations; while mounting volumes on a target host, preference is given to the interfaces listed as storage aliases.

  4. In the Agent to Manage Storage section, do the following:
    • Click Add to add a Management Agent host. A Storage Agent display box appears. Select a Management Agent from the Target Name column of the table. Then, click Select.

      Note:

      The Management Agent list displays only Linux X64 Management Agents.


      Agent to Manage Storage


      Storage Agent

      The Management Agent selected is used for performing operations on the storage server.

    • Once a Management Agent is selected, the Management Agent credentials are found and a named credential for the host is displayed.

      Note:

      The Management Agent credentials are used to connect to the Management Agent from Oracle Management Service.

      Multiple Management Agents can be configured to monitor the storage device. Click Add to choose a second Management Agent if required.

      Note:

      Configuring multiple Management Agents to monitor the storage device provides you with a backup in the event that a host is down or a Management Agent is under blackout.

    • Click Submit to register the storage server.

Registering a Solaris File System (ZFS) Storage Server

To register the storage server, follow these steps:

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.
  2. On the Storage Registration page, in the Storage section, click on Register, and then select Solaris File System (ZFS).

    Note:

    If you see a No named credentials available message, it means that no credentials are registered or the credentials are owned by another user.


    Registering a Solaris ZFS file system

    Note:

    You need the EM_STORAGE_ADMINISTRATOR role to complete the storage registration.

  3. On the Register File System (ZFS) page, in the Storage section, do the following:
    • Enter the Solaris system name in the Name field. Ensure that the name is a valid host name or IP address and contains no spaces or invalid characters.

    • Select the protocol.

    • Select the Storage Credentials, or click on the green plus sign to add.

      Note:

      These credentials will be used by the Management Agent to execute Solaris file system APIs.

      Only credentials owned by the user are displayed here.

      In the dialog box that appears, enter the storage user name and password. Confirm the password and click OK.

    • Enter storage name aliases (optional).

      The storage name alias should be in lowercase.

      Note:

      A storage name alias is any name that may have been used when mounting a volume from the storage.

      For example: IP address, FQDN, DNS alias, and the like.

      A storage alias is necessary to identify the database targets on the storage. The database targets are identified by mapping the mount points to the files used by the database. For example, if the storage mystorage.com has an alias mystorage.net, and a database uses a data file mounted as mystorage.net:/u01, then mystorage.net must be added as an alias for the discovery to work.

      When you register the storage, use the admin interface as the storage name and list the data interfaces in the storage alias section. The registered storage name is used to perform registration operations; while mounting volumes on a target host, preference is given to the interfaces listed as storage aliases.


      Storage Registration page for Solaris ZFS file system

  4. In the Synchronize Schedule section, specify the frequency to synchronize the storage details with the hardware.

    Ensure that the zpools setup is completed before clicking Submit. To setup the zpools, refer to Configuring Solaris File System (ZFS) Users and Pools.

Registering an EMC Storage Server

To register the storage server, follow these steps:

Note:

Before you register an EMC storage server, the storage server should be prepared. To prepare the storage server refer to Configuring EMC Storage Servers.

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.
  2. On the Storage Registration page, in the Storage section, click on Register, and then select EMC Storage Array.

    EMC Storage Registration page

    Note:

    If you see a No named credentials available message, it means that no credentials are registered or the credentials are owned by another user.

    Note:

    You need the EM_STORAGE_ADMINISTRATOR role to complete the storage registration.

  3. On the Register EMC Storage Appliance page, in the Storage section, do the following:
    • Specify the storage server name in the Name field. Ensure that the name is a valid storage name and contains no spaces or invalid characters.

    • For EMC storage, the connection is over the SMI-S protocol. Specify the SMI-S Provider URL.

    • Specify the SMI-S User Credentials, or click on the green plus sign to add.

      Note:

      These credentials are used by the Enterprise Manager to interact with the EMC storage appliance.

      The credentials should be those of the administrator of the SMI-S provider, not those of the storage.

      Only credentials owned by the user are displayed here.

      In the dialog box that appears, enter the SMI-S user name and password. Confirm the password and click OK.

  4. In the Synchronize Schedule section, specify the frequency to synchronize the storage details with the hardware.

    Register EMC Storage Appliance page

    Click Submit.

Administering the Storage Server

To administer the storage server, refer to the following sections:

Synchronizing Storage Servers

When you register a storage server for the first time, a synchronize job is run automatically. However, to discover new changes or creations, you should schedule a synchronize job to run at a scheduled time, preferably during a quiet period when Snap Clone actions are not in progress. To do this, follow these steps:

  1. On the Storage Registration page, in the Storage section, click Synchronize.

    Note:

    When you click Synchronize, a deployment procedure is submitted that discovers all databases monitored by Enterprise Manager Cloud Control that can be used for Snap Clone.

    Windows databases are not discovered as part of storage discovery, because NFS collection is not performed on Windows. For details, refer to My Oracle Support note 465472.1.

    You need the EM_STORAGE_OPERATOR role, along with the GET_CREDENTIAL privilege on the Storage Server and Storage Management Agent credentials, to be able to synchronize the storage.

  2. A confirmation box appears. Click OK.

    Synchronization confirmation box

    This action now submits a one-time synchronization job.

    Note:

    The synchronization job fetches latest storage information, and recomputes the mapping between storage volumes and databases.

  3. On the Storage Registration page, in the Storage section, to view the procedure details of the Management Agent host, click on the value (for example, Scheduled) in the Status column.

    Synchronization Confirmation

  4. On the Provisioning page, in the Procedure Steps section, click View, and then select Expand All. Keep clicking the Refresh button on the page to view the procedure activity as it progresses.

    Expand All on Provisioning page

    The synchronization status of the Management Agent on the Storage Registration page, changes to Succeeded once the synchronization process is complete.

  5. To update a synchronize schedule of a registered storage server, select a storage server on the Storage Registration page and then click on Edit. On the Edit Storage page, in the Synchronize Storage section, edit the repetition time and frequency of the synchronize job.

    Synchronize Schedule

    Note:

    The frequency of a synchronization job is set at 3 hours by default.

    Click Submit.

Note:

The Associating Storage Volumes With Targets step relies on both database target metrics and host metrics. The database target (oracle_database/rac_database) should have up-to-date metrics for the Controlfiles, Datafiles and Redologs. The File Systems metric should be up to date for the hosts on which the database is running.

Deregistering Storage Servers

To deregister a registered storage server, follow these steps:

Note:

To deregister a storage server, you need FULL_STORAGE privilege on the storage along with FULL_JOB privilege on the Synchronization GUID of the storage server.

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.
  2. On the Storage Registration page, in the Storage section, select a storage server from the list of registered storage servers.
  3. Select Remove.

    On the Remove Storage page, select the storage server that you want to deregister, and then click Submit.

    The storage server is now deregistered.

Note:

Once a storage is deregistered, the Snap Clone profiles and Service Templates on the storage will no longer be functional, and the relationship between these Profiles, Service Templates and Snap Cloned targets will be lost.

Note:

It is recommended to delete the volumes created using Enterprise Manager before deregistering a storage server. As a self service user, you should submit deletion requests for the cloned databases.

To submit these deletion requests, click Remove on the Hierarchy tab of the Storage Registration page to delete the volumes that were created by Enterprise Manager for hosting test master databases.

Managing Storage Servers

To manage the storage server, refer to the following sections:

Managing Storage Allocation

You can manage storage allocation by performing the following tasks:

Editing the Storage Ceiling

Storage Ceiling is the maximum amount of storage from a project, aggregate, or thin pool that Enterprise Manager is allowed to use. This ensures that Enterprise Manager creates clones in that project only until this limit is reached. When a storage project is discovered for the first time, the entire capacity of the project is set as the ceiling. In the case of Sun ZFS, the quota set on the project is used.

Note:

You must explicitly set the quota property for the Sun ZFS storage project on the storage end, and the quota must be non-zero. Otherwise, Enterprise Manager will not be able to clone on it.

To edit the storage ceiling, do the following:

  1. On the Storage Registration page, from the Storage section, select the storage server for which you want to edit the storage ceiling.

  2. Select the Contents tab, select the aggregate, and then click Edit Storage Ceiling.

    Note:

The Edit Storage Ceiling option enables you to modify the maximum amount of storage that Enterprise Manager can use. You can create clones or resize volumes only until this limit is reached.

  3. In the Edit Storage Ceiling dialog box, enter the storage ceiling, and then, click OK.


    Edit Storage Ceiling Dialog Box

Creating Storage Volumes

To create storage volumes, do the following:

  1. On the Storage Registration page, from the Storage section, select the storage server for which you want to create storage volumes.

  2. Select the Contents tab, select the aggregate, and then select Create Storage Volumes.


    Create Storage Volumes button

  3. On the Create Storage Volumes page, in the Storage Volume Details section, click Add.


    Add storage volume

  4. Select a storage and specify the size in GB (the size cannot exceed the storage size). The specified size should be able to accommodate the test master database size, without consuming the entire storage size.

    Next, specify a mount point starting with /.

    For example,
    If the storage is "lunpool", select "lunpool".
    The size specified under the Size column should not exceed the available storage space. If the size of "lunpool" is 100GB and the test master database is 10GB, then specify the size as 10GB.
    The mount point should be a meaningful mount point starting with "/".
    For example: /oracle/oradata
    


  5. In the Host Details section, specify the following:

    • Host Credentials: Specify the credentials of the Oracle software owner on the target host.

    • Storage Purpose: For Snap Clone, the most important options are as follows:

      • Oracle Datafiles for RAC

      • Oracle Datafiles for Single Instance

      Note:

      You can also store the OCR and voting disks and Oracle binaries in the storage volume.

    • Platform: Select the supported target platform on which the volume will be mounted.

    • Mount Options: The Mount Options field is filled in automatically based on the values specified for the storage purpose and the platform. Do not edit the mount options.

    • Select NFS v3 or NFS v4.

  6. Click Add to select one or more hosts on which to perform the mount operations.

    If you select Oracle Datafiles for RAC, you would normally specify more than one host. The volume is then mounted on the specified hosts automatically after the completion of the procedure activity.



  7. Click Submit.

    When you click Submit, a procedure activity is executed. On completion of the procedure activity, the volumes are mounted on the target system. You can then proceed to create a test master database on the mounted volumes.
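
    To verify the mounts after the procedure completes, you can check from a shell on the target host. This is a quick sketch assuming the example mount point /oracle/oradata used earlier:

      $ df -h /oracle/oradata          # confirms the NFS volume is mounted and shows its capacity
      $ mount | grep /oracle/oradata   # shows the mount options in effect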

Resizing Volumes of a Database

When a database runs out of space in any of its volumes, you can resize the volume according to your requirements. To resize the volumes of a database or a clone, follow these steps:

Note:

Resizing volumes is not available for EMC storage servers.

Note:

Volumes of a Test Master database cannot be resized using Enterprise Manager unless the volumes were created using the Create Storage Volumes UI.

Note:

You need the FULL_STORAGE privilege to resize volumes of a database or a clone. Also, ensure that the underlying storage supports quota management of volumes.

  1. On the Storage Registration page, from the Storage section, select the required storage server.

  2. In the Details section, select the Hierarchy tab, and then select the target.

    The Storage Volume Details table displays the details of the volumes of the target. This enables you to identify which of the target's volumes is running out of space.

  3. In the Storage Volume Details table, click Resize.



  4. On the Resize Storage Volumes page, specify the New Writable Space for the volume or volumes that you want to resize. If you do not want to resize a volume, you can leave the New Writable Space field blank.



  5. You can schedule the resize to take place immediately or at a later time.



  6. Click Submit.

    Note:

    You can monitor the resize procedure from the Procedure Activity tab.

Creating Thin Volumes

This section applies only to the EMC Symmetrix VMAX Family and EMC VNX storage.

EMC Symmetrix VMAX Family and EMC VNX storage servers enable you to create thin volumes, and to create ASM disks from those thin volumes. To create thin volumes on an EMC Symmetrix VMAX Family or EMC VNX storage server, follow these steps:

Note:

Enterprise Manager enables you to create a thin volume from a thin pool after the clusterware and ASM components are installed.

After you install the clusterware and ASM components, the asm_diskstring parameter may be set to null. This could cause a failure during creation of the thin volume.

To prevent this, set the asm_diskstring parameter to a valid disk path and restart the ASM instance.

For example, set the asm_diskstring parameter as:

/dev/oracleasm/disks/*
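
As a sketch, this change can be applied from SQL*Plus connected to the ASM instance as SYSASM, followed by a restart of the instance; the srvctl commands assume a Grid Infrastructure environment, so adjust them to your configuration:

    SQL> ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/*' SCOPE=BOTH;

    # Restart the ASM instance so discovery uses the new path:
    $ srvctl stop asm
    $ srvctl start asm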

  1. On the Storage Registration page, in the Storage section, select the EMC Symmetrix VMAX Family or EMC VNX storage on which you want to create thin volumes.
  2. In the Details section, select the Contents tab, and then click Create Thin Volume.


  3. In the Storage Details section, the ASM Disk Group option is set to Create with External Redundancy by default. Creating an ASM disk group is optional. If you want an ASM disk group with a different redundancy level, you can skip this step for now by selecting Do not create. I will create later; you can then create the disk group using the Oracle Enterprise Manager ASM target home page or the ASM Configuration Assistant.
  4. Click Add to create multiple LUNs of the same size and create an ASM disk group from those volumes. You can create one or more disk groups at a time.
  5. Select the thin pool or storage pool, and then specify the number of thin volumes and the size of each thin volume.

    Note:

    It is recommended to create fewer, larger LUNs.

    For example, if you want a disk group of size 200 GB, create one LUN of size 200 GB and make the disk group out of it, rather than ten LUNs of 20 GB each.

    Note:

    By default, a VMAX storage does not permit thin volumes larger than 240 GB. To create thin volumes larger than 240 GB, request the storage administrator to enable auto meta on the VMAX; alternatively, create multiple thin volumes smaller than 240 GB each.

    Do not create LUNs larger than 2 TB.

  6. In the Host Details section, specify a host or a cluster, or select one by clicking the Search icon. The disks will be created on the ASM instance present on this host or cluster. A single disk partition is created on each of the presented disks.

    Note:

    Only Linux and Solaris hosts are supported.

  7. Specify the root and grid infrastructure credentials. Only the credentials that you own are listed.


    Click Submit.

    Note:

    Once a Create Target request succeeds and a disk group has been created, you must manually set the compatible.asm and compatible.rdbms attributes of the disk group, depending on the version of the database that will be installed on the disk group.
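
    For example, for a disk group named DATA that will host an Oracle Database 11.2 database, the attributes can be set from SQL*Plus connected to the ASM instance as SYSASM. This is a sketch, so substitute your own disk group name and versions; note that compatible.asm must be set at least as high as compatible.rdbms:

      SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
      SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0';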

Example 19-1 Understanding Space Utilization on EMC Storage Servers

Writable space implementation on EMC storage servers is different from NetApp, Sun ZFS SA, and Sun ZFS storage servers. On NetApp, Sun ZFS SA, and Sun ZFS storage servers, the writable space defined in a service template is allocated from the storage pool to the clone database even if no data is written to the volume. On EMC storage servers (VMAX and VNX), space is only reserved on the storage pool, and is consumed only when data is written to the volume or LUN.

For example, if you define 10 GB of writable space in a service template, then on NetApp, Sun ZFS SA, and Sun ZFS storage, 10 GB is allocated to the clone database from the storage pool even if no data is written to the volume, whereas on EMC storage, space is consumed only as data is written to the volume or LUN.

In Enterprise Manager, to create thin volumes (ASM disk groups or LUNs) up to the maximum size defined for the storage pool, select the Contents tab on the Storage Registration page, and then select Create Thin Volume. The test master database can then be created on the ASM disk groups or LUNs.

The following graphic shows the Test Master database and the created clone database:


Test master and clone database

The following graphic shows the storage volume of the Test Master database:


Storage volume of the Test Master database

The following graphic shows storage volume of the clone database:


Storage volume of the clone database

Note:

ASM disk groups, as discussed, can be created using the Create Thin Volume option. However, they can also be created using other methods. The following example illustrates the space usage on EMC VMAX and VNX storage servers:

Assume that the storage pool is of size 1 TB and the storage ceiling is set to 1 TB.

Scenario 1:

Suppose an Enterprise Manager storage administrator creates two ASM disk groups through the Create Thin Volume method, for example DATA (125 GB) and REDO (75 GB), and the Test Master database is created on those disk groups so that the used space on DATA and REDO is 100 GB and 50 GB respectively (leaving 25 GB free on each disk group). Each clone database created by a self service user will then be allocated 25 GB of writable space on the DATA and REDO disk groups.

New data written to the clone database is the actual used space, and it can grow up to 25 GB on each disk group (DATA and REDO in this scenario).

Assuming a clone database is created from a 200 GB Test Master database, the Enterprise Manager storage administrator will still be able to create 600 GB of LUNs through the Create Thin Volume method, because the size of the clone database is also deducted from the available space. The self service user will be able to create multiple clones; however, the number of clone databases that can be created cannot be estimated, as it depends on the amount of new data written to the initial clone database in that storage pool.

Scenario 2:

Suppose the Enterprise Manager storage administrator instead creates two ASM disk groups DATA (850 GB) and REDO (150 GB) through the Create Thin Volume method, and the Test Master database is created on those disk groups so that the used space on DATA and REDO is 750 GB and 50 GB respectively (leaving 100 GB free on each disk group). Each clone database created by a self service user will then be allocated 100 GB of writable space on the DATA and REDO disk groups.

New data written to the clone database is the actual used space, and it can grow up to 100 GB on each disk group (DATA and REDO in this scenario).

As in Scenario 1, the self service user will be able to create multiple clones, but the number of clone databases cannot be estimated. Unlike in Scenario 1, however, the Enterprise Manager storage administrator will not be able to create additional disk groups, because the two disk groups already consume the entire 1 TB storage pool; this is the major difference between the two scenarios.

In both scenarios, only the actual used space of the clones will be subtracted from the storage ceiling. The general formula for writable disk space is the difference between the LUN size and the actual space occupied by data.
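
Applying this formula to the scenarios above:

    writable space = LUN size - space occupied by data

    Scenario 1: DATA 125 GB - 100 GB = 25 GB; REDO 75 GB - 50 GB = 25 GB
    Scenario 2: DATA 850 GB - 750 GB = 100 GB; REDO 150 GB - 50 GB = 100 GB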

Managing Storage Access Privileges

To manage storage access privileges for a registered storage server, follow these steps:

  1. On the Storage Registration page, in the Storage section, select a storage server from the list of registered storage servers.

    Note:

    The Storage Registration page displays only the storage servers on which you have the VIEW_STORAGE privilege.

  2. Click Manage Access.


  3. On the Manage Access page, do the following:
    • Click Change if you need to change the Owner of the registered storage server.

      Note:

      The Owner of a registered storage server can perform all actions on the storage server, and grant privileges and roles to other Administrators.



    • Click Add Grant to grant privileges to an Administrator, a Role, or both.

    • On the Add Grant page, enter an Administrator name or select the type, and then click Go.

    • Select an Administrator from the list of Administrators or Roles, and then click Select.

  4. On the Manage Access page, you can change privileges of an Administrator or Role by selecting the Administrator or Role from the Grantee column, and then clicking Change Privilege.
  5. In the Change Privilege dialog box, you can select one of the following three privileges:
    • View Storage (ability to view the storage)

    • Manage Storage (ability to edit the storage)

    • Full Storage (ability to edit or remove the storage)

    Click OK.

  6. You can also revoke a grant to an Administrator by selecting the Administrator from the Grantee column, and then clicking Revoke Grant.
  7. When you are done with granting, revoking, or changing privileges to Administrators or Roles, click Submit.

Note:

To be able to use the storage server, the user must also be explicitly granted privileges on the storage server and on the storage Management Agent credentials.

Viewing Storage Registration Overview and Hierarchy

To view the storage registration overview, on the Storage Registration page, in the Details section, select the Overview tab. The Overview section provides a summary of storage usage information. It also displays a Snap Clone Storage Savings graph that shows the total space savings from creating the databases as Snap Clones versus creating them without Snap Clone.

Note:

If you have NetApp volumes with no space guarantee, you may see negative allocated space in the Overview tab. To prevent this, set the guarantee to 'volume'.
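
As a sketch, the guarantee can be set on the storage end as follows; vol1 and svm1 are placeholder names, and which command applies depends on your Data ONTAP version:

    # Data ONTAP 7-Mode:
    vol options vol1 guarantee volume

    # Clustered Data ONTAP:
    volume modify -vserver svm1 -volume vol1 -space-guarantee volume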



To view the storage registration hierarchy, on the Storage Registration page, in the Details section, select the Hierarchy tab. This displays the storage relationships between the following:

  • Test Master Database

  • Database Profile

  • Snap Clone Database

  • Snap Clone Database Snapshots

You can select a row to display the corresponding Volume or Snapshot Details.



If a database profile or Snap Clone database creation was not successful, and it is not possible to delete the entity from its respective user interface, click the Remove button to access the Manage Storage page. From this page, you can submit a procedure to dismount volumes and delete the snapshots or volumes created by an incomplete database profile or Snap Clone database.

Note:

The Manage Storage page only handles cleanup of storage entities and does not remove any database profile or target information from the repository.

The Remove button is enabled only if you have the FULL_STORAGE privilege.

You can also select the Procedure Activity tab on the right panel to see any storage-related procedures run against that storage entity.

To view the NFS Exports, select the Volume Details tab. Select View, Columns, and then select NFS Exports.

The Volume Details tab under the Hierarchy tab also has a Synchronize button. This enables you to submit a synchronize target deployment procedure. The deployment procedure collects metrics for a given target and its host, determines which volumes are used by the target, collects the latest information, and updates the storage registration data model. It can be used when a target has been recently changed, for example when data files have been added in different locations.

Editing Storage Servers

To edit a storage server, on the Storage Registration page, select the storage server and then, click Edit. On the Storage Edit page, you can do the following:

  • Add or remove aliases.

  • Add, remove, or select an Agent that can be used to perform operations on the storage server.

  • Specify a frequency to synchronize storage details with the hardware.

Note:

If the credentials for editing a storage server are not owned by you, an Override Credentials checkbox is displayed in the Storage and Agent to Manage Storage sections. You can continue to use the existing credentials, or override them by selecting the checkbox.