Oracle® Enterprise Manager Cloud Administration Guide
12c Release 4 (12.1.0.4)

E28814-13

13 Setting Up a DBaaS Cloud

This chapter covers the initial configuration required to enable a Database as a Service Cloud. It contains the following sections:

13.1 Setting Up the DBaaS Cloud

You can set up the DBaaS Cloud in two ways:

13.2 Getting Started

This section helps you get started by providing an overview of the steps involved in setting up a Private Database Cloud. Before you set up the database cloud, you must download and deploy the required plug-ins. For more details, see Section 3.2, "Deploying the Required Plug-ins".

Table 13-1 Getting Started with DBaaS

  1. Define roles for administrators and self service users. See Section 3.3, "Defining Roles and Assigning Users". (Role: Super Administrator)

  2. Install the Management Agent on unmanaged hosts so that they can be monitored by Enterprise Manager. See Section 11.2, "Adding Hosts". (Role: Super Administrator)

  3. Configure Privilege Delegation Settings on your managed hosts. See Section 3.5, "Configuring Privilege Delegation Settings". (Role: Super Administrator)

  4. Set up provisioning credentials. See Section 13.3, "Setting Up Credentials for Provisioning". (Role: Self Service Administrator)

  5. If you are: (Role: Self Service Administrator)

  6. Configure the Listener. See Section 13.7, "Configuring the Oracle Listener". (Role: Self Service Administrator)

  7. Create the database provisioning profile that best suits your requirement. See Section 14.7.4, "Creating a Database Provisioning Profile". (Role: Self Service Administrator)

  8. If you are using the Snap Clone profile, you must register the storage servers. See Section 13.8, "Registering and Managing Storage Servers". (Role: Self Service Administrator)


13.3 Setting Up Credentials for Provisioning

Before you perform any operations on the Managed Servers or databases, you must define the credentials that will be used by Enterprise Manager to connect to the targets.

You need to set up the following types of credentials:

  • Normal credentials are the host operating system credentials used to provision the database software and create databases. For example, oracle/<login password>. These credentials are saved when the Database Pool is created and are used when the EM_SSA_USER requests a database or a schema.

  • Privileged credentials are the host operating system credentials used to perform privileged actions like executing root scripts. These credentials are used when deploying software (for running root.sh during deployment), for mounting and unmounting storage volumes (for databases created with snapshots), and so on. These credentials are saved along with the Database Pool if the pool is used for creating databases using snapshots.

  • Database SYSDBA credentials are used and saved for schema as a service database pool. These credentials are required only for schema as a service.

Note:

It is recommended that the database be created by the same OS user who owns the Oracle Home on the host.

To create named credentials, follow these steps:

  1. Log in to Enterprise Manager as an administrator with the EM_SSA_ADMINISTRATOR role.

  2. From the Setup menu, select Security, then select Named Credentials.

  3. Click Create in the Named Credentials page.

  4. Enter the Credential Name and Credential Description. Set the Authenticating Target Type field to Host and Scope field to Global. Enter the user name and password in the Credential Properties section. If you need to set privileged credentials, select Sudo or PowerBroker in the Run Privilege field and enter values in the Run As and Profile fields.

  5. Click Test and Save.

  6. Verify these credentials against a host target and click OK.
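
If you prefer the command line, an equivalent global host credential can be created with the EM CLI create_named_credential verb. The following is a minimal sketch only: the credential name NC_HOST_ORACLE and the oracle user are placeholders, and it assumes EM CLI is already installed and logged in to the OMS. Privileged (Sudo or PowerBroker) credentials carry additional attributes that this sketch does not cover.

    emcli create_named_credential \
          -cred_name="NC_HOST_ORACLE" \
          -auth_target_type="host" \
          -cred_type="HostCreds" \
          -cred_scope="global" \
          -attributes="HostUserName:oracle;HostPassword:<login password>"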

13.4 Provisioning Database Software

Before you can enable database as a service, the database software must already be provisioned on all hosts. Database software can be provisioned by an administrator with the EM_SSA_ADMINISTRATOR role in the following ways:

  • Provisioning Profile

    • Capture a gold image of an existing database using a Provisioning Profile. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

    • Use the Provisioning Profile to provision the Clusterware/ASM or Grid Infrastructure (for Real Application Cluster databases), and Database Oracle Home. This method ensures that the necessary database plug-in (monitoring part of the database plug-in) is deployed onto the Management Agent as part of the database provisioning Oracle Home installation.

      To create a provisioning profile, from the Enterprise menu, select Provisioning and Patching, then select Database Provisioning and select the database provisioning deployment procedure to be used. You can select either the Provision Oracle Database or the Provision Oracle RAC Database deployment procedure.

      Note: Do not create a new database as part of this deployment procedure.

  • Using the Database Installer

    • From the Setup menu, select Extensibility, then select Plug-ins, and deploy the complete SSA (Enterprise Manager for Oracle Cloud) plug-in on all the Management Agents in a PaaS Infrastructure Zone.

    • Run the Clusterware/ASM or Grid Infrastructure installer to set up the cluster and ASM (for RAC databases).

    • Run the Database Installer and ensure you select the create database option on all hosts.

    • Discover the database. From the Setup menu, select Add Target, then Add Targets Manually, and then select Add Non-Host Targets Using Guided Process (Also Adds Related Targets).

    • From the Enterprise menu, you can also select Job, then select Library and submit the Discover Promote Oracle Home Target job to add the Oracle Home.

For more details on provisioning the database software, see the Enterprise Manager Lifecycle Management Administrator's Guide.

13.5 Deploying the Database

For schema as a service, you must deploy a single instance or RAC database. To deploy a database, you must use the Provision Oracle Database deployment procedure. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

13.6 Creating a Container Database

Note:

If pluggable databases (PDBs) need to be provisioned, you must create container databases.

An Oracle Database can contain a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a separate database. This self-contained collection is called a pluggable database (PDB). A multitenant container database (CDB) is a database that includes one or more PDBs.

You can create a CDB either by using the Database Configuration Assistant (DBCA) or the CREATE DATABASE SQL statement. See the Oracle Database Administrator's Guide for details. After the CDB is created, it consists of the root and the seed. The root contains minimal or no user data, and the seed contains no user data.
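
For example, a CDB consisting of only the root and the seed can be created with DBCA in silent mode. This is a minimal sketch of one way to do it: the global database name, SID, passwords, and datafile location are placeholders, and the exact options accepted depend on your DBCA release.

    $ORACLE_HOME/bin/dbca -silent -createDatabase \
        -templateName General_Purpose.dbc \
        -gdbname cdb01.example.com -sid cdb01 \
        -createAsContainerDatabase true -numberOfPDBs 0 \
        -sysPassword <sys_password> -systemPassword <system_password> \
        -storageType FS -datafileDestination /u01/app/oracle/oradata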

PDBs contain user data. After the CDB has been created, you can add PDBs to the CDB by using either of the following options:

  • Create a new CDB. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

  • Plug in an unplugged PDB into a CDB. See the Enterprise Manager Lifecycle Management Administrator's Guide for details.

13.7 Configuring the Oracle Listener

You need to configure an Oracle Home and the Oracle Listener before you can add them as Enterprise Manager targets.

To set up the Oracle Listener (Listener) for the database hosts, follow these steps:

  1. Log in as a user with the EM_SSA_ADMINISTRATOR role and perform mass deployment of database homes on the newly added hosts as described in Section 10.2, "Adding Hosts".

  2. To configure a Listener running from the same Oracle Home on which the database instance is to be created, launch a Bash shell and enter the following commands (a consolidated sketch of these commands appears after this procedure):

    1. <AGENT_BASE>/agent_inst/bin/emctl stop agent

    2. export TNS_ADMIN=<DB_HOME_LOCATION>/network/admin

    3. <AGENT_BASE>/agent_inst/bin/emctl start agent

    4. export ORACLE_HOME=<DB_HOME_LOCATION>

    5. Run $ORACLE_HOME/bin/netca and create the listener. Make sure you have the same Listener name and Listener port on all the hosts.

  3. To configure a Listener running from the Single Instance High Availability (SIHA) Oracle Home, launch a Bash shell and enter the following commands:

    1. export ORACLE_HOME=<SIHA_HOME_LOCATION>

    2. Run $ORACLE_HOME/bin/netca and create the listener. Make sure you have the same listener name and listener port on all the hosts.

  4. Log in as the user with the DBAAS_ADMIN_ROLE and discover the newly added Listener target on all the hosts. From the Setup menu, select Add Target, then select Add Target Manually.

  5. Select the Add Non-Host Targets Using Guided Process option and select Target Type as Oracle Database, Listener, and Automatic Storage Management and click Add Guided Discovery and follow the steps in the wizard. Before you add the new Listener target, ensure the ORACLE_HOME for the Listener is pointing to the correct ORACLE_HOME location. This process adds the Oracle Home target which is used when a database pool is created.
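
The listener configuration commands from steps 2 and 3, shown together as one sketch. The paths are the same placeholders used in the steps above; the final lsnrctl check is an optional verification and is not part of the documented procedure.

    # Listener from the database Oracle Home (step 2)
    <AGENT_BASE>/agent_inst/bin/emctl stop agent
    export TNS_ADMIN=<DB_HOME_LOCATION>/network/admin
    <AGENT_BASE>/agent_inst/bin/emctl start agent
    export ORACLE_HOME=<DB_HOME_LOCATION>
    $ORACLE_HOME/bin/netca    # create the listener; use the same listener name and port on all hosts

    # Listener from the SIHA Oracle Home (step 3)
    export ORACLE_HOME=<SIHA_HOME_LOCATION>
    $ORACLE_HOME/bin/netca

    # Optional: confirm the listener is running before discovering it as a target
    $ORACLE_HOME/bin/lsnrctl status <LISTENER_NAME>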

13.8 Registering and Managing Storage Servers

Note:

If you are creating thin clones from a snap clone based profile, you must register and manage the storage servers such as NetApp or Sun ZFS. See Section 14.5, "Using Snap Clone to Provision Databases" for details.

This section describes the following:

13.8.1 Overview of Registering Storage Servers

Registering a storage server, such as NetApp storage server or Sun ZFS storage server, in Enterprise Manager enables you to provision databases using the snapshot and cloning features provided by the storage.

The registration process validates the storage, and discovers the Enterprise Manager managed database targets on this storage. Once the databases are discovered, you can enable them for Snap Clone. Snap Clone is the process of creating database clones using the Storage Snapshot technology.

Note:

Databases on Windows operating systems are not supported.

13.8.2 Before You Begin

Before you begin, note the following:

  • Windows databases are not discovered as part of storage discovery because NFS collection is not performed on Windows. NFS collection is also not supported on certain OS releases, and databases on those OS releases therefore cannot be Snap Cloned. For further details, refer to My Oracle Support note 465472.1. Also, NAS volumes cannot be used on Windows for supporting Oracle databases.

  • Snap Clone is supported on Sun ZFS Storage 7120, 7320, 7410, 7420 and ZS3 models.

  • Snap Clone supports Sun ZFS storage on HP-UX hosts only if the OS version is B.11.31 or higher. If the OS version is lower, the Sun ZFS storage may not function properly and Snap Clone may give unexpected results.

  • By default, the maximum number of NFS file systems that Enterprise Manager discovers on a target host is 100. However, this threshold is configurable. You can also choose a list of file systems to be monitored if you do not want all the extra file systems to be monitored.

    The configuration file $agent_inst/sysman/emd/emagent_storage.config for each host agent contains various storage monitoring related parameters.

    To configure the threshold for the NFS file systems, you need to edit the following parameters:

    Collection Size:START
    Disks=1000
    FileSystems=1000
    Volumes=1000
    Collection Size:END 
    

    If you choose to provide a list of file systems to be monitored, it can be provided between the following lines:

    FileSystems:START

    FileSystems:END

    
    

    Restart the Management Agent and refresh the host configuration for the changes to this configuration file to take effect (a sketch of this sequence appears after this list).

  • If the OMS repository is running on Oracle Database 11.1.0.7.0 with the AL32UTF8 character set, you need to apply patch 11893621.
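
The following is a sketch of the edit-and-restart sequence referenced above, run on a monitored host. The <AGENT_INST> path is a placeholder for the Management Agent instance home, and the threshold values simply repeat the example from the configuration snippet above.

    # Edit the storage monitoring configuration (back up the file first)
    vi <AGENT_INST>/sysman/emd/emagent_storage.config

    # Example: raise the discovery thresholds
    #   Collection Size:START
    #   Disks=1000
    #   FileSystems=1000
    #   Volumes=1000
    #   Collection Size:END

    # Restart the Management Agent so the change takes effect, then refresh
    # the host configuration from the Cloud Control console
    <AGENT_INST>/bin/emctl stop agent
    <AGENT_INST>/bin/emctl start agent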

13.8.3 Prerequisites for Registering Storage Servers

Before you register a storage server, follow the prerequisites outlined in the following sections:

13.8.3.1 Configuring Storage Servers

Before you register a storage server, you require the privileges and licenses described below to successfully use Snap Clone.

Note:

Enterprise Manager Cloud Control 12c supports NetApp, Sun ZFS, and Solaris File System (ZFS) storage servers.

Configuring NetApp Hardware

This section consists of the following:

Obtaining NetApp Hardware Privileges

Privileges is a generic term. NetApp refers to privileges as Capabilities.

For NetApp storage server, to use Snap Clone, assign the following privileges or capabilities to the NetApp storage credentials:

Note:

You can assign these capabilities individually or by using wildcard notations. For example:
'api-volume-*', 'api-*', 'cli-*' 
  • api-aggr-list-info

  • api-aggr-options-list-info

  • api-file-delete-file

  • api-file-get-file-info

  • api-file-read-file

  • api-license-list-info

  • api-nfs-exportfs-append-rules

  • api-nfs-exportfs-delete-rules

  • api-nfs-exportfs-list-rules

  • api-nfs-exportfs-modify-rule

  • api-snapshot-create

  • api-snapshot-delete

  • api-snapshot-list-info

  • api-snapshot-reclaimable-info

  • api-snapshot-restore-volume

  • api-snapshot-set-reserve

  • api-system-api-get-elements

  • api-system-api-list

  • api-snapshot-set-schedule

  • api-system-cli

  • api-system-get-info

  • api-system-get-ontapi-version

  • api-system-get-version

  • api-useradmin-group-list

  • api-useradmin-user-list

  • api-volume-clone-create

  • api-volume-clone-split-estimate

  • api-volume-create

  • api-volume-destroy

  • api-volume-get-root-name

  • api-volume-list-info

  • api-volume-list-info-iter-end

  • api-volume-list-info-iter-next

  • api-volume-list-info-iter-start

  • api-volume-offline

  • api-volume-online

  • api-volume-restrict

  • api-volume-set-option

  • api-volume-size

  • cli-filestats

  • login-http-admin

Obtaining NetApp Hardware Licenses

Snap Clone on a NetApp storage server requires a valid license for the following services:

  • flex_clone

  • nfs

  • snaprestore

Creating NetApp Storage Credentials

Note:

Snap Clone is supported only on NetApp Data ONTAP® 7.2.1.1P1D18 or higher, and Data ONTAP® 8.x (7-mode).

To create the NetApp storage credentials, follow these steps:

  1. Create ROLE em_smf_admin_role with all the recommended capabilities, such as api-aggr-list-info, api-file-delete-file, and the like.

  2. Create GROUP em_smf_admin_group with the ROLE em_smf_admin_role.

  3. Create USER em_smf_admin with GROUP em_smf_admin_group and a secure password.

Note:

The user em_smf_admin must be a dedicated user to be used by Oracle Enterprise Manager. Oracle does not recommend sharing this account for any other purposes.
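
The three steps above correspond roughly to the following Data ONTAP 7-mode console commands. This is a sketch only: the capability list is abbreviated (use the full list, or the wildcard notation, from the capabilities section above), and clustered ONTAP uses different commands.

    useradmin role add em_smf_admin_role -a api-aggr-list-info,api-file-get-file-info,api-snapshot-create,api-volume-clone-create,cli-filestats,login-http-admin
    useradmin group add em_smf_admin_group -r em_smf_admin_role
    useradmin user add em_smf_admin -g em_smf_admin_group

The useradmin user add command prompts for the new user's password.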

Configuring Sun ZFS and ZS3 Hardware

This section consists of the following:

Obtaining Sun ZFS Hardware Privileges

Privileges is a generic term. Sun ZFS refers to privileges as Permissions.

For Sun ZFS storage server, to use Snap Clone, assign the following privileges or permissions to the Sun ZFS storage credentials:

Note:

All the permissions listed must be set to true. The scope must be 'nas' and there must not be any further filters.
  • changeProtocolProps

  • changeSpaceProps

  • clone and createShare

  • destroy

  • rollback

  • takeSnap

Obtaining Sun ZFS Hardware Licenses

Snap Clone on a Sun ZFS Storage Appliance requires a license for the Clones feature. A restricted-use license for this feature is included with Enterprise Manager Snap Clone.

Creating Sun ZFS Storage Credentials

To create the Sun ZFS storage credentials, follow these steps:

  1. Create ROLE em_smf_admin_role.

  2. Create AUTHORIZATIONS for the ROLE em_smf_admin_role.

  3. Set SCOPE as nas.

  4. Set the recommended permissions, such as allow_changeProtocolProps and allow_changeSpaceProps, to true.

  5. Create USER em_smf_admin and set its ROLE property as em_smf_admin_role.

    Note:

    The user em_smf_admin must be a dedicated user to be used by Oracle Enterprise Manager. Oracle does not recommend sharing this account for any other purposes.

Configuring Solaris File System (ZFS) Storage Servers

This section consists of the following:

Obtaining Solaris File System (ZFS) Privileges

Solaris File System (ZFS) refers to privileges as Permissions. For Solaris File System (ZFS) storage server, to use Snap Clone, grant the following permissions on the pool for the Solaris File System (ZFS) user:

  • clone

  • create

  • destroy

  • mount

  • rename

  • rollback

  • share

  • snapshot

  • quota

  • reservation

  • sharenfs

  • canmount

  • recordsize

Obtaining Solaris File System (ZFS) Licenses

Solaris File System (ZFS) does not require any special hardware license. Only Oracle Solaris OS version 11.1 is supported.

Setting Up Solaris File System (ZFS) Storage Servers

Solaris File System (ZFS) storage servers can work with any storage hardware. You do not need to buy any additional storage hardware. Instead, you can attach your in-house storage hardware to acquire the Oracle Snap Clone functionality. For example, you can attach LUNs from an EMC VMAX or VNX system, a Hitachi VSP, or an Oracle Pillar Axiom FC array.

The following storage topology figure explains how this works:

Note:

This figure assumes that you have a SAN storage device with 4 x 1TB logical unit devices exposed to the Solaris File System (ZFS) storage server.
Storage Topology

This section contains the following:

Prerequisites for Setting Up Solaris File System (ZFS) Storage Servers

Before you configure a Solaris File System (ZFS) storage server, ensure that you meet the following requirements:

  • Ensure that zfs_arc_max is not set in /etc/system. If it needs to be set, ensure that it is set to a high value, such as 80% of RAM.

  • The storage server should be configured with multiple LUNs. Each LUN should be a maximum of 1TB. A minimum of 2 LUNs of 1TB each is recommended for Snap Clone. Each LUN should have a mirror LUN that is mounted on the host over a different controller for failure isolation. A LUN can be attached to the Solaris host over Fibre Channel for better performance.

    Note:

    If Fibre Channel is not available, any direct attached storage or iSCSI-based LUNs are sufficient.
  • All LUNs used in a pool should be equal in size. It is preferable to use less than 12 LUNs in a pool.

  • Apart from LUNs, the storage needs cache and log devices to improve zpool performance. Both should ideally be individual flash/SSD devices. If individual devices are difficult to procure, you can use slices cut from a single device. The log device needs to be about 32GB in size and should have redundancy and battery backup to prevent data loss. The cache device can be about 128GB in size and does not need redundancy.

Requirements for SAN Storage

The requirements for SAN storage are as follows:

  • Create multiple LUNs of the same size from the SAN storage device. The maximum recommended size for a LUN is 3TB.

  • LUNs should come from different SAN storage pools or an entirely different SAN storage device.

    These LUNs are needed for mirroring, to maintain the pool level redundancy. If your SAN storage maintains a hardware level redundancy, then you can skip this requirement.

  • The LUNs should be exposed over Fibre Channel.

Recommendations for Solaris File System (ZFS) Pools

The recommendations for Solaris File System (ZFS) pools are as follows:

  • Create the Storage pool with multiple LUNs of the same size. You can add more disks to the storage pool to increase the size based on your usage.

  • The storage pool created on the Solaris File System (ZFS) storage server should use the LUNs coming from a different SAN storage pool or an entirely different SAN storage device. You can skip this if your SAN storage maintains hardware level redundancy.

  • Use ZFS redundancy such as mirror, RAIDZ, RAIDZ-2 or RAIDZ-3 to repair data inconsistencies, regardless of whether RAIDZ is implemented at the underlying storage device.

  • Use cache and log devices to get better throughput and performance. Both these devices should ideally be on individual flash/SSD devices. In case of difficulty in procuring individual devices, you can use slices cut from a single device.

    It is recommended that the log device be about 50% of RAM in size and have redundancy and battery backup to prevent data loss. The cache device size can be based on the size of the workload and the pool.

    Cache devices do not support redundancy. This is optional.

  • While creating the pool, size it to accommodate the test master database along with the cloned databases. A clone coexists with the parent database in the same storage pool, so plan for test master and clone capacity well ahead.

    For example, if the size of the test master is 1TB and you expect to create 10 clones, each differing from the test master by 100GB, the data adds up to 2TB. With the recommended 20% free space, the storage pool should be at least 2.5TB in size.

  • Maintain the storage pool with at least 20% free space. If the free space falls below this level, then the performance of the pool degrades.

Configuring Solaris File System (ZFS) Users and Pools

You need to create a user who will administer the storage from Enterprise Manager. To do this, run the following commands as the root user:

# /sbin/useradd -d /home/emzfsadm -s /bin/bash emzfsadm
# passwd emzfsadm

Note:

The username should be less than or equal to 8 characters.

You need to configure the ZFS pool that is used to host volumes, and grant privileges on this pool to the user created. The emzfsadm user should have the privileges on all the zpools and its mount points in the system.

To configure the ZFS pool, refer to the following table and run the following commands:

Note:

The table displays a reference implementation, and you can choose to change this as required.
Pool Name: lunpool
Disks (SAN-exposed LUNs over FC/iSCSI): lun1=c9t5006016E3DE0340Ed0, lun2=c9t5006016E3DE0340Ed1
Disk Mirrors (SAN-exposed LUNs over FC/iSCSI): mir1=c10t5006016E3DE0340Ed2, mir2=c10t5006016E3DE0340Ed3
Flash/SSD disk (log): ssd1=c4t0d0s0
Flash/SSD disk (cache): ssd2=c4t0d1s0

# zpool create lunpool mirror c9t5006016E3DE0340Ed0 c10t5006016E3DE0340Ed2 mirror c9t5006016E3DE0340Ed1 c10t5006016E3DE0340Ed3 log c4t0d0s0 cache c4t0d1s0
 

Example format output is as follows:

bash-4.1# /usr/sbin/format
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c9t5006016E3DE0340Ed0 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,0
       1. c9t5006016E3DE0340Ed1 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,1
       2. c10t5006016E3DE0340Ed2 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,2
       3. c10t5006016E3DE0340Ed3 <DGC-VRAID-0532-1.00TB>
          /pci@78,0/pci8086,3c08@3/pci10df,f100@0/fp@0,0/disk@w5006016e3de0340e,3


[ We need to find the size of pool that was created ]
# df -k /lunpool
Filesystem           1024-blocks        Used   Available Capacity  Mounted on
lunpool              1434746880          31  1434746784     1%    /lunpool
 
[ We use the Available size shown here to set quota as shown below ]
 
# zfs set quota=1434746784 lunpool
 
# zfs allow emzfsadm clone,create,destroy,mount,rename,rollback,share,snapshot,quota,reservation,sharenfs,canmount,recordsize,logbias lunpool
 
# chmod A+user:emzfsadm:add_subdirectory:fd:allow /lunpool
 
# chmod A+user:emzfsadm:delete_child:fd:allow /lunpool
  • When you set the quota using the command zfs set quota, you may get the following error message:

    Size is less than current used or reserved space

    You need to ensure that the quota is set to more than the used space. To ensure that the quota is set correctly, it is recommended that you verify the quota by running the following command:

    $ zfs get quota lunpool

  • It is recommended that you verify that all the required permissions are set correctly after you run the zfs allow emzfsadm command, as shown in the sketch below.
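
One way to perform these checks, as a sketch run as the root user on the Solaris File System (ZFS) storage server:

[ Verify the pool layout and health ]
# zpool status lunpool

[ List the ZFS permissions delegated to the emzfsadm user on the pool ]
# zfs allow lunpool

[ Confirm the quota that was set ]
# zfs get quota lunpool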

13.8.3.2 Customizing Storage Proxy Agents

A Storage Proxy Agent is required when you register a NetApp, Sun ZFS Storage Appliance, or Solaris File System (ZFS) storage server.

Before you register a NetApp storage server, ensure that you meet the following prerequisites:

  • Acquiring Third Party Licenses

  • Uploading Storage Vendor SDK

  • Overriding the Default SDK

  • Overriding Third Party Server Components

Note:

Storage Proxy Agent is supported only on Linux Intel x64 platform.
13.8.3.2.1 Acquiring Third Party Licenses

The Storage Management Framework is shipped by default for the Linux x86-64 platform and depends on the following third party modules (a version-check sketch follows this list):

  • Source CPAN - CPAN licensing applies

    • IO::Tty (version 1.10)

    • XML::Simple (version 2.20)

    • Net::SSLeay (version 1.52)

  • Open Source - Owner licensing applies

    • OpenSSL(version 1.0.1e)
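
If you need to confirm which versions are present on a storage proxy agent host, the following one-liners are one way to check. They assume the modules are visible on the Perl library path used by the agent; adjust the Perl invocation if the modules are installed in a non-default location.

    perl -MIO::Tty -e 'print "$IO::Tty::VERSION\n"'
    perl -MXML::Simple -e 'print "$XML::Simple::VERSION\n"'
    perl -MNet::SSLeay -e 'print "$Net::SSLeay::VERSION\n"'
    openssl version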

13.8.3.2.2 Uploading Storage Vendor SDK

Before you register a NetApp storage server, do the following:

  1. Download the NetApp Manageability SDK version 5.0R1 for all the platforms from the following NetApp support site: http://support.netapp.com/NOW/cgi-bin/software

  2. Unzip the SDK and package the Perl NetApp Data ONTAP Client SDK as a tar file (a packaging sketch appears after these steps). Generally, you will find the SDK in the lib/perl/NetApp folder. The tar file when extracted should look as follows:

    NetApp.tar
    - netapp
      - NaElement.pm
      - NaServer.pm
      - NaErrno.pm

    For example, the Software Library entity Storage Management Framework Third Party/Storage/NetApp/default should have a single file entry that contains NetApp.tar with the above tar structure.

    Note:

    Ensure that there is no extra space in any file path name or software library name.
  3. Once the tar file is ready, create the following folder hierarchy in software library: Storage Management Framework Third Party/Storage/NetApp

  4. Upload the tar file as a Generic Component named default.

    Note:

    To upload the tar file, you must use the OMS shared filesystem for the software library.

    The tar file should be uploaded to this default software library entity as a Main File.
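
A packaging sketch for the tar file described in step 2. The SDK extraction directory is an assumption; only the three module files and the lowercase netapp folder name come from the structure shown above.

[ Copy the SDK Perl modules into a top-level "netapp" folder ]
cd <SDK_EXTRACT_DIR>/lib/perl/NetApp
mkdir /tmp/netapp
cp NaElement.pm NaServer.pm NaErrno.pm /tmp/netapp

[ Create the tar file and verify its structure before uploading NetApp.tar to
  the Storage Management Framework Third Party/Storage/NetApp/default entity ]
cd /tmp
tar -cvf NetApp.tar netapp
tar -tvf NetApp.tar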

13.8.3.2.3 Overriding the Default SDK

The default SDK is used for all the NetApp storage servers. However, the storage server may work with only a certain SDK. In such a case, you can override the SDK per storage server, by uploading an SDK and using it only for this particular storage server.

To override the existing SDK for a storage server, upload the tar file to the Software Library entity. The tar file should have the structure described in Step 2 of the previous section.

The Software Library entity name should be the same as the storage server name.

For example, if the storage server name is mynetapp.example.com, then the Software Library entity must be as follows:

Storage Management Framework Third Party/Storage/NetApp/mynetapp.example.com

Note:

A storage-specific SDK is given higher preference than the default SDK.
13.8.3.2.4 Overriding Third Party Server Components

By default, all the required third party components are shipped for the Linux Intel 64-bit platform. If you need to override them, package the tar file as follows:

Note:

The tar file should contain a thirdparty folder whose structure should be as mentioned below:
thirdparty
|-- lib
|   |-- engines
|   |   |-- lib4758cca.so
|   |   |-- libaep.so
|   |   |-- libatalla.so
|   |   |-- libcapi.so
|   |   |-- libchil.so
|   |   |-- libcswift.so
|   |   |-- libgmp.so
|   |   |-- libgost.so
|   |   |-- libnuron.so
|   |   |-- libpadlock.so
|   |   |-- libsureware.so
|   |   `-- libubsec.so
|   |-- libcrypto.a
|   |-- libcrypto.so
|   |-- libcrypto.so.1.0.0
|   |-- libssl.a
|   |-- libssl.so
|   `-- libssl.so.1.0.0
`-- pm
    |-- CPAN
    |   |-- IO
    |   |   |-- Pty.pm
    |   |   |-- Tty
    |   |   |   `-- Constant.pm
    |   |   `-- Tty.pm
    |   |-- Net
    |   |   |-- SSLeay
    |   |   |   `-- Handle.pm
    |   |   `-- SSLeay.pm
    |   |-- XML
    |   |   `-- Simple.pm
    |   `-- auto
    |       |-- IO
    |       |   `-- Tty
    |       |       |-- Tty.bs
    |       |       `-- Tty.so
    |       `-- Net
    |           `-- SSLeay
    |               |-- SSLeay.bs
    |               `-- SSLeay.so

Ensure that the tar file is uploaded to the Software Library entity which is named after the platform name, x86_64. The Software Library entity must be under the following:

Storage Management Framework Third Party/Server

The x86_64 entity, when uploaded, is copied to all the storage proxy hosts irrespective of which storage server it processes. To use this entity on a specific storage proxy agent, name the entity after the host name.

For example, Storage Management Framework Third Party/Server/x86_64 will be copied to any storage proxy host which is on an x86_64 platform. Similarly, Storage Management Framework Third Party/Server/myhost.example.com is copied only to myhost.example.com, if it is used as a storage proxy host.

The host name is given a higher preference than the platform preference.

13.8.4 Registering Storage Servers

To register a particular storage server, follow the procedure outlined in the respective section:

  • Registering a NetApp or a Sun ZFS Storage Server

  • Registering a Solaris File System (ZFS) Storage Server

13.8.4.1 Registering a NetApp or a Sun ZFS Storage Server

To register the storage server, follow these steps:

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.

  2. On the Storage Registration page, in the Storage section, click on Register, and then select either NetApp Storage Appliance or Sun ZFS Storage Appliance, based on which storage server you want to register.

    Note:

    If you see a No named credentials available message, it means that no credentials are registered or the credentials are owned by another user.

    Note:

    You need the EM_STORAGE_ADMINISTRATOR role to complete the storage registration.
  3. On the NetApp or Sun ZFS Storage Registration page, in the Storage section, do the following:

    • Enter the storage server name in the Name field. Ensure that the name is a valid host name and contains no spaces or invalid characters.

    • Select the protocol.

      Note:

      For NetApp storage, the connection is over http or https. For Sun ZFS storage, the connection is over ssh.
    • Select the Storage Credentials, or click on the green plus sign to add.

      Note:

      These credentials will be used by the Management Agent to execute storage (NetApp or Sun ZFS) APIs.

      Only credentials owned by the user are displayed here.

      In the display box that appears, enter the user name and password. Confirm the password and click OK.

    • Enter storage name aliases (optional).

      The storage name alias should be in lowercase.

      Note:

      A storage name alias is any name that may have been used when mounting a volume from the storage.

      For example: IP address, FQDN, DNS alias, and the like.

      Storage alias is necessary to identify the database targets on the storage. The database targets are identified by mapping the mount points to the files used by the database. For example, if the storage mystorage.com has an alias mystorage.net, and a database uses a data file mounted as mystorage.net:/u01, then mystorage.net must be added as an alias for the discovery to work.

  4. In the Agent to Manage Storage section, do the following:

    • Click Add to add a Management Agent host. A Storage Agent display box appears. Select a Management Agent from the Target Name column of the table. Then, click Select.

      Note:

      The Management Agent list displays only Linux X64 Management Agents.

      The Management Agent selected is used for performing operations on the storage server.

    • Once a Management Agent is selected, the Management Agent credentials are found and a named credential for the host is displayed.

      Note:

      The Management Agent credentials are used to connect to the Management Agent from Oracle Management Service.

      Multiple Management Agents can be configured to monitor the storage device. Click Add to choose a second Management Agent if required.

      Note:

      Configuring multiple Management Agents to monitor the storage device provides you with a backup in the event that a host is down or the Management Agent is under blackout.
    • Click Submit to register the storage server.


13.8.4.2 Registering a Solaris File System (ZFS) Storage Server

To register the storage server, follow these steps:

  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.

  2. On the Storage Registration page, in the Storage section, click on Register, and then select Solaris File System (ZFS).

    Note:

    If you see a No named credentials available message, it means that no credentials are registered or the credentials are owned by another user.

    Note:

    You need the EM_STORAGE_ADMINISTRATOR role to complete the storage registration.
  3. On the Register File System (ZFS) page, in the Storage section, do the following:

    • Enter the storage server name in the Name field. Ensure that the name is a valid host name or IP address and contains no spaces or invalid characters.

    • Select the protocol.

    • Select the Storage Credentials, or click on the green plus sign to add.

      Note:

      These credentials will be used by the Management Agent to execute Solaris file system APIs.

      Only credentials owned by the user are displayed here.

      In the display box that appears, enter the user name and password. Confirm the password and click OK.

    • Enter storage name aliases (optional).

      The storage name alias should be in lowercase.

      Note:

      A storage name alias is any name that may have been used when mounting a volume from the storage.

      For example: IP address, FQDN, DNS alias, and the like.

      Storage alias is necessary to identify the database targets on the storage. The database targets are identified by mapping the mount points to the files used by the database. For example, if the storage mystorage.com has an alias mystorage.net, and a database uses a data file mounted as mystorage.net:/u01, then mystorage.net must be added as an alias for the discovery to work.

  4. In the Synchronize Schedule section, specify the frequency to synchronize the storage details with the hardware.

    Ensure that the zpools setup is completed before clicking Submit. To set up the zpools, refer to Configuring Solaris File System (ZFS) Users and Pools.

13.8.5 Administering the Storage Server

To administer the storage server, refer to the following sections:

  • Synchronizing Storage Servers

  • Deregistering Storage Servers

13.8.5.1 Synchronizing Storage Servers

When you register a storage server for the first time, a synchronize job is run automatically. However, to discover new changes or creations, you should schedule a synchronize job to run at a scheduled time, preferably during a quiet period when Snap Clone actions are not in progress. To do this, follow these steps:

  1. On the Storage Registration page, in the Storage section, click Synchronize.

    Note:

    When you click on Synchronize, a deployment procedure is submitted which discovers all databases monitored by Enterprise Manager Cloud Control which can be used for Snap Clone.

    Windows databases are not discovered as part of storage discovery because NFS collection is not performed on Windows. For further details, refer to My Oracle Support note 465472.1.

    You need the EM_STORAGE_OPERATOR role, along with the GET_CREDENTIAL privilege on the storage server and storage Management Agent credentials, to be able to synchronize the storage.

  2. A confirmation box appears. Click OK.

    This action now submits a one-time synchronization job.

    Note:

    The synchronization job fetches latest storage information, and recomputes the mapping between storage volumes and databases.
  3. On the Storage Registration page, in the Storage section, to view the procedure details of the Management Agent host, click on the value (for example, Scheduled) in the Status column.

  4. On the Provisioning page, in the Procedure Steps section, click View, and then select Expand All. Keep clicking the Refresh button on the page to view the procedure activity as it progresses.

    The synchronization status of the Management Agent on the Storage Registration page, changes to Succeeded once the synchronization process is complete.

  5. To update a synchronize schedule of a registered storage server, select a storage server on the Storage Registration page and then click on Edit. On the Edit Storage page, in the Synchronize Storage section, edit the repetition time and frequency of the synchronize job.

    Note:

    The frequency of a synchronization job is set at 3 hours by default.

    Click Submit.

Note:

The Associating Storage Volumes With Targets step relies on both database target metrics and host metrics. The database target (oracle_database/rac_database) should have up-to-date metrics for the Controlfiles, Datafiles and Redologs. The File Systems metric should be up to date for the hosts on which the database is running.

13.8.5.2 Deregistering Storage Servers

To deregister a registered storage server, follow these steps:

Note:

To deregister a storage server, you need FULL_STORAGE privilege on the storage along with FULL_JOB privilege on the Synchronization GUID of the storage server.
  1. From the Setup menu, click on Provisioning and Patching, and then select Storage Registration.

  2. On the Storage Registration page, in the Storage section, select a storage server from the list of registered storage servers.

  3. Select Remove.

    On the Remove Storage page, select the storage server that you want to deregister, and then click Submit.


    The storage server is now deregistered.

Note:

Once a storage is deregistered, the Snap Clone profiles and Service Templates on the storage will no longer be functional, and the relationship between these Profiles, Service Templates and Snap Cloned targets will be lost.

13.8.6 Managing Storage Servers

To manage the storage server, refer to the following sections:

  • Managing Storage Allocation

  • Managing Storage Access Privileges

  • Viewing Storage Registration Overview and Hierarchy

  • Editing Storage Servers

13.8.6.1 Managing Storage Allocation

You can manage storage allocation by performing the following tasks:

  • Editing the Storage Ceiling

  • Creating Storage Volume

  • Resizing Volumes of a Database

13.8.6.1.1 Editing the Storage Ceiling

Storage Ceiling is the maximum amount of storage from a project or aggregate that Enterprise Manager is allowed to use. This ensures that Enterprise Manager creates clones in that project only until this limit is reached. When a storage project is discovered for the first time, the entire capacity of the project is set as the ceiling. In the case of Sun ZFS, the quota set on the project is used.

Note:

You must explicitly set the quota property for the Sun ZFS storage project on the storage end. Also, the project should have a non-zero quota set on the storage end. Otherwise, Enterprise Manager will not be able to create clones on it.

To edit the storage ceiling, do the following:

  1. On the Storage Registration page, from the Storage section, select the storage server for which you want to edit the storage ceiling.

  2. Select the Contents tab, select the aggregate, and then click Edit Storage Ceiling.

    Note:

    The Edit Storage Ceiling option enables you to modify the maximum amount of storage that Enterprise Manager can use. You can create clones or resize volumes only until this limit is reached.
  3. In the Edit Storage Ceiling dialog box, enter the storage ceiling, and then, click OK.

13.8.6.1.2 Creating Storage Volume

To create storage volume, do the following:

  1. On the Storage Registration page, from the Storage section, select the storage server for which you want to create storage volume.

  2. Select the Contents tab, select the aggregate, and then select Create Storage Volume.

  3. On the Create Storage Volume page, in the Storage Volume Details section, click Add.

  4. Select a storage and specify the volume information.

  5. In the Host Details section, select the host credentials for which the permissions to access the volume would be granted.

  6. Select one or more hosts to perform the mount operation, by clicking Add.

  7. Click Submit.

13.8.6.1.3 Resizing Volumes of a Database

When a database runs out of space in any of its volumes, you can resize the volume according to your requirement. To resize volume(s) of a clone, follow these steps:

Note:

Resizing of volumes of a Test Master database cannot be done using Enterprise Manager, unless the volumes for the Test Master were not created using the Create Volumes UI.

Note:

You need the FULL_STORAGE privilege to resize volumes of a database or a clone. Also, ensure that the underlying storage supports quota management of volumes.
  1. On the Storage Registration page, from the Storage section select the required storage server.

  2. In the Details section, select the Hierarchy tab, and then select the target.

    The Volume Details table displays the details of the volumes of the target. This enables you to identify which of the target's volumes is running out of space.

  3. In the Volume Details table, select Resize.

  4. On the Resize page, specify the new size for the volume or volumes that you want to resize. If you do not want to resize a volume, you can leave the New Size field blank.

  5. You can schedule the resize to take place immediately or at a later time.

  6. Click Submit.

    Note:

    You can monitor the resize procedure from the Procedure Activity tab.

13.8.6.2 Managing Storage Access Privileges

To manage storage access privileges for a registered storage server, follow these steps:

  1. On the Storage Registration page, in the Storage section, select a storage server from the list of registered storage servers.

    Note:

    The Storage Registration page displays only the databases on which you have the VIEW_STORAGE privilege.
  2. Click Manage Access.

  3. On the Manage Access page, do the following:

    • Click Change, if you need to change the Owner of the registered storage server.

      Note:

      The Owner of a registered storage server can perform all actions on the storage server, and grant privileges and roles to other Administrators.
    • Click Add Grant to grant privileges to an Administrator, Role or both.

    • On the Add Grant page, enter an Administrator name or select the type, and then click Go.

    • Select an Administrator from the list of Administrators or Roles, and then click Select.

  4. On the Manage Access page, you can change privileges of an Administrator or Role by selecting the Administrator or Role from the Grantee column, and then clicking Change Privilege.

  5. In the Change Privilege display box, you can select one of the three following privileges:

    • View Storage (ability to view the storage)

    • Manage Storage (ability to edit the storage)

    • Full Storage (ability to edit or remove the storage)

    Click OK.

  6. You can also revoke a grant to an Administrator by selecting the Administrator from the Grantee column, and then clicking Revoke Grant.

  7. When you are done with granting, revoking, or changing privileges to Administrators or Roles, click Submit.

Note:

To be able to use the storage server, you also need to specifically grant privileges to the storage server and storage Management Agent credentials to the user.

13.8.6.3 Viewing Storage Registration Overview and Hierarchy

To view the storage registration overview, on the Storage Registration page, in the Details section, select the Overview tab. The Overview section provides a summary of storage usage information. It also displays a Snap Clone Storage Savings graph that shows the total space savings by creating the databases as a Snap Clone versus without Snap Clone.

Note:

If you have NetApp volumes with no space guarantee, you may see negative allocated space in the Overview tab. Set guarantee to 'volume' to prevent this.

To view the storage registration hierarchy, on the Storage Registration page, in the Details sections, select the Hierarchy tab. This displays the storage relationships between the following:

  • Test Master Database

  • Database Profile

  • Snap Clone Database

  • Snap Clone Database Snapshots

You can select a row to display the corresponding Volume or Snapshot Details.


If a database profile or Snap Clone database creation was not successful, and it is not possible to delete the entity from its respective user interface, click on the Remove button to access the Manage Storage page. From this page, you can submit a procedure to dismount volumes and delete the snapshots or volumes created from an incomplete database profile or snap clone database.

Note:

The Manage Storage page only handles cleanup of storage entities and does not remove any database profile or target information from the repository.

The Remove button is enabled only if you have the FULL_STORAGE privilege.

You can also select the Procedure Activity tab on the right panel, to see any storage related procedures run against that storage entity.

To view the NFS Exports, select the Volume Details tab. Select View, Columns, and then select NFS Exports.

The Volume Details tab, under the Hierarchy tab, also has a Synchronize button. This enables you to submit a synchronize target deployment procedure. The deployment procedure collects metrics for a given target and its host, determines which volumes are used by the target, collects the latest information, and updates the storage registration data model. It can be used when a target has been recently changed, data files have been added in different locations, and the like.

13.8.6.4 Editing Storage Servers

To edit a storage server, on the Storage Registration page, select the storage server and then, click Edit. On the Storage Edit page, you can do the following:

  • Add or remove aliases.

  • Add, remove, or select an Agent that can be used to perform operations on the storage server.

  • Specify a frequency to synchronize storage details with the hardware.

Note:

If the credentials for editing a storage server are not owned by you, an Override Credentials checkbox will be present in the Storage and Agent to Manage Storage sections. You can choose to use the same credentials or you can override the credentials by selecting the checkbox.