Oracle® VM Server User's Guide
Release 2.2

Part Number E15444-04

8 Preparing External Storage and Storage Repositories

Oracle VM uses the concept of storage repositories to define where Oracle VM resources may reside in a server pool. Resources include guest virtual machines, virtual machine templates (guest seed images), ISO images, shared virtual disks, and so on.

A storage repository is required for High Availability (HA) failover and for live migration of domains.

A storage repository should be set up using external storage based on any of the following technologies:

  * OCFS2 (Oracle Cluster File System) on iSCSI
  * OCFS2 on SAN
  * NFS

To enable HA or live migration, you must make sure all Oracle VM Servers in the server pool:

  * Have access to the same shared external storage
  * Are identical computers

If you used the default /OVS partition during installation, and you have only one Oracle VM Server in your server pool, you do not need to create a storage repository. If you want to add more Oracle VM Servers to your server pool, you must create a storage repository and remove the local /OVS partition created during installation.

If you have only one Oracle VM Server in your server pool, but want to use iSCSI, SAN or NFS-based storage for your guest domains, you must create a storage repository, even though it will not be shared.

You prepare the external storage and storage repository at the Oracle VM Server command line, and then use Oracle VM Manager to create a server pool, and propagate the storage repository to all Oracle VM Servers in the server pool.

You can also manage the HA of server pools and guests, and live migration of domains, using Oracle VM Manager. See the Oracle VM Manager User's Guide for more information.

This chapter discusses preparing external storage and storage repositories to be used in Oracle VM Manager to create server pools, and contains:

  * Section 8.1, "Preparing External Storage"
  * Section 8.2, "Preparing a Storage Repository"

8.1 Preparing External Storage

If you want to enable HA, or to perform live migration of domains to other identical computers, you must first prepare external storage, and then create a storage repository to be used in the server pool.

When you create a server pool in Oracle VM Manager, the external storage is automatically mounted on each Oracle VM Server in the server pool (the cluster). Similarly, when an Oracle VM Server in the server pool is restarted, the external storage is automatically mounted. Make sure the external storage is available to the Oracle VM Servers in the server pool so it can be automatically mounted when an Oracle VM Server is restarted.

This section discusses:

  * Section 8.1.1, "Preparing External Storage Using OCFS2 on iSCSI"
  * Section 8.1.2, "Preparing External Storage Using OCFS2 on SAN"
  * Section 8.1.3, "Preparing External Storage Using NFS"

When you have prepared the external storage, prepare the storage repository. See Section 8.2, "Preparing a Storage Repository" for information on preparing a storage repository.

8.1.1 Preparing External Storage Using OCFS2 on iSCSI

To prepare external storage to be used in a storage repository using OCFS2 on iSCSI:

  1. Start the iSCSI service:

    # service iscsi start
    
  2. Run discovery on the iSCSI target. In this example, the target is 10.1.1.1:

    # iscsiadm -m discovery -t sendtargets -p 10.1.1.1
    

    This command returns output similar to:

    10.1.1.1:3260,5 iqn.1992-04.com.emc:cx.apm00070202838.a2
    10.1.1.1:3260,6 iqn.1992-04.com.emc:cx.apm00070202838.a3
    10.2.1.250:3260,4 iqn.1992-04.com.emc:cx.apm00070202838.b1
    10.1.0.249:3260,1 iqn.1992-04.com.emc:cx.apm00070202838.a0
    10.1.1.249:3260,2 iqn.1992-04.com.emc:cx.apm00070202838.a1
    10.2.0.250:3260,3 iqn.1992-04.com.emc:cx.apm00070202838.b0
    
  3. Delete entries that you do not want to use, for example:

    # iscsiadm -m node -p 10.2.0.250:3260,3 -T iqn.1992-04.com.emc:cx.apm00070202838.b0 -o delete
    # iscsiadm -m node -p 10.1.0.249:3260,1 -T iqn.1992-04.com.emc:cx.apm00070202838.a0 -o delete
    # iscsiadm -m node -p 10.2.1.250:3260,4 -T iqn.1992-04.com.emc:cx.apm00070202838.b1 -o delete
    # iscsiadm -m node -p 10.1.1.249:3260,2 -T iqn.1992-04.com.emc:cx.apm00070202838.a1 -o delete
    # iscsiadm -m node -p 10.1.1.1:3260,5 -T iqn.1992-04.com.emc:cx.apm00070202838.a2 -o delete
    
  4. Verify that only the iSCSI targets you want to use for the server pool are visible:

    # iscsiadm -m node
    
  5. Review the partitions by checking /proc/partitions:

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
    
  6. Restart the iSCSI service:

    # service iscsi restart
    
  7. Review the partitions by checking /proc/partitions. A new device is listed.

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
       8    16    1048576 sdb
    
  8. The new device can now be used. Use the fdisk utility to create at least one partition:

    # fdisk /dev/sdb
    

    Alternatively, if the external storage is larger than 2 TB, use the gparted utility to create a GPT (GUID Partition Table) partition:

    # gparted /dev/sdb
    
  9. Format the partition from any of the Oracle VM Servers in the (intended) server pool with a command similar to the following:

    # mkfs.ocfs2 -Tdatafiles -N8 /dev/sdb1
    

    The -Tdatafiles parameter makes mkfs.ocfs2 use a large cluster size. The size chosen depends on the device size.

    Add an additional parameter, --fs-features=local, if you are not mounting the volume in the server pool (cluster).

    The -N8 parameter allocates 8 node slots, allowing up to 8 nodes to mount the file system concurrently. Increase the number if the server pool has, or will have, more than 8 nodes. At least 8 slots are recommended even for a local file system; creating the slots up front makes it easy to cluster-enable the volume later.
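
    For example, a minimal sketch of formatting a volume for local (non-clustered) use with the --fs-features=local parameter described above, and of raising the node-slot count later with tunefs.ocfs2 from ocfs2-tools; the tunefs.ocfs2 options shown are assumptions based on ocfs2-tools 1.4, so verify them against your installed version:

    # mkfs.ocfs2 -Tdatafiles -N8 --fs-features=local /dev/sdb1
    # tunefs.ocfs2 -N 16 /dev/sdb1    # run against an unmounted volume to raise the slot count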

For more information on using OCFS2, see the OCFS2 documentation online at:

http://oss.oracle.com/projects/ocfs2/documentation/v1.4/

http://oss.oracle.com/projects/ocfs2-tools/documentation/v1.4/

When you have prepared the external storage, prepare the storage repository. See Section 8.2, "Preparing a Storage Repository" for information on preparing a storage repository.

8.1.2 Preparing External Storage Using OCFS2 on SAN

To prepare external storage to be used in a storage repository using OCFS2 on SAN:

  1. Review the partitions by checking /proc/partitions:

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
       8    16    1048576 sdb
    
  2. Determine the shared disk volume you want to use. You may need to use the fdisk utility to create at least one partition on the shared disk volume:

    # fdisk /dev/sdb
    

    Alternatively, if the external storage is more than 2 TB, use the gparted utility to create a GPT partition:

    # gparted /dev/sdb
    
  3. Format the partition from any of the Oracle VM Servers in the (intended) server pool with a command similar to the following:

    # mkfs.ocfs2 -Tdatafiles -N8 /dev/sdb1
    

    The -Tdatafiles parameter makes mkfs.ocfs2 use a large cluster size. The size chosen depends on the device size.

    Add an additional parameter, --fs-features=local, if you are not mounting the volume in the server pool.

    The -N8 parameter allocates 8 node slots, allowing up to 8 nodes to mount the file system concurrently. Increase the number if the server pool has, or will have, more than 8 nodes. At least 8 slots are recommended even for a local file system; creating the slots up front makes it easy to cluster-enable the volume later.
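
    Optionally, you can confirm that the formatted device is detected as an OCFS2 volume using the mounted.ocfs2 utility from ocfs2-tools; this is a quick detection check only, and the exact output varies between releases:

    # mounted.ocfs2 -d /dev/sdb1    # quick detect; lists the device, label, and UUID if an OCFS2 file system is present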

For more information on using OCFS2, see the OCFS2 documentation online at:

http://oss.oracle.com/projects/ocfs2/documentation/v1.4/

http://oss.oracle.com/projects/ocfs2-tools/documentation/v1.4/

When you have prepared the external storage, prepare the storage repository. See Section 8.2, "Preparing a Storage Repository" for information on preparing a storage repository.

8.1.3 Preparing External Storage Using NFS

To prepare external storage to be used in a storage repository using NFS, find an NFS mount point to use. This example uses the mount point:

mynfsserver:/vol/vol1/data/ovs
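
Before preparing the storage repository, you may want to verify from each Oracle VM Server that the export is visible and can be mounted. A minimal sketch, reusing the example export above and a temporary mount point:

    # showmount -e mynfsserver
    # mount -t nfs mynfsserver:/vol/vol1/data/ovs /mnt    # temporary test mount
    # umount /mnt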

When you have identified an NFS mount point to use for the external storage, prepare the storage repository. See Section 8.2, "Preparing a Storage Repository" for information on preparing a storage repository.

8.2 Preparing a Storage Repository

In addition to preparing external storage for a server pool, you must also prepare a storage repository. The storage repository hosts the resources used in the server pool, such as guest virtual machines and shared virtual disks.

The storage repository is used by Oracle VM Manager when you create a server pool and is automatically propagated to all Virtual Machine Servers in the server pool.

Creating a server pool helps ensure the safety of guest data and protects against runaway nodes that become unreachable. Oracle VM Servers in a server pool form a cluster and are subject to rules and restrictions that are more stringent than those for unclustered Oracle VM Servers. For example, a quorum must be maintained, and in rare circumstances one or more Oracle VM Servers may need to be restarted to preserve the cluster.

You can create a server pool using NFS- or OCFS2-based external storage. You must have at least one storage repository created and ready to use before you create a server pool. To prepare external storage for use in a storage repository, see Section 8.1, "Preparing External Storage".

Before you create a storage repository, make sure that every Oracle VM Server in the server pool has been correctly configured and has access to the external storage.

To prepare a storage repository for use in Oracle VM Manager when a server pool is created:

  1. On the Server Pool Master, create a storage repository with the script:

    /opt/ovs-agent-2.3/utils/repos.py -n storage_location

    For example, for an NFS setup, you might use something similar to:

    # /opt/ovs-agent-2.3/utils/repos.py -n mynfsserver:/vol/vol1/data/ovs
    

    Or, for an OCFS2 setup using iSCSI or a SAN, you might use something similar to:

    # /opt/ovs-agent-2.3/utils/repos.py -n /dev/sdb1
    
  2. The repos.py -n command from the previous step displays the UUID of the storage repository you created. You need this UUID to set the cluster root. Copy the UUID for the storage repository you want to use as the cluster root.

  3. Paste the UUID for the storage repository and use it to set the cluster root with the command:

    # /opt/ovs-agent-2.3/utils/repos.py -r UUID
    

For more detailed information on using the repos.py script, see Section 9.2, "Managing Storage Repositories".
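
If you need to review the repositories configured on a server (for example, to recover the UUID noted in step 2), the repos.py script also provides a listing option; the flag shown here is an assumption, so check Section 9.2, "Managing Storage Repositories" or the script's help output for the exact syntax:

    # /opt/ovs-agent-2.3/utils/repos.py -l    # assumed listing option; verify against Section 9.2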

The external storage and storage repository are configured and ready to use.

Log in to Oracle VM Manager and create a server pool. The storage repository is automatically propagated and the external storage mounted on all Virtual Machine Servers in the server pool.

Note:

If you use Oracle VM Manager to create a server pool, you do not need to use the repos.py -i command to initialize the storage repository. Oracle VM Agent automatically mounts the shared storage on each Virtual Machine Server in the server pool.