Oracle VM is designed to allow you to use a wide variety of storage types so you can adapt your configuration to your needs. Whether you have a limited hardware setup or a full rack of servers, whether you perform an installation for testing and temporary internal use or design a production environment that requires high availability in every area, Oracle VM offers support for a suitable storage solution.
Making use of both generic and vendor-specific Storage Connect plug-ins, Oracle VM allows you to use the following types of storage, each discussed in more detail below:
- Local storage, including local SAS disks
- Shared Network Attached Storage (NFS)
- iSCSI SANs
- Fibre Channel SANs
Oracle VM does not support mixing protocols to access the same storage device. Therefore, if you are using Fibre Channel to access a storage device, you must not then access the same device using the iSCSI protocol.
Usually, the file system of a disk that is used as storage for an Oracle VM Server is formatted with OCFS2 (Oracle Cluster File System). The only exception is NFS, which by nature is already designed to be accessed by multiple systems at once. The use of OCFS2 ensures that the file system can properly handle simultaneous access by multiple systems. OCFS2 is developed by Oracle and is integrated in the mainline Linux kernel, ensuring excellent performance and full integration within an Oracle VM deployment. OCFS2 is also a fundamental component used to implement server pool clustering, which is discussed in more detail in Section 3.8, “How is Storage Used for Server Pool Clustering?”. Since OCFS2 is not available on SPARC, only NFS can be used to support SPARC-based server pool clustering or shared repository hosting.
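If you need to verify which devices on an x86 Oracle VM Server carry an OCFS2 file system, the ocfs2-tools package shipped with Oracle VM Server includes a detection utility. The following is a quick sketch; the device name, UUID, and label in the output are illustrative, and the exact output columns vary between versions of the tool:

# mounted.ocfs2 -d
Device     FS     UUID                              Label
/dev/sdb1  ocfs2  0fa96b4cfb3c4b2a9d3f4c9f8d2e1a7b  repo01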
To enable HA or live migration, you must make sure that all Oracle VM Servers have access to the same storage resources. Specifically for live migration and virtual machine HA restarts, the Oracle VM Servers must also be part of the same server pool. Also note that clustered server pools require access to a shared file system where server pool information is stored and retrieved, for example in case of failure and subsequent server role changes. In x86 environments, the server pool file system can reside either on an NFS share or on a LUN of a SAN server. In SPARC environments, the server pool file system can only reside on an NFS share.
Local storage consists of hard disks installed locally in your Oracle VM Server. For SPARC, this includes SAS disks and ZFS volumes.
In a default installation, Oracle VM Server detects any unused space on the installation disk and re-partitions the disk to make this space available for local storage. As long as no partitions or data are present, the device is detected as a raw disk. You can use the local disks to provision logical storage volumes as disks for virtual machines, or to install a storage repository.
In Oracle VM Release 3.4.2, support was added for NVM Express (NVMe) devices. If you use an NVMe device as local storage in an Oracle VM deployment, Oracle VM Server detects the device and presents it through Oracle VM Manager.
- Oracle VM Server for x86
To use the entire NVMe device as a storage repository or for a single virtual machine physical disk, you should not partition the NVMe device.
To provision the NVMe device into multiple physical disks, you should partition it on the Oracle VM Server where the device is installed. If an NVMe device is partitioned then Oracle VM Manager displays each partition as a physical disk, not the entire device.
You must partition the NVMe device outside of the Oracle VM environment, as Oracle VM Manager does not provide any facility for partitioning NVMe devices. A hypothetical partitioning sketch is shown after this list.
NVMe devices can be discovered if no partitions exist on the device.
If Oracle VM Server is installed on an NVMe device, then Oracle VM Server does not discover any other partitions on that NVMe device.
- Oracle VM Server for SPARC
Oracle VM Manager does not display individual partitions on an NVMe device but only a single device.
Oracle recommends that you create a storage repository on the NVMe device if you are using Oracle VM Server for SPARC. You can then create as many virtual disks as required in the storage repository. However, if you plan to create logical storage volumes for virtual machine disks, you must manually create ZFS volumes on the NVMe device, as sketched after this list. For the full procedure, see Creating ZFS Volumes on NVMe Devices in the Oracle VM Administrator's Guide.
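The following sketches illustrate both cases. The device names, partition boundaries, pool name, and volume name and size are all hypothetical; adapt them to your hardware and consult your platform documentation before running these destructive commands. On x86, you could partition a hypothetical NVMe device with a standard tool such as parted, after which each partition appears in Oracle VM Manager as a physical disk:

# parted -s /dev/nvme0n1 mklabel gpt
# parted -s /dev/nvme0n1 mkpart primary 0% 50%
# parted -s /dev/nvme0n1 mkpart primary 50% 100%

On SPARC, you could create a ZFS volume for use as a virtual machine disk inside a ZFS pool backed by the NVMe device:

# zpool create nvmepool c1t1d0
# zfs create -V 50g nvmepool/vmdisk1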
Some important points about local storage are as follows:
Local storage can never be used for a server pool file system.
Local storage is fairly easy to set up because no special hardware for the disk subsystem is required. Since the virtualization overhead in this setup is limited and disk access is internal within one physical server, local storage offers reasonably high performance.
In an Oracle VM environment, sharing a local physical disk between virtual machines is possible but not recommended.
If you place a storage repository on a local disk in an x86 environment, an OCFS2 file system is created. If you intend to create a storage repository on a local disk, the disk must not contain any data or meta-data. If it does, it is necessary to clean the disk manually before attempting to create a storage repository on it. This can be done using the dd command, for example:
# dd if=/dev/zero of=/dev/disk bs=1M

Where disk is the device name of the disk where you want to create the repository.

Warning: This action is destructive and data on the device where you perform this action may be rendered irretrievable.
In SPARC environments, a storage repository is created using the ZFS file system. If you use a local physical disk for your storage repository, a ZFS pool and file system are automatically created when you create the repository. If you use a ZFS volume to host the repository, the volume is replaced by a ZFS file system and the repository is created within the file system.
Oracle VM Release 3.4 uses features built into the OCFS2 file system on x86 platforms that enable you to perform live migrations of running virtual machines that have virtual disks on local storage. These live migrations make it possible to achieve nearly uninterrupted uptime for virtual machines.
However, if the virtual machines are stopped, Oracle VM Manager does not allow you to move virtual machines directly from one local storage repository to another local repository if the virtual machines use virtual disks. In this case, you must use an intermediate shared repository to move the virtual machines and the virtual disks from one Oracle VM Server to another.
For more information on live migrations and restrictions on moving virtual machines between servers with local storage, see Section 7.7, “How Can a Virtual Machine be Moved or Migrated?”.
The configuration where local storage is most useful is where you have an unclustered server pool that contains a very limited number of Oracle VM Servers. By creating a storage repository (see Section 4.4, “How is a Repository Created?”) on local storage you can set up an Oracle VM virtualized environment quickly and easily on one or two Oracle VM Servers: all your resources, virtual machines and their disks are stored locally.
Nonetheless, there are disadvantages to using local storage in a production environment, such as the following:
Data stored across disks located on a large number of different systems adds complexity to your backup strategy and requires more overhead in terms of data center management.
Local storage lacks flexibility in a clustered environment with multiple Oracle VM Servers in a server pool. Other resources, such as templates and ISOs, that reside on local storage cannot be shared between Oracle VM Servers, even if they are within the same server pool.
In general, for critical data and for running virtual machines that require high availability and robust protection against data loss, it is recommended to use attached storage with comprehensive redundancy capabilities rather than local storage.
SAS (Serial Attached SCSI) disks can be used as local storage within an Oracle VM deployment. On SPARC systems, SAS disk support is limited to SAS controllers that use the mpt_sas driver. This means that internal SAS disks are supported on T3 and later systems, but internal SAS disks on T2 and T2+ systems are unsupported.
Only local SAS storage is supported with Oracle VM Manager. Oracle VM does not support shared SAS storage (SAS SAN), that is, SAS disks that use expanders to enable SAN-like behavior. Oracle VM Manager recognizes local SAS disks during the discovery process and adds these as Local File Systems. SAS SAN disks are ignored during the discovery process and are not accessible for use by Oracle VM Manager.
On Oracle VM Servers that are running on x86 hardware you can determine whether SAS devices are shared or local by running the following command:
# ls -l /sys/class/sas_end_device
Local SAS:
lrwxrwxrwx 1 root root 0 Dec 18 22:07 end_device-0:2 -> \
../../devices/pci0000:00/0000:00:01.0/0000:0c:00.0/host0/port-0:2/end_device-0:2/sas_end_device/end_device-0:2
lrwxrwxrwx 1 root root 0 Dec 18 22:07 end_device-0:3 -> \
../../devices/pci0000:00/0000:00:01.0/0000:0c:00.0/host0/port-0:3/end_device-0:3/sas_end_device/end_device-0:3
SAS SAN:
lrwxrwxrwx 1 root root 0 Dec 18 22:07 end_device-0:0:0 -> \
../../devices/pci0000:00/0000:00:01.0/0000:0c:00.0/host0/port-0:0/expander-0:0/port-0:0:0/end_device-0:0:0/sas_end_device/end_device-0:0:0
lrwxrwxrwx 1 root root 0 Dec 18 22:07 end_device-0:1:0 -> \
../../devices/pci0000:00/0000:00:01.0/0000:0c:00.0/host0/port-0:1/expander-0:1/port-0:1:0/end_device-0:1:0/sas_end_device/end_device-0:1:0
For SAS SAN storage, note the inclusion of the expander within the device entries.
Network Attached Storage – typically NFS – is a commonly used file-based storage system that is very suitable for the installation of Oracle VM storage repositories. Storage repositories contain various categories of resources such as templates, virtual disk images, ISO files and virtual machine configuration files, which are all stored as files in the directory structure on the remotely located, attached file system. Since most of these resources are rarely written to but are read frequently, NFS is ideal for storing these types of resources.
With Oracle VM, you discover NFS storage via the server's IP address or host name, and typically present the storage to all the servers in a server pool so that they can share the same resources. This, along with clustering, helps to enable high availability in your environment: virtual machines can easily be migrated between host servers for load balancing or to protect important virtual machines from going offline due to hardware failure.
NFS storage is exposed to Oracle VM Servers in the form of shares on the NFS server which are mounted onto the Oracle VM Server's file system. Since mounting an NFS share can be done on any server in the network segment to which NFS is exposed, it is possible not only to share NFS storage between servers of the same pool but also across different server pools.
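As a quick sanity check before discovery, you can list the shares an NFS server exposes from any Oracle VM Server on the same network segment. The server name and export shown below are hypothetical:

# showmount -e nfs1.example.com
Export list for nfs1.example.com:
/export/ovm/repo01 *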
In terms of performance, NFS is slower for virtual disk I/O compared to a logical volume or a raw disk. This is due mostly to its file-based nature. For better disk performance you should consider using block-based storage, which is supported in Oracle VM in the form of iSCSI or Fibre Channel SANs.
NFS can also be used to store server pool file systems for clustered server pools. This is the only shared storage facility that is supported for this purpose for SPARC-based server pools. In x86 environments, alternate shared storage such as iSCSI or Fibre Channel is generally preferred for this purpose for performance reasons.
Oracle VM does not support the following configurations as they result in errors with storage repositories:
- Multiple IP addresses or hostnames for one NFS server
If you assign multiple IP addresses or hostnames to the same NFS server, Oracle VM Manager treats each IP address or hostname as a separate NFS server.
- DNS round-robin
If you configure your DNS so that a single hostname resolves to multiple IP addresses (a round-robin configuration), the storage repository is mounted repeatedly on the Oracle VM Server file system.
- Nested NFS exports
If your NFS file system has other NFS file systems residing inside its directory structure (nested NFS exports), exporting the top-level directory from the NFS server results in an error where Oracle VM Server cannot access the storage repository. In this scenario, the OVMRU_002063E No utility server message is returned for certain jobs and written to AdminServer.log. For more information about resolving errors with nested NFS exports, see Doc ID 2109556.1 in the Oracle Support Knowledge Base.
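For illustration, the following hypothetical /etc/exports entries on an NFS server show a nested layout that triggers this error; the paths and options are assumptions, not taken from Oracle documentation:

/export/ovm         *(rw,no_root_squash)
/export/ovm/nested  *(rw,no_root_squash)

Here /export/ovm/nested is itself an export that resides inside the /export/ovm directory tree, so exporting the top-level /export/ovm directory and using it as a storage repository fails.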
With Internet SCSI, or iSCSI, you can connect storage entities to client machines, making the disks behave as if they are locally attached disks. iSCSI enables this connectivity by transferring SCSI commands over existing IP networks between what is called an initiator (the client) and a target (the storage provider).
To establish a link with iSCSI SANs, all Oracle VM Servers can use configured network interfaces as iSCSI initiators. It is the user's responsibility to:
- Configure the disk volumes (iSCSI LUNs) offered by the storage servers.
- Discover the iSCSI storage through Oracle VM Manager. When discovered, unmanaged iSCSI and Fibre Channel storage is not allocated a name in Oracle VM Manager. Use the ID allocated to the storage to reference unmanaged storage devices in the Oracle VM Manager Web Interface or the Oracle VM Manager Command Line Interface.
- Set up access groups, which are groups of iSCSI initiators, through Oracle VM Manager, in order to determine which LUNs are available to which Oracle VM Servers.
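When setting up access groups, you need the iSCSI initiator name (IQN) of each Oracle VM Server. On an x86 Oracle VM Server, the IQN can be read from the standard Open-iSCSI configuration file; the value shown below is illustrative only:

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:a1b2c3d4e5f6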
In terms of performance, an iSCSI SAN is better than file-based storage such as NFS, and often comparable to direct local disk access. Because iSCSI storage is attached from a remote server, it is perfectly suited for an x86-based clustered server pool configuration where high availability of storage and the possibility to live migrate virtual machines are important factors.
In SPARC-based environments, iSCSI LUNs are treated as local disks. Since OCFS2 is not supported under SPARC, these disks cannot be used to host a server pool cluster file system and they cannot be used to host a repository. Typically, if iSCSI is used in a SPARC environment, the LUNs are made available directly to virtual machines for storage.
Provisioning of iSCSI storage can be done through open source target creation software at no additional cost, with dedicated high-end hardware, or with anything in between. The generic iSCSI Oracle VM Storage Connect plug-in allows Oracle VM to use virtually all iSCSI storage providers. In addition, vendor-specific Oracle VM Storage Connect plug-ins exist for certain types of dedicated iSCSI storage hardware, allowing Oracle VM Manager to access additional interactive functionality otherwise only available through the management software of the storage provider. Examples are creating and deleting LUNs, extending existing LUNs, and so on. Check with your storage hardware supplier whether an Oracle VM Storage Connect plug-in is available. For installation and usage instructions, consult your supplier's plug-in documentation.
Oracle VM is designed to take advantage of Oracle VM Storage Connect plug-ins to perform LUN management tasks. On a storage array, do not unmap a LUN and then remap it to a different LUN ID without rebooting the affected Oracle VM Servers. In general, remapping LUNs is risky because it can cause data corruption, since the targets have been switched outside of the affected Oracle VM Servers. If you attempt to remap a LUN to a different ID, affected Oracle VM Servers are no longer able to access the disk until they are rebooted, and the following error may appear in /var/log/messages:
Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
Functionally, a Fibre Channel SAN is hardly different from an iSCSI SAN. Fibre Channel is an older technology that uses dedicated hardware instead of an existing IP network: special controllers on the SAN hardware, host bus adapters (HBAs) on the client machines, and special Fibre Channel cables and switches to interconnect the components.
Like iSCSI, a Fibre Channel SAN transfers SCSI commands between initiators and targets establishing a connectivity that is almost identical to direct disk access. However, whereas iSCSI uses TCP/IP, a Fibre Channel SAN uses Fibre Channel Protocol (FCP). The same concepts from the iSCSI SAN, as described in Section 3.2.3, “iSCSI Storage Attached Network”, apply equally to the Fibre Channel SAN. Again, generic and vendor-specific Storage Connect plug-ins exist. Your storage hardware supplier can provide proper documentation with the Oracle VM Storage Connect plug-in.
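If you need to verify which HBAs an x86 Oracle VM Server sees, you can inspect the standard Linux fc_host sysfs class; the host numbers and the WWPN value below are illustrative:

# ls /sys/class/fc_host
host1  host2
# cat /sys/class/fc_host/host1/port_name
0x21000024ff4a5b6c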