6.4. Storage

6.4.1. Unclean File System Causes Errors When Used as a Server Pool File System
6.4.2. Rescanning a LUN Does Not Show the New Size of a Resized LUN
6.4.3. Local SAS Hard Disks Not Supported for Storage
6.4.4. Booting Server from Multipath SAN Is Not Supported
6.4.5. Blacklisting of System Disks for Multipathing Fails
6.4.6. Blacklisting of System Disks for Legacy LSI MegaRAID Controllers Not Supported
6.4.7. Blacklisting of System Disks for Multipathing Fails on HP Smart Array (CCISS) Disk Devices
6.4.8. Multi-homed NFS Shares Are Not Supported
6.4.9. Create Physical Disk Icon Invalid Unless a Volume Group Selected
6.4.10. Refreshing a NAS-based File System Produces Invalid/Overlapping Exports
6.4.11. Expanding the Storage Array folder in the Navigation Pane Causes UI Hang
6.4.12. Dom0 Requires 350 MB of Memory per 100 LUNs

This section contains the known issues and workarounds related to storage.

6.4.1. Unclean File System Causes Errors When Used as a Server Pool File System

If a server pool file system is not clean (that is, it contains existing files or server pool cluster information) and is used to create a server pool, a number of errors may occur.

  • When the file system is discovered, a server pool named Unknown pool found for discovered Pool FS is created. This server pool cannot be edited or used. The following error is displayed:

    OVMRU_002037E repository_name - Cannot present the Repository to server: server_name. 
    Both server and repository need to be in the same cluster.
  • Cannot create a server pool using the file system. The following error is displayed:

    OVMAPI_4010E Attempt to send command: create_pool_filesystem to server: server_name failed. 
    OVMAPI_4004E Server Failed Command: create_pool_filesystem  ... No such file or directory
  • Cannot delete a server pool file system using the Physical Disks tab in the Hardware view. The following error is displayed:

    "VALUEERROR: UNKNOWN ERROR: 'BACKING_DEVICE'"
  • If an OCFS2-based storage repository becomes orphaned (the clusterId that was used when the OCFS2 file system was created no longer exists), you cannot mount or refresh the repository. The following error is displayed:

    "OVMRU_002037E Cannot present the Repository to server: server_name. Both server and repository 
    need to be in the same cluster."

Workaround: Clean the file system of all files before it is used as a server pool file system.
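
The following is a minimal sketch of cleaning such a file system from the command line before reusing it; the device path is a placeholder, and the file system must not be in use by any server pool or repository:

# Mount the file system on a temporary mount point (device path is an example).
mkdir -p /mnt/cleanup
mount /dev/mapper/<disk_id> /mnt/cleanup

# Remove all existing files, including hidden files left over from a previous
# server pool or cluster configuration.
rm -rf /mnt/cleanup/* /mnt/cleanup/.[!.]*

# Unmount when finished.
umount /mnt/cleanup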

6.4.2. Rescanning a LUN Does Not Show the New Size of a Resized LUN

When you resize a LUN and rescan the physical disks on the storage array, the new size is not reflected in the Physical Disks tab of the Hardware view in the Oracle VM Manager UI.

6.4.3. Local SAS Hard Disks Not Supported for Storage

If you use an Oracle VM Server with local SAS hard disks installed, empty disks of this type are not discovered in Oracle VM Manager and therefore cannot be used as local storage. This is because local SAS disks are not associated with a (local) storage array. Local SAS hard disks can therefore only be used for installing Oracle VM Server; they cannot be used for storage repositories, server pool file systems, or raw LUNs as virtual machine disks.

6.4.4. Booting Server from Multipath SAN Is Not Supported

Oracle VM has multipath storage access enabled by default, and also supports booting from a SAN in single-path configuration. However, booting from a multipath SAN is not supported.

Workaround: If you need Oracle VM Servers to boot from SAN, configure storage access with a single physical path. Make the necessary adjustments to enable booting from SAN: set the BIOS to use the host bus adapter as a boot device, and so on. The disk must appear in the installer as an sd[x] device, not as an mpath[x] device.
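
As a sanity check, you can verify from the Oracle VM Server command line that the boot disk is not managed as a multipath device; this sketch uses the standard device-mapper multipath tools rather than anything Oracle-specific:

# List multipath devices; the boot disk should not appear in this output.
multipath -ll

# List block devices; the boot disk should show up as a plain sd device, for example sda.
cat /proc/partitions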

6.4.5. Blacklisting of System Disks for Multipathing Fails

In Oracle VM, system disks of Oracle VM Servers are automatically blacklisted in the default multipath configuration. However, in some installations the system disks are not added correctly to the /etc/blacklisted.wwids file. The problem is caused by a mismatch between the disk ID listed in the device mapper and the ID used by the installer.

Even if the blacklisting of system disks fails, Oracle VM functionality is not impacted, since the disks in question are not multipathed and therefore cannot be assigned to virtual machines in Oracle VM Manager.

This issue occurs only on Oracle VM Servers upgraded from an earlier Oracle VM 3.0.x release using a Yum repository. It does not occur if the Oracle VM Server was upgraded using the Oracle VM Server CD/ISO method.

Workaround: To correct the blacklisting configuration, manually update the /etc/blacklisted.wwids file after installation, replacing the listed disk ID with the correct SCSI ID. You can retrieve the SCSI ID with this command: scsi_id -gus /block/sd[x].
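
For example, assuming the system disk is sda (a hypothetical device name; substitute the actual system disk):

# Retrieve the correct SCSI ID of the system disk.
scsi_id -gus /block/sda

# Edit /etc/blacklisted.wwids and replace the stale entry with the ID printed above.
vi /etc/blacklisted.wwids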

6.4.6. Blacklisting of System Disks for Legacy LSI MegaRAID Controllers Not Supported

Oracle VM Server cannot add the system disks for Legacy LSI MegaRAID (Dell PERC4) bus controllers to the /etc/blacklisted.wwids file, so the disks are not blacklisted in the multipath configuration. This occurs because the bus controllers are not capable of returning a unique hardware ID for each disk. Using system disks on Legacy LSI MegaRAID (Dell PERC4) bus controllers is therefore not supported.

6.4.7. Blacklisting of System Disks for Multipathing Fails on HP Smart Array (CCISS) Disk Devices

When you install Oracle VM Server on an HP Smart Array (CCISS) disk device, the system disks are not blacklisted (they are not included in the /etc/blacklisted.wwids file). Messages similar to the following are logged in the /var/log/messages file:

multipathd: /sbin/scsi_id exited with 1
last message repeated 3 times

CCISS disks are only supported for installing Oracle VM Server. If you want to install Oracle VM Server on a CCISS disk, use the workaround below. CCISS disks are not supported for storage repositories, raw disks for virtual machines, or server pool file systems.

Workaround: Configure multipathing to blacklist the CCISS system devices by adding a new line to the multipath.conf file:

# List of device names to discard as not multipath candidates
#
## IMPORTANT for OVS do not remove the black listed devices.
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|nbd)[0-9]*"
        devnode "^hd[a-z][0-9]*"
        devnode "^etherd"+       devnode "^cciss!c[0-9]d[0-9]*"    <<====
        %include "/etc/blacklisted.wwids"
}
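
After saving the change, the multipath configuration must be re-read before the new blacklist entry takes effect. Assuming the standard multipathd service is in use, one way to do this is to restart the daemon, or alternatively reboot the server:

# Restart the multipath daemon so it re-reads multipath.conf (or reboot the server).
service multipathd restart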

6.4.8. Multi-homed NFS Shares Are Not Supported

When an NFS file server has two IP addresses, it cannot expose the same file system over both interfaces. This situation occurs if you configure both IP addresses as separate access hosts, for example, to provide access to different Oracle VM Servers via different paths. As a result, the same file system would correspond to two different storage object entries, each with a different path related to one of the IP addresses. As a storage server can only be represented by one object, this configuration is not supported in Oracle VM Release 3.0.3.

Workaround: Configure only one access host per storage server.

6.4.9. Create Physical Disk Icon Invalid Unless a Volume Group Selected

In the Storage view for a storage array, the Physical Disks tab contains a Create Physical Disk icon. This icon does nothing unless a volume group is selected in the table.

Workaround: Select a volume group in the table, then click the Create Physical Disk icon.

6.4.10. Refreshing a NAS-based File System Produces Invalid/Overlapping Exports

When a NAS-based file system is refreshed, it may produce invalid or overlapping exports. During a file system refresh job, all mount points defined in the NAS-based file server's exports file are refreshed, even file systems that are not intended to be used in Oracle VM environments.

Top level directories that are exported along with their subdirectories may also cause problems, for example, if the exports file contains both /xyz and /xyz/abc as export locations. In this case, the following error may be displayed during a file system refresh job:

OVMRU_002024E Cannot perform operation. File Server: server_name, has invalid exports.

Workaround: To avoid the overlapping exports problem, do not export top level directories in the NAS-based file server's exports file.
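
For illustration, the following is a hypothetical /etc/exports on the NAS file server, showing the problematic layout from the example above and a corrected version; the client and option values are placeholders:

# Problematic: a top level directory and one of its subdirectories are both exported.
/xyz        *(rw,sync,no_root_squash)
/xyz/abc    *(rw,sync,no_root_squash)

# Corrected: export only the subdirectories that Oracle VM actually uses.
/xyz/abc    *(rw,sync,no_root_squash)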

6.4.11. Expanding the Storage Array folder in the Navigation Pane Causes UI Hang

If you click on the Storage Array folder in the Storage tab of the Hardware view while a discovery job is in progress that involves discovering storage array objects, the Oracle VM Manager UI hangs. For example, if you discover an Oracle VM Server which has storage connected to it, then click Hardware > Storage and select the Storage Array folder, the UI hangs.

Workaround: Wait until all storage discovery jobs are complete before expanding the Storage Array folder.

6.4.12. Dom0 Requires 350 MB of Memory per 100 LUNs

For every 100 LUNs on an iSCSI target, you should allocate at least 350 MB of Dom0 memory. For example, to support 1,000 LUNs (10 x 350 MB = 3.5 GB), you should allocate at least 4 GB of memory to Dom0.
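
As an illustration of where this allocation is typically made, Dom0 memory on Oracle VM Server is set with the dom0_mem parameter on the hypervisor (xen.gz) line in /boot/grub/grub.conf. The excerpt below is an example only; keep your existing kernel options and adjust only the dom0_mem value. A reboot is required for the change to take effect.

# /boot/grub/grub.conf (excerpt) -- example only; 4096M corresponds to roughly 1,000 LUNs.
kernel /xen.gz dom0_mem=4096M [existing options...]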