Identifies the name that is assigned to the LUN.
Identifies the status of each LUN.
Indicates that the volume is fully accessible.
Indicates that the volume is not accessible.
Indicates that the write‑back cache of the volume is disabled, which reduces system performance. A Conservative state might indicate a hardware problem.
Indicates that the storage resources for the volume are reserved for the clone, but the clone is not committed to the storage device.
Indicates that the volume is in write protect mode and is set to read‑only.
Indicates that not enough information could be obtained from the volume to report its status.
Indicates the LUN creation and deletion status.
Idle
In‑Progress
Identifies the tier reallocation status of the Storage Domain. When tier reallocation is enabled, the Oracle FS System dedicates resources and uses statistical data and the QoS priority property to migrate data from one storage tier to another.
Indicates that tier reallocation is active on the logical volume.
Indicates that tier reallocation is not active on the logical volume.
Indicates that tier reallocation is disabled on the Storage Domain and therefore disabled on the LUN.
Identifies the SAN host mapping status associated with the LUN.
Indicates that the LUN is mapped to one or more SAN hosts.
Indicates that the LUN is not mapped to a SAN host.
Indicates that the data path of the LUN is disabled, which makes the LUN inaccessible on the network.
Indicates that the LUN is accessible by all of the hosts on the network.
Identifies the access protocol used to map the LUN to the Controller.
FC only
iSCSI only
No Access
All
Lists the name of the volume group where the logical volume is located.
Specifies the name of the Storage Domain.
Identifies the total amount of storage capacity that is reserved for this volume.
Identifies the capacity limit to which the volume can grow.
Displays a graphical comparison of the allocated capacity that this volume uses with the allocated capacity that remains unused.
Displays the RAID and priority levels.
Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. This parity level protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.
Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. This parity level protects against the loss of one or two drives with a slight cost to write performance. Double parity is implemented as a variant of the RAID 6 storage technology.
Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. This RAID level protects against the loss of at least one drive, and possibly more, while improving the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.
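The capacity cost of the three RAID schemes above can be sketched as follows. This is an illustrative approximation only, not Oracle FS System code; the function name and the assumption of a uniform group of equal-size drives are introduced here for the example.

```python
# Illustrative sketch (not Oracle FS code): approximate usable capacity
# after parity or mirroring overhead, assuming equal-size drives.

def usable_capacity_gb(drive_count, drive_size_gb, raid_level):
    """Approximate usable capacity for a drive group under one RAID scheme."""
    if raid_level == "single_parity":
        # RAID 5 variant: one drive's worth of parity protects against one loss
        return (drive_count - 1) * drive_size_gb
    if raid_level == "double_parity":
        # RAID 6 variant: two drives' worth of parity protects against two losses
        return (drive_count - 2) * drive_size_gb
    if raid_level == "mirrored":
        # RAID 10 variant: every block is written twice, halving usable capacity
        return drive_count * drive_size_gb // 2
    raise ValueError(f"unknown RAID level: {raid_level}")

print(usable_capacity_gb(6, 1000, "single_parity"))  # 5000
print(usable_capacity_gb(6, 1000, "double_parity"))  # 4000
print(usable_capacity_gb(6, 1000, "mirrored"))       # 3000
```

Double parity trades one additional drive of capacity (and a slight write-performance cost) for protection against a second drive loss, while mirroring trades half the raw capacity for faster random writes.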
Indicates the highest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
Indicates the next highest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
Indicates an intermediate priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
Indicates the next to lowest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
Indicates the lowest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
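The five priority levels above determine the order in which requests are served from the processing queue. A minimal sketch of that ordering, using a standard priority queue, is below; the level names, numeric ranks, and function are assumptions for illustration only, not the Oracle FS System's scheduler.

```python
import heapq

# Assumed rank values for the example: lower rank is served first.
PRIORITY_RANK = {"highest": 0, "high": 1, "medium": 2, "low": 3, "lowest": 4}

def dequeue_order(requests):
    """Return request IDs in the order a priority queue would serve them.

    `requests` is a list of (request_id, priority_level) tuples; the
    sequence number keeps the ordering stable within one priority level.
    """
    heap = [(PRIORITY_RANK[level], seq, req_id)
            for seq, (req_id, level) in enumerate(requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

order = dequeue_order([("a", "low"), ("b", "highest"), ("c", "medium")])
print(order)  # ['b', 'c', 'a']
```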
Identifies the amount of storage that was requested for the clone repository.
Identifies the amount of clone capacity that is allocated to the volume for clone data. The amount of capacity includes the overhead that is needed to create the logical volume. The overhead is the parity that is needed for data protection.
Identifies the total amount of clone capacity that the system reserved for the logical volume. The amount of capacity includes the overhead that is needed to create the logical volume.
Identifies the maximum clone capacity allowed. For clones, this field identifies how much space is available for clone data.
Identifies the physical and logical storage capacity that is required to meet the LUN Quality of Service (QoS) settings.
Specifies the amount of raw capacity, in gigabytes (GB), that the system has assigned to this logical volume.
Identifies the sum of the addressable capacity for the logical volume and its clone repository.
Displays a graphical comparison of the capacity that is used to the maximum capacity that is allocated.
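The relationship between the capacity fields above can be sketched with assumed example values. The variable names and the numbers are illustrative only; the one relationship taken from the text is that the total addressable capacity is the sum of the volume's addressable capacity and its clone repository.

```python
# Illustrative sketch (not Oracle FS code): example capacity values in GB.
allocated_gb = 200        # capacity reserved for the volume
addressable_gb = 500      # limit to which the volume can grow
clone_repository_gb = 100 # capacity requested for the clone repository

# Total addressable capacity: the volume plus its clone repository.
total_addressable_gb = addressable_gb + clone_repository_gb
print(total_addressable_gb)  # 600

# The used-versus-allocated comparison shown graphically in the GUI,
# expressed here as a percentage (used_gb is an assumed value).
used_gb = 150
print(round(100 * used_gb / allocated_gb))  # 75
```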
Identifies the globally unique identifier of the LUN.
Identifies the unique identifier of the LUN.