Changes the properties of an existing LUN.
lun ‑modify ‑lun lun‑id‑or‑fqn [‑name new‑lun‑name] [‑capacity capacity] [‑allocatedCapacity allocated‑logical‑capacity] [{ ‑profile performance‑profile‑id‑or‑fqn | [‑priority {premium | high | medium | low | archive}] [‑storageClass {capDisk | perfDisk | perfSsd | capSsd}] { ‑redundancy {1 | 2} ‑accessBias {sequential | random | mixed} ‑ioBias {read | write | mixed} | [‑raidLevel {raid5 | raid10 | raid6 | default}] [‑readAhead {default | normal | aggressive | conservative}] } }] [{ ‑singleTier | ‑autoTier [‑preferredStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd} ]... ] [‑preferredRepositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd} ]... ] [{‑enableTierReallocation | ‑disableTierReallocation}] }] [‑repositoryPercentage capacity‑percentage] [{ ‑matchTierQos | [‑noMatchTierQos] [‑repositoryPriority {premium | high | medium | low | archive}] [‑repositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd}] { ‑repositoryRedundancy {1 | 2} ‑repositoryAccessBias {sequential | random | mixed} ‑repositoryIoBias {read | write | mixed} | [‑repositoryRaidLevel {raid5 | raid10 | raid6 | default}] } }] [‑volumeGroup volume‑group‑id‑or‑fqn] [‑controller controller‑id‑or‑fqn] [{‑fibreChannelAccess | ‑noFibreChannelAccess}] [{‑iscsiAccess | ‑noIscsiAccess}] [{ ‑maskedControllerPorts /controller[/slot[/port]] [, /controller[/slot[/port]]]... | ‑unMaskedControllerPorts /controller[/slot[/port]] [, /controller[/slot[/port]]]... }] [{ ‑unmapped | ‑globalMapping lun‑number }] [‑storageDomain storage‑domain‑id‑or‑fqn] [{‑active | ‑inactive}] [‑copyPriority {auto | low | high}] [{‑conservativeMode | ‑noConservativeMode}] [‑clearPinnedData] [{‑disableRefTagChecking | ‑enableRefTagChecking}] [‑bootLun | ‑noBootLun] [{‑sessionKey | ‑u admin‑user ‑oracleFS oracle‑fs‑system}] [{‑outputformat | ‑o} { text | xml }] [{‑timeout timeout‑in‑seconds | ‑verify | ‑usage | ‑example | ‑help}]
Use the lun ‑modify command to change the QoS attributes of a LUN, such as increasing the capacity that is allocated to the LUN or allocating space for clones of the LUN. You can also modify the mapping of the LUN and change the Controller to which the LUN is assigned.
Before using the lun ‑modify command, you can run the lun ‑maximumCapacity command to test different settings for the RAID level, the priority level, and the Storage Class to determine the effect of these properties on the modified LUN.
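For example, to review the options that the lun ‑maximumCapacity command accepts before experimenting with settings, you can display its usage summary (this sketch assumes that ‑usage is accepted by lun ‑maximumCapacity in the same way that it is accepted by lun ‑modify; see that command's own reference page for details):
$ fscli lun ‑maximumCapacity ‑usage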
‑ioBias
‑accessBias
‑redundancy
‑repositoryIoBias
‑repositoryAccessBias
‑repositoryRedundancy
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
Enables the LUN to be accessible and available for use immediately. To ensure accurate mapping relationships, use the ‑globalMapping option, the ‑hostMap option, or the ‑hostGroupMap option with the ‑active option.
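For example, the following command maps a LUN globally to LUN number 5 and makes the LUN immediately accessible. The LUN FQN /user1_vg/DISK1 is a hypothetical name used only for illustration; both options appear in the syntax above.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑globalMapping 5 ‑active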
Specifies the amount of space, in gigabytes, that the Oracle FS System sets aside for a LUN. This number cannot be less than the currently allocated size of the LUN, and cannot exceed the addressable logical capacity for the LUN as defined by the ‑capacity option.
Enables the auto-tiering capability, also called QoS Plus. An auto-tiered LUN monitors data activity and automatically adjusts the QoS properties. Based on historical usage information, the system moves data blocks to a Storage Class within the Storage Domain that can optimally store the data and best use the available storage types and capacities.
Identifies that the LUN can be used as a boot drive in the SAN.
Specifies the storage space in gigabytes for the volume. The amount of space cannot be less than the current maximum capacity of the LUN. This space is also referred to as addressable capacity. Repository space for the Clone LUNs is also included.
Allows the Oracle FS System to enter conservative mode for the specified LUN if a Controller node fails. In conservative mode, data is written to the storage array before the write operation is reported as complete to the SAN host. Allowing conservative mode is the default.
Specifies the fully qualified name (FQN) or the unique identifier (ID) of a Controller to which the LUN is assigned. By default, the Oracle FS System chooses the Controller. If included, the FQN format consists of /controller-name. For example, /CONTROLLER-01 specifies the Controller named CONTROLLER-01.
auto. Balances data movement rate and system performance. If you do not use the ‑copyPriority option, the default priority is auto.
low. Completes copy operations and data migration without degrading overall system performance. Completion rate might be slower.
high. Completes copy operations and data migration as quickly as possible. System performance might be degraded.
Instructs the HBA to bypass the check of whether a host has written to a specific area of the LUN before the host reads from that same area. If this option is omitted, read-before-write error events can be generated.
Turns off dynamic data progression for the LUN. The Oracle FS System does not migrate the LUN data to other Storage Classes.
Turns on dynamic data migration for the LUN. The Oracle FS System migrates the LUN data to the appropriate Storage Class based on the usage patterns of the data. By default, tier reallocation is enabled.
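For example, the following command enables auto-tiering with tier reallocation and restricts data placement to the performance SSD and capacity disk Storage Classes. The LUN FQN /user1_vg/DISK1 is hypothetical; all of the options come from the syntax above.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑autoTier ‑preferredStorageClass perfSsd,capDisk ‑enableTierReallocation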
Allows access to the volume through the Fibre Channel (FC) ports. By default, FC access is enabled.
Maps the LUN globally to all hosts using the specified lun-number.
Renders the LUN volume invisible on the network. An inactive volume is not accessible and cannot be used by a SAN host.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the numbers of access requests are similar for read operations and write operations.
Allows access to the modified LUN through the iSCSI ports.
Specifies the ID or the FQN of the LUN that you are changing.
For controller, provide a string that includes the FQN or ID of the Controller.
For slot, specify the HBA slot number.
For port, specify the port number.
Sets the QoS settings of the clone repository to match the QoS settings of the LUN.
Specifies a new name for the LUN. The name that you provide must be between 1 and 40 characters. Use double quotation marks around names containing one or more spaces or dashes to prevent parsing errors. The name cannot contain any of the following:
/ (slash) and \ (backslash)
. (dot) and .. (dot-dot)
Embedded tabs
Identifies that the LUN cannot be used as a boot drive in the SAN. Not using the LUN as a boot drive is the default.
Disables access to the modified LUN through FC ports. By default, access is enabled.
Disables access to the modified LUN through use of the iSCSI protocol. By default, the LUN is not accessible through the iSCSI protocol.
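For example, the following command disables FC access to a LUN and enables iSCSI access instead. The LUN FQN /user1_vg/DISK1 is hypothetical; the ‑noFibreChannelAccess and ‑iscsiAccess options come from the syntax above.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑noFibreChannelAccess ‑iscsiAccess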
Indicates that the QoS settings of the clone repository are not automatically set to the QoS settings of the LUN. Not automatically matching the QoS settings is the default.
Identifies the Storage Classes for the Clone LUNs that are created for the LUN, based on the usage patterns of the Clone LUNs. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Identifies the Storage Classes for the LUN based on the usage patterns of the LUN data. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
Specifies the fully qualified name (FQN) or unique identifier (ID) of the QoS Storage Profile to apply to the LUN. Use double quotes around names containing one or more spaces or dashes to prevent parsing errors.
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. Single parity protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. Double parity protects against the loss of one or two drives with a slight cost of write performance. Double parity is implemented as a variant of the RAID 6 storage technology.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. Mirroring protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.
default. Indicates that the level of protection is determined by the storage class. For large form factor (capacity) hard disk drives, the RAID 6 level of protection is the default. For the other storage classes, the RAID 5 level of protection is the default.
normal. Indicates that the input requests and the output requests are accessing the data mostly in a random manner or in a mixed sequential and random manner.
aggressive. Indicates that the input requests and the output requests are accessing the data mostly in a sequential manner and that the workload is biased toward read operations.
conservative. Indicates that the input requests and the output requests are mostly sequential and that the workload is biased toward write operations.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
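For example, the following command sets single-tier QoS properties directly, combining ‑redundancy, ‑accessBias, and ‑ioBias as one alternative shown in the syntax above. The LUN FQN /user1_vg/DISK1 is hypothetical.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑singleTier ‑storageClass perfDisk ‑redundancy 2 ‑accessBias random ‑ioBias mixed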
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the numbers of access requests are similar for read operations and write operations.
Determines the amount of extra space to set aside as a repository for Clone LUNs. Specify the amount as a percentage of the maximum capacity for the LUN. The default capacity is set to 110%. If you do not want to create a repository, specify 0.
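For example, the following command sets the clone repository to 0% of the maximum capacity of the LUN, so that no repository space is set aside. The LUN FQN /user1_vg/DISK1 is hypothetical.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑repositoryPercentage 0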
premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. Single parity protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. Double parity protects against the loss of one or two drives with a slight cost of write performance. Double parity is implemented as a variant of the RAID 6 storage technology.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. Mirroring protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.
default. Indicates that the level of protection is determined by the storage class. For large form factor (capacity) hard disk drives, the RAID 6 level of protection is the default. For the other storage classes, the RAID 5 level of protection is the default.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
capDisk. Specifies that the data is stored on high-capacity, rotating hard disk drives (HDDs). This Storage Class optimizes capacity at some sacrifice of speed. For the FS1, this storage class provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed HDDs. This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on SSDs that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of read operations and for capacity. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Specifies that the LUN uses standard, single-tier QoS properties. A single-tier LUN has QoS properties that you set to specify the Storage Class and other performance parameters for storing the LUN data on the storage media. The QoS properties remain unchanged until you change them.
Indicates the type of storage media to be used for the LUN. If you do not use the ‑profile option, the ‑storageClass option is required if the Oracle FS System supports two or more Storage Classes.
capDisk. Specifies that the data is stored on high-capacity, rotating hard disk drives (HDDs). This Storage Class optimizes capacity at some sacrifice of speed. For the FS1, this storage class provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed HDDs. This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on SSDs that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of read operations and for capacity. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Specifies the FQN or the ID of the Storage Domain that contains the LUN. If you do not include this option, and there is only one Storage Domain on the Oracle FS System, the system uses the default Storage Domain. If you do not include this option, and there are multiple Storage Domains available, the system prompts you to specify a Storage Domain.
Opens access to the volume through the Controller ports that were previously set to restricted access.
For example, 0/1 specifies port 1 on HBA slot 0 of the specified Controller.
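For example, the following command masks port 1 on HBA slot 0 of a Controller so that the LUN is not visible through that port. The LUN FQN /user1_vg/DISK1 and the Controller name CONTROLLER-01 are hypothetical names used only for illustration.
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑maskedControllerPorts /CONTROLLER-01/0/1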
Prevents the LUN from being detected or accessed by any SAN host.
Specifies the FQN or the ID of the volume group to which the LUN is assigned.
Change the name of a LUN on the Oracle FS System and increase the logical capacity of the LUN. Change the priority to the highest processing queue setting for testing purposes, and make the LUN accessible immediately.
The FQN or the ID of the LUN: /user1_vg/DISK1
The new name of the LUN: DISK3
The size of the LUN, in gigabytes: 128
The priority level: premium
$ fscli lun ‑modify ‑lun /user1_vg/DISK1 ‑name DISK3 ‑capacity 128 ‑priority premium ‑active