Creates a LUN on the Oracle FS System.
lun ‑add ‑name lun‑name ‑capacity capacity [‑allocatedCapacity allocated‑logical‑capacity] { ‑profile performance‑profile‑id‑or‑fqn | ‑priority {premium | high | medium | low | archive} [‑storageClass {capDisk | perfDisk | perfSsd | capSsd}] { [‑redundancy {1 | 2}] [‑accessBias {sequential | random | mixed}] [‑ioBias {read | write | mixed}] | [‑raidLevel {raid5 | raid10 | raid6 | default}] [‑readAhead {default | normal | aggressive | conservative}] } } [{ ‑singleTier | ‑autoTier [‑preferredStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd} ]... ] [‑preferredRepositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd} ]... ] [{‑enableTierReallocation | ‑disableTierReallocation}] }] [‑repositoryPercentage capacity‑percentage] [{ ‑matchTierQos | [‑noMatchTierQos] [‑repositoryPriority {premium | high | medium | low | archive}] [‑repositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd}] { [‑repositoryRedundancy {1 | 2}] [‑repositoryAccessBias {sequential | random | mixed}] [‑repositoryIoBias {read | write | mixed}] | [‑repositoryRaidLevel {raid5 | raid10 | raid6 | default}] } }] [‑volumeGroup volume‑group‑id‑or‑fqn] [‑controller controller‑id‑or‑fqn ] [‑maskedControllerPorts /controller[/slot[/port]] [, /controller[/slot[/port]]]... ] [{ ‑unmapped | ‑globalMapping lun‑number | { ‑hostmap host‑id‑or‑fqn [, host‑id‑or‑fqn]... | ‑hostGroupMap host‑group‑id‑or‑fqn } ‑lunNumber lun‑number }] [{‑fibreChannelAccess | ‑noFibreChannelAccess}] [{‑iscsiAccess | ‑noIscsiAccess}] [‑storageDomain storage‑domain‑id‑or‑fqn] [{‑active | ‑inactive}] [‑copyPriority {auto | low | high}] [{‑conservativeMode | ‑noConservativeMode}] [{‑disableRefTagChecking | ‑enableRefTagChecking}] [‑bootLun | ‑noBootLun] [{‑sessionKey | ‑u admin‑user ‑oracleFS oracle‑fs‑system}] [{‑outputformat | ‑o} { text | xml }] [{‑timeout timeout‑in‑seconds | ‑verify | ‑usage | ‑example | ‑help}]
Do not use the following QoS options if you use the ‑profile option to apply a QoS Storage Profile to the LUN: ‑storageClass, ‑redundancy, ‑accessBias, ‑ioBias, ‑raidLevel, ‑readAhead.
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
Enables the LUN to be accessible and available for use immediately after the LUN is created. To ensure accurate mapping relationships, use the ‑globalMapping option, the ‑hostmap option, or the ‑hostGroupMap option with the ‑active option.
Specifies the amount of space, in gigabytes, that the Oracle FS System sets aside for a LUN. This number can be less than the amount that you specify for the addressable logical capacity for the LUN using the ‑capacity option. When the allocated capacity is less than the space that is requested for ‑capacity, the system creates what is called a thinly provisioned LUN.
If you do not provide a value for ‑allocatedCapacity, the Oracle FS System sets the allocated amount of space to the size defined by the ‑capacity option.
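As a sketch, the following command would create a thinly provisioned LUN; the LUN name, capacities, and priority shown here are illustrative only:

```shell
# Illustrative example: present 100 GB of addressable capacity to hosts,
# but initially set aside only 20 GB of physical space (thin provisioning).
$ fscli lun ‑add ‑name THIN1 ‑capacity 100 ‑allocatedCapacity 20 ‑priority medium
```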
Enables the auto-tiering capability, also called QoS Plus, as needed. An auto-tiered LUN monitors the data activity and automatically adjusts the QoS properties. Based on historical usage information, the system moves the data block to a Storage Class within the Storage Domain that can optimally store the data and best use the available storage types and capacities.
If you do not specify the singleTier or the autoTier option, the single-tiering feature is selected by default even if there is more than one storage class available on the Oracle FS System.
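For example, an auto-tiered LUN that favors the SSD and performance disk Storage Classes might be created as follows; the name and sizes are illustrative:

```shell
# Illustrative example: create an auto-tiered (QoS Plus) LUN that prefers
# the perfSsd and perfDisk Storage Classes and allows tier reallocation.
$ fscli lun ‑add ‑name TIER1 ‑capacity 50 ‑priority high ‑autoTier ‑preferredStorageClass perfSsd,perfDisk ‑enableTierReallocation
```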
Identifies that the LUN can be used as a boot drive in the SAN.
Specifies the storage space in gigabytes for the volume. If you intend to create clones of this LUN, be sure to include in this value sufficient space for the clone repository. The value you specify for ‑capacity is sometimes referred to as addressable capacity.
Allows the Oracle FS System to enter conservative mode for the specified LUN if a Controller node fails. In conservative mode, data is written to the storage array before the write operation is reported as complete to the SAN host. Allowing conservative mode is the default.
Specifies the fully qualified name (FQN) or the unique identifier (ID) of a Controller to which the LUN is assigned. By default, the Oracle FS System chooses the Controller. If included, the FQN format consists of /controller-name. For example, /CONTROLLER-01 specifies the Controller named CONTROLLER-01.
auto. Balances data movement rate and system performance. If you do not use the ‑copyPriority option, the default priority is auto.
low. Completes copy operations and data migration without degrading overall system performance. Completion rate might be slower.
high. Completes copy operations and data migration as quickly as possible. System performance might be degraded.
Instructs the HBA to bypass the check of whether a host has written to a specific area of the LUN before the host reads from that same area. If this option is omitted, read-before-write error events can be generated.
If this option is omitted, reference tag checking is enabled by default.
Turns off dynamic data migration for the LUN. The Oracle FS System does not migrate the LUN data to other Storage Classes.
By default, reference tag checking is enabled.
Turns on dynamic data migration for the LUN. The Oracle FS System migrates the LUN data to the appropriate Storage Class based on the usage patterns of the data. By default, tier reallocation is enabled.
Allows users to access the volume through the Fibre Channel (FC) ports. By default, FC access is enabled.
Maps the LUN globally to all hosts using the specified lun-number.
Specifies a mapping between a LUN and a host group. You identify the host group by providing its fully qualified name (FQN) or unique identifier (ID).
Identifies a mapping between a LUN and a SAN host. You identify the host by providing its unique identifier (ID) or fully qualified name (FQN).
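For example, a LUN might be mapped to two SAN hosts at creation time; the host FQNs and LUN number below are illustrative:

```shell
# Illustrative example: map the new LUN to hosts /host01 and /host02,
# presenting it to both hosts as LUN number 5.
$ fscli lun ‑add ‑name DB1 ‑capacity 32 ‑priority medium ‑hostmap /host01,/host02 ‑lunNumber 5
```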
Renders the LUN volume invisible on the network. An inactive volume is not accessible and cannot be used by a SAN host.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the number of access requests is similar for read operations and write operations.
Allows access to the LUN through the iSCSI ports.
For controller, provide a string that includes the FQN or ID of the Controller.
For slot, specify the HBA slot number.
For port, specify the port number.
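For example, masking can be applied at the Controller, slot, or port level; the Controller name and slot number below are illustrative:

```shell
# Illustrative example: do not present the LUN on any port in HBA slot 2
# of CONTROLLER-02.
$ fscli lun ‑add ‑name MASKED1 ‑capacity 16 ‑priority medium ‑maskedControllerPorts /CONTROLLER‑02/2
```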
Sets the QoS settings of the clone repository to match the QoS settings of the LUN.
Specifies the name of the LUN that you are creating on the Oracle FS System. Use double quotation marks around names that contain dashes. The following characters and names are not valid in LUN names:
Non-printable characters, including decimal ASCII characters 0 through 31
/ (slash) and \ (backslash)
. (dot) and .. (dot-dot)
Embedded tabs
Identifies that the LUN cannot be used as a boot drive in the SAN. Not using the LUN as a boot drive is the default.
Disables access to the LUN through FC ports. By default, access is enabled.
Disables access to the LUN through use of the iSCSI protocol. By default, the LUN is not accessible through the iSCSI protocol.
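For example, to restrict a LUN to iSCSI access only, the two protocol options might be combined as follows; the name and size are illustrative:

```shell
# Illustrative example: allow iSCSI access and disable Fibre Channel access.
$ fscli lun ‑add ‑name ISCSI1 ‑capacity 16 ‑priority medium ‑iscsiAccess ‑noFibreChannelAccess
```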
Indicates that the QoS settings of the clone repository are not automatically set to the QoS settings of the LUN. Not automatically matching the QoS settings is the default.
Identifies the Storage Classes for the Clone LUNs that are created for the LUN, based on the usage patterns of the Clone LUNs. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Identifies the Storage Classes for the LUN based on the usage patterns of the LUN data. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
premium. Indicates the highest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For auto-tiered LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
Specifies the fully qualified name (FQN) or unique identifier (ID) of the QoS Storage Profile to apply to the LUN. Use double quotes around names containing one or more spaces or dashes to prevent parsing errors.
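For example, a profile might be applied in place of the individual QoS options; the profile name here is hypothetical:

```shell
# Illustrative example: apply a QoS Storage Profile named "Oracle OLTP"
# instead of specifying ‑priority and the other individual QoS options.
$ fscli lun ‑add ‑name OLTP1 ‑capacity 64 ‑profile "Oracle OLTP"
```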
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. This parity level protects against the loss of one drive.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. This parity level protects against the loss of one or two drives with a slight cost to write performance.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. This RAID level protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations.
default. Indicates that the level of RAID protection is determined by the Storage Class. For large form factor (capacity) hard disk drives, RAID 6 is the default level of protection. For the other Storage Classes, RAID 5 is the default level of protection.
Do not use the ‑raidLevel option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.
normal. Indicates that the input requests and the output requests are accessing the data mostly in a random manner or in a mixed sequential and random manner.
aggressive. Indicates that the input requests and the output requests are accessing the data mostly in a sequential manner and that the workload is biased toward read operations.
conservative. Indicates that the input requests and the output requests are mostly sequential and that the workload is biased toward write operations.
Do not use the ‑readAhead option if you use the ‑profile option to apply a QoS Storage Profile to the volume.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the number of access requests is similar for read operations and write operations.
Determines the amount of extra space to set aside as a repository for Clone LUNs. Specify the amount as a percentage of the maximum capacity for the LUN. The default capacity is set to 110%. If you do not want to create a repository, specify 0.
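For example, the repository can be sized larger than the default or suppressed entirely; the name and values below are illustrative:

```shell
# Illustrative example: set aside a clone repository equal to 150% of the
# LUN capacity. Specify ‑repositoryPercentage 0 to create no repository.
$ fscli lun ‑add ‑name CLONESRC ‑capacity 20 ‑priority medium ‑repositoryPercentage 150
```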
premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. Single parity protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. Double parity protects against the loss of one or two drives with a slight cost of write performance. Double parity is implemented as a variant of the RAID 6 storage technology.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. Mirroring protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.
default. Indicates that the level of protection is determined by the storage class. For large form factor (capacity) hard disk drives, the RAID 6 level of protection is the default. For the other storage classes, the RAID 5 level of protection is the default.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
capDisk. Specifies that the data is stored on high-capacity, rotating hard disk drives (HDDs). This Storage Class optimizes capacity at some sacrifice of speed. For the FS1, this Storage Class provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed HDDs. This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on SSDs that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of read operations and for capacity. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Creates a LUN that uses standard QoS properties. A single-tier LUN has QoS properties that you set to specify the Storage Class and other performance parameters for storing the LUN data onto the storage media. The QoS properties remain unchanged until you change these properties.
If you do not specify the singleTier or the autoTier option, the single-tiering feature is selected by default even if there is more than one storage class available on the Oracle FS System.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
Do not use the ‑storageClass option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.
Specifies the fully qualified name (FQN) or the unique identifier (ID) of the Storage Domain that contains the LUN. If you do not include this option and there is only one Storage Domain on the Oracle FS System, the system creates the LUN in the default Storage Domain. If you do not include this option and multiple Storage Domains are available, the system prompts you to specify a Storage Domain.
Prevents the LUN from being detected or accessed by any SAN host.
Specifies the FQN or the ID of the volume group to which the LUN is assigned. If you do not include this option, the LUN is assigned to the root level volume group.
Create a LUN that has the following properties:
The name of the new LUN: DISK1
The size of the LUN: 16 GB
The priority of the LUN: medium
The Storage Class of the LUN: high-capacity hard disk drive
The volume group in which the LUN resides: /user1_vg
The Controller to which the LUN is assigned: /CONTROLLER-01
The Storage Domain in which the LUN is created: /sd1
$ fscli lun ‑add ‑name DISK1 ‑capacity 16 ‑priority medium ‑storageClass capDisk ‑volumeGroup /user1_vg ‑controller /CONTROLLER‑01 ‑storageDomain /sd1