Creates a LUN by copying the contents and the settings of an existing Clone LUN.
clone_lun -copy -source source-clone-lun-id-or-fqn -name clone-lun-name [-capacity capacity] [-allocatedCapacity allocated-logical-capacity] [{ -profile performance-profile-id-or-fqn | -priority {premium | high | medium | low | archive} [-storageClass {capDisk | perfDisk | perfSsd | capSsd}] { [-redundancy {1 | 2}] [-accessBias {sequential | random | mixed}] [-ioBias {read | write | mixed}] | [-raidLevel {raid5 | raid10 | raid6 | default}] [-readAhead {default | normal | aggressive | conservative}] } }] [{ -singleTier | -autoTier [-preferredStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd}]...] [-preferredRepositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd} [, {capDisk | perfDisk | perfSsd | capSsd}]...] [{-enableTierReallocation | -disableTierReallocation}] }] [-repositoryPercentage clone-capacity] [{ -matchTierQos | [-noMatchTierQos] [-repositoryPriority {premium | high | medium | low | archive}] [-repositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd}] { [-repositoryRedundancy {1 | 2}] [-repositoryAccessBias {sequential | random | mixed}] [-repositoryIoBias {read | write | mixed}] | [-repositoryRaidLevel {raid5 | raid10 | raid6 | default}] } }] [-volumeGroup volume-group-id-or-fqn] [{ -unmapped | -globalMapping lun-number | -hostmap host-id-or-fqn [, host-id-or-fqn]... -lunNumber lun-number | -hostGroupMap host-group-id-or-fqn -lunNumber lun-number }] [{-fibreChannelAccess | -noFibreChannelAccess}] [{-iscsiAccess | -noIscsiAccess}] [{ -maskedControllerPorts /controller[/slot[/port]] [, /controller[/slot[/port]]]... | -unMaskedControllerPorts /controller[/slot[/port]] [, /controller[/slot[/port]]]... }] [-storageDomain storage-domain-id-or-fqn] [{-active | -inactive}] [-copyPriority {auto | low | high}] [{-conservativeMode | -noConservativeMode}] [{-disableRefTagChecking | -enableRefTagChecking}] [-bootLun | -noBootLun] [{-sessionKey | -u admin-user -oracleFS oracle-fs-system}] [{-outputformat | -o} {text | xml}] [{-timeout timeout-in-seconds | -verify | -usage | -example | -help}]
Storage capacity
Priority level
Redundancy setting
Volume group attributes
Single-tiering or auto-tiering capability
The new LUN consumes space from the repository that was allocated for clones when the source LUN was created. You can adjust the amount of space that is available for clones of a LUN by using the lun ‑modify command.
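As a hedged sketch, you might enlarge the clone repository of the source LUN before copying it so that additional copies have room to grow. The volume name and percentage below are illustrative, and the assumption that lun ‑modify accepts a repository-percentage option analogous to the one documented here should be confirmed with lun ‑usage on your system:

```shell
# Illustrative only: enlarge the clone repository of the source LUN to 150%
# of its capacity. The volume name and the -repositoryPercentage option on
# lun -modify are assumptions; verify with: fscli lun -usage
fscli lun -modify -lun /user1_vg/SOURCE_DISK -repositoryPercentage 150
```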
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
Enables the LUN to be accessible and available for use immediately after the LUN is created. To ensure accurate mapping relationships, use the ‑globalMapping option, the ‑hostmap option, or the ‑hostGroupMap option with the ‑active option. Enabling the LUN to be accessible is the default.
Specifies the amount of space, in gigabytes, that the Oracle FS System sets aside for a LUN. This number can be less than the addressable logical capacity for the LUN as defined by the ‑capacity option. When the allocated capacity is less than the space that is requested for ‑capacity, the system creates what is called a thinly provisioned LUN.
If you do not provide a value for ‑allocatedCapacity, Oracle FS System sets the allocated amount of space to the size used by the source LUN.
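For example, a sketch of a thinly provisioned copy: the addressable capacity is 500 GB, but only 100 GB is allocated up front. The LUN names and sizes are illustrative:

```shell
# Illustrative: 500 GB addressable, 100 GB allocated up front,
# which makes the new LUN thinly provisioned.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name THIN_DISK \
    -capacity 500 -allocatedCapacity 100
```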
Enables the auto-tiering capability, which is also called QoS Plus. An auto-tiered LUN monitors the data activity and automatically adjusts the QoS properties. Based on historical usage information, the system moves data blocks to a Storage Class within the Storage Domain that can optimally store the data and best use the available storage types and capacities.
If you specify neither the ‑singleTier option nor the ‑autoTier option, the copy inherits the tiering setting of the source LUN: a single-tier source produces a single-tier copy, and an auto-tier source produces an auto-tier copy.
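A hedged sketch of an auto-tiered copy that prefers the SSD Storage Classes; the LUN names are illustrative, and QoS Plus may still place cold data on other installed Storage Classes:

```shell
# Illustrative: copy the clone as an auto-tiered LUN, prefer SSD classes,
# and leave tier reallocation enabled so data can migrate between tiers.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name TIERED_DISK \
    -autoTier -preferredStorageClass perfSsd,capSsd -enableTierReallocation
```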
Identifies that the LUN can be used as a boot drive in the SAN.
Specifies the storage space in gigabytes for the new LUN. Specify this value if you want the capacity to be different from the capacity of the source Clone LUN. This space is sometimes referred to as addressable capacity.
Allows the Oracle FS System to enter conservative mode for the specified LUN if a Controller node fails. In conservative mode, data is written to the storage array before the write operation is reported as complete to the SAN host. Allowing conservative mode is the default.
auto. Balances data movement rate and system performance. If you do not use the ‑copyPriority option, the default priority is auto.
low. Completes copy operations and data migration without degrading overall system performance. Completion rate might be slower.
high. Completes copy operations and data migration as quickly as possible. System performance might be degraded.
Instructs the HBA to bypass the check of whether a host has written to a specific area of the LUN before the host reads from that same area. If this option is omitted, read-before-write error events can be generated.
If this option is omitted, reference tag checking is enabled by default.
Turns off dynamic data migration for the LUN. The Oracle FS System does not migrate the LUN data to other Storage Classes.
By default, reference tag checking is enabled.
Turns on dynamic data migration for the LUN. The Oracle FS System migrates the LUN data to the appropriate Storage Class based on the usage patterns of the data. By default, tier reallocation is enabled.
Allows access to the volume through the Fibre Channel (FC) ports. By default, FC access is enabled.
Maps the LUN globally to all hosts using the specified lun-number.
Specifies a mapping relationship between a LUN and a host group. You identify the host group by providing a fully qualified name (FQN) or a unique identifier (ID).
Specifies a mapping relationship between a LUN and a SAN host. You identify the host by providing a unique identifier (ID) or a fully qualified name (FQN).
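As a sketch, the following invocation would map the new LUN to two SAN hosts and activate it immediately; the host FQNs and LUN number are assumptions for this example:

```shell
# Illustrative: present the copy to two hosts as LUN number 5 and make it
# active (visible) as soon as it is created. Host FQNs are hypothetical.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name MAPPED_DISK \
    -hostmap /host_a,/host_b -lunNumber 5 -active
```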
Renders the LUN volume invisible on the network. An inactive volume is not accessible and cannot be used by a SAN host.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the access requests are divided about equally between read operations and write operations.
Allows access to the LUN through the iSCSI ports.
For controller, provide a string that includes the FQN or ID of the Controller.
For slot, specify the HBA slot number.
For port, specify the port number.
Sets the QoS settings of the clone repository to match the QoS settings of the LUN.
Specifies a name for the LUN. The name that you provide must be between 1 and 40 characters. Use double quotation marks around names containing one or more spaces or dashes to prevent parsing errors. The name cannot include any of the following:
Tab
/ (slash) and \ (backslash)
. (dot) and .. (dot-dot)
Embedded tabs
Identifies that the LUN cannot be used as a boot drive in the SAN. Not using the LUN as a boot drive is the default.
Disables access to the LUN through FC ports. By default, access is enabled.
Disables access to the LUN through the iSCSI ports. By default, the LUN is not accessible through the iSCSI protocol.
Indicates that the QoS settings of the clone repository are not automatically set to the QoS settings of the LUN. Not automatically matching the QoS settings is the default.
Identifies the Storage Classes for the Clone LUNs that are created for the new LUN, based on the usage patterns of the Clone LUNs. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Identifies the Storage Classes for the LUN based on the usage patterns of the LUN data. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.
capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
Specifies the fully qualified name (FQN) or unique identifier (ID) of the QoS Storage Profile to apply to the LUN. Use double quotation marks around names containing one or more spaces or dashes to prevent parsing errors. If the ‑profile option is omitted, the profile of the source Clone LUN is applied.
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. This parity level protects against the loss of one drive.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. This parity level protects against the loss of one or two drives with a slight cost to write performance.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. This RAID level protects against the loss of at least one drive, and possibly more, while improving the performance of random write operations.
default. Indicates that the level of RAID protection is determined by the Storage Class. For large form factor (capacity) hard disk drives, RAID 6 is the default level of protection. For the other Storage Classes, RAID 5 is the default level of protection.
Do not use the ‑raidLevel option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.
Indicates that the input requests and the output requests are accessing the data mostly in a random manner or in a mixed sequential and random manner.
Indicates that the input requests and the output requests are accessing the data mostly in a sequential manner and that the workload is biased toward read operations.
Indicates that the input requests and the output requests are mostly sequential and that the workload is biased toward write operations.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.
random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.
mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.
read. Indicates that most of the access requests are for read operations.
write. Indicates that most of the access requests are for write operations.
mixed. Indicates that the access requests are divided about equally between read operations and write operations.
Determines the amount of extra space to set aside as a repository for Clone LUNs. Specify the amount as a percentage of the maximum capacity for the LUN. The default capacity is set to 110%. If you do not want to create a repository, specify 0.
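For example, a sketch of a copy created without a clone repository, so that no space is set aside for clones of the new LUN; the LUN names are illustrative:

```shell
# Illustrative: -repositoryPercentage 0 reserves no clone repository
# for the new LUN. Space for clones can be adjusted later if needed.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name FLAT_DISK \
    -repositoryPercentage 0
```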
premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.
high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.
medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.
low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.
archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.
raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. Single parity protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.
raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. Double parity protects against the loss of one or two drives with a slight cost of write performance. Double parity is implemented as a variant of the RAID 6 storage technology.
raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. Mirroring protects against the loss of at least one drive, and possibly more, while improving the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.
default. Indicates that the level of protection is determined by the Storage Class. For large form factor (capacity) hard disk drives, the RAID 6 level of protection is the default. For the other Storage Classes, the RAID 5 level of protection is the default.
1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.
2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.
capDisk. Specifies that the data is stored on high-capacity, rotating hard disk drives (HDDs). This Storage Class optimizes capacity at some sacrifice of speed. For the FS1, this Storage Class provides the lowest cost for each GB of capacity.
perfDisk. Specifies that the data is stored on high-speed HDDs. This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.
perfSsd. Specifies that the data is stored on SSDs that are optimized for the performance of balanced read and write operations.
capSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of read operations and for capacity. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.
Creates a LUN that uses standard QoS properties. A single-tier LUN has QoS properties that you set to specify the Storage Class and other performance parameters for storing the LUN data onto the storage media. The QoS properties remain unchanged until you change these properties.
If you specify neither the ‑singleTier option nor the ‑autoTier option, the copy inherits the tiering setting of the source LUN: a single-tier source produces a single-tier copy, and an auto-tier source produces an auto-tier copy.
Specifies the FQN or unique identifier (ID) of the source Clone LUN.
Indicates the type of storage media to be used for the clone repository.
If you do not use the ‑profile option, the ‑storageClass option is required if the Oracle FS System supports two or more Storage Classes.
Do not use the ‑storageClass option if you use the ‑profile option to match the QoS settings of the LUN.
Specifies the FQN or GUID of the Storage Domain that contains the clone repository. If you omit this option and there is only one Storage Domain on the Oracle FS System, the system creates the clone repository in the default Storage Domain. If you omit this option and multiple Storage Domains are available, the system prompts you to specify a Storage Domain.
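As a sketch, naming the Storage Domain explicitly avoids the interactive prompt on systems with several domains; the domain and LUN names are illustrative:

```shell
# Illustrative: place the clone repository in a specific Storage Domain.
# The domain FQN /Domain_HR is a hypothetical name for this sketch.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name DOMAIN_DISK \
    -storageDomain /Domain_HR
```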
Opens access to the volume through the Controller ports that were previously set to restricted access.
For example, 0/1 specifies port 1 on HBA slot 0 of the specified Controller.
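Following the port-addressing form above, a hedged sketch of masking the new LUN from one Controller port; the Controller name is an assumption:

```shell
# Illustrative: hide the new LUN from port 1 on HBA slot 0 of one
# Controller. /CONTROLLER-01 is a hypothetical Controller FQN.
fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name MASKED_DISK \
    -maskedControllerPorts /CONTROLLER-01/0/1
```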
Prevents the LUN from being detected or accessed by any SAN host.
Specifies the FQN or the ID of the volume group to which the Clone LUN is assigned. If you do not include this option, the Clone LUN is assigned to the root level volume group.
Copy a Clone LUN to create a LUN on the Oracle FS System. Increase the maximum logical capacity and apply a different Storage Profile to the new LUN.
The fully qualified name (FQN) of the source Clone LUN: /user1_vg/CLONE_DISK1
The name of the new LUN: DISK2
The size of the new LUN: 64 GB
The Storage Profile to apply: /user_adv_group1
$ fscli clone_lun -copy -source /user1_vg/CLONE_DISK1 -name DISK2 -capacity 64 -profile /user_adv_group1