clone_lun copy

Creates a LUN by copying the contents and the settings of an existing Clone LUN.

SYNOPSIS

clone_lun ‑copy 
   ‑source source‑clone‑lun‑id‑or‑fqn
   ‑name clone‑lun‑name
   [‑capacity capacity]
   [‑allocatedCapacity allocated‑logical‑capacity]
   [{ ‑profile performance‑profile‑id‑or‑fqn
    | ‑priority {premium | high | medium | low | archive}
      [‑storageClass {capDisk | perfDisk | perfSsd | capSsd}]
      { [‑redundancy {1 | 2}]
        [‑accessBias {sequential | random | mixed} ]
        [‑ioBias {read | write | mixed}]
      | [‑raidLevel {raid5 | raid10 | raid6 | default}]
        [‑readAhead {default | normal | aggressive | conservative}]
      }
    }]
   [{ ‑singleTier
    | ‑autoTier
      [‑preferredStorageClass {capDisk | perfDisk | perfSsd | capSsd}
                     [, {capDisk | perfDisk | perfSsd | capSsd} ]... ]
      [‑preferredRepositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd}
                     [, {capDisk | perfDisk | perfSsd | capSsd} ]... ]
      [{‑enableTierReallocation | ‑disableTierReallocation}]
    }]
   [‑repositoryPercentage clone‑capacity]
   [{ ‑matchTierQos
    | [‑noMatchTierQos]
      [‑repositoryPriority {premium | high | medium | low | archive}]
      [‑repositoryStorageClass {capDisk | perfDisk | perfSsd | capSsd}]
      { [‑repositoryRedundancy {1 | 2}]
        [‑repositoryAccessBias {sequential | random | mixed}]
        [‑repositoryIoBias {read | write | mixed}]
      | [‑repositoryRaidLevel {raid5 | raid10 | raid6 | default}]
      }
    }]
   [‑volumeGroup volume‑group‑id‑or‑fqn]
   [{ ‑unmapped
    | ‑globalMapping lun‑number
    | ‑hostmap host‑id‑or‑fqn [, host‑id‑or‑fqn]...
      ‑lunNumber lun‑number
    | ‑hostGroupMap host‑group‑id‑or‑fqn
      ‑lunNumber lun‑number
    }]
   [{‑fibreChannelAccess | ‑noFibreChannelAccess}]
   [{‑iscsiAccess | ‑noIscsiAccess}]
   [{ ‑maskedControllerPorts   /controller[/slot[/port]]
                           [, /controller[/slot[/port]]]...
    | ‑unMaskedControllerPorts /controller[/slot[/port]]
                           [, /controller[/slot[/port]]]...
    }]
   [‑storageDomain storage‑domain‑id‑or‑fqn]
   [{‑active | ‑inactive}]
   [‑copyPriority {auto | low | high}]
   [{‑conservativeMode | ‑noConservativeMode}]
   [{‑disableRefTagChecking | ‑enableRefTagChecking}]
   [{‑bootLun | ‑noBootLun}]

   [{‑sessionKey | ‑u admin‑user ‑oracleFS oracle‑fs‑system}]
   [{‑outputformat | ‑o} { text | xml }]
   [{‑timeout timeout‑in‑seconds | ‑verify | ‑usage | ‑example | ‑help}] 

DESCRIPTION

You can run the clone_lun ‑copy command to copy the contents of a Clone LUN to a new LUN. The new LUN is fully independent, and does not reflect changes to the source Clone LUN or its source LUN. By default, the properties of the source Clone LUN are applied to the new LUN. Use the clone_lun ‑copy options to assign different properties to the new LUN, including:
  • Storage capacity

  • Priority level

  • Redundancy setting

  • Volume group attributes

  • Single-tiering or auto-tiering capability

The new LUN consumes space from the repository that was allocated for clones when the source LUN was created. You can adjust the amount of space that is available for clones of a LUN by using the lun ‑modify command.
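
For example, a minimal invocation needs only the source Clone LUN and a name for the new LUN; the volume group and LUN names shown here are hypothetical:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name LUN_A_COPY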

Note: Only administrators with primary administrator, admin1, or admin2 roles are authorized to run the clone_lun ‑copy command.

OPTIONS

accessBias
Identifies the expected access pattern for the logical volume. Valid biases:
  • sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.

  • random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.

  • mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.

Note: Do not use the ‑accessBias option if you use the ‑profile option to apply a QoS Storage Profile to the volume.
active

Enables the LUN to be accessible and available for use immediately after the LUN is created. To ensure accurate mapping relationships, use the ‑globalMapping option, the ‑hostmap option, or the ‑hostGroupMap option with the ‑active option. Enabling the LUN to be accessible is the default.

allocatedCapacity

Specifies the amount of space, in gigabytes, that the Oracle FS System sets aside for a LUN. This number can be less than the addressable logical capacity for the LUN as defined by the ‑capacity option. When the allocated capacity is less than the space that is requested for ‑capacity, the system creates what is called a thinly provisioned LUN.

If you do not provide a value for ‑allocatedCapacity, the Oracle FS System sets the allocated amount of space to the size used by the source LUN.
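
For example, the following invocation (with hypothetical source and LUN names) creates a thinly provisioned copy by requesting 100 GB of addressable capacity while allocating only 25 GB up front:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name THIN_COPY ‑capacity 100 ‑allocatedCapacity 25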

autoTier

Enables the auto-tiering capability, also called QoS Plus, as needed. An auto-tiered LUN monitors the data activity and automatically adjusts the QoS properties. Based on historical usage information, the system moves data blocks to a Storage Class within the Storage Domain that can optimally store the data and best use the available storage types and capacities.

If you do not specify the ‑singleTier option or the ‑autoTier option, the new LUN inherits the tiering setting of the source Clone LUN: a single-tier source produces a single-tier copy, and an auto-tier source produces an auto-tier copy.
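
For example, the following invocation (with hypothetical names) creates an auto-tiered copy that prefers the perfSsd Storage Class and keeps tier reallocation enabled:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name AUTO_COPY ‑autoTier ‑preferredStorageClass perfSsd ‑enableTierReallocation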

bootLun

Identifies that the LUN can be used as a boot drive in the SAN.

capacity

Specifies the storage space in gigabytes for the new LUN. Specify this value if you want the capacity to be different from the capacity of the source Clone LUN. This space is sometimes referred to as addressable capacity.

conservativeMode

Allows the Oracle FS System to enter conservative mode for the specified LUN if a Controller node fails. In conservative mode, data is written to the storage array before the write operation is reported as complete to the SAN host. Allowing conservative mode is the default.

copyPriority
Identifies the setting to use when copying or migrating data from one location to another. To control the impact on system performance, you can specify one of the following priority levels:
  • auto. Balances data movement rate and system performance. If you do not use the ‑copyPriority option, the default priority is auto.

  • low. Completes copy operations and data migration without degrading overall system performance. Completion rate might be slower.

  • high. Completes copy operations and data migration as quickly as possible. System performance might be degraded.

disableRefTagChecking

Instructs the HBA to bypass the check of whether a host has written to a specific area of the LUN before the host reads from that same area.

If this option is omitted, reference tag checking remains enabled by default, and read-before-write error events can be generated.

disableTierReallocation

Turns off dynamic data migration for the LUN. The Oracle FS System does not migrate the LUN data to other Storage Classes.

enableRefTagChecking
Instructs the HBA to check whether a SAN host has written to a specific area of the LUN before the host reads from that area. When a host reads from a specific area before writing to that area, the Oracle FS System generates a read-before-write error event.
Note: This check is sometimes called a reference tag check and is a part of the process for ensuring data protection integrity.

By default, reference tag checking is enabled.

enableTierReallocation

Turns on dynamic data migration for the LUN. The Oracle FS System migrates the LUN data to the appropriate Storage Class based on the usage patterns of the data. By default, tier reallocation is enabled.

fibreChannelAccess

Allows access to the volume through the Fibre Channel (FC) ports. By default, FC access is enabled.

globalMapping

Maps the LUN globally to all hosts by using the specified lun-number.

hostGroupMap

Specifies a mapping relationship between a LUN and a host group. You identify the host group by providing its fully qualified name (FQN) or unique ID.

hostmap

Specifies a mapping relationship between a LUN and a SAN host. You identify the host by providing a unique ID (ID) or a fully qualified name (FQN).

inactive

Renders the LUN volume invisible on the network. An inactive volume is not accessible and cannot be used by a SAN host.

ioBias
Indicates the typical read-write ratio. Valid I/O biases:
  • read. Indicates that most of the access requests are for read operations.

  • write. Indicates that most of the access requests are for write operations.

  • mixed. Indicates that the number of access requests is similar for read operations and write operations.

A mixed read-write ratio is the default. Do not use the ‑ioBias option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.
iscsiAccess

Allows access to the LUN through the iSCSI ports. By default, iSCSI access is disabled.

lunNumber
Identifies the logical unit number that is used to present a LUN to a SAN host or a host group.
Note: The clone_lun ‑copy command does not map the new LUN if the host already contains a LUN with the specified number. You can run the lun ‑modify command to map the new LUN after determining the number to use.
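
For example, the following invocation maps the new LUN to a single SAN host as LUN number 7; the host name and other names are hypothetical:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name MAPPED_COPY ‑hostmap host01 ‑lunNumber 7
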
maskedControllerPorts
Restricts access to the LUN through one or more Controller ports. Use the following format to mask all of the ports in a Controller, to mask all of the ports for a given Controller slot, or to mask only a specific Controller port: /controller[/slot[/port]]
  • For controller, provide a string that includes the FQN or ID of the Controller.

  • For slot, specify the HBA slot number.

  • For port, specify the port number.

If you do not include this option, the LUN becomes accessible on all Controller ports on the assigned node by default.
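
For example, the following invocation masks port 0 on HBA slot 1 of one Controller; the Controller name and other names are hypothetical:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name MASKED_COPY ‑maskedControllerPorts /CONTROLLER-01/1/0
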
matchTierQos

Sets the QoS settings of the clone repository to match the QoS settings of the LUN.

name

Specifies a name for the LUN. The name that you provide must be between 1 and 40 characters. Use double quotation marks around names containing one or more spaces or dashes to prevent parsing errors.

The following characters are invalid in a LUN name:
  • Tab characters, including embedded tabs

  • / (slash) and \ (backslash)

  • . (dot) and .. (dot-dot)

Note: The clone_lun ‑copy command does not create the LUN if the Oracle FS System already contains a LUN with the specified name within the same volume group.
noBootLun

Identifies that the LUN cannot be used as a boot drive in the SAN. Not using the LUN as a boot drive is the default.

noConservativeMode
Prevents the Oracle FS System from entering conservative mode for the specified LUN.
Caution: If a Controller node fails, the system does not enable write-through mode, as it normally would. If the remaining node also fails, any data that has not been written to the storage arrays is lost.
noFibreChannelAccess

Disables access to the LUN through FC ports. By default, access is enabled.

noIscsiAccess

Disables access to the LUN through use of the iSCSI protocol. By default, the LUN is not accessible through the iSCSI protocol.
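
For example, the following invocation (with hypothetical names) creates a copy that is reachable only over iSCSI:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name ISCSI_COPY ‑noFibreChannelAccess ‑iscsiAccess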

noMatchTierQos

Indicates that the QoS settings of the clone repository are not automatically set to the QoS settings of the LUN. Not automatically matching the QoS settings is the default.
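
For example, the following invocation (with hypothetical names) declines the QoS match and sets separate QoS values for the clone repository:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name COPY_B ‑noMatchTierQos ‑repositoryPriority low ‑repositoryStorageClass capDisk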

preferredRepositoryStorageClass

Identifies the Storage Classes for the Clone LUNs that are created for the new LUN, based on the usage patterns of the Clone LUNs. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.

Specify one or more Storage Classes:
  • capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.

  • perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.

  • perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.

  • capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.

If you do not include this option, all Storage Classes can be used based on usage patterns.
preferredStorageClass

Identifies the Storage Classes for the LUN based on the usage patterns of the LUN data. The Storage Classes do not need to be physically present on the Oracle FS System when you set this property. Storage Classes that are not present can be used after they are installed in the Oracle FS System.

Specify one or more Storage Classes:
  • capDisk. Specifies that the data is stored on high-capacity, rotating HDDs. This Storage Class optimizes capacity at some sacrifice of speed. For a storage system that does not include tape storage as an option, this Storage Class always provides the lowest cost for each GB of capacity.

  • perfDisk. Specifies that the data is stored on high-speed hard disk drives (HDDs). This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.

  • perfSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of balanced read and write operations.

  • capSsd. Specifies that the data is stored on SSDs that are optimized for the performance of capacity and for read operations. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.

If you do not include the ‑preferredStorageClass option, all Storage Classes can be used based on usage patterns.
priority
Assigns a priority level that determines the system response to incoming I/O requests against the LUN. In general, the higher the priority level, the faster the system can respond to an access request. Valid priority levels:
  • premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.

  • high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.

  • medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.

  • low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.

  • archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.

Note: When copying a Clone LUN to create a new LUN, you can include the ‑priority option or the ‑profile option. Do not include both options.
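
For example, the following invocation (with hypothetical names) assigns QoS properties by priority rather than by Storage Profile:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name HIGH_COPY ‑priority high ‑storageClass perfSsd ‑redundancy 2
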
profile

Specifies the fully qualified name (FQN) or unique identifier (ID) of the QoS Storage Profile to apply to the LUN. Use double quotes around names containing one or more spaces or dashes to prevent parsing errors. If the ‑profile option is omitted, the profile of the source Clone LUN is applied.

Note: When copying a Clone LUN to create a new LUN, you can include either the ‑profile option or the ‑priority option. Do not include both options.
raidLevel
Specifies the level of RAID data protection to use for the logical volume. Valid values:
raid5

Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. This parity level protects against the loss of one drive.

raid6

Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. This parity level protects against the loss of one or two drives with a slight cost to write performance.

raid10

Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. This RAID level protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations.

default

Indicates that the level of RAID protection is determined by the Storage Class. For large form factor (capacity) hard disk drives, RAID 6 is the default level of protection. For the other Storage Classes, RAID 5 is the default level of protection.

Do not use the ‑raidLevel option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.

readAhead
Identifies the read‑ahead policy to use for the logical volume for sequential read operations. The policy determines the amount of additional data, if any, that the system places into the Controller cache. Valid policies:
default and normal

Indicates that the input requests and the output requests are accessing the data mostly in a random manner or in a mixed sequential and random manner.

aggressive

Indicates that the input requests and the output requests are accessing the data mostly in a sequential manner and that the workload is biased toward read operations.

conservative

Indicates that the input requests and the output requests are mostly sequential and that the workload is biased toward write operations.

redundancy
Identifies the number of copies of the parity bits that the Oracle FS System creates for the LUN. Valid values:
  • 1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.

  • 2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.

    Note: Double parity is the default for large form factor (capacity) hard disk drives. Single parity is the default for the other storage classes.
Do not use the ‑redundancy option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.
repositoryAccessBias
Identifies the expected access pattern for the Clone LUNs that are created for the new LUN. Valid biases:
  • sequential. Indicates that the read requests and the write requests operate on the data mostly by accessing the records one after the other in a physical order.

  • random. Indicates that the read requests and the write requests operate on the data mostly by accessing the records in an arbitrary order.

  • mixed. Indicates that the read requests and the write requests operate on the data sometimes in sequential order and sometimes in random order. Accessing in a mixed pattern is the default.

Do not use the ‑repositoryAccessBias option if you use the ‑matchTierQos option to match the QoS settings of the Clone LUN source.
repositoryIoBias
Indicates the typical read-write ratio for the Clone LUNs that are created for the new LUN. Valid I/O biases:
  • read. Indicates that most of the access requests are for read operations.

  • write. Indicates that most of the access requests are for write operations.

  • mixed. Indicates that the number of access requests is similar for read operations and write operations.

A mixed read-write ratio is the default. Do not use the ‑repositoryIoBias option if you use the ‑matchTierQos option to match the QoS settings of the source Clone LUN.
repositoryPercentage

Determines the amount of extra space to set aside as a repository for Clone LUNs. Specify the amount as a percentage of the maximum capacity of the LUN. The default is 110%. If you do not want to create a repository, specify 0.
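
For example, the following invocation (with hypothetical names) creates a copy without setting aside a clone repository:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name NO_REPO_COPY ‑repositoryPercentage 0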

repositoryPriority
Assigns a priority level to determine the system response to incoming I/O requests against all Clone LUNs that are created from the new LUN. In general, the higher the priority level, the faster the system can respond to an access request. Valid priority levels:
  • premium. Indicates the highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the highest priority when the system migrates the data to the higher-performing storage tiers.

  • high. Indicates the next highest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next highest priority when the system migrates the data to the higher-performing storage tiers.

  • medium. Indicates an intermediate priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive an intermediate priority when the system migrates the data to the higher-performing storage tiers.

  • low. Indicates the next to lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the next to lowest priority when the system migrates the data to the higher-performing storage tiers.

  • archive. Indicates the lowest priority for responding to requests in the processing queue. For Auto-Tier LUNs, busy LUN extents receive the lowest priority when the system migrates the data to the higher-performing storage tiers.

Do not use the ‑repositoryPriority option if you use the ‑matchTierQos option to match the QoS settings of the source Clone LUN.
repositoryRaidLevel
Specifies the level of RAID data protection to use for the clone repository. Valid values:
  • raid5. Indicates that, in addition to the actual data, one set of parity bits exists for the logical volume. Single parity protects against the loss of one drive. Single parity is implemented as a variant of the RAID 5 storage technology.

  • raid6. Indicates that, in addition to the actual data, two sets of parity bits exist for the logical volume. Double parity protects against the loss of one or two drives with a slight cost of write performance. Double parity is implemented as a variant of the RAID 6 storage technology.

  • raid10. Indicates that no parity bits exist for the volume. Instead, the system writes the data in two different locations. Mirroring protects against the loss of at least one drive and possibly more drives with an improvement of the performance of random write operations. Mirrored RAID is implemented as a variant of the RAID 10 storage technology.

  • default. Indicates that the level of protection is determined by the storage class. For large form factor (capacity) hard disk drives, the RAID 6 level of protection is the default. For the other storage classes, the RAID 5 level of protection is the default.

Do not use the ‑repositoryRaidLevel option if you use the ‑matchTierQos option to match the QoS settings of the source Clone LUN.
repositoryRedundancy
Identifies the number of copies of the parity bits that the Oracle FS System creates for clones of the new LUN. Valid values:
  • 1. Stores the original user data plus one set of parity bits to help in the recovery of lost data. Access to the data is preserved even after the failure of one drive. Single parity is implemented using RAID 5 technology.

  • 2. Stores the original user data plus two sets of parity bits to help in the recovery of lost data. Access to the data is preserved even after the simultaneous failure of two drives. Double parity is implemented using RAID 6 technology.

    Note: Double parity is the default for large form factor (capacity) hard disk drives. Single parity is the default for the other storage classes.
Do not use the ‑repositoryRedundancy option if you use the ‑matchTierQos option to match the QoS settings of the source Clone LUN.
repositoryStorageClass
Identifies the type of storage media to be used for all Clone LUNs that are created for the new LUN. Valid Storage Classes:
  • capDisk. Specifies that the data is stored on high-capacity, rotating hard disk drives (HDDs). This Storage Class optimizes capacity at some sacrifice of speed. For the FS1, this storage class provides the lowest cost for each GB of capacity.

  • perfDisk. Specifies that the data is stored on high-speed HDDs. This Storage Class sacrifices some capacity to reduce the access time and the latency of the read operations and of the write operations.

  • perfSsd. Specifies that the data is stored on SSDs that are optimized for the performance of balanced read and write operations.

  • capSsd. Specifies that the data is stored on solid state drives (SSDs) that are optimized for the performance of read operations and for capacity. The write performance for this Storage Class is sacrificed somewhat to achieve the optimizations for read performance and for capacity.

Do not use the ‑repositoryStorageClass option if you use the ‑matchTierQos option to match the QoS settings of the source Clone LUN.
singleTier

Creates a LUN that uses standard QoS properties. A single-tier LUN has QoS properties that you set to specify the Storage Class and other performance parameters for storing the LUN data onto the storage media. The QoS properties remain unchanged until you change these properties.

If you do not specify the ‑singleTier option or the ‑autoTier option, the new LUN inherits the tiering setting of the source Clone LUN: a single-tier source produces a single-tier copy, and an auto-tier source produces an auto-tier copy.

source

Specifies the FQN or unique identifier (ID) of the source Clone LUN.

storageClass

Indicates the type of storage media to be used for the LUN. Valid Storage Classes are capDisk, perfDisk, perfSsd, and capSsd.

If you do not use the ‑profile option, the ‑storageClass option is required when the Oracle FS System supports two or more Storage Classes.

Do not use the ‑storageClass option if you use the ‑profile option to apply a QoS Storage Profile to the LUN.

storageDomain

Specifies the FQN or GUID of the Storage Domain that contains the clone repository. If you do not include this option, and there is only one Storage Domain on the Oracle FS System, the system uses the default Storage Domain in which to create the clone repository. If you do not include this option, and there are multiple Storage Domains available, the system prompts you to specify a Storage Domain.

unMaskedControllerPorts

Opens access to the volume through the Controller ports that were previously set to restricted access.

Specify the port using the form HBA slot number/port number. Specify the arguments in the following manner:
HBA slot number

Specifies the PCIe slot number of the HBA on which the port is located. The slot number must be 0 or greater.

port number

Identifies the port number on the HBA slot. The port number must be 0 or greater.

For example, 0/1 specifies port 1 on HBA slot 0 of the specified Controller.

unmapped

Prevents the LUN from being detected or accessed by any SAN host.

volumeGroup

Specifies the FQN or the ID of the volume group to which the new LUN is assigned. If you do not include this option, the new LUN is assigned to the root-level volume group.
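
For example, the following invocation (with hypothetical names) assigns the new LUN to a different volume group:
$ fscli clone_lun ‑copy ‑source /vg1/CLONE_A ‑name VG_COPY ‑volumeGroup /vg2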

EXAMPLE

Task

Copy a Clone LUN to create a LUN on the Oracle FS System. Increase the maximum logical capacity and apply a different Storage Profile to the new LUN.

Parameters
  • The fully qualified name (FQN) of the source Clone LUN: /user1_vg/CLONE_DISK1

  • The name of the new LUN: DISK2

  • The size of the new LUN: 64 GB

  • The Storage Profile to apply: /user_adv_group1

$ fscli clone_lun ‑copy ‑source /user1_vg/CLONE_DISK1 ‑name DISK2 ‑capacity 64 ‑profile /user_adv_group1