Using Unified Archives for System Recovery and Cloning in Oracle® Solaris 11.3

Updated: October 2017
 
 

Viewing Unified Archive Information

Use the archiveadm info command to examine Unified Archive file information. The examples in this section show both the abbreviated and verbose output.

Example 8  Viewing Standard Information About an Archive

The following example shows the standard information displayed using the archiveadm info command.

$ /usr/sbin/archiveadm info production1.uar
Archive Information

          Creation Time:  2017-08-02T16:56:20Z
            Source Host:  example
           Architecture:  i386
       Operating System:  Oracle Solaris 11.3 X86
     Deployable Systems:  global

Example 9  Viewing All Information About an Archive

The following example shows the information displayed using the verbose option with the archiveadm info command.

% archiveadm info -v production1.uar
Archive Information
          Creation Time:  2017-08-02T16:56:20Z
            Source Host:  example
           Architecture:  i386
       Operating System:  Oracle Solaris 11.3 X86
       Recovery Archive:  No
              Unique ID:  0d3333d8-42fa-42b5-9216-b442b96d9280
        Archive Version:  1.0

Deployable Systems
          'global'
             OS Version:  0.5.11
              OS Branch:  0.175.3.23.0.3.0
              Active BE:  solaris
                  Brand:  solaris
            Size Needed:  3.3GB
              Unique ID:  88e338aa-94e4-4754-8311-c16e0869e2f8
               AI Media:  0.175.3.22.0.3.0_ai_i386.iso
              Root-only:  Yes

Example 10  Viewing Storage Configuration Information from the Origin System

When used with the -t (or --targets) option, the archiveadm info command displays the storage configuration of the system from which the archive was created.

# archiveadm info -t clone.uar
    <target name="origin">
      <disk in_zpool="rpool" in_vdev="rpool-none" whole_disk="true">
        <disk_name name="/SYS/SASBP/HDD0" name_type="receptacle"/>
        <disk_prop dev_type="scsi" dev_vendor="HITACHI" dev_size="585937500secs"/>
        <disk_keyword key="boot_disk"/>
        <gpt_partition name="0" action="create" force="false" part_type="solaris">
          <size val="585920827secs" start_sector="256"/>
        </gpt_partition>
      </disk>
      <disk in_zpool="datapool1" in_vdev="datapool1-none" whole_disk="true">
        <disk_name name="/SYS/SASBP/HDD3" name_type="receptacle"/>
        <disk_prop dev_type="scsi" dev_vendor="HITACHI" dev_size="1172123568secs"/>
        <gpt_partition name="0" action="create" force="false" part_type="solaris">
          <size val="1172106895secs" start_sector="256"/>
        </gpt_partition>
      </disk>
      <logical noswap="false" nodump="false">
        <zpool name="datapool1" action="create" is_root="false" is_boot="false" mountpoint="/datapool1">
          <vdev name="datapool1-none" redundancy="none"/>
        </zpool>
        <zpool name="rpool" action="create" is_root="true" is_boot="false" mountpoint="/rpool">
          <vdev name="rpool-none" redundancy="none"/>
        </zpool>
      </logical>
    </target>
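
You can capture this output to a file and edit the copy. The file name /tmp/target.xml used here is arbitrary.

# archiveadm info -t clone.uar > /tmp/target.xml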

You can change this information to match the new system and use it in an AI manifest that deploys the archive to that system. In this example, the disk names are changed.

<target name="origin">
  <disk in_zpool="rpool" in_vdev="rpool-none" whole_disk="true">
    <disk_name name="c1d0" name_type="ctd"/>
  </disk>
  <disk in_zpool="datapool1" in_vdev="datapool1-none" whole_disk="true">
    <disk_name name="c4d0" name_type="ctd"/>
  </disk>
  <logical noswap="false" nodump="false">
    <zpool name="datapool1" action="create" is_root="false" is_boot="false" mountpoint="/datapool1">
      <vdev name="datapool1-none" redundancy="none"/>
    </zpool>
    <zpool name="rpool" action="create" is_root="true" is_boot="false"  mountpoint="/rpool">
      <vdev name="rpool-none" redundancy="none"/>
    </zpool>
  </logical>
</target>
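
The modified target section is then placed in a complete AI manifest along with a software section that installs from the archive. The following fragment is a sketch only: the ai_instance name and the archive URI http://example.com/clone.uar are placeholders that you must replace with values for your environment, and the deployable system name global matches the name reported by archiveadm info in the earlier examples.

<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="clone-deploy">
    <target name="origin">
      <!-- disk and logical sections as modified above -->
    </target>
    <software type="ARCHIVE">
      <source>
        <file uri="http://example.com/clone.uar"/>
      </source>
      <software_data action="install">
        <name>global</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>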