Oracle Solaris ZFS Administration Guide

Procedure: How to Migrate a UFS Root File System With a Zone Root to a ZFS Root File System (at Least Solaris 10 5/09)

Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then, use Oracle Solaris Live Upgrade to create a ZFS BE.

In the steps that follow, the example UFS BE name is c1t1d0s0, the zone root for the UFS zone zfszone is /zonepool/zones, and the ZFS root BE is zfsBE.

  1. If the system is running an earlier Solaris 10 release, upgrade it to at least the Solaris 10 5/09 release.

    For information about upgrading a system that is running a Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
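
    As a quick check, the release that is currently installed can be read from the /etc/release file before deciding whether an upgrade is needed:


    # cat /etc/release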

  2. Create the root pool.

    For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
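
    For example, a root pool can be created on a single disk slice as shown below (c1t2d0s0 is a placeholder device; a ZFS root pool must be created on a disk slice with an SMI label rather than on a whole disk):


    # zpool create rpool c1t2d0s0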

  3. Confirm that the zones from the UFS environment are booted.


    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       2 zfszone          running    /zonepool/zones                native   shared
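
    If a zone is listed in the installed rather than the running state, boot it first (shown here for zfszone, the zone name used in this procedure):


    # zoneadm -z zfszone boot
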
  4. Create the new ZFS boot environment.


    # lucreate -c c1t1d0s0 -n zfsBE -p rpool
    

    This command establishes datasets in the root pool for the new boot environment and copies the current boot environment, including the zones, to those datasets.
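
    The datasets that lucreate created for the new BE can be inspected before activating it (a minimal check; rpool is the pool name used in this procedure):


    # zfs list -r rpool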

  5. Activate the new ZFS boot environment.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    c1t1d0s0                   yes      no     no        yes    -         
    zfsBE                      yes      yes    yes       no     -         
    # luactivate zfsBE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
    .
    .
    .
  6. Reboot the system.


    # init 6
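
    After luactivate, bring the system down with init or shutdown rather than reboot or halt so that the BE switch and the Live Upgrade sync take effect; an equivalent invocation is:


    # shutdown -y -g0 -i6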
    
  7. Confirm that the ZFS file systems and zones are created in the new BE.


    # zfs list
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              6.17G  60.8G    98K  /rpool
    rpool/ROOT                         4.67G  60.8G    21K  /rpool/ROOT
    rpool/ROOT/zfsBE                   4.67G  60.8G  4.67G  /
    rpool/dump                         1.00G  60.8G  1.00G  -
    rpool/swap                          517M  61.3G    16K  -
    zonepool                            634M  7.62G    24K  /zonepool
    zonepool/zones                      270K  7.62G   633M  /zonepool/zones
    zonepool/zones-c1t1d0s0             634M  7.62G   633M  /zonepool/zones-c1t1d0s0
    zonepool/zones-c1t1d0s0@zfsBE       262K      -   633M  -
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - zfszone          installed  /zonepool/zones                native   shared
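
    The migrated zone comes up in the installed state. It can be booted and then checked from the global zone (zlogin runs the specified command inside the zone):


    # zoneadm -z zfszone boot
    # zlogin zfszone zonename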

Example 5–7 Migrating a UFS Root File System With a Zone Root to a ZFS Root File System

In this example, an Oracle Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zones/zfszone), is migrated to a ZFS root file system. Before attempting the migration, ensure that the ZFS root pool is created and that the zones are installed and booted.


# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   2 ufszone          running    /uzone/ufszone                 native   shared
   3 zfszone          running    /pool/zones/zfszone            native   shared

# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>.
Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>.
Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-DLd.mnt
updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         
# luactivate zfsBE    
.
.
.
# init 6
.
.
.
# zfs list
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
pool                                    628M  66.3G    19K  /pool
pool/zones                              628M  66.3G    20K  /pool/zones
pool/zones/zfszone                     75.5K  66.3G   627M  /pool/zones/zfszone
pool/zones/zfszone-ufsBE                628M  66.3G   627M  /pool/zones/zfszone-ufsBE
pool/zones/zfszone-ufsBE@zfsBE           98K      -   627M  -
rpool                                  7.76G  59.2G    95K  /rpool
rpool/ROOT                             5.25G  59.2G    18K  /rpool/ROOT
rpool/ROOT/zfsBE                       5.25G  59.2G  5.25G  /
rpool/dump                             2.00G  59.2G  2.00G  -
rpool/swap                              517M  59.7G    16K  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - ufszone          installed  /uzone/ufszone                 native   shared
   - zfszone          installed  /pool/zones/zfszone            native   shared
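
After the new ZFS BE has been verified, the inactive UFS BE can be removed as an optional cleanup step (run ludelete only once a fallback to ufsBE is no longer needed):

# ludelete ufsBE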