Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then use Oracle Solaris Live Upgrade to create a ZFS BE.
In the steps that follow, the example UFS BE name is c1t1d0s0, the UFS zone root is zonepool/zfszone, and the ZFS root BE is zfsBE.
If the system is running an earlier Solaris 10 release, upgrade it to at least the Solaris 10 5/09 release.
For more information about upgrading a system that runs the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
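As a quick preliminary check, you can confirm the installed release string before deciding whether an upgrade is needed; a minimal sketch:

# cat /etc/release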
Create the root pool.
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
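A minimal sketch of the pool creation, where c1t5d0s0 is a placeholder for an available disk slice on your system (a ZFS root pool must be created on a disk slice with an SMI label, not on a whole disk):

# zpool create rpool c1t5d0s0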
Confirm that the zones from the UFS environment are booted.
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 zfszone          running    /zonepool/zones                native   shared
Create the new ZFS boot environment.
# lucreate -c c1t1d0s0 -n zfsBE -p rpool
This command establishes the datasets in the root pool for the new boot environment and copies the current boot environment, including the zones, to those datasets.
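Before activating the new environment, you can optionally confirm that the new root datasets exist; a minimal check, using this example's pool name:

# zfs list -r rpool/ROOT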
Activate the new ZFS boot environment.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t1d0s0                   yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
.
.
.
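If you need to back out at this point, before rebooting, the original UFS BE can be made active on reboot again; a sketch using this procedure's example BE name:

# luactivate c1t1d0s0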
Reboot the system.
# init 6
Confirm that the ZFS file systems and zones are created in the new BE.
# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          6.17G  60.8G    98K  /rpool
rpool/ROOT                     4.67G  60.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE               4.67G  60.8G  4.67G  /
rpool/dump                     1.00G  60.8G  1.00G  -
rpool/swap                      517M  61.3G    16K  -
zonepool                        634M  7.62G    24K  /zonepool
zonepool/zones                  270K  7.62G   633M  /zonepool/zones
zonepool/zones-c1t1d0s0         634M  7.62G   633M  /zonepool/zones-c1t1d0s0
zonepool/zones-c1t1d0s0@zfsBE   262K      -   633M  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zfszone          installed  /zonepool/zones                native   shared
In this example, a Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zones/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root pool is created and that the zones are installed and booted before attempting the migration.
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 ufszone          running    /uzone/ufszone                 native   shared
   3 zfszone          running    /pool/zones/zfszone            native   shared
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>.
Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>.
Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-DLd.mnt
updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
# luactivate zfsBE
.
.
.
# init 6
.
.
.
# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
pool                             628M  66.3G    19K  /pool
pool/zones                       628M  66.3G    20K  /pool/zones
pool/zones/zfszone              75.5K  66.3G   627M  /pool/zones/zfszone
pool/zones/zfszone-ufsBE         628M  66.3G   627M  /pool/zones/zfszone-ufsBE
pool/zones/zfszone-ufsBE@zfsBE    98K      -   627M  -
rpool                           7.76G  59.2G    95K  /rpool
rpool/ROOT                      5.25G  59.2G    18K  /rpool/ROOT
rpool/ROOT/zfsBE                5.25G  59.2G  5.25G  /
rpool/dump                      2.00G  59.2G  2.00G  -
rpool/swap                       517M  59.7G    16K  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - ufszone          installed  /uzone/ufszone                 native   shared
   - zfszone          installed  /pool/zones/zfszone            native   shared