Oracle Solaris ZFS Administration Guide

Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones)

The following examples show how to migrate a UFS root file system to a ZFS root file system.

If you are migrating or upgrading a system with zones, see the following sections:


Example 5–3 Using Oracle Solaris Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System

The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsBE, which contains a UFS root file system, is identified by the -c option. If the -c option is not included, the current BE name defaults to the device name. The new BE, zfsBE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation is performed.

The ZFS storage pool must be created with slices rather than whole disks so that it is upgradeable and bootable. Before you create the new pool, make sure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If a disk is relabeled with an SMI label, make sure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.
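
For instance, one way to check that a disk carries an SMI (VTOC) label is to print its VTOC with the prtvtoc command; if the disk has an EFI label, the format -e command can be used to relabel it with an SMI label. The device names below are illustrative.


# prtvtoc /dev/rdsk/c1t2d0s2
# format -e c1t2d0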


# zpool create rpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.

After the lucreate operation completes, use the lustatus command to view the BE status. For example:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         

Next, review the list of ZFS components. For example:


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 7.17G  59.8G  95.5K  /rpool
rpool/ROOT            4.66G  59.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE      4.66G  59.8G  4.66G  /
rpool/dump               2G  61.8G    16K  -
rpool/swap             517M  60.3G    16K  -

Next, use the luactivate command to activate the new ZFS BE. For example:


# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.

Next, reboot the system to the ZFS BE.


# init 6

Confirm that the ZFS BE is active.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -      

If you switch back to the UFS BE, you will need to re-import any ZFS storage pools that were created while the ZFS BE was booted, because those pools are not automatically available in the UFS BE.
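
For example, a pool that was created while the ZFS BE was booted (here, a hypothetical pool named datapool) could be listed and then imported after booting back into the UFS BE:


# zpool import
# zpool import datapool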

If the UFS BE is no longer needed, you can remove it with the ludelete command.
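
For example:


# ludelete ufsBE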



Example 5–4 Using Oracle Solaris Live Upgrade to Create a ZFS BE From a ZFS BE

Creating a ZFS BE from a ZFS BE in the same pool is very fast because the operation uses the ZFS snapshot and clone features. When the BE is in the same ZFS pool, as in this example, the -p option is omitted.

If you have multiple ZFS BEs, you can select which BE to boot from at boot time, as sketched below.

For more information, see Example 5–9.
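
For instance, on a SPARC based system the available BEs can be listed at the boot PROM prompt and a specific BE selected by its root dataset; on an x86 based system, the BE can be selected from the GRUB menu. The following is a sketch only, using the BE dataset name created in this example:


ok boot -L
ok boot -Z rpool/ROOT/zfs2BE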


# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
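
The snapshot and clone that back the new BE appear under the root pool and can be reviewed with the zfs list command. For example:


# zfs list -r rpool/ROOT
# zfs list -t snapshot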


Example 5–5 Upgrading Your ZFS BE (luupgrade)

You can upgrade your ZFS BE with additional packages or patches.

The basic process is as follows:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -   
# luupgrade -p -n zfsBE -s /net/system/export/s10up/Solaris_10/Product SUNWchxge
Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.

Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>

Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

This appears to be an attempt to install the same architecture and
version of a package which is already installed.  This installation
will attempt to overwrite this package.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWchxge> [y,n,?] y
Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>

## Installing part 1 of 1.
## Executing postinstall script.

Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.
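
Patches can be applied to an inactive ZFS BE in a similar way by using the -t option of luupgrade. The following is a sketch only; /patchdir and patch-id are placeholders for your patch location and patch IDs:


# luupgrade -t -n zfsBE -s /patchdir patch-id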