Oracle Solaris ZFS Administration Guide (Oracle Solaris 10 1/13 Information Library)
1. Oracle Solaris ZFS File System (Introduction)
2. Getting Started With Oracle Solaris ZFS
3. Managing Oracle Solaris ZFS Storage Pools
4. Installing and Booting an Oracle Solaris ZFS Root File System
Installing and Booting an Oracle Solaris ZFS Root File System (Overview)
Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support
Oracle Solaris Release Requirements
General ZFS Root Pool Requirements
Disk Space Requirements for ZFS Root Pools
ZFS Root Pool Configuration Requirements
Installing a ZFS Root File System (Oracle Solaris Initial Installation)
How to Create a Mirrored ZFS Root Pool (Postinstallation)
Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation)
Installing a ZFS Root File System (JumpStart Installation)
JumpStart Profile Examples for ZFS
Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade)
ZFS Migration Issues With Live Upgrade
Using Live Upgrade to Migrate or Update a ZFS Root File System (Without Zones)
Using Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08)
How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)
How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)
How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)
Managing Your ZFS Swap and Dump Devices
Adjusting the Sizes of Your ZFS Swap Device and Dump Device
Booting From a ZFS Root File System
Booting From an Alternate Disk in a Mirrored ZFS Root Pool
SPARC: Booting From a ZFS Root File System
x86: Booting From a ZFS Root File System
Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)
How to Resolve ZFS Mount-Point Problems
Booting for Recovery Purposes in a ZFS Root Environment
How to Boot ZFS From Alternate Media
Recovering the ZFS Root Pool or Root Pool Snapshots
How to Replace a Disk in the ZFS Root Pool
How to Create Root Pool Snapshots
How to Re-create a ZFS Root Pool and Restore Root Pool Snapshots
How to Roll Back Root Pool Snapshots From a Failsafe Boot
5. Managing Oracle Solaris ZFS File Systems
6. Working With Oracle Solaris ZFS Snapshots and Clones
7. Using ACLs and Attributes to Protect Oracle Solaris ZFS Files
8. Oracle Solaris ZFS Delegated Administration
9. Oracle Solaris ZFS Advanced Topics
10. Oracle Solaris ZFS Troubleshooting and Pool Recovery
11. Recommended Oracle Solaris ZFS Practices
During an initial Oracle Solaris OS installation or after performing a Live Upgrade migration from a UFS file system, a swap area is created on a ZFS volume in the ZFS root pool. For example:
# swap -l
swapfile                  dev    swaplo  blocks    free
/dev/zvol/dsk/rpool/swap  256,1      16  4194288   4194288
During an initial Oracle Solaris OS installation or a Live Upgrade from a UFS file system, a dump device is created on a ZFS volume in the ZFS root pool. In general, a dump device requires no administration because it is set up automatically at installation time. For example:
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
   Save compressed: on
If you disable and remove the dump device, you must re-enable it with the dumpadm command after it is re-created. In most cases, you only need to adjust the size of the dump device by using the zfs command.
For information about the swap and dump volume sizes that are created by the installation programs, see Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support.
Both the swap volume size and the dump volume size can be adjusted during and after installation. For more information, see Adjusting the Sizes of Your ZFS Swap Device and Dump Device.
Consider the following issues when working with your ZFS swap and dump devices:
Separate ZFS volumes must be used for the swap area and the dump device.
Currently, using a swap file on a ZFS file system is not supported.
If you need to change your swap area or dump device after the system is installed or upgraded, use the swap and dumpadm commands as in previous releases. For more information, see Chapter 16, Configuring Additional Swap Space (Tasks), in System Administration Guide: Devices and File Systems and Chapter 17, Managing System Crash Information (Tasks), in System Administration Guide: Advanced Administration.
See the following sections for more information:
You might need to adjust the size of the swap and dump devices after installation, or possibly re-create the swap and dump volumes.
You can adjust the size of your swap and dump volumes during an initial installation. For more information, see Example 4-1.
You can create and size your swap and dump volumes before you perform a Live Upgrade operation. For example:
Create your storage pool.
# zpool create rpool mirror c0t0d0s0 c0t1d0s0
Create your dump device.
# zfs create -V 2G rpool/dump
Enable the dump device.
# dumpadm -d /dev/zvol/dsk/rpool/dump
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
   Save compressed: on
Create your swap volume:
# zfs create -V 2G rpool/swap
You must enable the swap area when a new swap device is added or changed.
# swap -a /dev/zvol/dsk/rpool/swap
Add an entry for the swap volume to the /etc/vfstab file.
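The /etc/vfstab entry for a ZFS swap volume uses the standard seven-field vfstab format, with swap as the file system type. A sketch, assuming the rpool/swap volume created above:

```
#device                   device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
/dev/zvol/dsk/rpool/swap  -               -            swap     -          no             -
```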
Live Upgrade does not resize existing swap and dump volumes.
You can reset the volsize property of the dump device after a system is installed. For example:
# zfs set volsize=2G rpool/dump
# zfs get volsize rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/dump  volsize   2G     -
If the current swap area is not in use, you can resize the current swap volume, but you must reboot the system for the increased swap size to take effect.
# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE  SOURCE
rpool/swap  volsize   4G     local
# zfs set volsize=8g rpool/swap
# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE  SOURCE
rpool/swap  volsize   8G     local
# init 6
You can attempt to resize the swap volume, but it might be best to remove the swap device and then re-create it. For example:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create -V 2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
You can adjust the size of the swap and dump volumes in a JumpStart profile by using profile syntax similar to the following:
install_type initial_install
cluster SUNWCXall
pool rpool 16g 2g 2g c0t0d0s0
In this profile, two 2g entries set the size of the swap volume and dump volume to 2 GB each.
If you need more swap space on a system that is already installed, just add another swap volume. For example:
# zfs create -V 2G rpool/swap2
Then, activate the new swap volume. For example:
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l
swapfile                   dev    swaplo  blocks    free
/dev/zvol/dsk/rpool/swap   256,1      16  1058800   1058800
/dev/zvol/dsk/rpool/swap2  256,3      16  4194288   4194288
Finally, add an entry for the second swap volume to the /etc/vfstab file.
Keep the following points in mind if you remove the default swap and dump volumes and re-create them in a non-root (data) pool:
If you want to create swap and dump devices in a non-root pool, do not create swap and dump volumes in a RAIDZ pool. If a pool includes swap and dump volumes, it must be a one-disk pool or a mirrored pool.
If you use Live Upgrade to update your system, use the -P option to preserve the dump device from the PBE to ABE. For example:
# lucreate -n newBE -P
Review the following if you have problems either capturing a system crash dump or resizing the dump device.
If a crash dump was not created automatically, you can use the savecore command to save the crash dump.
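If the dump is still present on the dump device, savecore can be invoked manually to write it to the savecore directory shown by dumpadm. A minimal sketch:

```
# savecore
```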
A dump volume is created automatically when you initially install a ZFS root file system or migrate to a ZFS root file system. In most cases, you only need to adjust the size of the dump volume if the default dump volume size is too small. For example, on a large-memory system, the dump volume size is increased to 40 GB as follows:
# zfs set volsize=40G rpool/dump
Resizing a large dump volume can be a time-consuming process.
If, for any reason, you need to enable a dump device after you manually create one, use syntax similar to the following:
# dumpadm -d /dev/zvol/dsk/rpool/dump
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
A system with 128 GB or more of memory might need a larger dump device than the one that is created by default. If the dump device is too small to capture an existing crash dump, a message similar to the following is displayed:
# dumpadm -d /dev/zvol/dsk/rpool/dump
dumpadm: dump device /dev/zvol/dsk/rpool/dump is too small to hold a system dump
dump size 36255432704 bytes, device size 34359738368 bytes
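The byte counts in such a message can be converted directly into a new volsize value. A minimal sketch in shell arithmetic, using the sizes from the sample message above (the variable names are illustrative; the required size is rounded up to the next whole gigabyte):

```shell
# Sizes reported by dumpadm, in bytes (taken from the sample message above).
dump_size=36255432704
device_size=34359738368

# Round the required dump size up to the next whole gigabyte (2^30 bytes),
# suitable for: zfs set volsize=<n>G rpool/dump
needed_gb=$(( (dump_size + 1073741823) / 1073741824 ))
echo "set volsize to at least ${needed_gb}G"
```

Here the device holds 32 GB but the dump needs about 33.8 GB, so volsize would be set to at least 34G.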
For information about sizing the swap and dump devices, see Planning for Swap Space in System Administration Guide: Devices and File Systems.
You cannot currently add a dump device to a pool with multiple top-level devices. You will see a message similar to the following:
# dumpadm -d /dev/zvol/dsk/datapool/dump
dump is not supported on device '/dev/zvol/dsk/datapool/dump':
'datapool' has multiple top level vdevs
Add the dump device to the root pool, which cannot have multiple top-level devices.