This chapter provides the information necessary for performing a JumpStart installation for a ZFS root pool. The following sections provide planning information, profile examples, and profile keyword descriptions.
JumpStart Installation for a ZFS Root (/) File System (Overview and Planning)
JumpStart Keywords for a ZFS Root (/) File System (Reference)
This chapter provides the information for you to create a JumpStart profile to install a ZFS root pool.
If you want to install a UFS root (/) file system, all existing profile keywords work as in previous Solaris releases. For a list of UFS profile keywords, see Chapter 8, Custom JumpStart (Reference).
A ZFS-specific profile must contain the pool keyword. The pool keyword installs a new root pool, and a new boot environment is created by default. You can provide the name of the boot environment and create a separate /var dataset by using the existing bootenv installbe keywords with the new bename and dataset options. Some keywords that are allowed in a UFS-specific profile are not allowed in a ZFS-specific profile, such as those that specify the creation of UFS mount points.
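As a sketch of how these keywords fit together, a minimal ZFS-specific profile might look like the following. The pool name rpool and the boot environment name myBE are illustrative only, not required values:

```
install_type initial_install
cluster SUNWCall
pool rpool auto auto auto c0t0d0s0
bootenv installbe bename myBE dataset /var
```

The pool line names the new root pool and uses auto sizing for the pool, swap, and dump volumes; the bootenv line names the boot environment and requests a separate /var dataset.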
For overall ZFS planning information, see Chapter 6, ZFS Root File System Installation (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.
Keep the following issues in mind before considering a JumpStart installation of a bootable ZFS root pool.
Table 9–1 JumpStart Limitations for ZFS Root Pools
| Limitation | Description |
|---|---|
| For a JumpStart installation, you cannot use an existing ZFS storage pool to create a bootable ZFS root pool. | You must create a new ZFS storage pool with the pool keyword. The complete pool keyword line is required because you cannot use an existing pool. The bootenv keyword line is optional; if you do not use bootenv, a default boot environment is created for you. |
| You cannot create a pool with whole disks. | You must create your pool with disk slices rather than whole disks. If the profile creates a pool with a whole disk, such as c0t0d0, the installation fails with an error message similar to: Invalid disk name (c0t0d0) |
| Some keywords that are allowed in a UFS-specific profile are not allowed in a ZFS-specific profile, such as those that specify the creation of UFS mount points. | |
| You cannot upgrade with JumpStart. You must use Solaris Live Upgrade. | With Solaris Live Upgrade, you can create a copy of the currently running system. This copy can be upgraded and then activated to become the currently running system. |
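To illustrate the whole-disk limitation, compare the following two pool lines. The device names are examples; any cwtxdysz slice name works in the second form:

```
# Fails: c0t0d0 names a whole disk
pool rpool auto auto auto c0t0d0

# Succeeds: c0t0d0s0 names a disk slice
pool rpool auto auto auto c0t0d0s0
```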
This section provides examples of ZFS-specific JumpStart profiles.
For the ZFS root pool to be upgradeable and bootable, you must create your pool with disk slices rather than whole disks. If the profile creates a pool with a whole disk, such as c0t0d0, you will receive an error message similar to the following:

```
Invalid disk name (c0t0d0)
```
```
install_type initial_install
cluster SUNWCall
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename solaris10_6
```
The following list describes some of the keywords and values from this example.
- install_type: The install_type keyword is required in every profile. The initial_install keyword performs an initial installation that installs a new Solaris OS in a new ZFS root pool.
- cluster SUNWCall: The Entire Distribution software group, SUNWCall, is installed on the system. For more information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.
- pool: The pool keyword defines the characteristics of the new ZFS root pool.
    - newpool: Defines the name of the root pool.
    - The first auto: Sizes the pool automatically. The size is determined by the size of the specified disks.
    - The second auto: Sizes the swap area automatically. The default size is half the size of physical memory, but no less than 512 Mbytes and no greater than 2 Gbytes. You can set a size outside this range by using the size option.
    - The third auto: Sizes the dump device automatically.
    - mirror c0t0d0s0 c0t1d0s0: Creates a mirrored configuration of the disk slices c0t0d0s0 and c0t1d0s0.
- bootenv installbe: Changes the characteristics of the default boot environment that is created during the installation.
- bename solaris10_6: Names the new boot environment solaris10_6.
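The auto sizing rule for swap (half of physical memory, clamped to the 512-Mbyte to 2-Gbyte range) can be expressed directly. This is a sketch for illustration; the function name is ours, not part of JumpStart:

```python
def auto_swap_mbytes(physmem_mbytes: int) -> int:
    """Default swap size per the documented auto rule: half of
    physical memory, but no less than 512 Mbytes and no greater
    than 2 Gbytes."""
    return min(max(physmem_mbytes // 2, 512), 2048)

print(auto_swap_mbytes(512))    # small machine: floor of 512 applies
print(auto_swap_mbytes(2048))   # 2 Gbytes of RAM: half is 1024
print(auto_swap_mbytes(16384))  # large machine: capped at 2048
```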
```
install_type initial_install
cluster SUNWCall
pool newpool 80g 2g 2g mirror any any
bootenv installbe bename solaris10_6
```
The following list describes some of the keywords and values from this example.

- install_type: The install_type keyword is required in every profile. The initial_install keyword performs an initial installation that installs a new Solaris OS in a new ZFS root pool.
- cluster SUNWCall: The Entire Distribution software group, SUNWCall, is installed on the system. For more information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.
- pool: The pool keyword defines the characteristics of the new ZFS root pool.
    - newpool: Specifies the name of the root pool.
    - 80g: Specifies the size of the new pool.
    - 2g 2g: The swap volume and the dump volume are each 2 Gbytes.
    - mirror any any: Creates a mirrored configuration. The any option enables the installer to choose any two available devices that are large enough to create an 80-Gbyte pool. If two such devices are not available, the installation fails.
- bootenv installbe: Changes the characteristics of the default boot environment that is created during the installation.
- bename solaris10_6: Names the new boot environment solaris10_6.
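The size values in this example follow the convention described later for the pool keyword: a bare number is taken as Mbytes, and a g suffix means Gbytes. A small sketch of that convention (the function name is ours, not part of JumpStart):

```python
def to_mbytes(size: str) -> int:
    """Convert a JumpStart size value to Mbytes.

    A bare number is assumed to be Mbytes; a trailing 'g'
    means the amount is in Gbytes."""
    s = size.strip().lower()
    if s.endswith("g"):
        return int(s[:-1]) * 1024
    return int(s)

print(to_mbytes("80g"))   # the pool size from the profile above
print(to_mbytes("2g"))    # the swap and dump volume size
print(to_mbytes("2048"))  # the same 2 Gbytes written in Mbytes
```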
```
install_type initial_install
cluster SUNWCall
root_device c0t0d0s0
pool nrpool auto auto auto rootdisk.s0
bootenv installbe bename bnv dataset /var
```
The following list describes some of the keywords and values from this example.

- install_type: The install_type keyword is required in every profile. The initial_install keyword performs an initial installation that installs a new Solaris OS in a new ZFS root pool.
- cluster SUNWCall: The Entire Distribution software group, SUNWCall, is installed on the system. For more information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.
- root_device c0t0d0s0: Specifies the disk slice where the OS is to be installed. The value c0t0d0s0 identifies the specific disk and slice.
- pool: The pool keyword defines the characteristics of the new ZFS root pool.
    - nrpool: Defines the name of the root pool.
    - The first auto: Sizes the pool automatically. The size is determined by the size of the specified disks.
    - The second auto: Sizes the swap area automatically. The default size is half the size of physical memory, but no less than 512 Mbytes and no greater than 2 Gbytes. You can set a size outside this range by using the size option.
    - The third auto: Sizes the dump device automatically.
    - rootdisk.s0: The device used to create the root pool is specified as slice 0 of the root disk.
- bootenv installbe: Changes the characteristics of the default boot environment that is created during the installation.
- bename bnv: Names the new boot environment bnv.
- dataset /var: Creates a /var dataset that is separate from the ROOT dataset. /var is the only value allowed for the dataset option.
This section describes some of the ZFS-specific keywords that you can use in a JumpStart profile. The keywords in this section are either used differently from their use in a UFS profile or are used only in a ZFS profile.
For a quick reference of both UFS and ZFS profile keywords, see Profile Keywords Quick Reference.
The following list of keywords can be used in a ZFS profile. The usage is the same for both UFS and ZFS profiles. For descriptions of these keywords, see Profile Keyword Descriptions and Examples.
boot_device
cluster
dontuse
fdisk
filesys (mounting remote file systems)
geo
locale
package
usedisk
The bootenv keyword identifies boot environment characteristics. A boot environment is created by default during installation with the pool keyword. If you use the bootenv keyword with the installbe option, you can name the new boot environment and create a /var dataset within the boot environment.
This keyword can be used in a profile for installing a UFS file system or a ZFS root pool.
In a UFS file system, this keyword is used for creating an empty boot environment for the future installation of a Solaris Flash archive. For the complete description of the bootenv keyword for UFS, see bootenv Profile Keyword (UFS and ZFS).
For a ZFS root pool, the bootenv keyword changes the characteristics of the default boot environment that is created at install time. This boot environment is a copy of the root file system that you are installing.
The bootenv keyword can be used with the installbe, bename, and dataset options. These options name the boot environment and create a separate /var dataset. The syntax is the following:

```
bootenv installbe bename new-BE-name [dataset mount-point]
```
- installbe: Changes the characteristics of the default boot environment that is created during the installation.
- bename new-BE-name: Specifies the name of the new boot environment to be created. The name can be no longer than 30 characters, can contain only alphanumeric characters, and can contain no multibyte characters. The name must be unique on the system.
- dataset mount-point: Use the optional dataset keyword to identify a /var dataset that is separate from the ROOT dataset. The mount-point value is limited to /var. For example, a bootenv syntax line for a separate /var dataset would be similar to the following:

```
bootenv installbe bename zfsroot dataset /var
```
For more information about upgrading and activating a boot environment, see Chapter 11, Solaris Live Upgrade and ZFS (Overview), in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
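The naming rules for bename (no longer than 30 characters, alphanumeric, no multibyte characters) can be sketched as a simple check. The function name is illustrative; note that the chapter's own examples, such as solaris10_6, include underscores, so this sketch accepts them as well:

```python
import re

# At most 30 ASCII alphanumeric characters; underscore is allowed
# because the profile examples in this chapter use it.
_BENAME_RE = re.compile(r"[A-Za-z0-9_]{1,30}")

def valid_bename(name: str) -> bool:
    """Check a boot environment name against the documented rules."""
    return _BENAME_RE.fullmatch(name) is not None

print(valid_bename("solaris10_6"))   # the example name from this chapter
print(valid_bename("x" * 31))        # too long
print(valid_bename("bad name"))      # space is not allowed
```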
The install_type keyword is required in every profile. For a UFS installation, several options are available. The only option available for a ZFS installation is the initial_install keyword, which installs a new Solaris OS on a system. The profile syntax is the following:

```
install_type initial_install
```
The following UFS options are not available for a ZFS installation.
upgrade - You must use Solaris Live Upgrade to upgrade a ZFS root pool. See Chapter 11, Solaris Live Upgrade and ZFS (Overview), in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
flash_install - A Solaris Flash archive cannot be installed.
flash_update - A Solaris Flash archive cannot be installed.
The pool keyword defines the new root pool to be created. The pool is then installed with a software group specified with the cluster keyword. The poolsize, swapsize, dumpsize, and vdevlist options are needed for creating a new root pool.
```
pool poolname poolsize swapsize dumpsize vdevlist
```
- poolname: Specifies the name of the new pool to be created. A new pool is created with the specified size and with the specified devices (vdevlist).
- poolsize: Size of the new pool to be created. The size is assumed to be in Mbytes unless specified with g (Gbytes). You can also use the auto option:
    - auto: Allocates the largest possible pool size given the constraints, such as the size of the disks and any preserved slices. The meaning of auto for poolsize differs from the filesys keyword's use of auto in a UFS file system. In ZFS, the size of the disk is checked to verify that the minimum size can be accommodated. If the minimum size is available, the largest possible pool size is allocated given the constraints.
- swapsize: Size of the swap volume (zvol) to be created within the new root pool. The options are either auto or a size:
    - auto: The swap area is automatically sized. The default size is half the size of physical memory, but no less than 512 Mbytes and no greater than 2 Gbytes. You can set a size outside this range by using the size option.
    - size: Specifies an amount. The size is assumed to be in Mbytes unless specified with g (Gbytes).
- dumpsize: Size of the dump volume to be created within the new pool. The options are either auto or a size:
    - auto: Uses the default swap size.
    - size: Specifies an amount. The size is assumed to be in Mbytes unless specified with g (Gbytes).
- vdevlist: One or more devices used to create the pool. Devices in the vdevlist must be slices for the root pool. The vdevlist can be a single device name in the form cwtxdysz, the mirror keyword followed by device names, or the any option.
The format of the vdevlist is the same as the format of the zpool create command.
- device-name: A disk slice in the form of cwtxdysz, such as c0t0d0s0.
- mirror device-names: Specifies a mirrored configuration of the listed disk slices. The names are in the form of cwtxdysz, for example c0t0d0s0 and c0t0d1s5. At this time, only mirrored configurations are supported when multiple devices are specified. You can mirror as many disks as you like, but the size of the pool that is created is determined by the smallest of the specified disks. For more information about creating mirrored storage pools, see Mirrored Storage Pool Configuration in Solaris ZFS Administration Guide.
- any: Enables the installer to select a suitable device.
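The pool line's field structure can be made concrete with a short parsing sketch. This is an illustration of the documented grammar, not part of any JumpStart tooling, and the function name is ours:

```python
def parse_pool_line(line: str) -> dict:
    """Split a JumpStart pool keyword line into its documented fields:

        pool poolname poolsize swapsize dumpsize vdevlist

    The vdevlist is everything after the three size fields: a single
    slice name, the any option, or mirror followed by slice names."""
    fields = line.split()
    if len(fields) < 6 or fields[0] != "pool":
        raise ValueError("not a valid pool keyword line")
    return {
        "poolname": fields[1],
        "poolsize": fields[2],
        "swapsize": fields[3],
        "dumpsize": fields[4],
        "vdevlist": fields[5:],
    }

spec = parse_pool_line("pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0")
print(spec["poolname"], spec["vdevlist"])
```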
The root_device keyword specifies the device to be used for the root pool and so determines where the operating system is installed. This keyword is used in the same way for both a ZFS root pool and a UFS file system, with some limitations. For the ZFS root pool, the root device is limited to a single slice, so this keyword is not useful for mirrored pools. The syntax is the following:

```
root_device cwtxdysz
```

The cwtxdysz value identifies the disk slice where the operating system is installed.
For additional information about the topics included in this chapter, see the resources listed in Table 9–2.
Table 9–2 Additional Resources
| Resource | Location |
|---|---|
| For ZFS information, including overview, planning, and step-by-step instructions | Solaris ZFS Administration Guide |
| For a list of all JumpStart keywords | Chapter 8, Custom JumpStart (Reference) |
| For information about using Solaris Live Upgrade to migrate from UFS to ZFS or create a new boot environment in a ZFS root pool | Chapter 11, Solaris Live Upgrade and ZFS (Overview), in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning |