This part provides an overview of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements are also included.
Installation for the ZFS root (/) file system
Booting on x86 or SPARC based systems
Solaris Zones partitioning technology
Solaris Volume Manager components such as RAID-1 volumes
This chapter provides system requirements and limitations to assist you when you install a ZFS root pool. Also provided is an overview of the installation programs that can install a ZFS root pool.
If you have multiple boot environments on your system, see Chapter 7, SPARC and x86 Based Booting (Overview and Planning), for information on booting.
| Requirement or Limitation | Description | Information |
|---|---|---|
| Memory | 768 MB is the minimum memory. 1 GB is recommended for overall performance. | |
| Disk space | The minimum amount of available pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of boot environments to be created. | For an explanation, see Disk Space Requirements for a ZFS Installation. |
| Disk slices | The ZFS storage pool must be created with slices rather than whole disks to be upgradeable and bootable. | |
When you migrate from a UFS root (/) file system to a ZFS root pool with Solaris Live Upgrade, consider these requirements.
Normally, on a system with a UFS root file system, swap and dump are on the same slice. Therefore, UFS shares the swap space with the dump device. In a ZFS root pool, swap and dump are separate zvols, so they do not share the same physical space. When a system is installed or upgraded with a ZFS root file system, the size of the swap area and the dump device are dependent on the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of boot environments to be created. Approximately 1 Gbyte of memory and at least 2 Gbytes of disk space are recommended. The space is consumed as follows:
Swap area and dump device - The default size of swap is 1/2 the size of physical memory, but no less than 512 Mbytes and no greater than 2 Gbytes. The dump device is calculated based on the size of the memory and the contents of the dumpadm.conf file. This file defines what goes into a crash dump. You can adjust the sizes of your swap and dump volumes before or after installation. For more information, see Introducing ZFS Properties in Solaris ZFS Administration Guide.
Boot environments - In addition to either new swap and dump space requirements or adjusted swap and dump device sizes, a ZFS boot environment that is migrated from a UFS boot environment needs approximately 6 Gbytes. Each ZFS boot environment that is cloned from another ZFS boot environment does not need additional disk space. However, the boot environment size might increase when patches are applied. All ZFS boot environments in the same root pool use the same swap and dump devices.
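The swap sizing rule described above can be expressed as a small calculation. This is an illustrative sketch of the stated rule, not the installer's actual code:

```python
def default_swap_size_mb(physical_memory_mb: int) -> int:
    """Default swap is half of physical memory, but no less than
    512 Mbytes and no greater than 2 Gbytes (sizes in Mbytes)."""
    return max(512, min(physical_memory_mb // 2, 2048))

# 1 GB of RAM gives the 512 MB floor; 8 GB of RAM hits the 2 GB cap.
print(default_swap_size_mb(1024))  # 512
print(default_swap_size_mb(8192))  # 2048
```

After installation, the swap volume can be resized by setting its volsize property, for example `zfs set volsize=2G rpool/swap`, assuming the default pool name rpool.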
The following installation programs perform an initial installation of a ZFS root pool.
Solaris installation program text installer
Custom JumpStart with an installation profile
Solaris Live Upgrade can migrate a UFS file system to a ZFS root pool. Also, Solaris Live Upgrade can create ZFS boot environments that can be upgraded.
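As a sketch, a migration from a UFS root file system to a ZFS root pool with Solaris Live Upgrade might look like the following session. The pool name rpool, the slice, and the boot environment name zfsBE are placeholders for your own configuration:

```
# zpool create rpool c0t1d0s5
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6
```

The zpool create command builds the root pool on a slice (a requirement for a bootable pool), lucreate -p copies the current boot environment into the pool, and luactivate makes the new boot environment the one booted at the next reboot.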
Table 6–2 ZFS Installation Programs and Limitations

| ZFS Installation Program | Description | Limitations | Information |
|---|---|---|---|
| Solaris installation program text installer | The Solaris text installer performs an initial installation for a ZFS root pool. During the installation, you can choose to install either a UFS file system or a ZFS root pool. You can set up a mirrored ZFS root pool by selecting two or more slices during the installation. Or, you can attach or add additional disks after the installation to create a mirrored ZFS root pool. Swap and dump devices on ZFS volumes are automatically created in the ZFS root pool. | | |
| Solaris Live Upgrade | You can use the Solaris Live Upgrade feature to migrate a UFS file system to a ZFS root pool and to create and upgrade ZFS boot environments. After you have used the lucreate command to create a ZFS boot environment, you can use the other Solaris Live Upgrade commands on the boot environment. | | |
| Custom JumpStart | You can create a profile to create a ZFS storage pool and designate a bootable ZFS file system. New ZFS keywords provide an initial installation. | | |
Starting with the Solaris 10 10/08 release, changes in the Solaris boot architecture provide many new features, including booting from different file system types, such as ZFS file systems. This chapter describes some of these changes and provides references to more information about booting. It also provides an overview of GRUB based booting for x86 systems.
This chapter contains the following sections:
Starting with the Solaris 10 10/08 release, the Solaris SPARC bootstrap process has been redesigned to increase commonality with the Solaris x86 boot architecture. The improved Solaris boot architecture brings direct boot, ramdisk-based booting, and the ramdisk miniroot to the SPARC platform. These enabling technologies support the following functions:
Booting a system from additional file system types, such as a ZFS file system.
Booting a single miniroot for software installation from DVD, NFS, or HTTP.
Additional improvements include significantly faster boot times, increased flexibility, and reduced maintenance requirements.
As part of this architecture redesign, the Solaris boot archives and the bootadm command, previously only available on the Solaris x86 platform, are now an integral part of the Solaris SPARC boot architecture.
Although the implementation of the Solaris SPARC boot has changed, no administrative procedures for booting a SPARC-based system have been impacted. Solaris installations have changed to include installing from a ZFS file system, but otherwise have not changed for the new boot architecture.
If your system has more than one OS installed, or more than one root boot environment in a ZFS root pool, you can boot from these boot environments on both SPARC and x86 platforms. The boot environments available for booting include boot environments created by Solaris Live Upgrade.
Starting with the Solaris 10 10/08 release for a SPARC based system, you can boot a ZFS root file system in a ZFS pool. For ZFS root pools, you can list the available boot environments by using the boot command with the -L option. You can then choose a boot environment and use the OpenBoot™ PROM (OBP) boot command with the -Z option to boot that boot environment. The -Z option is an alternative to the luactivate command, which is also used to boot a new boot environment for a ZFS root pool. The luactivate command is the preferred method of switching boot environments. For a UFS file system, you continue to use the OBP as the primary administrative interface, with boot options selected by using OBP commands.
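For example, on a SPARC based system the sequence might look like the following. The pool and boot environment names are illustrative, and the exact prompt text varies by system:

```
ok boot -L
1 zfsBE
2 zfsBE2
Select environment to boot: [ 1 - 2 ]: 2

ok boot -Z rpool/ROOT/zfsBE2
```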
Starting with the Solaris 10 1/06 release for x86 based systems, a GRUB boot menu provides the interface for booting between different boot environments. Starting with the Solaris 10 10/08 release, this menu lists ZFS boot environments that are available for booting. If the default boot environment is a ZFS file system and the GRUB menu is displayed, you can let the default boot environment boot or choose another boot environment to boot. The GRUB menu is an alternative to the luactivate command, which is also used to boot a new boot environment for a ZFS root pool. The luactivate command is the preferred method of switching boot environments.
On both SPARC and x86 based systems, each ZFS root pool has a dataset designated as the default root file system. If you type the boot command on SPARC, or accept the default entry from the GRUB menu on x86, this default root file system is booted.
Table 7–1 Where to Find Information on Booting
| Description | Information |
|---|---|
| For a high-level overview of booting features | |
| For a more detailed overview of booting features | |
| x86: For information about modifying boot behavior, such as editing and locating the menu.lst file | |
| For procedures for booting a ZFS file system | Chapter 12, Booting a Solaris System (Tasks), in System Administration Guide: Basic Administration |
| For procedures for managing a boot archive, such as locating the GRUB menu.lst file and using the bootadm command | |
GRUB, the open source boot loader, is the default boot loader in the Solaris OS.
The boot loader is the first software program that runs after you power on a system. After you power on an x86 based system, the Basic Input/Output System (BIOS) initializes the CPU, the memory, and the platform hardware. When the initialization phase has completed, the BIOS loads the boot loader from the configured boot device, and then transfers control of the system to the boot loader.
GRUB is an open source boot loader with a simple menu interface that includes boot options that are predefined in a configuration file. GRUB also has a command-line interface that is accessible from the menu interface for performing various boot commands. In the Solaris OS, the GRUB implementation is compliant with the Multiboot Specification. The specification is described in detail at http://www.gnu.org/software/grub/grub.html.
Because the Solaris kernel is fully compliant with the Multiboot Specification, you can boot a Solaris x86 based system by using GRUB. With GRUB, you can more easily boot and install various operating systems.
A key benefit of GRUB is that it is intuitive about file systems and kernel executable formats, which enables you to load an operating system without recording the physical position of the kernel on the disk. With GRUB based booting, the kernel is loaded by specifying its file name and the drive and partition where the kernel resides. GRUB based booting replaces the Solaris Device Configuration Assistant and simplifies the booting process with a GRUB menu.
This section describes the basics of GRUB based booting and describes the GRUB menu.
When you install the Solaris OS, two GRUB menu entries are installed on the system by default. The first entry is the Solaris OS entry. The second entry is the failsafe boot archive, which is to be used for system recovery. The Solaris GRUB menu entries are installed and updated automatically as part of the Solaris software installation and upgrade process. These entries are directly managed by the OS and should not be manually edited.
During a standard Solaris OS installation, GRUB is installed on the Solaris fdisk partition without modifying the system BIOS setting. If the OS is not on the BIOS boot disk, you need to do one of the following:
Modify the BIOS setting.
Use a boot manager to bootstrap to the Solaris partition. For more details, see the documentation for your boot manager.
The preferred method is to install the Solaris OS on the boot disk. If multiple operating systems are installed on the machine, you can add entries to the menu.lst file. These entries are then displayed in the GRUB menu the next time you boot the system.
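For example, an entry added to the menu.lst file for a Solaris instance on a second disk might look like the following. The title and the disk, partition, and slice coordinates are illustrative and depend on your configuration:

```
title Solaris 10 on second disk
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
```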
For additional information on multiple operating systems, see How Multiple Operating Systems Are Supported by GRUB in System Administration Guide: Basic Administration.
Performing a GRUB based network boot requires a DHCP server that is configured for PXE clients and an install server that provides tftp service. The DHCP server must be able to respond to the DHCP classes, PXEClient and GRUBClient. The DHCP response must contain the following information:
IP address of the file server
Name of the boot file (pxegrub)
rpc.bootparamd, which is usually a requirement on the server side for performing a network boot, is not required for a GRUB based network boot.
If no PXE or DHCP server is available, you can load GRUB from CD-ROM or local disk. You can then manually configure the network in GRUB and download the multiboot program and the boot archive from the file server.
For more information, see Overview of Booting and Installing Over the Network With PXE in Solaris 10 5/09 Installation Guide: Network-Based Installations.
This chapter provides an overview of how Solaris Zones partitioning technology relates to upgrading the Solaris OS when non-global zones are configured.
This chapter contains the following sections:
The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system. This isolation prevents processes that are running in one non-global zone from monitoring or affecting processes that are running in other non-global zones. Even a process running with superuser credentials cannot view or affect activity in other zones. A non-global zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.
Every Solaris system contains a global zone. The global zone has a dual function. The global zone is both the default zone for the system and the zone used for system-wide administrative control. All processes run in the global zone if no non-global zones are created by the global administrator. The global zone is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled. Only the global zone is bootable from the system hardware. Administration of the system infrastructure, such as physical devices, routing, or dynamic reconfiguration (DR), is only possible in the global zone. Appropriately privileged processes running in the global zone can access objects associated with the non-global zones.
The following sections describe how you can upgrade a system that contains non-global zones.

| Description | For More Information |
|---|---|
| For complete information on creating and configuring non-global zones | |
After the Solaris OS is installed, you can install and configure non-global zones. You can upgrade the Solaris OS when non-global zones are installed. If you have branded non-global zones installed, they are ignored during the upgrade process. Installation programs that can accommodate systems that have non-global zones installed are summarized below.
Table 8–1 Choosing an Installation Program to Upgrade With Non-Global Zones

| Upgrade Program | Description | For More Information |
|---|---|---|
| Solaris Live Upgrade | You can upgrade or patch a system that contains non-global zones. If you have a system that contains non-global zones, Solaris Live Upgrade is the recommended program to upgrade or to add patches. Other upgrade programs might require extensive upgrade time, because the time required to complete the upgrade increases linearly with the number of installed non-global zones. If you are patching a system with Solaris Live Upgrade, you do not have to take the system to single-user mode, so you can maximize your system's uptime. Starting with the Solaris 10 8/07 release, Solaris Live Upgrade includes changes to accommodate systems that have non-global zones installed. See the note below the table. | |
| Solaris interactive installation program GUI | You can upgrade or patch a system when non-global zones are installed. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. | For more information about installing with this program, see Chapter 2, Installing With the Solaris Installation Program For UFS File Systems (Tasks), in Solaris 10 5/09 Installation Guide: Basic Installations. |
| Automated JumpStart installation | You can upgrade or patch with any keyword that applies to an upgrade or patching. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. | For more information about installing with this program, see Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations. |

Note – By default, when you use Solaris Live Upgrade, any file system other than the critical file systems (the root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. The /export file system is an example of a shared file system. If you use the -m option with the zonename option, the non-global zone's shared file system is copied to a separate slice and data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.
Limitations when upgrading with non-global zones are listed in the following table.
Table 8–2 Limitations When Upgrading With Non-Global Zones

| Program or Condition | Description | For More Information |
|---|---|---|
| Using Solaris Live Upgrade on a system with zones installed | Consider these issues when using Solaris Live Upgrade on a system with non-global zones installed. It is critical to avoid zone state transitions during lucreate and lumount operations. | |
| Problems can occur when the global zone administrator does not notify the non-global zone administrator of an upgrade with Solaris Live Upgrade. | When Solaris Live Upgrade operations are underway, non-global zone administrator involvement is critical. The upgrade affects the work of the administrators, who must address the changes that occur as a result of the upgrade. Zone administrators should ensure that any local packages are stable throughout the sequence, handle any post-upgrade tasks such as configuration file adjustments, and generally schedule around the system outage. For example, if a non-global zone administrator adds a package while the global zone administrator is copying the file systems with the lucreate command, the new package is not copied with the file systems and the non-global zone administrator is unaware of the problem. | |
| Solaris Flash archives cannot be used with non-global zones. | A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with Solaris Zones partitioning technology. If you create a Solaris Flash archive on a system that has non-global zones installed, the resulting archive is not installed properly when the archive is deployed. | For more information about using Solaris Flash archives, see Solaris 10 5/09 Installation Guide: Solaris Flash Archives (Creation and Installation). |
| Commands that accept the -R option or an equivalent must not be used in some situations. | Any command that accepts an alternate root (/) file system by using the -R option or an equivalent option must not be used if the command is run in the global zone and the alternate root (/) file system refers to a path within a non-global zone. An example is the -R root_path option to the pkgadd utility run from the global zone with a path to the root (/) file system in a non-global zone. | For a list of utilities that accept an alternate root (/) file system and more information about zones, see Restriction on Accessing A Non-Global Zone From the Global Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. |
You should back up the global and non-global zones on your Solaris system before you perform the upgrade. For information about backing up a system with zones installed, see Chapter 26, Solaris Zones Administration (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
When installing the global zone, be sure to reserve enough disk space for all of the zones you might create. Each non-global zone might have unique disk space requirements.
No limits are placed on how much disk space can be consumed by a zone. The global zone administrator is responsible for space restriction. Even a small uniprocessor system can support a number of zones running simultaneously. The characteristics of the packages installed in the global zone affect the space requirements of the non-global zones that are created. The number of packages and space requirements are factors.
For complete planning requirements and recommendations, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
This chapter discusses the advantages of creating RAID-1 volumes (mirrors) for the root (/) file system. This chapter also describes the Solaris Volume Manager components that are required to create mirrors for file systems. This chapter describes the following topics.
For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:
For Solaris Live Upgrade: General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning
For JumpStart:
During the installation or upgrade, you can create RAID-1 volumes to duplicate your system data over multiple physical disks. By duplicating your data over separate disks, you can protect your data from disk corruption or a disk failure.
The Solaris custom JumpStart and Solaris Live Upgrade installation methods use the Solaris Volume Manager technology to create RAID-1 volumes that mirror a file system. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. The custom JumpStart and Solaris Live Upgrade installation methods enable a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system. You can create RAID-1 volumes during your installation or upgrade, eliminating the need to create them after the installation.
For guidelines, see Custom JumpStart and Solaris Live Upgrade Guidelines.
For detailed information about complex Solaris Volume Manager software and components, see Solaris Volume Manager Administration Guide.
Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. A volume is a name for a group of physical slices that appear to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.
A volume is functionally identical to a physical disk in the view of an application or a file system (such as UFS). Solaris Volume Manager converts I/O requests that are directed at a volume into I/O requests to the underlying member disks. Solaris Volume Manager volumes are built from slices (disk partitions) or from other Solaris Volume Manager volumes.
You use volumes to increase performance and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, they are transparent to end users, applications, and file systems. Like physical devices, you can use Solaris Volume Manager software to access volumes through block or raw device names. The volume name changes, depending on whether the block or raw device is used. The custom JumpStart installation method and Solaris Live Upgrade support the use of block devices to create mirrored file systems. See RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade for details about volume names.
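For example, a volume is reached through parallel block and raw device paths, just as a physical slice is. The volume name d30 here is only an illustration:

```
/dev/md/dsk/d30      block device for volume d30
/dev/md/rdsk/d30     raw (character) device for the same volume
```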
When you create RAID-1 volumes (mirrors) with RAID-0 volumes (single-slice concatenations), Solaris Volume Manager duplicates data on the RAID-0 submirrors and treats the submirrors as one volume.
Figure 9–1 shows a mirror that duplicates the root (/) file system over two physical disks. The system has the following configuration.
The mirror that is named d30 consists of the submirrors that are named d31 and d32. The mirror, d30, duplicates the data in the root (/) file system on both submirrors.
The root (/) file system on hdisk0 is included in the single-slice concatenation that is named d31.
The root (/) file system is copied to the hard disk named hdisk1. This copy is the single-slice concatenation that is named d32.
The custom JumpStart installation method and Solaris Live Upgrade enable you to create the following components that are required to replicate data.
State database and state database replicas (metadbs)
RAID-1 volumes (mirrors) with single-slice concatenations (submirrors)
This section briefly describes each of these components. For complete information about these components, see Solaris Volume Manager Administration Guide.
The state database is a database that stores information on a physical disk. The state database records and tracks changes that are made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.
The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Having copies of the state database protects against data loss from single points of failure. The state database tracks the location and status of all known state database replicas.
Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.
The state database replicas ensure that the data in the state database is always valid. When the state database is updated, each state database replica is also updated. The updates occur one at a time to protect against corruption of all updates if the system crashes.
If your system loses a state database replica, Solaris Volume Manager must identify which state database replicas still contain valid data. Solaris Volume Manager determines this information by using a majority consensus algorithm. This algorithm requires that a majority (half + 1) of the state database replicas be available and in agreement before any of them are considered valid. Because of this majority consensus algorithm, you must create at least three state database replicas when you set up your disk configuration. A consensus can be reached if at least two of the three state database replicas are available.
Each state database replica occupies 4 Mbytes (8192 disk sectors) of disk storage by default. Replicas can be stored on the following devices:
A dedicated local disk slice
Solaris Live Upgrade only:
A local slice that will be part of a volume
A local slice that will be part of a UFS logging device
Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices that contain existing file systems or data. After the replicas have been stored, volumes or file systems can be placed on the same slice.
You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point of failure by placing state database replicas on a single slice.
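As a sketch, the metadb command creates and lists replicas. The slice name c0t0d0s7 is a placeholder for a slice you have set aside for replicas:

```
# metadb -a -f -c 3 c0t0d0s7
# metadb
```

Here -a adds replicas, -f forces creation of the initial state database, and -c 3 places three replicas on the slice; running metadb with no options lists the status of all replicas.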
| Description | For More Information |
|---|---|
| When using custom JumpStart or Solaris Live Upgrade to install RAID-1 volumes, review these guidelines and requirements. | |
| Obtain more detailed information about the state database and state database replicas. | |
A RAID-1 volume, or mirror, is a volume that maintains identical copies of the data in RAID-0 volumes (single-slice concatenations). After you configure a RAID-1 volume, the volume can be used just as if it were a physical slice. You can duplicate any file system, including existing file systems. You can also use a RAID-1 volume for any application, such as a database.
Using RAID-1 volumes to mirror file systems has advantages and disadvantages:
With RAID-1 volumes, data can be read from both RAID-0 volumes simultaneously (either volume can service any request), providing improved performance. If one physical disk fails, you can continue to use the mirror with no loss in performance or loss of data.
Using RAID-1 volumes requires an investment in disks. You need at least twice as much disk space as the amount of data.
Because Solaris Volume Manager software must write to all RAID-0 volumes, duplicating the data can also increase the time that is required for write requests to be written to disk.
| Description | For More Information |
|---|---|
| Planning for RAID-1 volumes | |
| Detailed information about RAID-1 volumes | |
A RAID-0 volume is a single-slice concatenation. The concatenation is a volume whose data is organized serially and adjacently across components, forming one logical storage unit. The custom JumpStart installation method and Solaris Live Upgrade do not enable you to create stripes or other complex Solaris Volume Manager volumes.
During the installation or upgrade, you can create RAID-1 volumes (mirrors) and attach RAID-0 volumes to these mirrors. The RAID-0 volumes that are mirrored are called submirrors. A mirror is made of one or more RAID-0 volumes. After the installation, you can manage the data on separate RAID-0 submirror volumes by administering the RAID-1 mirror volume through the Solaris Volume Manager software.
The custom JumpStart installation method enables you to create a mirror that consists of up to two submirrors. Solaris Live Upgrade enables you to create a mirror that consists of up to three submirrors. Practically, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.
| Description | For More Information |
|---|---|
| Planning for RAID-0 volumes | |
| Detailed information about RAID-0 volumes | |
The following figure shows a RAID-1 volume that duplicates the root file system (/) over two physical disks. State database replicas (metadbs) are placed on both disks.
Figure 9–2 shows a system with the following configuration.
The mirror that is named d30 consists of the submirrors that are named d31 and d32. The mirror, d30, duplicates the data in the root (/) file system on both submirrors.
The root (/) file system on hdisk0 is included in the single-slice concatenation that is named d31.
The root (/) file system is copied to the hard disk named hdisk1. This copy is the single-slice concatenation that is named d32.
State database replicas are created on both slices: hdisk0 and hdisk1.
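A custom JumpStart profile fragment that requests a configuration like the one in Figure 9–2 might look like the following. The disk, slice, and volume names are illustrative:

```
filesys mirror:d30 c0t0d0s0 c0t1d0s0 4096 /
metadb c0t0d0s7 size 8192 count 3
metadb c0t1d0s7 size 8192 count 3
```

The filesys mirror keyword creates the d30 mirror for the root (/) file system from the two listed slices, and each metadb keyword creates three state database replicas of 8192 blocks on the named slice.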
| Description | For More Information |
|---|---|
| JumpStart profile example | Profile Examples in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations |
| Solaris Live Upgrade step-by-step procedures | |
This chapter describes the requirements and guidelines that are necessary to create RAID-1 volumes with the custom JumpStart or Solaris Live Upgrade installation methods.
This chapter describes the following topics.
For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:
For Solaris Live Upgrade: General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning
For JumpStart:
To create RAID-1 volumes to duplicate data on specific slices, the disks that you plan to use must be directly attached and available to the system during the installation.
You should distribute state database replicas across slices, drives, and controllers to avoid single points of failure. You want a majority of replicas to survive a single component failure. If you lose a replica, for example when a device fails, the failure might cause problems with running Solaris Volume Manager software or with rebooting the system. Solaris Volume Manager software requires at least half of the replicas to be available to run, but a majority (half plus one) to reboot into multiuser mode.
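The two availability thresholds described above (at least half to keep running, half plus one to reboot into multiuser mode) can be sketched as simple checks. This is illustrative only, not Solaris Volume Manager code:

```python
def can_continue_running(available: int, total: int) -> bool:
    """Solaris Volume Manager keeps running if at least half
    of the state database replicas are available."""
    return 2 * available >= total

def can_reboot_multiuser(available: int, total: int) -> bool:
    """Rebooting into multiuser mode requires a majority
    (half plus one) of the replicas."""
    return 2 * available > total

# With three replicas, losing one still permits both operations;
# losing two prevents both.
print(can_continue_running(2, 3), can_reboot_multiuser(2, 3))  # True True
print(can_continue_running(1, 3), can_reboot_multiuser(1, 3))  # False False
```

This is why at least three replicas are recommended: with only two, losing one leaves exactly half, enough to keep running but not enough to reboot into multiuser mode.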
For detailed instructions about creating and administering state database replicas, see Solaris Volume Manager Administration Guide.
Before selecting slices for state database replicas, consider the following guidelines and recommendations.
Choose a dedicated slice – Create state database replicas on a dedicated slice of at least 4 MB per replica. If necessary, you can create state database replicas on a slice that is to be used as part of a RAID-0 or RAID-1 volume, but you must create the replicas before you add the slice to the volume.

Resize a slice – By default, the size of a state database replica is 4 MB, or 8192 disk blocks. Because your disk slices might not be that small, you can resize a slice to hold the state database replica. For information about resizing a slice, see Chapter 11, Administering Disks (Tasks), in System Administration Guide: Devices and File Systems.

Choose a slice that is not in use – You can create state database replicas on slices that are not in use. The part of a slice that is reserved for a state database replica should not be used for any other purpose. You cannot create state database replicas on existing file systems, or on the root (/), /usr, and swap file systems. If necessary, you can create a new slice (provided a slice name is available) by allocating space from swap, and then put state database replicas on that new slice.

Choose a slice that becomes a volume – When a state database replica is placed on a slice that becomes part of a volume, the capacity of the volume is reduced by the space that is occupied by the replica or replicas. The space that is used by a replica is rounded up to the next cylinder boundary, and this space is skipped by the volume.
Before choosing the number of state database replicas, consider the following guidelines.
A minimum of three state database replicas is recommended, up to a maximum of 50 replicas per Solaris Volume Manager disk set. The following guidelines are recommended:
For a system with only a single drive: put all three replicas in one slice.
For a system with two to four drives: put two replicas on each drive.
For a system with five or more drives: put one replica on each drive.
Additional state database replicas can improve the mirror's performance. Generally, you need to add two replicas for each mirror you add to the system.
If you have a RAID-1 volume that is to be used for small-sized random I/O (for example, for a database), consider your number of replicas. For best performance, ensure that you have at least two extra replicas per RAID-1 volume on slices (and preferably on disks and controllers) that are unconnected to the RAID-1 volume.
If multiple controllers exist, replicas should be distributed as evenly as possible across all controllers. This strategy provides redundancy if a controller fails and also helps balance the load. If multiple disks exist on a controller, at least two of the disks on each controller should store a replica.
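The guidelines above can be sketched with the metadb command. This is a minimal example, not a prescribed procedure; the disk and slice names are hypothetical, so substitute the slices you have reserved for replicas on your own system:

```shell
# Single-drive system: place all three replicas in one dedicated slice.
# -a adds replicas, -f forces creation of the initial state database,
# -c sets the number of replicas to create in each slice.
metadb -a -f -c 3 c0t0d0s7

# Two-drive system: two replicas on each drive.
metadb -a -f -c 2 c0t0d0s7 c0t1d0s7

# Verify the resulting replica layout.
metadb -i
```

For the full set of metadb options and procedures, see the Solaris Volume Manager Administration Guide.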
When you are working with RAID-1 volumes (mirrors) and RAID-0 volumes (single-slice concatenations), consider the following guidelines.
The custom JumpStart installation method and Solaris Live Upgrade support a subset of the features that are available in the Solaris Volume Manager software. When you create mirrored file systems with these installation programs, consider the following guidelines.
Custom JumpStart and Solaris Live Upgrade – Unsupported: In Solaris Volume Manager, a RAID-0 volume can refer to disk stripes or disk concatenations. You cannot create RAID-0 stripe volumes during the installation or upgrade.

Solaris Live Upgrade – Supported: For examples, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning. Unsupported: More than three RAID-0 volumes are not supported.
Creating and Installing a Solaris Flash Archive With RAID-1 Volumes

You can create a Solaris Flash archive from a master system that has Solaris Volume Manager RAID-1 volumes configured. The Solaris Flash creation software removes all RAID-1 volume information from the archive to keep the integrity of the clone system. With custom JumpStart, you can rebuild the RAID-1 volumes by using a JumpStart profile. With Solaris Live Upgrade, you create a boot environment with RAID-1 volumes configured and install the archive. The Solaris installation program cannot be used to install RAID-1 volumes with a Solaris Flash archive. For examples of RAID-1 volumes in JumpStart profiles, see Profile Examples in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

Veritas VxVM stores configuration information in areas that are not available to Solaris Flash. If Veritas VxVM file systems have been configured, you should not create a Solaris Flash archive. Also, the Solaris installation programs, including JumpStart and Solaris Live Upgrade, do not support rebuilding VxVM volumes at installation time. Therefore, if you plan to deploy Veritas VxVM software by using a Solaris Flash archive, you must create the archive before configuring the VxVM file systems. The clone systems must then be configured individually after the archive has been applied and the system rebooted.
Observe the following rules when assigning names for volumes.
Use a naming method that maps the slice number and disk number to volume numbers.
Volume names must begin with the letter d followed by a number, for example, d0.
Solaris Volume Manager has 128 default volume names from 0–127. The following list shows some example volume names.
Device /dev/md/dsk/d0 – block volume d0
Device /dev/md/dsk/d1 – block volume d1
Use ranges for each particular type of volume. For example, assign numbers 0–20 for RAID-1 volumes, and 21–40 for RAID-0 volumes.
When you use Solaris Live Upgrade to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names, or you can assign the names yourself. If you enable the software to detect the names, the software assigns the first mirror or submirror name that is available. If you assign mirror names, assign names ending in zero so that the installation can use names ending in 1 and 2 for the submirrors. If you assign submirror names, assign names ending in 1 or 2. If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), Solaris Live Upgrade fails to create the mirror if the mirror name duplicates a submirror's name.
In previous releases, an abbreviated volume name could be entered. Starting with the Solaris 10 10/08 release, only the full volume name can be entered. For example, only the full volume name, such as /dev/md/dsk/d10, can be used to specify a mirror.
In this example, Solaris Live Upgrade assigns the volume names. The RAID-1 volumes d0 and d1 are the only volumes in use. For the mirror d10, Solaris Live Upgrade chooses d2 for the submirror for the device c0t0d0s0 and d3 for the submirror for the device c1t0d0s0.
lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0:attach -m /:/dev/dsk/c1t0d0s0:attach
In this example, the volume names are assigned in the command. For the mirror d10, d11 is the name for the submirror for the device c0t0d0s0 and d12 is the name for the submirror for the device c1t0d0s0.
lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d11:attach -m /:/dev/dsk/c1t0d0s0,/dev/md/dsk/d12:attach
For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.
When you use the custom JumpStart installation method to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names to mirrors, or you can assign the names in the profile.
If you enable the software to detect the names, the software assigns the first volume number that is available.
If you assign names in the profile, assign mirror names ending in zero so that the installation can use the names ending in 1 and 2 for submirrors.
If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), JumpStart fails to create the mirror if the mirror name duplicates a submirror's name.
You can abbreviate the names of physical disk slices and Solaris Volume Manager volumes. The abbreviation is the shortest name that uniquely identifies a device. Examples follow.
A Solaris Volume Manager volume can be identified by its dnum designation, so that, for example, /dev/md/dsk/d10 becomes simply d10.
If a system has a single controller and multiple disks, you might use t0d0s0, but with multiple controllers use c0t0d0s0.
In the following profile example, the mirror is assigned the first volume numbers that are available. If the next available mirror ending in zero is d10, then the names d11 and d12 are assigned to the submirrors.
filesys mirror c0t0d0s1 /
In the following profile example, the mirror number is assigned in the profile as d30. The submirror names are assigned by the software, based on the mirror number and the first available submirrors. The submirrors are named d31 and d32.
filesys mirror:d30 c0t1d0s0 c0t0d0s0 /
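Putting the pieces together, a custom JumpStart profile that mirrors both root (/) and swap might look like the following sketch. The disk names, volume numbers, and replica slice are hypothetical examples, not required values; the metadb keyword creates the state database replicas that the mirrors depend on:

```
install_type    initial_install
partitioning    explicit
# Mirror root (/) as d10; the software names the submirrors d11 and d12.
filesys         mirror:d10 c0t0d0s0 c0t1d0s0 /
# Mirror swap as d20; the submirrors are named d21 and d22.
filesys         mirror:d20 c0t0d0s1 c0t1d0s1 swap
# Create three state database replicas of 8192 blocks each on a dedicated slice.
metadb          c0t0d0s7 size 8192 count 3
```

Because the mirror numbers end in zero, the installation is free to assign the submirror names that end in 1 and 2, as described above.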
For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.
When you choose the disks and controllers that you want to use to mirror a file system, consider the following guidelines.
Use components that are on different controllers to increase the number of simultaneous reads and writes that can be performed.
Keep the slices of different submirrors on different disks and controllers. Data protection is diminished considerably if slices of two or more submirrors of the same mirror are on the same disk.
Organize submirrors across separate controllers, because controllers and associated cables tend to fail more often than disks. This practice also improves mirror performance.
Use the same type of disks and controllers in a single mirror. Particularly in old SCSI storage devices, different models or brands of disk or controller can have widely varying performance. Mixing the different performance levels in a single mirror can cause performance to degrade significantly.
When you choose the slices that you want to use to mirror a file system, consider the following guidelines.
Any file system, including root (/), swap, and /usr, can use a mirror. Any application, such as a database, also can use a mirror.
Make sure that your submirror slices are of equal size. Submirrors of different sizes result in unused disk space.
If you have a mirrored file system in which the first submirror attached does not start on cylinder 0, all additional submirrors you attach must also not start on cylinder 0. If you attempt to attach a submirror starting on cylinder 0 to a mirror in which the original submirror does not start on cylinder 0, the following error message is displayed:
can't attach labeled submirror to an unlabeled mirror
You must ensure that all submirrors you plan to attach to a mirror either all start on cylinder 0, or that none of them start on cylinder 0.
Starting cylinders do not have to be identical across all submirrors, but all submirrors must either include or not include cylinder 0.
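One way to check where a slice starts before attaching submirrors is to inspect the disk's partition map. This is a sketch with a hypothetical disk name; interpret the output against your disk's geometry:

```shell
# Print the partition map; the "First Sector" column shows where each
# slice starts. A slice whose first sector is 0 starts on cylinder 0.
prtvtoc /dev/rdsk/c0t0d0s2
```

All submirrors of a given mirror must agree: either every one includes cylinder 0, or none does.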
If a system with mirrors for root (/), /usr, and swap is booted into single-user mode, the system indicates that these mirrors are in need of maintenance. When you view these mirrors with the metastat command, these mirrors, and possibly all mirrors on the system, appear in the “Needing Maintenance” state.
Though this situation appears to be potentially dangerous, do not be concerned. The metasync -r command, which normally runs during boot to resynchronize mirrors, is interrupted when the system is booted into single-user mode. After the system is rebooted, the metasync -r command runs and resynchronizes all mirrors.
If this interruption is a concern, run the metasync -r command manually.
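If you do run the resynchronization by hand, a minimal session might look like the following sketch; the mirror name d10 is a hypothetical example:

```shell
# Check mirror state after a single-user boot; submirrors may report
# "Needs maintenance" until they are resynchronized.
metastat d10

# Resynchronize all mirrors, as the boot process normally would.
metasync -r
```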
For more information about the metasync command, see the metasync(1M) man page and the Solaris Volume Manager Administration Guide.