Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade

Part II Understanding Installations That Relate to ZFS, Booting, Solaris Zones, and RAID-1 Volumes

This part provides an overview of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements are also included.

Chapter 6 ZFS Root File System Installation (Planning)

This chapter provides system requirements and limitations to assist you when you install a ZFS root pool. Also provided is an overview of the installation programs that can install a ZFS root pool.

If you have multiple boot environments on your system, see Chapter 7, SPARC and x86 Based Booting (Overview and Planning), for information on booting.

What's New in the Solaris 10 10/09 Release

Starting with the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool.

A Flash archive can be created on a system that is running a UFS root file system or a ZFS root file system. A Flash archive of a ZFS root pool contains the entire pool hierarchy except the swap and dump volumes and any excluded datasets. The swap and dump volumes are created when the Flash archive is installed.

You can use the Flash archive installation method as follows:

For detailed instructions and limitations, see Installing a ZFS Root File System (Flash Archive Installation) in Solaris ZFS Administration Guide.

Requirements for Installing a ZFS Root Pool

Table 6–1 System Requirements and Limitations

Requirement or Limitation 

Description 

Information 

Memory

768 MB is the minimum memory. 1 GB is recommended for overall performance. 

Solaris ZFS Administration Guide.

Disk space 

The minimum amount of available pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of boot environments to be created. 

For an explanation, see Disk Space Requirements for a ZFS Installation.

The ZFS storage pool must be created with slices rather than whole disks to be upgradeable and bootable. 

  • The pool created with slices can be mirrored, but not configured as a RAID-Z or non-redundant configuration of multiple disks. The SVM device information must already be available in the /dev/md/[r]dsk directory.

  • The pool must have an SMI label. An EFI-labeled disk cannot be booted.

  • x86 only: The ZFS pool must be in a slice with an fdisk partition.

When you migrate from a UFS root (/) file system to a ZFS root pool with Solaris Live Upgrade, consider these requirements.

  • Migrating from a UFS file system to a ZFS root pool with Solaris Live Upgrade or creating a new boot environment in a root pool is new starting with the Solaris 10 10/08 release. This release contains the software needed to use Solaris Live Upgrade with ZFS. You must have at least this release installed to use ZFS with Solaris Live Upgrade.

  • Migration is possible only from a UFS file system to a ZFS file system.

    • File systems other than a UFS file system cannot be migrated to a ZFS root pool.

    • A UFS file system cannot be created from a ZFS root pool.

  • Before migrating, a ZFS storage pool must exist.
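For example, a minimal migration sequence might look like the following sketch; the slice, pool, and boot environment names (c1t0d0s0, rpool, ufsBE, zfsBE) are placeholders, not values from this guide:

zpool create rpool c1t0d0s0
lucreate -c ufsBE -n zfsBE -p rpool
luactivate zfsBE
init 6

The -p option identifies the ZFS root pool that receives the new boot environment. After activation, restart the system with init 6 rather than the reboot command, as recommended for Solaris Live Upgrade.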

Disk Space Requirements for a ZFS Installation

Normally, on a system with a UFS root file system, swap and dump are on the same slice, so UFS shares the swap space with the dump device. In a ZFS root pool, swap and dump are separate ZFS volumes (zvols), so they do not share the same physical space. When a system is installed or upgraded with a ZFS root file system, the sizes of the swap area and the dump device depend on the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of boot environments to be created. Approximately 1 Gbyte of memory and at least 2 Gbytes of disk space are recommended. The space is consumed as follows:
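On an installed system, you can inspect how this space is allocated by listing the root pool datasets and the swap and dump volume sizes; the pool name rpool is an assumption:

zfs list -r rpool
zfs get volsize rpool/swap rpool/dump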

Solaris Installation Programs for Installing ZFS Root Pools

The following installation programs perform an initial installation of a ZFS root pool.

Solaris Live Upgrade can migrate a UFS file system to a ZFS root pool. Also, Solaris Live Upgrade can create ZFS boot environments that can be upgraded.

Table 6–2 ZFS Installation Programs and Limitations

ZFS Installation Program 

Description 

Limitations 

Information 

Solaris Installation program text installer 

The Solaris text installer performs an initial installation for a ZFS root pool. During the installation, you can choose to install either a UFS file system or a ZFS root pool. You can set up a mirrored ZFS root pool by selecting two or more slices during the installation. Or, you can attach or add additional disks after the installation to create a mirrored ZFS root pool. Swap and dump devices on ZFS volumes are automatically created in the ZFS root pool. 

  • The installation GUI is not available to install a ZFS root pool.

  • You cannot use the standard upgrade program to upgrade. You must use Solaris Live Upgrade to upgrade a ZFS root pool.

Chapter 3, Installing With the Solaris Interactive Text Installer for ZFS Root Pools (Planning and Tasks), in Solaris 10 10/09 Installation Guide: Basic Installations

Solaris Live Upgrade 

You can use the Solaris Live Upgrade feature to perform the following tasks:

  • Migrate a UFS root (/) file system to a ZFS root pool

  • Create a new boot environment in the following ways:

    • Within an existing ZFS root pool

    • Within another ZFS root pool

    • From a source other than the currently running system

    • On a system with non-global zones installed

After you have used the lucreate command to create a ZFS boot environment, you can use the other Solaris Live Upgrade commands on the boot environment.

A storage pool must be created before you use the lucreate command.

Chapter 11, Solaris Live Upgrade and ZFS (Overview), in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

JumpStart 

Starting with the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool. See What's New in the Solaris 10 10/09 Release.

You can create a profile that creates a ZFS storage pool and designates a bootable ZFS file system. New ZFS keywords provide an initial installation; a sketch of such a profile follows the limitations below. 

  • You cannot use the install_type upgrade keyword to upgrade a ZFS root pool. Nor can you use the Solaris Flash keywords.

  • Some keywords that are allowed in a UFS specific profile are not allowed in a ZFS specific profile.
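A minimal ZFS JumpStart profile that uses these keywords might look like the following sketch; the pool name, disk slices, and boot environment name are placeholders, and the three auto values let the installation program size the pool, swap volume, and dump volume automatically:

install_type initial_install
pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename s10zfsBE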

Chapter 7 SPARC and x86 Based Booting (Overview and Planning)

Starting with the Solaris 10 10/08 release, changes in the Solaris boot architecture provide many new features, including booting from different file system types, such as ZFS file systems. This chapter describes some of these changes and provides references to more information about booting. Also, this chapter provides an overview of GRUB based booting for x86 systems.

This chapter contains the following sections:

Booting for Solaris (Overview)

Starting with the Solaris 10 10/08 release, the Solaris SPARC bootstrap process has been redesigned to increase commonality with the Solaris x86 boot architecture. The improved Solaris boot architecture brings direct boot, ramdisk-based booting, and the ramdisk miniroot to the SPARC platform. These enabling technologies support the following functions:

Additional improvements include significantly faster boot times, increased flexibility, and reduced maintenance requirements.

As part of this architecture redesign, the Solaris boot archives and the bootadm command, previously only available on the Solaris x86 platform, are now an integral part of the Solaris SPARC boot architecture.

Although the implementation of the Solaris SPARC boot has changed, no administrative procedures for booting a SPARC-based system have been impacted. Solaris installations have changed to include installing from a ZFS file system, but otherwise have not changed for the new boot architecture.

Booting ZFS Boot Environments (Overview)

If your system has more than one OS installed or more than one root boot environment in a ZFS root pool, you can boot from these boot environments on both SPARC and x86 platforms. The boot environments available for booting include boot environments created by Solaris Live Upgrade.

On both SPARC and x86 based systems, each ZFS root pool has a dataset designated as the default root file system. On SPARC, if you type the boot command, or on x86, if you accept the default entry from the GRUB menu, this default root file system is booted.
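For example, on a SPARC based system you can list the available ZFS boot environments and boot a specific one from the ok prompt; the pool and boot environment names are placeholders:

ok boot -L
ok boot -Z rpool/ROOT/zfsBE2

On an x86 based system, you select the corresponding entry from the GRUB menu instead.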

Table 7–1 Where to Find Information on Booting

Description 

Information 

For a high-level overview of booting features 

Chapter 8, Introduction to Shutting Down and Booting a System, in System Administration Guide: Basic Administration

For more detailed overview of booting features 

Chapter 9, Shutting Down and Booting a System (Overview), in System Administration Guide: Basic Administration

x86: For information about modifying boot behavior such as editing the menu.lst file and locating the menu.lst file

Modifying Solaris Boot Behavior on x86 Based Systems (Task Map) in System Administration Guide: Basic Administration

For procedures for booting a ZFS file system 

Chapter 12, Booting a Solaris System (Tasks), in System Administration Guide: Basic Administration

For procedures for managing a boot archive, such as locating the GRUB menu.lst file and using the bootadm command

Chapter 14, Managing the Solaris Boot Archives (Tasks), in System Administration Guide: Basic Administration

x86: GRUB Based Booting (Overview)

GRUB, the open source boot loader, is the default boot loader in the Solaris OS.

The boot loader is the first software program that runs after you power on a system. After you power on an x86 based system, the Basic Input/Output System (BIOS) initializes the CPU, the memory, and the platform hardware. When the initialization phase has completed, the BIOS loads the boot loader from the configured boot device, and then transfers control of the system to the boot loader.

GRUB is an open source boot loader with a simple menu interface that includes boot options that are predefined in a configuration file. GRUB also has a command-line interface that is accessible from the menu interface for performing various boot commands. In the Solaris OS, the GRUB implementation is compliant with the Multiboot Specification. The specification is described in detail at http://www.gnu.org/software/grub/grub.html.

Because the Solaris kernel is fully compliant with the Multiboot Specification, you can boot a Solaris x86 based system by using GRUB. With GRUB, you can more easily boot and install various operating systems.

A key benefit of GRUB is that it is intuitive about file systems and kernel executable formats, which enables you to load an operating system without recording the physical position of the kernel on the disk. With GRUB based booting, the kernel is loaded by specifying its file name and the drive and partition where the kernel resides. GRUB based booting replaces the Solaris Device Configuration Assistant and simplifies the booting process with a GRUB menu.

x86: GRUB Based Booting (Planning)

This section describes the basics of GRUB based booting and describes the GRUB menu.

When you install the Solaris OS, two GRUB menu entries are installed on the system by default. The first entry is the Solaris OS entry. The second entry is the failsafe boot archive, which is to be used for system recovery. The Solaris GRUB menu entries are installed and updated automatically as part of the Solaris software installation and upgrade process. These entries are directly managed by the OS and should not be manually edited.

During a standard Solaris OS installation, GRUB is installed on the Solaris fdisk partition without modifying the system BIOS setting. If the OS is not on the BIOS boot disk, you need to do one of the following:

The preferred method is to install the Solaris OS on the boot disk. If multiple operating systems are installed on the machine, you can add entries to the menu.lst file. These entries are then displayed in the GRUB menu the next time you boot the system.
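For example, a manually added menu.lst entry for a second Solaris installation might look like the following sketch; the title, disk device, and slice are placeholders, and depending on the release the generated entries might instead use the findroot and kernel$ directives:

title Second Solaris 10 installation
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive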

For additional information on multiple operating systems, see How Multiple Operating Systems Are Supported by GRUB in System Administration Guide: Basic Administration.

x86: Performing a GRUB Based Installation From the Network

Performing a GRUB based network boot requires a DHCP server that is configured for PXE clients and an install server that provides tftp service. The DHCP server must be able to respond to the DHCP classes, PXEClient and GRUBClient. The DHCP response must contain the following information:


Note –

rpc.bootparamd, which is usually a requirement on the server side for performing a network boot, is not required for a GRUB based network boot.


If no PXE or DHCP server is available, you can load GRUB from CD-ROM or local disk. You can then manually configure the network in GRUB and download the multiboot program and the boot archive from the file server.

For more information, see Overview of Booting and Installing Over the Network With PXE in Solaris 10 10/09 Installation Guide: Network-Based Installations.

Chapter 8 Upgrading When Solaris Zones Are Installed on a System (Planning)

This chapter provides an overview of how Solaris Zones partitioning technology relates to upgrading the Solaris OS when non-global zones are configured.

This chapter contains the following sections:

Solaris Zones (Overview)

The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system. This isolation prevents processes that are running in one non-global zone from monitoring or affecting processes that are running in other non-global zones. Even a process running with superuser credentials cannot view or affect activity in other zones. A non-global zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.

Every Solaris system contains a global zone. The global zone has a dual function. The global zone is both the default zone for the system and the zone used for system-wide administrative control. All processes run in the global zone if no non-global zones are created by the global administrator. The global zone is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled. Only the global zone is bootable from the system hardware. Administration of the system infrastructure, such as physical devices, routing, or dynamic reconfiguration (DR), is only possible in the global zone. Appropriately privileged processes running in the global zone can access objects associated with the non-global zones.

Description 

For More Information 

The following sections describe how you can upgrade a system that contains non-global zones. 

Upgrading With Non-Global Zones

For complete information on creating and configuring non-global zones 

Chapter 16, Introduction to Solaris Zones, in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones

Upgrading With Non-Global Zones

After the Solaris OS is installed, you can install and configure non-global zones. You can upgrade the Solaris OS when non-global zones are installed. If you have branded non-global zones installed, they are ignored during the upgrade process. Installation programs that can accommodate systems that have non-global zones installed are summarized below.


Note –

Starting with the Solaris 10 10/09 release, zones parallel patching enhances the standard Solaris 10 patch utilities. This feature improves zones patching performance by patching non-global zones in parallel.

The global zone is still patched before the non-global zones are patched.

For releases prior to the Solaris 10 10/09 release, this feature is delivered in the following patch utilities patches:

For more information, see the following documentation:


Table 8–1 Choosing an Installation Program to Upgrade With Non-Global Zones

Upgrade Program 

Description 

For More Information 

Solaris Live Upgrade 

You can upgrade or patch a system that contains non-global zones. If you have a system that contains non-global zones, Solaris Live Upgrade is the recommended program for upgrading or for adding patches. Other upgrade programs might require extensive upgrade time, because the time required to complete the upgrade increases linearly with the number of installed non-global zones. If you are patching a system with Solaris Live Upgrade, you do not have to take the system to single-user mode, and you can maximize your system's uptime. Starting with the Solaris 10 8/07 release, changes to accommodate systems that have non-global zones installed are the following:

  • A new package, SUNWlucfg, is required to be installed with the other Solaris Live Upgrade packages, SUNWlur and SUNWluu.

  • Creating a new boot environment from the currently running boot environment remains the same with one exception. You can specify a destination slice for a shared file system within a non-global zone. This exception occurs under the following circumstances:

    • If the zonecfg add fs command was used on the current boot environment to create a separate file system for a non-global zone

    • If this separate file system resides on a shared file system, such as /zone/root/export

    To prevent this separate file system from being shared in the new boot environment, the lucreate command has changed to enable specifying a destination slice for a separate file system for a non-global zone. The argument to the -m option has a new optional field, zonename. This new field places the non-global zone's separate file system on a separate slice in the new boot environment. For more information on setting up a non-global zone with a separate file system, see zonecfg(1M).
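    The following sketch shows this usage; the boot environment name, devices, and zone name are placeholders, and it assumes that zone1 has a separate /export file system that was added with the zonecfg add fs command:

    lucreate -n newBE -m /:/dev/dsk/c0t1d0s0:ufs \
    -m /export:/dev/dsk/c0t1d0s3:ufs:zone1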



Note –

By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. The /export file system is an example of a shared file system. If you use the -m option with the zonename field, the non-global zone's shared file system is copied to a separate slice and data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.


Additional changes, starting with the Solaris 10 8/07 release, that accommodate systems with non-global zones installed include the following:

  • Comparing boot environments is enhanced. The lucompare command now generates a comparison of boot environments that includes the contents of any non-global zone.

  • The lumount command now provides non-global zones with access to their corresponding separate file systems that exist on inactive boot environments. When the global zone administrator uses the lumount command to mount an inactive boot environment, the boot environment is mounted for non-global zones as well.

  • Listing file systems with the lufslist command is enhanced to display a list of file systems for both the global zone and the non-global zones.

 

Solaris interactive installation program GUI 

You can upgrade or patch a system when non-global zones are installed. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. 

For more information about installing with this program, see Chapter 2, Installing With the Solaris Installation Program For UFS File Systems (Tasks), in Solaris 10 10/09 Installation Guide: Basic Installations.

Automated JumpStart installation 

You can upgrade or patch with any keyword that applies to an upgrade or patching. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. 

For more information about installing with this program, see Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations.

Limitations when upgrading with non-global zones are listed in the following table.

Table 8–2 Limitations When Upgrading With Non-Global Zones

Program or Condition 

Description 

For More Information 

Consider these issues when using Solaris Live Upgrade on a system with zones installed. It is critical to avoid zone state transitions during lucreate and lumount operations.

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is not running, then the zone cannot be booted until the lucreate operation has completed.

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is running, the zone should not be halted or rebooted until the lucreate operation has completed.

  • When an inactive boot environment is mounted with the lumount command, you cannot boot non-global zones or reboot them, although zones that were running before the lumount operation can continue to run.

  • Because a non-global zone can be controlled by a non-global zone administrator as well as by the global zone administrator, halt all zones during lucreate or lumount operations to prevent any interaction.

Problems can occur when the global zone administrator does not notify the non-global zone administrator of an upgrade with Solaris Live Upgrade. 

When Solaris Live Upgrade operations are underway, non-global zone administrator involvement is critical. The upgrade affects the work of the administrators, who will be addressing the changes that occur as a result of the upgrade. Zone administrators should ensure that any local packages are stable throughout the sequence, handle any post-upgrade tasks such as configuration file adjustments, and generally schedule around the system outage.  

For example, if a non-global zone administrator adds a package while the global zone administrator is copying the file systems with the lucreate command, the new package is not copied with the file systems and the non-global zone administrator is unaware of the problem.

 

Solaris Flash archives cannot be used with non-global zones. 

A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with Solaris Zones partitioning technology. If you create a Solaris Flash archive, the resulting archive is not installed properly when the archive is deployed under these conditions:

  • The archive is created in a non-global zone.

  • The archive is created in a global zone that has non-global zones installed.

For more information about using Solaris Flash archives, see Solaris 10 10/09 Installation Guide: Solaris Flash Archives (Creation and Installation).

Commands that use the -R option or an equivalent option must not be used in some situations.

Any command that accepts an alternate root (/) file system by using the -R option or equivalent must not be used if the following are true:

  • The command is run in the global zone.

  • The alternate root (/) file system refers to any path within a non-global zone.

An example is the -R root_path option to the pkgadd utility run from the global zone with a path to the root (/) file system in a non-global zone.
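For example (hypothetical zone, path, and package names), instead of running pkgadd from the global zone with -R pointing into the zone, run the command inside the zone itself:

# Avoid from the global zone: pkgadd -R /zones/zone1/root -d . SUNWexamplepkg
zlogin zone1 pkgadd -d /net/installserver/packages SUNWexamplepkg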

For a list of utilities that accept an alternate root (/) file system and more information about zones, see Restriction on Accessing A Non-Global Zone From the Global Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Backing Up Your System Before Performing an Upgrade With Zones

You should back up the global and non-global zones on your Solaris system before you perform the upgrade. For information about backing up a system with zones installed, see Chapter 26, Solaris Zones Administration (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Disk Space Requirements for Non-Global Zones

When installing the global zone, be sure to reserve enough disk space for all of the zones you might create. Each non-global zone might have unique disk space requirements.

No limits are placed on how much disk space can be consumed by a zone. The global zone administrator is responsible for space restriction. Even a small uniprocessor system can support a number of zones running simultaneously. The characteristics of the packages installed in the global zone affect the space requirements of the non-global zones that are created. The number of packages and space requirements are factors.

For complete planning requirements and recommendations, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Chapter 9 Creating RAID-1 Volumes (Mirrors) During Installation (Overview)

This chapter discusses the advantages of creating RAID-1 volumes (mirrors) for the root (/) file system. This chapter also describes the Solaris Volume Manager components that are required to create mirrors for file systems. This chapter describes the following topics.

For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:

Why Use RAID-1 Volumes?

During the installation or upgrade, you can create RAID-1 volumes to duplicate your system data over multiple physical disks. By duplicating your data over separate disks, you can protect your data from disk corruption or a disk failure.

The Solaris custom JumpStart and Solaris Live Upgrade installation methods use the Solaris Volume Manager technology to create RAID-1 volumes that mirror a file system. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. The custom JumpStart and Solaris Live Upgrade installation methods enable a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system. You can create RAID-1 volumes during your installation or upgrade, eliminating the need to create them after the installation.

How Do RAID-1 Volumes Work?

Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. A volume is a name for a group of physical slices that appear to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.

A volume is functionally identical to a physical disk in the view of an application or a file system (such as UFS). Solaris Volume Manager converts I/O requests that are directed at a volume into I/O requests to the underlying member disks. Solaris Volume Manager volumes are built from slices (disk partitions) or from other Solaris Volume Manager volumes.

You use volumes to increase storage capacity and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, they are transparent to end users, applications, and file systems. Like physical devices, volumes can be accessed through block or raw device names by using Solaris Volume Manager software. The volume name changes, depending on whether the block or raw device is used. The custom JumpStart installation method and Solaris Live Upgrade support the use of block devices to create mirrored file systems. See RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade for details about volume names.

When you create RAID-1 volumes (mirrors) with RAID-0 volumes (single-slice concatenations), Solaris Volume Manager duplicates data on the RAID-0 submirrors and treats the submirrors as one volume.
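Outside the installation programs, the same structure can be built manually with Solaris Volume Manager commands. The following sketch uses placeholder device and volume names and assumes that state database replicas already exist:

metainit d11 1 1 c0t0d0s0    # single-slice concatenation (first submirror)
metainit d12 1 1 c1t0d0s0    # single-slice concatenation on a second disk
metainit d10 -m d11          # create the RAID-1 mirror with one submirror
metattach d10 d12            # attach the second submirror; it is resynchronized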

Figure 9–1 shows a mirror that duplicates the root (/) file system over two physical disks.

Figure 9–1 Creating RAID-1 Volumes on the Root (/) File System on Two Disks


Figure 9–1 shows a system with the following configuration.

Overview of Solaris Volume Manager Components

The custom JumpStart installation method and Solaris Live Upgrade enable you to create the following components that are required to replicate data.

This section briefly describes each of these components. For complete information about these components, see Solaris Volume Manager Administration Guide.

State Database and State Database Replicas

The state database is a database that stores information on a physical disk. The state database records and tracks changes that are made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.

The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Having copies of the state database protects against data loss from single points of failure. The state database tracks the location and status of all known state database replicas.

Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.

The state database replicas ensure that the data in the state database is always valid. When the state database is updated, each state database replica is also updated. The updates occur one at a time to protect against corruption of all updates if the system crashes.

If your system loses a state database replica, Solaris Volume Manager must identify which state database replicas still contain valid data. Solaris Volume Manager determines this information by using a majority consensus algorithm. This algorithm requires that a majority (half + 1) of the state database replicas be available and in agreement before any of them are considered valid. Because of this majority consensus algorithm, you must create at least three state database replicas when you set up your disk configuration. A consensus can be reached if at least two of the three state database replicas are available.

Each state database replica occupies 4 Mbytes (8192 disk sectors) of disk storage by default. Replicas can be stored on the following devices:

Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices that contain existing file systems or data. After the replicas have been stored, volumes or file systems can be placed on the same slice.

You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point of failure by placing state database replicas on a single slice.
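For example, after installation you can create replicas on a dedicated slice with the metadb command; the slice name is a placeholder, and -c 3 creates three replicas to satisfy the majority consensus requirement:

metadb -a -f -c 3 c0t0d0s7
metadb -i

The -f option is needed only when the very first replicas are created, and metadb -i reports the status of all replicas.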

Description 

For More Information 

When using custom JumpStart or Solaris Live Upgrade to install RAID-1 volumes, review these guidelines and requirements. 

State Database Replicas Guidelines and Requirements

Obtain more detailed information about the state database and state database replicas. 

Solaris Volume Manager Administration Guide

RAID-1 Volumes (Mirrors)

A RAID-1 volume, or mirror, is a volume that maintains identical copies of the data in RAID-0 volumes (single-slice concatenations). After you configure a RAID-1 volume, the volume can be used just as if it were a physical slice. You can duplicate any file system, including existing file systems. You can also use a RAID-1 volume for any application, such as a database.

Using RAID-1 volumes to mirror file systems has advantages and disadvantages:

Description 

For More Information 

Planning for RAID-1 volumes 

RAID-1 and RAID-0 Volume Requirements and Guidelines

Detailed information about RAID-1 volumes 

Solaris Volume Manager Administration Guide

RAID-0 Volumes (Concatenations)

A RAID-0 volume is a single-slice concatenation. The concatenation is a volume whose data is organized serially and adjacently across components, forming one logical storage unit. The custom JumpStart installation method and Solaris Live Upgrade do not enable you to create stripes or other complex Solaris Volume Manager volumes.

During the installation or upgrade, you can create RAID-1 volumes (mirrors) and attach RAID-0 volumes to these mirrors. The RAID-0 volumes that are mirrored are called submirrors. A mirror is made of one or more RAID-0 volumes. After the installation, you can manage the data on separate RAID-0 submirror volumes by administering the RAID-1 mirror volume through the Solaris Volume Manager software.

The custom JumpStart installation method enables you to create a mirror that consists of up to two submirrors. Solaris Live Upgrade enables you to create a mirror that consists of up to three submirrors. Practically, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.
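For example (placeholder volume names, assuming a mirror d10 with submirrors d11, d12, and d13), an online backup could detach the third submirror, back up its data, and then reattach it:

metadetach d10 d13
# back up the data from /dev/md/rdsk/d13, for example with ufsdump
metattach d10 d13

When d13 is reattached, Solaris Volume Manager resynchronizes it from the remaining submirrors. A production procedure for a UFS file system would typically also write-lock the file system with lockfs before detaching the submirror.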

Description 

For More Information 

Planning for RAID–0 volumes 

RAID-1 and RAID-0 Volume Requirements and Guidelines

Detailed information about RAID-0 volumes 

Solaris Volume Manager Administration Guide

Example of RAID-1 Volume Disk Layout

The following figure shows a RAID-1 volume that duplicates the root file system (/) over two physical disks. State database replicas (metadbs) are placed on both disks.

Figure 9–2 RAID-1 Volume Disk Layout


Figure 9–2 shows a system with the following configuration.

Description 

For More Information 

JumpStart profile example 

Profile Examples in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations

Solaris Live Upgrade step-by-step procedures 

To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Chapter 10 Creating RAID-1 Volumes (Mirrors) During Installation (Planning)

This chapter describes the requirements and guidelines that are necessary to create RAID-1 volumes with the custom JumpStart or Solaris Live Upgrade installation methods.

This chapter describes the following topics.

For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:

System Requirement

To create RAID-1 volumes to duplicate data on specific slices, the disks that you plan to use must be directly attached and available to the system during the installation.

State Database Replicas Guidelines and Requirements

You should distribute state database replicas across slices, drives, and controllers to avoid single points of failure. You want a majority of replicas to survive a single component failure. If you lose a replica when a device fails, for example, the failure might cause problems with running Solaris Volume Manager software or with rebooting the system. Solaris Volume Manager software requires at least half of the replicas to be available to run, but a majority (half plus one) to reboot into multiuser mode.

For detailed instructions about creating and administering state database replicas, see Solaris Volume Manager Administration Guide.

Selecting Slices for State Database Replicas

Before selecting slices for state database replicas, consider the following guidelines and recommendations.

Task 

Description 

Choose a dedicated slice 

You should create state database replicas on a dedicated slice of at least 4 MB per replica. If necessary, you could create state database replicas on a slice that is to be used as part of a RAID-0 or RAID-1 volume. You must create the replicas before you add the slice to the volume. 

Resize a slice 

By default, the size of a state database replica is 4 MB or 8192 disk blocks. Because your disk slices might not be that small, you can resize a slice to hold the state database replica. For information about resizing a slice, see Chapter 11, Administering Disks (Tasks), in System Administration Guide: Devices and File Systems.

Choose a slice that is not in use 

You can create state database replicas on slices that are not in use. The part of a slice that is reserved for the state database replica should not be used for any other purpose.

 

You cannot create state database replicas on existing file systems, or on the root (/), /usr, and swap file systems. If necessary, you can create a new slice (provided a slice name is available) by allocating space from swap and then place state database replicas on that new slice.

Choosing a slice that becomes a volume 

When a state database replica is placed on a slice that becomes part of a volume, the capacity of the volume is reduced by the space that is occupied by the replica or replicas. The space that is used by a replica is rounded up to the next cylinder boundary and this space is skipped by the volume.  

Choosing the Number of State Database Replicas

Before choosing the number of state database replicas, consider the following guidelines.

Distributing State Database Replicas Across Controllers

If multiple controllers exist, replicas should be distributed as evenly as possible across all controllers. This strategy provides redundancy if a controller fails and also helps balance the load. If multiple disks exist on a controller, at least two of the disks on each controller should store a replica.

RAID-1 and RAID-0 Volume Requirements and Guidelines

When you are working with RAID-1 volumes (mirrors) and RAID-0 volumes (single-slice concatenations), consider the following guidelines.

Custom JumpStart and Solaris Live Upgrade Guidelines

The custom JumpStart installation method and Solaris Live Upgrade support a subset of the features that are available in the Solaris Volume Manager software. When you create mirrored file systems with these installation programs, consider the following guidelines.

Installation Program 

Supported Feature  

Unsupported Feature 

Custom JumpStart and Solaris Live Upgrade 

  • Supports RAID-0 and RAID-1 volumes, but does not support other Solaris Volume Manager components, such as RAID-5 volumes.

  • RAID-0 volume is supported, but only as a single-slice concatenation.

In Solaris Volume Manager, a RAID-0 volume can refer to disk stripes or disk concatenations. You cannot create RAID-0 stripe volumes during the installation or upgrade. 

Custom JumpStart 

  • Supports the creation of RAID-1 volumes during an initial installation only.

  • You can create up to two RAID-0 volumes (submirrors) for each RAID-1 volume. Two submirrors usually provide sufficient data redundancy for most applications, and the disk drive costs are lower.

  • Does not support an upgrade when RAID-1 volumes are configured.

  • More than two RAID-0 volumes are not supported.

Solaris Live Upgrade 

  • You can create up to three RAID-0 volumes (submirrors) for each RAID-1 volume. Three submirrors enable you to take a submirror offline and perform a backup while maintaining the two remaining submirrors for continued data redundancy.

  • Supports the creation of RAID-1 volumes during an upgrade.

For examples, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

More than three RAID-0 volumes are not supported. 

Creating and installing a Solaris Flash archive with RAID-1 volumes 

You can create a Solaris Flash archive from a master system that has Solaris Volume Manager RAID-1 volumes configured. The Solaris Flash creation software removes all RAID-1 volume information from the archive to keep the integrity of the clone system. With custom JumpStart, you can rebuild the RAID-1 volumes by using a JumpStart profile. With Solaris Live Upgrade, you create a boot environment with RAID-1 volumes configured and install the archive. The Solaris installation program cannot be used to install RAID-1 volumes with a Solaris Flash archive. 
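For example, you might create such an archive on the master system with a command like the following; the archive name and output path are placeholders, and -c compresses the archive:

flarcreate -n s10-raid1-master -c /export/archives/s10-raid1-master.flar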

For examples of RAID-1 volumes in JumpStart profiles, see Profile Examples in Solaris 10 10/09 Installation Guide: Custom JumpStart and Advanced Installations.

Veritas VxVM stores configuration information in areas that are not available to Solaris Flash. If Veritas VxVM file systems have been configured, you should not create a Solaris Flash archive. Also, the Solaris installation programs, including JumpStart and Solaris Live Upgrade, do not support rebuilding VxVM volumes at installation time. Therefore, if you plan to deploy Veritas VxVM software by using a Solaris Flash archive, the archive must be created before the VxVM file systems are configured. The clone systems must then be configured individually after the archive has been applied and the system rebooted. 

RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade

Observe the following rules when assigning names for volumes.

RAID Volume Naming Conventions for Solaris Live Upgrade

When you use Solaris Live Upgrade to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names, or you can assign the names yourself. If you enable the software to detect the names, the software assigns the first mirror or submirror name that is available. If you assign mirror names, assign names ending in zero so that the installation can use the names ending in 1 and 2 for submirrors. If you assign submirror names, assign names ending in 1 or 2. If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), Solaris Live Upgrade fails to create the mirror if the mirror name duplicates a submirror's name.


Note –

In previous releases, an abbreviated volume name could be entered. Starting with the Solaris 10 10/08 release, only the full volume name can be entered. For example, only the full volume name, such as /dev/md/dsk/d10, can be used to specify a mirror.



Example 10–1 Solaris Live Upgrade: Enable the Software to Detect and Name the Mirror and Submirror

In this example, Solaris Live Upgrade assigns the volume names. The RAID-1 volumes d0 and d1 are the only volumes in use. For the mirror d10, Solaris Live Upgrade chooses d2 for the submirror for the device c0t0d0s0 and d3 for the submirror for the device c1t0d0s0.


lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs  \
-m /:/dev/dsk/c0t0d0s0:attach -m /:/dev/dsk/c1t0d0s0:attach


Example 10–2 Solaris Live Upgrade: Assign Mirror and Submirror Names

In this example, the volume names are assigned in the command. For the mirror d10, d11 is the name for the submirror for the device c0t0d0s0 and d12 is the name for the submirror for the device c1t0d0s0.


lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs \
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d11:attach \
-m /:/dev/dsk/c1t0d0s0,/dev/md/dsk/d12:attach

For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.


RAID Volume Naming Conventions for Custom JumpStart

When you use the custom JumpStart installation method to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names to mirrors, or you can assign the names in the profile.


Note –

You can abbreviate the names of physical disk slices and Solaris Volume Manager volumes. The abbreviation is the shortest name that uniquely identifies a device. Examples follow.



Example 10–3 Enable the Software to Detect the Mirror and Submirror Names

In the following profile example, the mirror is assigned the first volume numbers that are available. If the next available mirror ending in zero is d10, then the names d11 and d12 are assigned to the submirrors.

filesys                 mirror c0t0d0s1  / 


Example 10–4 Assigning Mirror and Submirror Names

In the following profile example, the mirror number is assigned in the profile as d30. The submirror names are assigned by the software, based on the mirror number and the first available submirrors. The submirrors are named d31 and d32.

filesys                 mirror:d30 c0t1d0s0 c0t0d0s0  /

For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.

Guidelines for Selecting Disks and Controllers

When you choose the disks and controllers that you want to use to mirror a file system, consider the following guidelines.

Guidelines for Selecting Slices

When you choose the slices that you want to use to mirror a file system, consider the following guidelines.

Booting Into Single-User Mode Causes Mirror to Appear to Need Maintenance

If a system with mirrors for root (/), /usr, and swap is booted into single-user mode, the system indicates that these mirrors are in need of maintenance. When you view these mirrors with the metastat command, these mirrors, and possibly all mirrors on the system, appear in the “Needing Maintenance” state.

Though this situation appears to be potentially dangerous, do not be concerned. The metasync -r command, which normally occurs during boot to resynchronize mirrors, is interrupted when the system is booted into single-user mode. After the system is rebooted, the metasync -r command runs and resynchronizes all mirrors.

If this interruption is a concern, run the metasync -r command manually.

For more information about the metasync command, see the metasync(1M) man page and Solaris Volume Manager Administration Guide.