Solaris 10 8/07 Installation Guide: Planning for Installation and Upgrade

Part II Understanding Installations That Relate to GRUB, Solaris Zones, and RAID-1 Volumes

This part provides an overview of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements are also included.

Chapter 6 x86: GRUB Based Booting for Solaris Installation

This chapter describes GRUB based booting on x86 based systems as it relates to Solaris installation. This chapter contains the following sections:

  • x86: GRUB Based Booting (Overview)

  • x86: GRUB Based Booting (Planning)

x86: GRUB Based Booting (Overview)

GRUB, the open source boot loader, has been adopted as the default boot loader in the Solaris OS.


Note –

GRUB based booting is not available on SPARC based systems.


The boot loader is the first software program that runs after you power on a system. After you power on an x86 based system, the Basic Input/Output System (BIOS) initializes the CPU, the memory, and the platform hardware. When the initialization phase has completed, the BIOS loads the boot loader from the configured boot device, and then transfers control of the system to the boot loader.

GRUB is an open source boot loader with a simple menu interface that includes boot options that are predefined in a configuration file. GRUB also has a command-line interface that is accessible from the menu interface for performing various boot commands. In the Solaris OS, the GRUB implementation is compliant with the Multiboot Specification. The specification is described in detail at http://www.gnu.org/software/grub/grub.html.

Because the Solaris kernel is fully compliant with the Multiboot Specification, you can boot a Solaris x86 based system by using GRUB. With GRUB, you can more easily boot and install various operating systems. For example, on one system, you could individually boot different operating systems, such as the Solaris OS and Microsoft Windows.

A key benefit of GRUB is that it is intuitive about file systems and kernel executable formats, which enables you to load an operating system without recording the physical position of the kernel on the disk. With GRUB based booting, the kernel is loaded by specifying its file name and the drive and partition where the kernel resides. GRUB based booting replaces the Solaris Device Configuration Assistant and simplifies the booting process with a GRUB menu.
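For example, the following GRUB command-line sequence loads the Solaris kernel by file name from a specified disk, partition, and slice, and then boots it. The sequence is illustrative; the device name and paths match the menu entries shown later in Example 6–2.


grub> root (hd0,0,a)
grub> kernel /platform/i86pc/multiboot
grub> module /platform/i86pc/boot_archive
grub> boot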

x86: How GRUB Based Booting Works

After GRUB gains control of the system, a menu is displayed on the console. In the GRUB menu, you can select a boot entry, edit a boot entry's commands, or load an OS kernel from the GRUB command line.

A configurable timeout is available, after which the default OS entry is booted. Pressing any key aborts the automatic boot of the default entry.

To view an example of a GRUB menu, see Description of the GRUB Main Menu.

x86: GRUB Device Naming Conventions

The device naming conventions that GRUB uses are slightly different from previous Solaris OS versions. Understanding the GRUB device naming conventions can assist you in correctly specifying drive and partition information when you configure GRUB on your system.

The following table describes the GRUB device naming conventions.

Table 6–1 Naming Conventions for GRUB Devices

Device Name              Description
(fd0), (fd1)             First diskette, second diskette
(nd)                     Network device
(hd0,0), (hd0,1)         First and second fdisk partitions of the first BIOS disk
(hd0,0,a), (hd0,0,b)     Solaris/BSD slices 0 and 1 on the first fdisk partition of the first BIOS disk


Note –

All GRUB device names must be enclosed in parentheses. Partition numbers are counted from 0 (zero), not from 1.


For more information about fdisk partitions, see Guidelines for Creating an fdisk Partition in System Administration Guide: Devices and File Systems.

x86: Where to Find Information About GRUB Based Installations

For more information about these changes, see the following references.

Table 6–2 Where to Find Information on GRUB Based Installations

Installation

  • To install from the Solaris OS CD or DVD media, see Solaris 10 8/07 Installation Guide: Basic Installations.

  • To install from a network installation image, see Part II, Installing Over a Local Area Network, in Solaris 10 8/07 Installation Guide: Network-Based Installations.

  • To configure a DHCP server for network installations, see Preconfiguring System Configuration Information With the DHCP Service (Tasks) in Solaris 10 8/07 Installation Guide: Network-Based Installations.

  • To install with the custom JumpStart program, see Performing a Custom JumpStart Installation in Solaris 10 8/07 Installation Guide: Custom JumpStart and Advanced Installations.

  • To activate or fall back to a boot environment by using Solaris Live Upgrade, see Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

System administration

  • For more detailed information about GRUB and for administrative tasks, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

x86: GRUB Based Booting (Planning)

This section describes the basics of GRUB based booting and the GRUB menu.

When you install the Solaris OS, two GRUB menu entries are installed on the system by default. The first entry is the Solaris OS entry. The second entry is the failsafe boot archive, which is to be used for system recovery. The Solaris GRUB menu entries are installed and updated automatically as part of the Solaris software installation and upgrade process. These entries are directly managed by the OS and should not be manually edited.

During a standard Solaris OS installation, GRUB is installed on the Solaris fdisk partition without modifying the system BIOS setting. If the OS is not on the BIOS boot disk, you need to either modify the BIOS setting to boot from the disk on which the Solaris OS is installed or use a boot manager to bootstrap to the Solaris partition.

The preferred method is to install the Solaris OS on the boot disk. If multiple operating systems are installed on the machine, you can add entries to the menu.lst file. These entries are then displayed in the GRUB menu the next time you boot the system.

For additional information on multiple operating systems, see How Multiple Operating Systems Are Supported in the GRUB Boot Environment in System Administration Guide: Basic Administration.

x86: Performing a GRUB Based Installation From the Network

Performing a GRUB based network boot requires a DHCP server that is configured for PXE clients and an install server that provides tftp service. The DHCP server must be able to respond to the DHCP classes, PXEClient and GRUBClient. The DHCP response must contain the following information:

  • The IP address of the file server

  • The name of the boot file (pxegrub)


Note –

rpc.bootparamd, which is usually a requirement on the server side for performing a network boot, is not required for a GRUB based network boot.


If no PXE or DHCP server is available, you can load GRUB from CD-ROM or local disk. You can then manually configure the network in GRUB and download the multiboot program and the boot archive from the file server.
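A minimal sketch of such a manual sequence follows, assuming that GRUB was loaded from local media and that the file server exports the multiboot program and boot archive at the paths shown. The dhcp command, the paths, and the boot arguments are illustrative and depend on your install image.


grub> dhcp
grub> kernel /boot/multiboot kernel/unix - install
grub> module /boot/x86.miniroot
grub> boot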

For more information, see Overview of Booting and Installing Over the Network With PXE in Solaris 10 8/07 Installation Guide: Network-Based Installations.

Description of the GRUB Main Menu

When you boot an x86 based system, the GRUB menu is displayed. This menu provides a list of boot entries to choose from. A boot entry is an OS instance that is installed on your system. The GRUB menu is based on the menu.lst file, which is a configuration file. The menu.lst file is created by the Solaris installation program and can be modified after installation. The menu.lst file dictates the list of OS instances that are shown in the GRUB menu.


Example 6–1 GRUB Main Menu

In the following example, the GRUB main menu shows the Solaris and Microsoft Windows operating systems. A Solaris Live Upgrade boot environment named second_disk is also listed. See the following for descriptions of each menu item.


GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
|Windows                                                            |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
Solaris

Specifies the Solaris OS.

Solaris failsafe

Specifies a boot archive that can be used for recovery if the Solaris OS is damaged.

second_disk

Specifies a Solaris Live Upgrade boot environment. The second_disk boot environment was created as a copy of the Solaris OS. It was upgraded and activated with the luactivate command. The boot environment is available for booting.

Windows

Specifies the Microsoft Windows OS. GRUB detects the Windows partition but does not verify that the OS can be booted.


Description of GRUB menu.lst File

The GRUB menu.lst file lists the contents of the GRUB main menu. The GRUB main menu lists boot entries for all the OS instances that are installed on your system, including Solaris Live Upgrade boot environments. The Solaris software upgrade process preserves any changes that you make to this file.

Any revisions made to the menu.lst file are displayed on the GRUB main menu, along with the Solaris Live Upgrade entries. Any changes that you make to the file become effective at the next system reboot. For example, you can revise this file to add entries for other operating systems or to customize booting behavior, such as the default boot entry and the timeout.


Caution –

Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail.


Although you can use the menu.lst file to customize booting behavior such as booting with the kernel debugger, the preferred method for customization is to use the eeprom command. If you use the menu.lst file to customize, the Solaris OS entries might be modified during a software upgrade. Changes to the file would then be lost.

For information about how to use the eeprom command, see How to Set Solaris Boot Parameters by Using the eeprom Command in System Administration Guide: Basic Administration.
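For example, the following commands, run as superuser, redirect the console to a serial line and boot with the kernel debugger loaded by default. The property values shown are illustrative.


# eeprom console=ttya
# eeprom boot-args="-k"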


Example 6–2 Menu.lst File

Here is a sample of a menu.lst file:


default 0
timeout 10
title Solaris
  root (hd0,0,a)
  kernel /platform/i86pc/multiboot -B console=ttya
  module /platform/i86pc/boot_archive
title Solaris failsafe
  root (hd0,0,a)
  kernel /boot/multiboot -B console=ttya -s
  module /boot/x86.miniroot-safe
#----- second_disk - ADDED BY LIVE UPGRADE - DO NOT EDIT  -----
title second_disk
  root (hd0,1,a)
  kernel /platform/i86pc/multiboot
  module /platform/i86pc/boot_archive
title second_disk failsafe
  root (hd0,1,a)
  kernel /boot/multiboot kernel/unix -s
  module /boot/x86.miniroot-safe
#----- second_disk -------------- END LIVE UPGRADE ------------
title Windows
  root (hd0,0)
  chainloader -1
default

Specifies which item to boot if the timeout expires. To change the default, you can specify another item in the list by changing the number. The count begins with zero for the first title. For example, change the default to 2 to boot automatically to the second_disk boot environment.

timeout

Specifies the number of seconds to wait for user input before booting the default entry. If no timeout is specified, you are required to choose an entry.

title OS name

Specifies the name of the operating system.

  • If this is a Solaris Live Upgrade boot environment, OS name is the name you gave the new boot environment when it was created. In the previous example, the Solaris Live Upgrade boot environment is named second_disk.

  • If this is a failsafe boot archive, this boot archive is used for recovery when the primary OS is damaged. In the previous example, Solaris failsafe and second_disk failsafe are the recovery boot archives for the Solaris and second_disk operating systems.

root (hd0,0,a)

Specifies on which disk, partition, and slice to load files. GRUB automatically detects the file system type.

kernel /platform/i86pc/multiboot

Specifies the multiboot program. The kernel command must always be followed by the multiboot program. The string after multiboot is passed to the Solaris OS without interpretation.

For a complete description of multiple operating systems, see How Multiple Operating Systems Are Supported in the GRUB Boot Environment in System Administration Guide: Basic Administration.


Locating the menu.lst File to Change the GRUB Menu

You must always use the bootadm command to locate the GRUB menu's menu.lst file. The list-menu subcommand finds the active GRUB menu. The menu.lst file lists all the operating systems that are installed on a system. The contents of this file dictate the list of operating systems that is displayed on the GRUB menu. If you want to make changes to this file, see Locating the GRUB Menu’s menu.lst File (Tasks) in Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
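For example, the list-menu subcommand reports the location of the active GRUB menu and summarizes its entries. The output shown here is illustrative and corresponds to the menu in Example 6–1.


# bootadm list-menu
The location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris
1 Solaris failsafe
2 second_disk
3 second_disk failsafe
4 Windows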

Chapter 7 Upgrading When Solaris Zones Are Installed on a System (Planning)

This chapter provides an overview of how Solaris Zones partitioning technology relates to upgrading the Solaris OS when non-global zones are configured.

This chapter contains the following sections:

  • Solaris Zones (Overview)

  • Upgrading With Non-Global Zones

  • Backing Up Your System Before Performing an Upgrade With Zones

  • Disk Space Requirements for Non-Global Zones

Solaris Zones (Overview)

The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system. This isolation prevents processes that are running in one non-global zone from monitoring or affecting processes that are running in other non-global zones. Even a process running with superuser credentials cannot view or affect activity in other zones. A non-global zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.

Every Solaris system contains a global zone. The global zone has a dual function. The global zone is both the default zone for the system and the zone used for system-wide administrative control. All processes run in the global zone if no non-global zones are created by the global administrator. The global zone is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled. Only the global zone is bootable from the system hardware. Administration of the system infrastructure, such as physical devices, routing, or dynamic reconfiguration (DR), is only possible in the global zone. Appropriately privileged processes running in the global zone can access objects associated with the non-global zones.
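For example, the zoneadm command, run from the global zone, lists the zones on the system. The output shown here is abbreviated, and the non-global zone is hypothetical.


# zoneadm list -cv
  ID NAME     STATUS     PATH
   0 global   running    /
   1 myzone   running    /zones/myzone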

  • For how to upgrade a system that contains non-global zones, see Upgrading With Non-Global Zones.

  • For complete information on creating and configuring non-global zones, see Chapter 16, Introduction to Solaris Zones, in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Upgrading With Non-Global Zones

After the Solaris OS is installed, you can install and configure non-global zones. You can upgrade the Solaris OS when non-global zones are installed. If you have branded non-global zones installed, they are ignored during the upgrade process. Changes to accommodate systems that have non-global zones installed are summarized below.

For step-by-step instructions on using Solaris Live Upgrade when non-global zones are installed, see Chapter 9, Upgrading the Solaris OS on a System With Non-Global Zones Installed, in Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Table 7–1 Limitations When Upgrading With Non-Global Zones


Solaris Flash archives 

A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with Solaris Zones partitioning technology. If you create a Solaris Flash archive, the resulting archive is not installed properly when the archive is deployed under these conditions:

  • The archive is created in a non-global zone.

  • The archive is created in a global zone that has non-global zones installed.

For more information about using Solaris Flash archives, see Solaris 10 8/07 Installation Guide: Solaris Flash Archives (Creation and Installation).

Commands that use the -R option or an equivalent

Any command that accepts an alternate root (/) file system by using the -R option or an equivalent option must not be used if both of the following are true:

  • The command is run in the global zone.

  • The alternate root (/) file system refers to any path within a non-global zone.

An example is running the pkgadd utility from the global zone with the -R root_path option pointing to the root (/) file system of a non-global zone.

For a list of utilities that accept an alternate root (/) file system and more information about zones, see Restriction on Accessing A Non-Global Zone From the Global Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
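As an illustration of this restriction, a command of the following form must not be run from the global zone. The zone root path and package name are hypothetical.


# pkgadd -R /zones/myzone/root -d /var/tmp SUNWfoo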

ZFS file systems and non-global zones 

If a non-global zone is on a ZFS file system, the upgrade process does not upgrade the non-global zone. 

Backing Up Your System Before Performing an Upgrade With Zones

You should back up the global and non-global zones on your Solaris system before you perform the upgrade. For information about backing up a system with zones installed, see Chapter 26, Solaris Zones Administration (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Disk Space Requirements for Non-Global Zones

When installing the global zone, be sure to reserve enough disk space for all of the zones you might create. Each non-global zone might have unique disk space requirements.

No limits are placed on how much disk space can be consumed by a zone. The global zone administrator is responsible for space restriction. Even a small uniprocessor system can support a number of zones running simultaneously. The characteristics of the packages installed in the global zone affect the space requirements of the non-global zones that are created. The number of packages and space requirements are factors.

For complete planning requirements and recommendations, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Chapter 8 Creating RAID-1 Volumes (Mirrors) During Installation (Overview)

This chapter discusses the advantages of creating RAID-1 volumes (mirrors) for the root (/) file system. This chapter also describes the Solaris Volume Manager components that are required to create mirrors for file systems. This chapter describes the following topics:

  • Why Use RAID-1 Volumes?

  • How Do RAID-1 Volumes Work?

  • Overview of Solaris Volume Manager Components

  • Example of RAID-1 Volume Disk Layout

For additional information specific to Solaris Live Upgrade or custom JumpStart, see Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning and Solaris 10 8/07 Installation Guide: Custom JumpStart and Advanced Installations.

Why Use RAID-1 Volumes?

During the installation or upgrade, you can create RAID-1 volumes to duplicate your system data over multiple physical disks. By duplicating your data over separate disks, you can protect your data from disk corruption or a disk failure.

The Solaris custom JumpStart and Solaris Live Upgrade installation methods use the Solaris Volume Manager technology to create RAID-1 volumes that mirror a file system. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. The custom JumpStart and Solaris Live Upgrade installation methods enable a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system. You can create RAID-1 volumes during your installation or upgrade, eliminating the need to create them after the installation.

How Do RAID-1 Volumes Work?

Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. A volume is a name for a group of physical slices that appear to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.

A volume is functionally identical to a physical disk in the view of an application or a file system (such as UFS). Solaris Volume Manager converts I/O requests that are directed at a volume into I/O requests to the underlying member disks. Solaris Volume Manager volumes are built from slices (disk partitions) or from other Solaris Volume Manager volumes.

You use volumes to increase storage capacity and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, they are transparent to end users, applications, and file systems. As with physical devices, you can use Solaris Volume Manager software to access volumes through block or raw device names. The volume name changes, depending on whether the block or raw device is used. The custom JumpStart installation method and Solaris Live Upgrade support the use of block devices to create mirrored file systems. See RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade for details about volume names.
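For example, a volume named d10 can be accessed through either of the following device names. The custom JumpStart and Solaris Live Upgrade examples later in this book use the block form.


/dev/md/dsk/d10          Block device name for volume d10
/dev/md/rdsk/d10         Raw device name for the same volume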

When you create RAID-1 volumes with RAID-0 volumes (single-slice concatenations), Solaris Volume Manager duplicates data on the RAID-0 submirrors and treats the submirrors as one volume.

Figure 8–1 shows a mirror that duplicates the root (/) file system over two physical disks.

Figure 8–1 Creating RAID-1 Volumes on the Root (/) File System on Two Disks

In this configuration, each physical disk holds a RAID-0 submirror, and the two submirrors are combined into one RAID-1 volume that contains the root (/) file system.

Overview of Solaris Volume Manager Components

The custom JumpStart installation method and Solaris Live Upgrade enable you to create the following components that are required to replicate data: state database replicas, RAID-1 volumes (mirrors), and RAID-0 volumes (single-slice concatenations).

This section briefly describes each of these components. For complete information about these components, see Solaris Volume Manager Administration Guide.

State Database and State Database Replicas

The state database is a database that stores information on a physical disk. The state database records and tracks changes that are made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.

The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Having copies of the state database protects against data loss from single points of failure. The state database tracks the location and status of all known state database replicas.

Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.

The state database replicas ensure that the data in the state database is always valid. When the state database is updated, each state database replica is also updated. The updates occur one at a time to protect against corruption of all updates if the system crashes.

If your system loses a state database replica, Solaris Volume Manager must identify which state database replicas still contain valid data. Solaris Volume Manager determines this information by using a majority consensus algorithm. This algorithm requires that a majority (half + 1) of the state database replicas be available and in agreement before any of them are considered valid. Because of this majority consensus algorithm, you must create at least three state database replicas when you set up your disk configuration. A consensus can be reached if at least two of the three state database replicas are available.

Each state database replica occupies 4 Mbytes (8192 disk sectors) of disk storage by default. Replicas can be stored on the following devices:

  • A dedicated local disk slice

  • A local slice that will be part of a volume

  • A local slice that will be part of a UFS logging device

Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices that contain existing file systems or data. After the replicas have been stored, volumes or file systems can be placed on the same slice.

You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point of failure by placing state database replicas on a single slice.
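After the installation, you can add replicas manually with the metadb command. The following minimal sketch creates three replicas on a single dedicated slice and then verifies their status. The slice name is hypothetical, and, as noted above, keeping all replicas on one slice is a single point of failure.


# metadb -a -f -c 3 c0t0d0s7
# metadb -i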

  • When using custom JumpStart or Solaris Live Upgrade to install RAID-1 volumes, review the guidelines and requirements in State Database Replicas Guidelines and Requirements.

  • For more detailed information about the state database and state database replicas, see Solaris Volume Manager Administration Guide.

RAID-1 Volumes (Mirrors)

A RAID-1 volume, or mirror, is a volume that maintains identical copies of the data in RAID-0 volumes (single-slice concatenations). After you configure a RAID-1 volume, the volume can be used just as if it were a physical slice. You can duplicate any file system, including existing file systems. You can also use a RAID-1 volume for any application, such as a database.

Using RAID-1 volumes to mirror file systems has advantages and disadvantages: mirroring protects data against a single disk failure and can improve read performance, but it requires additional disks and can slow write operations.

  • For planning for RAID-1 volumes, see RAID-1 and RAID-0 Volume Requirements and Guidelines.

  • For detailed information about RAID-1 volumes, see Solaris Volume Manager Administration Guide.

RAID-0 Volumes (Concatenations)

A RAID-0 volume is a single-slice concatenation. The concatenation is a volume whose data is organized serially and adjacently across components, forming one logical storage unit. The custom JumpStart installation method and Solaris Live Upgrade do not enable you to create stripes or other complex Solaris Volume Manager volumes.

During the installation or upgrade, you can create RAID-1 volumes (mirrors) and attach RAID-0 volumes to these mirrors. The RAID-0 volumes that are mirrored are called submirrors. A mirror is made of one or more RAID-0 volumes. After the installation, you can manage the data on separate RAID-0 submirror volumes by administering the RAID-1 mirror volume through the Solaris Volume Manager software.

The custom JumpStart installation method enables you to create a mirror that consists of up to two submirrors. Solaris Live Upgrade enables you to create a mirror that consists of up to three submirrors. Practically, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.
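For example, after the installation you could use a hypothetical Solaris Volume Manager sequence such as the following to take the third submirror of mirror d10 offline, back it up, and bring it back online. The mirror, submirror, and tape device names are illustrative. While d13 is offline, the remaining submirrors continue to provide redundancy, and d13 is resynchronized automatically when it is brought back online.


# metaoffline d10 d13
# ufsdump 0f /dev/rmt/0 /dev/md/rdsk/d13
# metaonline d10 d13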

  • For planning for RAID-0 volumes, see RAID-1 and RAID-0 Volume Requirements and Guidelines.

  • For detailed information about RAID-0 volumes, see Solaris Volume Manager Administration Guide.

Example of RAID-1 Volume Disk Layout

The following figure shows a RAID-1 volume that duplicates the root file system (/) over two physical disks. State database replicas (metadbs) are placed on both disks.

Figure 8–2 RAID-1 Volume Disk Layout

In this configuration, each physical disk holds a RAID-0 submirror of the root (/) file system and a state database replica, and the two submirrors are combined into one RAID-1 volume.

  • For a JumpStart profile example, see Profile Examples in Solaris 10 8/07 Installation Guide: Custom JumpStart and Advanced Installations.

  • For Solaris Live Upgrade step-by-step procedures, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Chapter 9 Creating RAID-1 Volumes (Mirrors) During Installation (Planning)

This chapter describes the requirements and guidelines that are necessary to create RAID-1 volumes with the custom JumpStart or Solaris Live Upgrade installation methods.

This chapter describes the following topics:

  • System Requirement

  • State Database Replicas Guidelines and Requirements

  • RAID-1 and RAID-0 Volume Requirements and Guidelines

For additional information specific to Solaris Live Upgrade or custom JumpStart, see Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning and Solaris 10 8/07 Installation Guide: Custom JumpStart and Advanced Installations.

System Requirement

To create RAID-1 volumes to duplicate data on specific slices, the disks that you plan to use must be directly attached and available to the system during the installation.

State Database Replicas Guidelines and Requirements

You should distribute state database replicas across slices, drives, and controllers to avoid single points of failure. You want a majority of replicas to survive a single component failure. If a replica is lost, for example when a device fails, the loss might cause problems with running Solaris Volume Manager software or with rebooting the system. Solaris Volume Manager software requires at least half of the replicas to be available to run, and a majority (half plus one) to reboot into multiuser mode.

For detailed instructions about creating and administering state database replicas, see Solaris Volume Manager Administration Guide.

Selecting Slices for State Database Replicas

Before selecting slices for state database replicas, consider the following guidelines and recommendations.

Choose a dedicated slice

You should create state database replicas on a dedicated slice of at least 4 Mbytes per replica. If necessary, you could create state database replicas on a slice that is to be used as part of a RAID-0 or RAID-1 volume. You must create the replicas before you add the slice to the volume.

Resize a slice

By default, the size of a state database replica is 4 Mbytes, or 8192 disk blocks. Because your disk slices might not be that small, you can resize a slice to hold the state database replica. For information about resizing a slice, see Chapter 11, Administering Disks (Tasks), in System Administration Guide: Devices and File Systems.

Choose a slice that is not in use

You can create state database replicas on slices that are not in use. The part of a slice that is reserved for the state database replica should not be used for any other purpose.

You cannot create state database replicas on existing file systems, or on the root (/), /usr, and swap file systems. If necessary, you can create a new slice (provided a slice name is available) by allocating space from swap, and then put the state database replicas on that new slice.

Choose a slice that becomes a volume

When a state database replica is placed on a slice that becomes part of a volume, the capacity of the volume is reduced by the space that is occupied by the replica or replicas. The space that is used by a replica is rounded up to the next cylinder boundary, and this space is skipped by the volume.

Choosing the Number of State Database Replicas

Before choosing the number of state database replicas, consider the following guidelines:

  • Create a minimum of three state database replicas so that a consensus can still be reached if one replica is lost.

  • Create a maximum of 50 replicas per Solaris Volume Manager disk set.

Distributing State Database Replicas Across Controllers

If multiple controllers exist, replicas should be distributed as evenly as possible across all controllers. This strategy provides redundancy if a controller fails and also helps balance the load. If multiple disks exist on a controller, at least two of the disks on each controller should store a replica.
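For example, on a system with disks on two controllers, the following hypothetical command places two replicas on a disk slice on each controller.


# metadb -a -f -c 2 c0t0d0s7 c1t0d0s7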

RAID-1 and RAID-0 Volume Requirements and Guidelines

When you are working with RAID-1 volumes (mirrors) and RAID-0 volumes (single-slice concatenations), consider the guidelines in the following sections.

Custom JumpStart and Solaris Live Upgrade Guidelines

The custom JumpStart installation method and Solaris Live Upgrade support a subset of the features that are available in the Solaris Volume Manager software. When you create mirrored file systems with these installation programs, consider the following guidelines.

Custom JumpStart and Solaris Live Upgrade

  • RAID-0 and RAID-1 volumes are supported, but other Solaris Volume Manager components, such as RAID-5 volumes, are not supported.

  • A RAID-0 volume is supported, but only as a single-slice concatenation. In Solaris Volume Manager, a RAID-0 volume can refer to disk stripes or disk concatenations; you cannot create RAID-0 stripe volumes during the installation or upgrade.

Custom JumpStart

  • Supports the creation of RAID-1 volumes during an initial installation only. An upgrade is not supported when RAID-1 volumes are configured.

  • You can create up to two RAID-0 volumes (submirrors) for each RAID-1 volume; more than two submirrors are not supported. Two submirrors usually provide sufficient data redundancy for most applications and keep disk drive costs lower.

Solaris Live Upgrade

  • Supports the creation of RAID-1 volumes during an upgrade.

  • You can create up to three RAID-0 volumes (submirrors) for each RAID-1 volume; more than three submirrors are not supported. Three submirrors enable you to take a submirror offline and perform a backup while maintaining the two remaining submirrors for continued data redundancy.

For examples, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Creating and installing a Solaris Flash archive with RAID-1 volumes

You can create a Solaris Flash archive from a master system that has Solaris Volume Manager RAID-1 volumes configured. The Solaris Flash creation software removes all RAID-1 volume information from the archive to keep the integrity of the clone system. With custom JumpStart, you can rebuild the RAID-1 volumes by using a JumpStart profile. With Solaris Live Upgrade, you create a boot environment with RAID-1 volumes configured and install the archive. The Solaris installation program cannot be used to install RAID-1 volumes with a Solaris Flash archive.

For examples of RAID-1 volumes in JumpStart profiles, see Profile Examples in Solaris 10 8/07 Installation Guide: Custom JumpStart and Advanced Installations.

Veritas VxVM stores configuration information in areas that are not available to Solaris Flash. If Veritas VxVM file systems have been configured, you should not create a Solaris Flash archive. Also, the Solaris installation programs, including custom JumpStart and Solaris Live Upgrade, do not support rebuilding VxVM volumes at installation time. Therefore, if you plan to deploy Veritas VxVM software by using a Solaris Flash archive, you must create the archive before configuring the VxVM file systems. The clone systems must then be configured individually after the archive has been applied and the system has been rebooted.

RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade

Observe the rules in the following sections when assigning names for volumes.

RAID Volume Naming Conventions for Solaris Live Upgrade

When you use Solaris Live Upgrade to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can either enable the software to detect and assign volume names or assign the names yourself. If you enable the software to detect the names, the software assigns the first mirror or submirror name that is available. If you assign mirror names, assign names ending in zero so that the installation can use names ending in 1 and 2 for the submirrors. If you assign submirror names, assign names ending in 1 or 2. If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), Solaris Live Upgrade fails to create the mirror if that name duplicates a submirror's name.


Note –

In previous releases, an abbreviated volume name could be entered. Starting with the 10 8/07 release, only the full volume name can be entered. For example, only the full volume name, such as /dev/md/dsk/d10, can be used to specify a mirror.



Example 9–1 Solaris Live Upgrade: Enable the Software to Detect and Name the Mirror and Submirror

In this example, Solaris Live Upgrade assigns the volume names. The RAID-1 volumes d0 and d1 are the only volumes in use. For the mirror d10, Solaris Live Upgrade chooses d2 for the submirror for the device c0t0d0s0 and d3 for the submirror for the device c1t0d0s0.


lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0:attach \
-m /:/dev/dsk/c1t0d0s0:attach


Example 9–2 Solaris Live Upgrade: Assign Mirror and Submirror Names

In this example, the volume names are assigned in the command. For the mirror d10, d11 is the name for the submirror for the device c0t0d0s0 and d12 is the name for the submirror for the device c1t0d0s0.


lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d11:attach \
-m /:/dev/dsk/c1t0d0s0,/dev/md/dsk/d12:attach

For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.


RAID Volume Naming Conventions for Custom JumpStart

When you use the custom JumpStart installation method to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names to mirrors, or you can assign the names in the profile.


Note –

You can abbreviate the names of physical disk slices and Solaris Volume Manager volumes. The abbreviation is the shortest name that uniquely identifies a device. Examples follow.



Example 9–3 Enable the Software to Detect the Mirror and Submirror Names

In the following profile example, the mirror is assigned the first volume numbers that are available. If the next available mirror ending in zero is d10, then the names d11 and d12 are assigned to the submirrors.

filesys                 mirror c0t0d0s1  / 


Example 9–4 Assigning Mirror and Submirror Names

In the following profile example, the mirror number is assigned in the profile as d30. The submirror names are assigned by the software, based on the mirror number and the first available submirrors. The submirrors are named d31 and d32.

filesys                 mirror:d30 c0t1d0s0 c0t0d0s0  /

For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.

Guidelines for Selecting Disks and Controllers

When you choose the disks and controllers that you want to use to mirror a file system, keep the slices of different submirrors on separate disks and, where possible, on separate controllers, so that a single disk or controller failure cannot disable the mirror.

Guidelines for Selecting Slices

When you choose the slices that you want to use to mirror a file system, remember that any file system, including root (/), swap, and /usr, can be mirrored, and that each slice you attach as a submirror must be large enough to hold the data in the file system that is being duplicated.

Booting Into Single-User Mode Causes Mirror to Appear to Need Maintenance

If a system with mirrors for root (/), /usr, and swap is booted into single-user mode, the system indicates that these mirrors are in need of maintenance. When you view these mirrors with the metastat command, these mirrors, and possibly all mirrors on the system, appear in the “Needing Maintenance” state.

Though this situation appears to be potentially dangerous, do not be concerned. The metasync -r command, which normally runs during boot to resynchronize mirrors, is interrupted when the system is booted into single-user mode. After the system is rebooted, the metasync -r command runs and resynchronizes all mirrors.

If this interruption is a concern, run the metasync -r command manually.
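For example, as superuser:


# metasync -r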

For more information about the metasync command, see the metasync(1M) man page and Solaris Volume Manager Administration Guide.