This part provides an overview of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements are also included.
- GRUB based booting on x86 based systems
- Solaris Zones partitioning technology
- Solaris Volume Manager components such as RAID-1 volumes
This chapter describes GRUB based booting on x86 based systems as it relates to Solaris installation. This chapter contains the following sections:
GRUB, the open source boot loader, has been adopted as the default boot loader in the Solaris OS.
GRUB based booting is not available on SPARC based systems.
The boot loader is the first software program that runs after you power on a system. After you power on an x86 based system, the Basic Input/Output System (BIOS) initializes the CPU, the memory, and the platform hardware. When the initialization phase has completed, the BIOS loads the boot loader from the configured boot device, and then transfers control of the system to the boot loader.
GRUB is an open source boot loader with a simple menu interface that includes boot options that are predefined in a configuration file. GRUB also has a command-line interface that is accessible from the menu interface for performing various boot commands. In the Solaris OS, the GRUB implementation is compliant with the Multiboot Specification. The specification is described in detail at http://www.gnu.org/software/grub/grub.html.
Because the Solaris kernel is fully compliant with the Multiboot Specification, you can boot a Solaris x86 based system by using GRUB. With GRUB, you can more easily boot and install various operating systems. For example, on one system, you could individually boot the following operating systems:
Solaris OS
Microsoft Windows
GRUB detects Microsoft Windows partitions but does not verify that the OS can be booted.
A key benefit of GRUB is that it is intuitive about file systems and kernel executable formats, which enables you to load an operating system without recording the physical position of the kernel on the disk. With GRUB based booting, the kernel is loaded by specifying its file name and the drive and partition where the kernel resides. GRUB based booting replaces the Solaris Device Configuration Assistant and simplifies the booting process with a GRUB menu.
After GRUB gains control of the system, a menu is displayed on the console. In the GRUB menu, you can do the following:
Select an entry to boot your system
Modify a boot entry by using the built-in GRUB edit menu
Manually load an OS kernel from the command line
A configurable timeout is available to boot the default OS entry. Pressing any key aborts the default OS entry boot.
To view an example of a GRUB menu, see Description of the GRUB Main Menu.
The device naming conventions that GRUB uses are slightly different from previous Solaris OS versions. Understanding the GRUB device naming conventions can assist you in correctly specifying drive and partition information when you configure GRUB on your system.
The following table describes the GRUB device naming conventions.
Table 6–1 Naming Conventions for GRUB Devices
| Device Name | Description |
|---|---|
| (fd0), (fd1) | First diskette, second diskette |
| (nd) | Network device |
| (hd0,0), (hd0,1) | First and second fdisk partition of the first BIOS disk |
| (hd0,0,a), (hd0,0,b) | Solaris/BSD slice 0 and slice 1 on the first fdisk partition of the first BIOS disk |
All GRUB device names must be enclosed in parentheses. Partition numbers are counted from 0 (zero), not from 1.
For more information about fdisk partitions, see Guidelines for Creating an fdisk Partition in System Administration Guide: Devices and File Systems.
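For example, the following GRUB commands, using a hypothetical disk layout, show how these device names are used when selecting a root device:

```
# Solaris slice 0 on the first fdisk partition of the first BIOS disk
root (hd0,0,a)

# A whole fdisk partition (for example, a Windows partition) on the
# second BIOS disk; rootnoverify selects it without mounting it
rootnoverify (hd1,0)
```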
For more information about these changes, see the following references.
Table 6–2 Where to Find Information on GRUB Based Installations
| Topic | GRUB Menu Tasks | For More Information |
|---|---|---|
| Installation | To install from the Solaris OS CD or DVD media | |
| | To install from a network installation image | |
| | To configure a DHCP server for network installations | |
| | To install with the Custom JumpStart program | |
| | To activate or fall back to a boot environment by using Solaris Live Upgrade | |
| System administration | For more detailed information about GRUB and for administrative tasks | Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration |
This section describes the basics of GRUB based booting and describes the GRUB menu.
When you install the Solaris OS, two GRUB menu entries are installed on the system by default. The first entry is the Solaris OS entry. The second entry is the failsafe boot archive, which is to be used for system recovery. The Solaris GRUB menu entries are installed and updated automatically as part of the Solaris software installation and upgrade process. These entries are directly managed by the OS and should not be manually edited.
During a standard Solaris OS installation, GRUB is installed on the Solaris fdisk partition without modifying the system BIOS setting. If the OS is not on the BIOS boot disk, you need to do one of the following:
Modify the BIOS setting.
Use a boot manager to bootstrap to the Solaris partition. For more details, see your boot manager documentation.
The preferred method is to install the Solaris OS on the boot disk. If multiple operating systems are installed on the machine, you can add entries to the menu.lst file. These entries are then displayed in the GRUB menu the next time you boot the system.
For additional information on multiple operating systems, see How Multiple Operating Systems Are Supported in the GRUB Boot Environment in System Administration Guide: Basic Administration.
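For example, an entry of the following form, added to the menu.lst file, chain-loads an OS installed on the second fdisk partition of the first disk. The title and partition location are hypothetical; adjust them to match your layout:

```
title Another OS
  root (hd0,1)
  chainloader +1
```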
Performing a GRUB based network boot requires a DHCP server that is configured for PXE clients and an install server that provides tftp service. The DHCP server must be able to respond to the DHCP classes, PXEClient and GRUBClient. The DHCP response must contain the following information:
IP address of the file server
Name of the boot file (pxegrub)
rpc.bootparamd, which is usually a requirement on the server side for performing a network boot, is not required for a GRUB based network boot.
If no PXE or DHCP server is available, you can load GRUB from CD-ROM or local disk. You can then manually configure the network in GRUB and download the multiboot program and the boot archive from the file server.
For more information, see Overview of Booting and Installing Over the Network With PXE in Solaris 10 5/08 Installation Guide: Network-Based Installations.
When you boot an x86 based system, the GRUB menu is displayed. This menu provides a list of boot entries to choose from. A boot entry is an OS instance that is installed on your system. The GRUB menu is based on the menu.lst file, which is a configuration file. The menu.lst file is created by the Solaris installation program and can be modified after installation. The menu.lst file dictates the list of OS instances that are shown in the GRUB menu.
If you install or upgrade the Solaris OS, the GRUB menu is automatically updated. The Solaris OS is then displayed as a new boot entry.
If you install an OS other than the Solaris OS, you must modify the menu.lst configuration file to include the new OS instance. Adding the new OS instance enables the new boot entry to appear in the GRUB menu the next time that you boot the system.
In the following example, the GRUB main menu shows the Solaris and Microsoft Windows operating systems. A Solaris Live Upgrade boot environment is also listed that is named second_disk. See the following for descriptions of each menu item.
```
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
|Windows                                                            |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
```
Solaris
Specifies the Solaris OS.

Solaris failsafe
Specifies a boot archive that can be used for recovery if the Solaris OS is damaged.

second_disk
Specifies a Solaris Live Upgrade boot environment. The second_disk boot environment was created as a copy of the Solaris OS. It was upgraded and activated with the luactivate command. The boot environment is available for booting.

Windows
Specifies the Microsoft Windows OS. GRUB detects these partitions but does not verify that the OS can be booted.
The GRUB menu.lst file lists the contents of the GRUB main menu. The GRUB main menu lists boot entries for all the OS instances that are installed on your system, including Solaris Live Upgrade boot environments. The Solaris software upgrade process preserves any changes that you make to this file.
Any revisions made to the menu.lst file are displayed on the GRUB main menu, along with the Solaris Live Upgrade entries. Any changes that you make to the file become effective at the next system reboot. You can revise this file for the following reasons:
To add to the GRUB menu entries for operating systems other than Solaris
To customize booting behavior such as specifying the default OS on the GRUB menu
Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail.
Although you can use the menu.lst file to customize booting behavior such as booting with the kernel debugger, the preferred method for customization is to use the eeprom command. If you use the menu.lst file to customize, the Solaris OS entries might be modified during a software upgrade. Changes to the file would then be lost.
For information about how to use the eeprom command, see How to Set Solaris Boot Parameters by Using the eeprom Command in System Administration Guide: Basic Administration.
Here is a sample of a menu.lst file:
```
default 0
timeout 10

title Solaris
  root (hd0,0,a)
  kernel /platform/i86pc/multiboot -B console=ttya
  module /platform/i86pc/boot_archive

title Solaris failsafe
  root (hd0,0,a)
  kernel /boot/multiboot -B console=ttya -s
  module /boot/x86.miniroot-safe

#----- second_disk - ADDED BY LIVE UPGRADE - DO NOT EDIT -----
title second_disk
  root (hd0,1,a)
  kernel /platform/i86pc/multiboot
  module /platform/i86pc/boot_archive

title second_disk failsafe
  root (hd0,1,a)
  kernel /boot/multiboot kernel/unix -s
  module /boot/x86.miniroot-safe
#----- second_disk -------------- END LIVE UPGRADE ------------

title Windows
  root (hd0,0)
  chainloader +1
```
default
Specifies which item to boot if the timeout expires. To change the default, you can specify another item in the list by changing the number. The count begins with zero for the first title. For example, change the default to 2 to boot automatically to the second_disk boot environment.

timeout
Specifies the number of seconds to wait for user input before booting the default entry. If no timeout is specified, you are required to choose an entry.

title OS name
Specifies the name of the operating system.

If this is a Solaris Live Upgrade boot environment, OS name is the name you gave the new boot environment when it was created. In the previous example, the Solaris Live Upgrade boot environment is named second_disk.

If this is a failsafe boot archive, this boot archive is used for recovery when the primary OS is damaged. In the previous example, Solaris failsafe and second_disk failsafe are the recovery boot archives for the Solaris and second_disk operating systems.

root (hd0,0,a)
Specifies on which disk, partition, and slice to load files. GRUB automatically detects the file system type.

kernel /platform/i86pc/multiboot
Specifies the multiboot program. The kernel command must always be followed by the multiboot program. The string after multiboot is passed to the Solaris OS without interpretation.
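As a runnable sketch of how the default count works, the following commands operate on a scratch copy of a menu.lst file. The /tmp path and file contents are illustrative only; on a live system, use the bootadm command to locate the active file, and prefer the eeprom command for changing Solaris entries:

```shell
# Create a scratch copy of a minimal menu.lst (illustrative content)
cat > /tmp/menu.lst <<'EOF'
default 0
timeout 10
title Solaris
title Solaris failsafe
title second_disk
EOF

# Entries are counted from 0, so "default 2" selects the third title,
# second_disk, as the entry booted when the timeout expires
sed 's/^default 0$/default 2/' /tmp/menu.lst > /tmp/menu.lst.new
mv /tmp/menu.lst.new /tmp/menu.lst

grep '^default' /tmp/menu.lst
```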
For a complete description of multiple operating systems, see How Multiple Operating Systems Are Supported in the GRUB Boot Environment in System Administration Guide: Basic Administration.
You must always use the bootadm command to locate the GRUB menu's menu.lst file. The list-menu subcommand finds the active GRUB menu. The menu.lst file lists all the operating systems that are installed on a system. The contents of this file dictate the list of operating systems that is displayed on the GRUB menu. If you want to make changes to this file, see Locating the GRUB Menu’s menu.lst File (Tasks) in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
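The list of boot entries that GRUB displays is simply the set of title lines in the active menu.lst file. As a sketch, with a hypothetical sample file under /tmp (on a real system, bootadm list-menu reports the actual location):

```shell
# Sample menu.lst resembling the one the installer generates
cat > /tmp/sample_menu.lst <<'EOF'
default 0
timeout 10
title Solaris
root (hd0,0,a)
title Solaris failsafe
root (hd0,0,a)
title Windows
root (hd0,0)
EOF

# Print the boot-entry titles, which is what the GRUB main menu shows
awk '/^title/ { sub(/^title[ \t]+/, ""); print }' /tmp/sample_menu.lst > /tmp/entries
cat /tmp/entries
```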
This chapter provides an overview of how Solaris Zones partitioning technology relates to upgrading the Solaris OS when non-global zones are configured.
This chapter contains the following sections:
The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system. This isolation prevents processes that are running in one non-global zone from monitoring or affecting processes that are running in other non-global zones. Even a process running with superuser credentials cannot view or affect activity in other zones. A non-global zone also provides an abstract layer that separates applications from the physical attributes of the machine on which they are deployed. Examples of these attributes include physical device paths.
Every Solaris system contains a global zone. The global zone has a dual function. The global zone is both the default zone for the system and the zone used for system-wide administrative control. All processes run in the global zone if no non-global zones are created by the global administrator. The global zone is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled. Only the global zone is bootable from the system hardware. Administration of the system infrastructure, such as physical devices, routing, or dynamic reconfiguration (DR), is only possible in the global zone. Appropriately privileged processes running in the global zone can access objects associated with the non-global zones.
| Description | For More Information |
|---|---|
| The following sections describe how you can upgrade a system that contains non-global zones. | |
| For complete information on creating and configuring non-global zones | |
After the Solaris OS is installed, you can install and configure non-global zones. You can upgrade the Solaris OS when non-global zones are installed. If you have branded non-global zones installed, they are ignored during the upgrade process. Changes to accommodate systems that have non-global zones installed are summarized below.
For the Solaris interactive installation program, you can upgrade or patch a system when non-global zones are installed. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. For more information about installing with this program, see Chapter 2, Installing With the Solaris Installation Program (Tasks), in Solaris 10 5/08 Installation Guide: Basic Installations.
For an automated JumpStart installation, you can upgrade or patch with any keyword that applies to an upgrade or patching. The time to upgrade or patch might be extensive, depending on the number of non-global zones that are installed. For more information about installing with this program, see Solaris 10 5/08 Installation Guide: Custom JumpStart and Advanced Installations.
For Solaris Live Upgrade, you can upgrade or patch a system that contains non-global zones. If you have a system that contains non-global zones, Solaris Live Upgrade is the recommended program for upgrading or adding patches. Other upgrade programs might require extensive upgrade time, because the time required to complete the upgrade increases linearly with the number of installed non-global zones. If you are patching a system with Solaris Live Upgrade, you do not have to take the system to single-user mode, so you can maximize your system's uptime. Changes to accommodate systems that have non-global zones installed are the following:
A new package, SUNWlucfg, must be installed along with the other Solaris Live Upgrade packages, SUNWlur and SUNWluu.
Creating a new boot environment from the currently running boot environment remains the same with one exception. You can specify a destination slice for a shared file system within a non-global zone. This exception occurs under the following circumstances:
If the zonecfg add fs command was used on the current boot environment to create a separate file system for a non-global zone

If that separate file system resides on a shared file system, such as /zone/root/export
To prevent this separate file system from being shared in the new boot environment, the lucreate command has changed to enable specifying a destination slice for a separate file system for a non-global zone. The argument to the -m option has a new optional field, zonename. This new field places the non-global zone's separate file system on a separate slice in the new boot environment. For more information on setting up a non-global zone with a separate file system, see zonecfg(1M).
By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. The /export file system is an example of a shared file system. If you use the -m option and the zonename option, the non-global zone's shared file system is copied to a separate slice and data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.
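As an illustrative sketch only (the boot environment name, zone name, and device names here are hypothetical, not prescriptive), the new zonename field appears as a fourth field in the -m argument:

```
# Create boot environment second_disk: root (/) goes to c0t1d0s0, and
# zone1's separate file system /export is placed on its own slice,
# c0t1d0s4, so that it is copied rather than shared
lucreate -n second_disk \
    -m /:/dev/dsk/c0t1d0s0:ufs \
    -m /export:/dev/dsk/c0t1d0s4:ufs:zone1
```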
Comparing boot environments is enhanced. The lucompare command now generates a comparison of boot environments that includes the contents of any non-global zone.
The lumount command now provides non-global zones with access to their corresponding separate file systems that exist on inactive boot environments. When the global zone administrator uses the lumount command to mount an inactive boot environment, the boot environment is mounted for non-global zones as well.
Listing file systems with the lufslist command is enhanced to display a list of file systems for both the global zone and the non-global zones.
For step-by-step instructions on using Solaris Live Upgrade when non-global zones are installed, see Chapter 9, Upgrading the Solaris OS on a System With Non-Global Zones Installed, in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Table 7–1 Limitations When Upgrading With Non-Global Zones
| Program or Condition | Description |
|---|---|
| Solaris Flash archives | A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with Solaris Zones partitioning technology. If you create a Solaris Flash archive, the resulting archive is not installed properly when the archive is deployed under these conditions. For more information about using Solaris Flash archives, see Solaris 10 5/08 Installation Guide: Solaris Flash Archives (Creation and Installation). |
| Commands that use the -R option or an equivalent | Any command that accepts an alternate root (/) file system by using the -R option or an equivalent must not be used in some situations. An example is the -R root_path option to the pkgadd utility run from the global zone with a path to the root (/) file system in a non-global zone. For a list of utilities that accept an alternate root (/) file system and more information about zones, see Restriction on Accessing A Non-Global Zone From the Global Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. |
| ZFS file systems and non-global zones | If a non-global zone is on a ZFS file system, the upgrade process does not upgrade the non-global zone. |
You should back up the global and non-global zones on your Solaris system before you perform the upgrade. For information about backing up a system with zones installed, see Chapter 26, Solaris Zones Administration (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
When installing the global zone, be sure to reserve enough disk space for all of the zones you might create. Each non-global zone might have unique disk space requirements.
No limits are placed on how much disk space can be consumed by a zone. The global zone administrator is responsible for space restriction. Even a small uniprocessor system can support a number of zones running simultaneously. The characteristics of the packages installed in the global zone affect the space requirements of the non-global zones that are created. The number of packages and space requirements are factors.
For complete planning requirements and recommendations, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
This chapter discusses the advantages of creating RAID-1 volumes (mirrors) for the root (/) file system. This chapter also describes the Solaris Volume Manager components that are required to create mirrors for file systems. This chapter describes the following topics.
For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:
For Solaris Live Upgrade: General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning
For JumpStart:
During the installation or upgrade, you can create RAID-1 volumes to duplicate your system data over multiple physical disks. By duplicating your data over separate disks, you can protect your data from disk corruption or a disk failure.
The Solaris custom JumpStart and Solaris Live Upgrade installation methods use the Solaris Volume Manager technology to create RAID-1 volumes that mirror a file system. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. The custom JumpStart and Solaris Live Upgrade installation methods enable a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system. You can create RAID-1 volumes during your installation or upgrade, eliminating the need to create them after the installation.
For guidelines, see Custom JumpStart and Solaris Live Upgrade Guidelines.
For detailed information about complex Solaris Volume Manager software and components, see Solaris Volume Manager Administration Guide.
Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. A volume is a name for a group of physical slices that appear to the system as a single, logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX® terms.
A volume is functionally identical to a physical disk in the view of an application or a file system (such as UFS). Solaris Volume Manager converts I/O requests that are directed at a volume into I/O requests to the underlying member disks. Solaris Volume Manager volumes are built from slices (disk partitions) or from other Solaris Volume Manager volumes.
You use volumes to increase performance and data availability. In some instances, volumes can also increase I/O performance. Functionally, volumes behave the same way as slices. Because volumes look like slices, they are transparent to end users, applications, and file systems. Like physical devices, you can use Solaris Volume Manager software to access volumes through block or raw device names. The volume name changes, depending on whether the block or raw device is used. The custom JumpStart installation method and Solaris Live Upgrade support the use of block devices to create mirrored file systems. See RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade for details about volume names.
When you create RAID-1 volumes (mirrors) with RAID-0 volumes (single-slice concatenations), Solaris Volume Manager duplicates data on the RAID-0 submirrors and treats the submirrors as one volume.
Figure 8–1 shows a mirror that duplicates the root (/) file system over two physical disks.
Figure 8–1 shows a system with the following configuration.
The mirror that is named d30 consists of the submirrors that are named d31 and d32. The mirror, d30, duplicates the data in the root (/) file system on both submirrors.
The root (/) file system on hdisk0 is included in the single-slice concatenation that is named d31.
The root (/) file system is copied to the hard disk named hdisk1. This copy is the single-slice concatenation that is named d32.
The custom JumpStart installation method and Solaris Live Upgrade enable you to create the following components that are required to replicate data.
State database and state database replicas (metadbs)
RAID-1 volumes (mirrors) with single-slice concatenations (submirrors)
This section briefly describes each of these components. For complete information about these components, see Solaris Volume Manager Administration Guide.
The state database is a database that stores information on a physical disk. The state database records and tracks changes that are made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.
The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Having copies of the state database protects against data loss from single points of failure. The state database tracks the location and status of all known state database replicas.
Solaris Volume Manager cannot operate until you have created the state database and its state database replicas. A Solaris Volume Manager configuration must have an operating state database.
The state database replicas ensure that the data in the state database is always valid. When the state database is updated, each state database replica is also updated. The updates occur one at a time to protect against corruption of all updates if the system crashes.
If your system loses a state database replica, Solaris Volume Manager must identify which state database replicas still contain valid data. Solaris Volume Manager determines this information by using a majority consensus algorithm. This algorithm requires that a majority (half + 1) of the state database replicas be available and in agreement before any of them are considered valid. Because of this majority consensus algorithm, you must create at least three state database replicas when you set up your disk configuration. A consensus can be reached if at least two of the three state database replicas are available.
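The arithmetic behind the majority consensus algorithm can be sketched as a small shell function:

```shell
# Majority consensus: with n replicas, a majority (half plus one,
# using integer division) must be available and in agreement
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # 2 of 3 replicas must agree; one replica can be lost
quorum 5   # 3 of 5 replicas must agree; two replicas can be lost
```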
Each state database replica occupies 4 Mbytes (8192 disk sectors) of disk storage by default. Replicas can be stored on the following devices:
- A dedicated local disk slice
- Solaris Live Upgrade only:
  - A local slice that will be part of a volume
  - A local slice that will be part of a UFS logging device
Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices that contain existing file systems or data. After the replicas have been stored, volumes or file systems can be placed on the same slice.
You can keep more than one copy of a state database on one slice. However, you might make the system more vulnerable to a single point of failure by placing state database replicas on a single slice.
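Outside the installers, replicas are administered with the metadb command. The following is a sketch with hypothetical slice names; see the Solaris Volume Manager Administration Guide for the authoritative procedure:

```
# Create the initial state database: three replicas on a dedicated
# slice (-f forces creation when no state database exists yet)
metadb -a -f -c 3 c0t0d0s7

# Add three more replicas on a second disk to protect against
# a single-disk failure
metadb -a -c 3 c0t1d0s7

# Display the status and location of all replicas
metadb -i
```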
| Description | For More Information |
|---|---|
| When using custom JumpStart or Solaris Live Upgrade to install RAID-1 volumes, review these guidelines and requirements. | |
| Obtain more detailed information about the state database and state database replicas. | |
A RAID-1 volume, or mirror, is a volume that maintains identical copies of the data in RAID-0 volumes (single-slice concatenations). After you configure a RAID-1 volume, the volume can be used just as if it were a physical slice. You can duplicate any file system, including existing file systems. You can also use a RAID-1 volume for any application, such as a database.
Using RAID-1 volumes to mirror file systems has advantages and disadvantages:

- With RAID-1 volumes, data can be read from both RAID-0 volumes simultaneously (either volume can service any request), providing improved performance. If one physical disk fails, you can continue to use the mirror with no loss in performance or loss of data.
- Using RAID-1 volumes requires an investment in disks. You need at least twice as much disk space as the amount of data.
- Because Solaris Volume Manager software must write to all RAID-0 volumes, duplicating the data can also increase the time that is required for write requests to be written to disk.
| Description | For More Information |
|---|---|
| Planning for RAID-1 volumes | |
| Detailed information about RAID-1 volumes | |
A RAID-0 volume is a single-slice concatenation. The concatenation is a volume whose data is organized serially and adjacently across components, forming one logical storage unit. The custom JumpStart installation method and Solaris Live Upgrade do not enable you to create stripes or other complex Solaris Volume Manager volumes.
During the installation or upgrade, you can create RAID-1 volumes (mirrors) and attach RAID-0 volumes to these mirrors. The RAID-0 volumes that are mirrored are called submirrors. A mirror is made of one or more RAID-0 volumes. After the installation, you can manage the data on separate RAID-0 submirror volumes by administering the RAID-1 mirror volume through the Solaris Volume Manager software.
The custom JumpStart installation method enables you to create a mirror that consists of up to two submirrors. Solaris Live Upgrade enables you to create a mirror that consists of up to three submirrors. Practically, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.
| Description | For More Information |
|---|---|
| Planning for RAID-0 volumes | |
| Detailed information about RAID-0 volumes | |
The following figure shows a RAID-1 volume that duplicates the root file system (/) over two physical disks. State database replicas (metadbs) are placed on both disks.
Figure 8–2 shows a system with the following configuration.
The mirror that is named d30 consists of the submirrors that are named d31 and d32. The mirror, d30, duplicates the data in the root (/) file system on both submirrors.
The root (/) file system on hdisk0 is included in the single-slice concatenation that is named d31.
The root (/) file system is copied to the hard disk named hdisk1. This copy is the single-slice concatenation that is named d32.
State database replicas are created on both slices: hdisk0 and hdisk1.
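After installation, the configuration in Figure 8–2 could be built or repaired with Solaris Volume Manager commands along these lines. The slice names are hypothetical, and mirroring an existing root (/) file system additionally requires the metaroot command and a reboot; see the Solaris Volume Manager Administration Guide:

```
# Single-slice concatenations (submirrors); -f forces creation on a
# slice with a mounted file system such as root (/)
metainit -f d31 1 1 c0t0d0s0
metainit d32 1 1 c0t1d0s0

# Create a one-way mirror on d31, then attach d32; Solaris Volume
# Manager synchronizes the data from d31 to d32
metainit d30 -m d31
metattach d30 d32
```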
| Description | For More Information |
|---|---|
| JumpStart profile example | Profile Examples in Solaris 10 5/08 Installation Guide: Custom JumpStart and Advanced Installations |
| Solaris Live Upgrade step-by-step procedures | |
This chapter describes the requirements and guidelines that are necessary to create RAID-1 volumes with the custom JumpStart or Solaris Live Upgrade installation methods.
This chapter describes the following topics.
For additional information specific to Solaris Live Upgrade or JumpStart, see the following references:
For Solaris Live Upgrade: General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning
For JumpStart:
To create RAID-1 volumes to duplicate data on specific slices, the disks that you plan to use must be directly attached and available to the system during the installation.
You should distribute state database replicas across slices, drives, and controllers to avoid single points of failure. You want a majority of replicas to survive a single component failure. If you lose a replica when a device fails, for example, the loss might cause problems with running the Solaris Volume Manager software or with rebooting the system. The Solaris Volume Manager software requires at least half of the replicas to be available in order to run, and a majority (half plus one) to reboot into multiuser mode.
For detailed instructions about creating and administering state database replicas, see Solaris Volume Manager Administration Guide.
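As a minimal sketch of how replicas might be distributed, the following commands create replicas on dedicated slices of two disks on separate controllers. The device names c0t0d0s7 and c1t0d0s7 are placeholder assumptions for whatever slices you have reserved, not values from this guide:

```shell
# Create the initial state database; -f forces creation because no
# state database exists yet, -a adds replicas, -c sets replicas per slice
metadb -a -f -c 2 c0t0d0s7

# Add replicas on a second disk, on a different controller if possible
metadb -a -c 2 c1t0d0s7

# Verify the replica layout and status
metadb -i
```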
Before selecting slices for state database replicas, consider the following guidelines and recommendations.
| Task | Description |
|---|---|
| Choose a dedicated slice | You should create state database replicas on a dedicated slice of at least 4 MB per replica. If necessary, you could create state database replicas on a slice that is to be used as part of a RAID-0 or RAID-1 volume. You must create the replicas before you add the slice to the volume. |
| Resize a slice | By default, the size of a state database replica is 4 MB or 8192 disk blocks. Because your disk slices might not be that small, you can resize a slice to hold the state database replica. For information about resizing a slice, see Chapter 11, Administering Disks (Tasks), in System Administration Guide: Devices and File Systems. |
| Choose a slice that is not in use | You can create state database replicas on slices that are not in use. The part of a slice that is reserved for the state database replica should not be used for any other purpose. You cannot create state database replicas on existing file systems, or on the root (/), /usr, and swap file systems. If necessary, you can create a new slice (provided a slice name is available) by allocating space from swap, and then put state database replicas on that new slice. |
| Choose a slice that becomes a volume | When a state database replica is placed on a slice that becomes part of a volume, the capacity of the volume is reduced by the space that is occupied by the replica or replicas. The space that is used by a replica is rounded up to the next cylinder boundary, and this space is skipped by the volume. |
Before choosing the number of state database replicas, consider the following guidelines.
A minimum of three state database replicas is recommended, up to a maximum of 50 replicas per Solaris Volume Manager disk set. The following guidelines are recommended:
For a system with only a single drive: put all three replicas in one slice.
For a system with two to four drives: put two replicas on each drive.
For a system with five or more drives: put one replica on each drive.
Additional state database replicas can improve the mirror's performance. Generally, you need to add two replicas for each mirror you add to the system.
If you have a RAID-1 volume that is to be used for small-sized random I/O (for example, for a database), consider your number of replicas. For best performance, ensure that you have at least two extra replicas per RAID-1 volume on slices (and preferably on disks and controllers) that are unconnected to the RAID-1 volume.
If multiple controllers exist, replicas should be distributed as evenly as possible across all controllers. This strategy provides redundancy if a controller fails and also helps balance the load. If multiple disks exist on a controller, at least two of the disks on each controller should store a replica.
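For example, on a hypothetical four-drive system with two controllers, the two-replicas-per-drive guideline could be applied in a single command. The slice names are illustrative assumptions:

```shell
# Two replicas on each of four drives, spread across controllers c0 and c1;
# -f forces creation of the initial state database
metadb -a -f -c 2 c0t0d0s7 c0t1d0s7 c1t0d0s7 c1t1d0s7
```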
When you are working with RAID-1 volumes (mirrors) and RAID-0 volumes (single-slice concatenations), consider the following guidelines.
The custom JumpStart installation method and Solaris Live Upgrade support a subset of the features that are available in the Solaris Volume Manager software. When you create mirrored file systems with these installation programs, consider the following guidelines.
| Installation Program | Supported Feature | Unsupported Feature |
|---|---|---|
| Custom JumpStart and Solaris Live Upgrade | | In Solaris Volume Manager, a RAID-0 volume can refer to disk stripes or disk concatenations. You cannot create RAID-0 stripe volumes during the installation or upgrade. |
| Custom JumpStart | | |
| Solaris Live Upgrade | For examples, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning. | More than three RAID-0 volumes are not supported. |
Creating and installing a Solaris Flash archive with RAID-1 volumes: You can create a Solaris Flash archive from a master system that has Solaris Volume Manager RAID-1 volumes configured. The Solaris Flash creation software removes all RAID-1 volume information from the archive to maintain the integrity of the clone system. With custom JumpStart, you can rebuild the RAID-1 volumes by using a JumpStart profile. With Solaris Live Upgrade, you create a boot environment with RAID-1 volumes configured and install the archive. The Solaris installation program cannot be used to install RAID-1 volumes with a Solaris Flash archive. For examples of RAID-1 volumes in JumpStart profiles, see Profile Examples in Solaris 10 5/08 Installation Guide: Custom JumpStart and Advanced Installations.

Veritas VxVM stores configuration information in areas that are not available to Solaris Flash. If Veritas VxVM file systems have been configured, you should not create a Solaris Flash archive. Also, the Solaris installation programs, including JumpStart and Solaris Live Upgrade, do not support rebuilding VxVM volumes at installation time. Therefore, if you plan to deploy Veritas VxVM software by using a Solaris Flash archive, you must create the archive before you configure the VxVM file systems. The clone systems must then be configured individually after the archive has been applied and the system has been rebooted.
Observe the following rules when assigning names for volumes.
Use a naming method that maps the slice number and disk number to volume numbers.
Volume names must begin with the letter d followed by a number, for example, d0.
Solaris Volume Manager has 128 default volume names from 0–127. The following list shows some example volume names.
Device /dev/md/dsk/d0 – block volume d0
Device /dev/md/dsk/d1 – block volume d1
Use ranges for each particular type of volume. For example, assign numbers 0–20 for RAID-1 volumes, and 21–40 for RAID-0 volumes.
When you use the Solaris Live Upgrade to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names, or you can assign the names. If you enable the software to detect the names, the software assigns the first mirror or submirror name that is available. If you assign mirror names, assign names ending in zero so that the installation can use the names ending in 1 and 2 for submirrors. If you assign submirror names, assign names ending in 1 or 2. If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), Solaris Live Upgrade fails to create the mirror if the mirror name duplicates a submirror's name.
In previous releases, an abbreviated volume name could be entered. Starting with the Solaris 10 8/07 release, only the full volume name can be entered. For example, only the full volume name, such as /dev/md/dsk/d10, can be used to specify a mirror.
In this example, Solaris Live Upgrade assigns the volume names. The RAID-1 volumes d0 and d1 are the only volumes in use. For the mirror d10, Solaris Live Upgrade chooses d2 for the submirror for the device c0t0d0s0 and d3 for the submirror for the device c1t0d0s0.
lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0:attach -m /:/dev/dsk/c1t0d0s0:attach
In this example, the volume names are assigned in the command. For the mirror d10, d11 is the name for the submirror for the device c0t0d0s0 and d12 is the name for the submirror for the device c1t0d0s0.
lucreate -n newbe -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d11:attach -m /:/dev/dsk/c1t0d0s0,/dev/md/dsk/d12:attach
For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.
When you use the custom JumpStart installation method to create RAID-1 volumes (mirrors) and RAID-0 volumes (submirrors), you can enable the software to detect and assign volume names to mirrors, or you can assign the names in the profile.
If you enable the software to detect the names, the software assigns the first volume number that is available.
If you assign names in the profile, assign mirror names ending in zero so that the installation can use the names ending in 1 and 2 for submirrors.
If you assign numbers incorrectly, the mirror might not be created. For example, if you specify a mirror name with a number that ends in 1 or 2 (d1 or d2), JumpStart fails to create the mirror if the mirror name duplicates a submirror's name.
You can abbreviate the names of physical disk slices and Solaris Volume Manager volumes. The abbreviation is the shortest name that uniquely identifies a device. Examples follow.
A Solaris Volume Manager volume can be identified by its dnum designation, so that, for example, /dev/md/dsk/d10 becomes simply d10.
If a system has a single controller and multiple disks, you might use t0d0s0, but with multiple controllers use c0t0d0s0.
In the following profile example, the mirror is assigned the first volume numbers that are available. If the next available mirror ending in zero is d10, then the names d11 and d12 are assigned to the submirrors.
filesys mirror c0t0d0s1 /
In the following profile example, the mirror number is assigned in the profile as d30. The submirror names are assigned by the software, based on the mirror number and the first available submirrors. The submirrors are named d31 and d32.
filesys mirror:d30 c0t1d0s0 c0t0d0s0 /
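Putting these pieces together, a hypothetical profile fragment might mirror both the root (/) file system and swap and create state database replicas on a dedicated slice of each disk. The device names, sizes, and volume numbers below are illustrative assumptions, not values from this guide:

```
install_type    initial_install
partitioning    explicit
# Mirror d10 for root (/), built from slices on two different disks
filesys         mirror:d10 c0t0d0s0 c0t1d0s0 4096 /
# Mirror d20 for swap
filesys         mirror:d20 c0t0d0s1 c0t1d0s1 2048 swap
# Three state database replicas on a dedicated slice of each disk
metadb          c0t0d0s7 size 8192 count 3
metadb          c0t1d0s7 size 8192 count 3
```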
For detailed information about Solaris Volume Manager naming requirements, see Solaris Volume Manager Administration Guide.
When you choose the disks and controllers that you want to use to mirror a file system, consider the following guidelines.
Use components that are on different controllers to increase the number of simultaneous reads and writes that can be performed.
Keep the slices of different submirrors on different disks and controllers. Data protection is diminished considerably if slices of two or more submirrors of the same mirror are on the same disk.
Organize submirrors across separate controllers, because controllers and associated cables tend to fail more often than disks. This practice also improves mirror performance.
Use the same type of disks and controllers in a single mirror. Particularly in old SCSI storage devices, different models or brands of disk or controller can have widely varying performance. Mixing the different performance levels in a single mirror can cause performance to degrade significantly.
When you choose the slices that you want to use to mirror a file system, consider the following guidelines.
Any file system, including root (/), swap, and /usr, can use a mirror. Any application, such as a database, also can use a mirror.
Make sure that your submirror slices are of equal size. Submirrors of different sizes result in unused disk space.
If you have a mirrored file system in which the first submirror attached does not start on cylinder 0, all additional submirrors you attach must also not start on cylinder 0. If you attempt to attach a submirror starting on cylinder 0 to a mirror in which the original submirror does not start on cylinder 0, the following error message is displayed:
can't attach labeled submirror to an unlabeled mirror
You must ensure that all submirrors you plan to attach to a mirror either all start on cylinder 0, or that none of them start on cylinder 0.
Starting cylinders do not have to be identical across all submirrors, but all submirrors must either include or not include cylinder 0.
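One way to check where a slice starts before attaching it, assuming a hypothetical disk named c0t0d0, is to print the disk's volume table of contents:

```shell
# Print the VTOC; dividing a slice's "First Sector" value by the
# sectors-per-cylinder figure in the header gives its starting cylinder.
# Slice 2 conventionally represents the whole disk.
prtvtoc /dev/rdsk/c0t0d0s2
```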
If a system with mirrors for root (/), /usr, and swap is booted into single-user mode, the system indicates that these mirrors are in need of maintenance. When you view these mirrors with the metastat command, these mirrors, and possibly all mirrors on the system, appear in the “Needing Maintenance” state.
Though this situation appears to be potentially dangerous, do not be concerned. The metasync -r command, which normally runs during boot to resynchronize mirrors, is interrupted when the system is booted into single-user mode. After the system is rebooted, the metasync -r command runs and resynchronizes all mirrors.
If this interruption is a concern, run the metasync -r command manually.
For more information about the metasync command, see the metasync(1M) man page and Solaris Volume Manager Administration Guide.
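The check-and-resynchronize sequence described above might look like the following, run as root once the system is back in multiuser mode:

```shell
# Report the state of all mirrors; interrupted mirrors appear
# as needing maintenance
metastat

# Resynchronize all mirrors that require it
# (this is the command that normally runs automatically at boot)
metasync -r

# Confirm that the resynchronization completed
metastat
```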