This chapter provides best practices information drawn from real-world storage scenarios that use Solaris Volume Manager. In this section, you will see a typical configuration, followed by an analysis, followed by a recommended ("Best Practices") configuration that meets the same needs.
This chapter includes the following information:
Distributed computing environments, from ISPs to geographically distributed sales offices to telecommunication service providers, often need to deploy similar or identical servers at multiple locations. These servers might provide router or firewall services, email services, DNS caches, Usenet (Network News) servers, DHCP services, or other services best provided at a variety of locations. These small servers have several characteristics in common:
Routine hardware and performance requirements
As a starting point, consider a Netra with a single SCSI bus and two internal disks. This off-the-shelf configuration is a good foundation for distributed servers. Solaris Volume Manager could easily be used to mirror some or all of the slices, thus providing redundant storage to help guard against disk failure. See the following figure for an example.
A configuration like this example might include mirrors for the root (/), /usr, swap, /var, and /export file systems, plus state database replicas (one per disk). As such, a failure of either side of any of the mirrors would not necessarily result in system failure, and up to five discrete failures could possibly be tolerated. However, the system is not sufficiently protected against disk or slice failure. A variety of potential failures could result in a complete system failure, requiring operator intervention.
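A two-disk mirrored configuration like this one can be built with the standard Solaris Volume Manager commands. The following sketch shows the general sequence for mirroring the root (/) slice; the device names (c0t0d0, c0t1d0) and metadevice names (d10, d11, d12) are placeholders, and your slice layout will differ.

```shell
# Create state database replicas first (two per disk on a dedicated slice;
# -f forces creation of the very first replicas). Device names are examples.
metadb -a -f -c 2 c0t0d0s7
metadb -a -c 2 c0t1d0s7

# Build the submirrors (simple concatenations of one slice each).
# -f is needed because c0t0d0s0 holds the mounted root file system.
metainit -f d11 1 1 c0t0d0s0
metainit d12 1 1 c0t1d0s0

# Create a one-way mirror from the first submirror.
metainit d10 -m d11

# For the root file system only, update /etc/vfstab and /etc/system.
metaroot d10

# After rebooting onto the mirror, attach the second submirror,
# which triggers a resync.
metattach d10 d12
```

The same metainit/metattach pattern applies to the /usr, swap, /var, and /export mirrors, except that metaroot is used only for the root file system; the other file systems are switched over by editing /etc/vfstab.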
While this configuration does help provide some protection against catastrophic disk failure, it exposes key possible single points of failure:
The single SCSI controller represents a potential point of failure. If the controller fails, the system will be down, pending replacement of the part.
The two disks do not provide adequate distribution of state database replicas. The majority consensus algorithm requires that half of the state database replicas be available for the system to continue to run, and half plus one replica for a reboot. So, if one state database replica were on each disk and one disk or the slice containing the replica failed, the system could not reboot (thus making a mirrored root ineffective). If two or more state database replicas were on each disk, a single slice failure would likely not be problematic, but a disk failure would still prevent a reboot. If a different number of replicas were on each disk, one disk would hold more than half of the replicas and the other fewer than half. If the disk with fewer replicas failed, the system could reboot and continue. However, if the disk with more replicas failed, the system would immediately panic.
A “Best Practices” approach would be to modify the configuration by adding one more controller and one more hard drive. The resulting configuration could be far more resilient.
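With a third disk on a second controller, the state database replicas can be distributed so that the loss of any single disk, or of the original controller, still leaves a majority. One possible layout, using hypothetical device names (c0t0d0 and c0t1d0 on the first controller, c1t0d0 on the new one), is:

```shell
# Two replicas on each of the three disks, for six total.
# Losing any one disk leaves four of six replicas: more than half,
# so the system keeps running and can still reboot (half plus one = 4).
metadb -a -f -c 2 c0t0d0s7
metadb -a -c 2 c0t1d0s7
metadb -a -c 2 c1t0d0s7

# Verify the layout.
metadb -i
```

Placing one of the disks on the second controller also removes the single SCSI controller as a point of failure for the replica quorum, since two of the three disks remain reachable if either controller fails.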
Solaris Volume Manager works well with networked storage devices, particularly those devices that provide configurable RAID levels and flexible options. Usually, the combination of Solaris Volume Manager and such devices can result in performance and flexibility that is superior to either product alone.
Generally, do not establish Solaris Volume Manager RAID 5 volumes on any hardware storage devices that provide redundancy (for example, RAID 1 and RAID 5 volumes). Unless you have a very unusual situation, performance will suffer, and you will gain very little in terms of redundancy or higher availability.
Configuring underlying hardware storage devices with RAID 5 volumes, on the other hand, is very effective, as it provides a good foundation for Solaris Volume Manager volumes. Hardware RAID 5 provides some additional redundancy for Solaris Volume Manager RAID 1 volumes, soft partitions, or other volumes.
Do not configure similar software and hardware devices. For example, do not build software RAID 1 volumes on top of hardware RAID 1 devices. Configuring similar devices in hardware and software results in performance penalties with no offsetting gain in reliability.
Solaris Volume Manager RAID 1 volumes built on underlying hardware storage devices are not RAID 1+0, as Solaris Volume Manager cannot understand the underlying storage well enough to offer RAID 1+0 capabilities.
Configuring soft partitions on top of a Solaris Volume Manager RAID 1 volume, built in turn on a hardware RAID 5 device, is a very flexible and resilient configuration.
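A layered configuration of this kind might be built as follows. This is a sketch with hypothetical device names: c2t0d0s0 and c3t0d0s0 are assumed to be LUNs presented by hardware RAID 5 arrays on separate controllers.

```shell
# Submirrors, each a concatenation of one hardware RAID 5 LUN (example names).
metainit d21 1 1 c2t0d0s0
metainit d22 1 1 c3t0d0s0

# RAID 1 volume across the two hardware-protected LUNs.
metainit d20 -m d21
metattach d20 d22

# Carve flexible 2-Gbyte soft partitions (-p) from the mirror.
metainit d100 -p d20 2g
metainit d101 -p d20 2g
```

Each soft partition can then hold a file system and be grown later with metattach as space allows, while the underlying mirror-on-RAID-5 layers absorb both disk and array failures.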