You should understand the features of the SunOS™ swap mechanism to determine the following:
Swap space requirements
The relationship between swap space and the TMPFS file system
How to recover from error messages related to swap space
Solaris software uses some disk slices for temporary storage rather than for file systems. These slices are called swap slices. Swap slices are used as virtual memory storage areas when the system does not have enough physical memory to handle current processes.
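For example, you can list the swap slices that are currently configured on a system with the swap -l command. The device name and sizes below are illustrative, not taken from any particular system:

```
# swap -l
swapfile             dev    swaplo   blocks     free
/dev/dsk/c0t0d0s1   32,1        16  2097136  2097136
```

The blocks and free columns are reported in 512-byte blocks.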
The virtual memory system maps physical copies of files on disk to virtual addresses in memory. Physical memory pages that contain the data for these mappings can be backed by regular files in the file system, or by swap space. If the memory is backed by swap space it is referred to as anonymous memory because no identity is assigned to the disk space that is backing the memory.
The Solaris OS uses the concept of virtual swap space, a layer between anonymous memory pages and the physical storage (or disk-backed swap space) that actually backs these pages. A system's virtual swap space is equal to the sum of all its physical (disk-backed) swap space plus a portion of the currently available physical memory.
Virtual swap space has these advantages:
The need for large amounts of physical swap space is reduced because virtual swap space does not necessarily correspond to physical (disk) storage.
A pseudo file system called SWAPFS provides addresses for anonymous memory pages. Because SWAPFS controls the allocation of memory pages, it has greater flexibility in deciding what happens to a page. For example, SWAPFS might change the page's requirements for disk-backed swap storage.
The TMPFS file system is activated automatically in the Solaris environment by an entry in the /etc/vfstab file. The TMPFS file system stores files and their associated information in memory (in the /tmp directory) rather than on disk, which speeds access to those files. This feature results in a major performance enhancement for applications such as compilers and DBMS products that use /tmp heavily.
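The /etc/vfstab entry that activates TMPFS typically looks like the following line. The seven fields are: device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options:

```
swap    -    /tmp    tmpfs    -    yes    -
```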
The TMPFS file system allocates space in the /tmp directory from the system's swap resources. This feature means that as you use up space in the /tmp directory, you are also using up swap space. So, if your applications use the /tmp directory heavily and you do not monitor swap space usage, your system could run out of swap space.
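To monitor how heavily /tmp is drawing on swap resources, you can compare the output of swap -s with a df report on /tmp. The figures below are illustrative:

```
# swap -s
total: 57416k bytes allocated + 10480k reserved = 67896k used, 833128k available
# df -h /tmp
Filesystem    size   used  avail capacity  Mounted on
swap          814M    60K   814M     1%    /tmp
```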
If you want to use TMPFS but your swap resources are limited, use the following workarounds:
Mount the TMPFS file system with the size option (-o size) to control how much swap resources TMPFS can use.
Use your compiler's TMPDIR environment variable to point to another larger directory.
Using your compiler's TMPDIR variable only controls whether the compiler uses the /tmp directory. This variable has no effect on other programs' use of the /tmp directory.
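For example, you might limit TMPFS to 512 Mbytes of swap resources by mounting it with the size option, and point a compiler's temporary files at a larger directory with TMPDIR. The directory name /export/scratch is hypothetical:

```
# mount -F tmpfs -o size=512m swap /tmp
$ TMPDIR=/export/scratch; export TMPDIR
```

To make the size limit persistent, you can instead place size=512m in the mount options field of the TMPFS entry in the /etc/vfstab file.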
A dump device is usually disk space that is reserved to store system crash dump information. By default, a system's dump device is configured to be a swap slice in a UFS root environment. If possible, you should configure an alternate disk partition as a dedicated dump device instead to provide increased reliability for crash dumps and faster reboot time after a system failure. You can configure a dedicated dump device by using the dumpadm command. For more information, see Chapter 17, Managing System Crash Information (Tasks), in System Administration Guide: Advanced Administration.
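For example, you might designate a spare slice as the dedicated dump device with dumpadm, and then run dumpadm with no arguments to verify the configuration. The device name below is hypothetical:

```
# dumpadm -d /dev/dsk/c0t1d0s1
# dumpadm
```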
In a ZFS root environment, swap and dump are configured as separate ZFS volumes. The advantages to this model are as follows:
You don't have to partition a disk to include swap and dump areas
Swap and dump devices benefit from the underlying ZFS I/O pipeline architecture
You can set characteristics, such as compression, on swap and dump devices
You can easily reset swap and dump device sizes. For example:
# zfs set volsize=2G rpool/dump
# zfs get volsize rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/dump  volsize   2G     -
Keep in mind that reallocating a large dump device is a time-consuming process.
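Similarly, you can inspect properties such as compression on the swap and dump volumes with zfs get. The VALUE shown below is illustrative; check your own system before changing these properties:

```
# zfs get compression rpool/dump
NAME        PROPERTY     VALUE  SOURCE
rpool/dump  compression  off    -
```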
For more information about using ZFS swap and dump devices, see ZFS Support for Swap and Dump Devices in Solaris ZFS Administration Guide.
If you are using a volume manager, such as Solaris Volume Manager, to manage your disks in a UFS environment, do not configure your dedicated dump device to be under its control. You can keep your swap areas under Solaris Volume Manager's control, which is a recommended practice. However, for accessibility and performance reasons, configure another disk as a dedicated dump device outside of Solaris Volume Manager's control.
A good practice is to allocate enough swap space to support a failing CPU or system board during dynamic reconfiguration. Otherwise, a CPU or system board failure might result in your host or domain rebooting with less memory.
Without having this additional swap space available, one or more of your applications might fail to start due to insufficient memory. This problem would require manual intervention either to add additional swap space or to reconfigure the memory usage of these applications.
If you have allocated additional swap space to handle a potential loss of memory on reboot, all of your memory-intensive applications might start as usual. The system remains available to users, although it might run more slowly because of the additional swapping.
For more information, see your hardware dynamic reconfiguration guide.
Review the following points to determine whether you should configure swap space on a network-connected disk, such as in a SAN environment:
Diagnosing swap space issues on a locally-attached disk is easier than diagnosing swap space issues on a network-connected disk.
The performance of swap space over a SAN should be comparable to swap space configured on a locally-attached disk.
Adding more memory to a system with performance issues, after analyzing performance data, might resolve a swap over SAN performance problem better than moving the swap to a locally-attached disk.