Managing file systems is one of your most important system administration tasks.
This chapter provides an overview of file system management.
This section describes new file system features in the Solaris Express release.
Solaris Express 4/06: A new file system monitoring tool, fsstat, is available to report file system operations. You can use several options to report activity, such as by mount point or by file system type.
For example, the following fsstat command displays all ZFS file system operations since the ZFS module was loaded:
```
$ fsstat zfs
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
 268K  145K 93.6K 28.0M 71.1K   186M 2.74M 12.9M 56.2G 1.61M 9.46G zfs
```
For example, the following fsstat command displays all file system operations since the /export/ws file system was mounted:
```
$ fsstat /export/ws
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
    0     0     0 18.1K     0  12.6M    52     0     0     0     0 /export/ws
```
By default, fsstat reports statistics in easy-to-understand units, such as Kbytes, Mbytes, and Gbytes.
For more information, see fsstat(1M).
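In addition to one-shot reports, fsstat can report activity repeatedly at a fixed interval, which is useful for watching a workload as it happens. The following sketch assumes the -F option and the interval and count operands described in fsstat(1M); the mount point and timing values are arbitrary:

```
$ fsstat -F                 # one report covering every file system type
$ fsstat /export/ws 30 5    # five reports for /export/ws, 30 seconds apart
```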
Solaris Express 12/05: ZFS, a revolutionary new file system, provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. In addition, ZFS provides the following administration features:
Backup and restore capabilities
Device management support
GUI administration tool
Persistent snapshots and cloning features
Quotas that can be set for file systems
RBAC-based access control
Storage pool space reservations for file systems
Support for Solaris systems that have zones installed
For more information about using ZFS, see Solaris ZFS Administration Guide.
Solaris Express 12/05: The file system check utility, fsck, has been enhanced to include features from the FreeBSD 4.9 version of the fsck program, along with other improvements.
The fsck utility in this Solaris release includes the following improvements:
Checks and repairs file systems more thoroughly and provides improved error messages. For example, in some scenarios, fsck determines what structures are missing and replaces them appropriately.
Automatically searches for backup superblocks.
Reports when fsck needs to be rerun.
When clearing directories, fsck now attempts to recover directory contents immediately, thereby reducing the time spent rerunning this utility.
If fsck finds duplicate blocks, and not all files that reference the duplicate blocks were cleared, fsck reports the inode numbers at the end of the fsck run. Then, you can use the find command to review the inodes that are damaged.
Includes improved error messages about the status of extended attributes and other special files, such as device files and ACL entries.
Includes a -v option to enable more verbose messages.
In addition, the newfs and mkfs commands have been updated to include new options for displaying a file system's superblock information in text or dumping the superblock information in binary format.
```
newfs [-S | -B] /dev/rdsk/...
```

-S: Displays the file system's superblock in text.

-B: Dumps the file system's superblock in binary.
```
mkfs [-o calcsb | -o calcbinsb] /dev/rdsk/... size
```

-o calcsb: Displays the file system's superblock in text.

-o calcbinsb: Dumps the file system's superblock in binary.
The fsck utility uses this superblock information to search for backup superblocks.
The following sections describe specific fsck enhancements and their corresponding error messages. For step-by-step instructions on using the fsck utility to repair a damaged superblock, see How to Restore a Bad Superblock (Solaris Express Release).
The following fsck error message examples illustrate the automatic backup superblock discovery feature.
If a file system that has a damaged superblock was created with customized newfs or mkfs parameters, such as ntrack or nsect, using fsck's automatically discovered superblock for the repair could damage your file system.
When a file system that was created with customized parameters has a bad superblock, fsck prompts you to cancel the fsck session:
```
CANCEL FILESYSTEM CHECK?
```
If the file system was created with the newfs command and fsck responds that just the primary superblock is corrupted, then consider letting fsck restore the superblock.
```
# fsck /dev/dsk/c1t2d0s0
** /dev/rdsk/c1t2d0s0
BAD SUPERBLOCK AT BLOCK 16: BLOCK SIZE LARGER THAN MAXIMUM SUPPORTED

LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? no

LOOK FOR ALTERNATE SUPERBLOCKS WITH NEWFS? yes

FOUND ALTERNATE SUPERBLOCK 32 WITH NEWFS
USE ALTERNATE SUPERBLOCK? yes

FOUND ALTERNATE SUPERBLOCK AT 32 USING NEWFS
If filesystem was created with manually-specified geometry, using
auto-discovered superblock may result in irrecoverable damage to
filesystem and user data.
CANCEL FILESYSTEM CHECK? no

** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
CORRECT GLOBAL SUMMARY
SALVAGE? y

UPDATE STANDARD SUPERBLOCK? y

81 files, 3609 used, 244678 free (6 frags, 30584 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
```
If the file system was created with the mkfs command and fsck responds that just the primary superblock is corrupted, then consider letting fsck restore the superblock.
```
# fsck /dev/dsk/c1t2d0s0
** /dev/rdsk/c1t2d0s0
BAD SUPERBLOCK AT BLOCK 16: BLOCK SIZE LARGER THAN MAXIMUM SUPPORTED

LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? yes

FOUND ALTERNATE SUPERBLOCK 32 WITH MKFS
USE ALTERNATE SUPERBLOCK? yes

FOUND ALTERNATE SUPERBLOCK AT 32 USING MKFS
If filesystem was created with manually-specified geometry, using
auto-discovered superblock may result in irrecoverable damage to
filesystem and user data.
CANCEL FILESYSTEM CHECK? no

** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
CORRECT GLOBAL SUMMARY
SALVAGE? y

UPDATE STANDARD SUPERBLOCK? y

81 files, 3609 used, 243605 free (117 frags, 30436 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
```
The following example illustrates what happens if you specify fsck's -y option in a damaged superblock scenario. You are automatically dropped out of the fsck session, and a message is displayed that tells you to rerun fsck with the alternate superblock.
```
# fsck -y /dev/dsk/c1t2d0s0
** /dev/rdsk/c1t2d0s0
BAD SUPERBLOCK AT BLOCK 16: BLOCK SIZE LARGER THAN MAXIMUM SUPPORTED
LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? yes
LOOK FOR ALTERNATE SUPERBLOCKS WITH NEWFS? yes
SEARCH FOR ALTERNATE SUPERBLOCKS FAILED.
USE GENERIC SUPERBLOCK FROM MKFS? yes
CALCULATED GENERIC SUPERBLOCK WITH MKFS
If filesystem was created with manually-specified geometry, using
auto-discovered superblock may result in irrecoverable damage to
filesystem and user data.
CANCEL FILESYSTEM CHECK? yes

Please verify that the indicated block contains a proper
superblock for the filesystem (see fsdb(1M)).

FSCK was running in YES mode. If you wish to run in that mode using
the alternate superblock, run `fsck -y -o b=453920 /dev/rdsk/c1t2d0s0'.
```
The following fsck error message scenario illustrates the new prompts for the backup superblock, although in this example, the fsck run is not canceled. Canceling the fsck session would be an appropriate response if the file system was created with customized parameters or if there is some other concern about running fsck on this file system.
The example shows the various superblock error conditions that fsck can report:
```
# fsck /dev/rdsk/c0t1d0s0
** /dev/rdsk/c0t1d0s0
BAD SUPERBLOCK AT BLOCK 16: BLOCK SIZE LARGER THAN MAXIMUM SUPPORTED
BAD SUPERBLOCK AT BLOCK 16: NUMBER OF DATA BLOCKS OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: INODES PER GROUP OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: MAGIC NUMBER WRONG
BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
BAD SUPERBLOCK AT BLOCK 16: NCG OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: CPG OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: NCYL IS INCONSISTENT WITH NCG*CPG
BAD SUPERBLOCK AT BLOCK 16: SIZE OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: NUMBER OF DIRECTORIES OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: ROTATIONAL POSITION TABLE SIZE OUT OF RANGE
BAD SUPERBLOCK AT BLOCK 16: SIZE OF CYLINDER GROUP SUMMARY AREA WRONG
BAD SUPERBLOCK AT BLOCK 16: INOPB NONSENSICAL RELATIVE TO BSIZE

LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? yes

FOUND ALTERNATE SUPERBLOCK 32 WITH MKFS
USE ALTERNATE SUPERBLOCK? yes

FOUND ALTERNATE SUPERBLOCK AT 32 USING MKFS
If filesystem was created with manually-specified geometry, using
auto-discovered superblock may result in irrecoverable damage to
filesystem and user data.
CANCEL FILESYSTEM CHECK? no

** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2a - Check Duplicated Names
** Phase 2b - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
SALVAGE? yes

UPDATE STANDARD SUPERBLOCK? yes

82 files, 3649 used, 244894 free (6 frags, 30611 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
```
Improved reporting by fsck about when it needs to be rerun reduces the need to run the utility multiple times, which can be particularly time consuming on large file systems.
The following new messages prompt you to rerun the fsck utility at the end of an error scenario:
```
***** PLEASE RERUN FSCK *****
```
Or:
```
Please rerun fsck(1M) to correct this.
```
These new prompts resolve the previous difficulty in determining whether fsck needed to be rerun.
Unless you are prompted to rerun fsck by one of the preceding messages, there is no need to rerun fsck, even after you see the following message:
```
***** FILE SYSTEM WAS MODIFIED *****
```
However, rerunning fsck after this message does not harm the file system. The message is informational, describing fsck's corrective actions.
New fsck messages are included that report on and repair files with extended attributes. For example:
```
BAD ATTRIBUTE REFERENCE TO I=1 FROM I=96
```

```
Attribute directory I=97 not attached to file I=96
I=96  OWNER=root MODE=40755
SIZE=512 MTIME=Jun 20 12:25 2008
DIR=<xattr>

FIX? yes
```

```
ZERO LENGTH ATTR DIR I=12  OWNER=root MODE=160755
SIZE=0 MTIME=Jun 20 12:26 2008
CLEAR? yes
```

```
File should BE marked as extended attribute
I=22  OWNER=root MODE=100644
SIZE=0 MTIME=Jun 20 12:27 2008
FILE=<xattr>

FIX? yes
```

```
UNREF ATTR DIR  I=106  OWNER=root MODE=160755
SIZE=512 MTIME=Jun 20 12:28 2008
RECONNECT? yes
```

```
File I=107 should NOT be marked as extended attribute
I=107  OWNER=root MODE=100644
SIZE=0 MTIME=Jun 20 12:29 2008
FILE=?/attfsdir-7-att

FIX? yes

DIR I=106 CONNECTED.
```
The fsck error messages now report information about blocks, fragments, and LFNs, which are logical fragment numbers counted from the start of the file. For example, you might see output similar to the following:
```
** Phase 1 - Check Blocks and Sizes
FRAGMENT 784 DUP I=38 LFN 0
FRAGMENT 785 DUP I=38 LFN 1
FRAGMENT 786 DUP I=38 LFN 2
.
.
.
```
fsck processes objects as fragments, but in previous Solaris releases, it reported object information only as blocks. It now correctly reports fragments.
If fsck finds error conditions that involve duplicate blocks or fragments, fsck offers to display the uncleared files at the end of the fsck output. For example, you might see output similar to the following:
```
LIST REMAINING DUPS? yes

Some blocks that were found to be in multiple files are still
assigned to file(s).
Fragments sorted by inode and logical offsets:

Inode 38:
  Logical Offset  0x00000000   Physical Fragment  784
  Logical Offset  0x00000800   Physical Fragment  786
  Logical Offset  0x00001000   Physical Fragment  788
  Logical Offset  0x00001800   Physical Fragment  790
```
Then, you can use the find command with the -inum option to identify the file name that corresponds to inode 38, in this example.
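The lookup suggested above can be exercised on any file system. The following portable sketch creates a scratch file, reads its inode number with ls -i, and then locates the file by inode, just as you would for an inode number reported by fsck (the directory and file names are arbitrary):

```shell
# Create a scratch directory and a file inside it.
tmpdir=$(mktemp -d)
touch "$tmpdir/scratch.txt"

# ls -i prints the inode number before the file name.
inode=$(ls -i "$tmpdir/scratch.txt" | awk '{print $1}')

# Locate the file by its inode number.
find "$tmpdir" -inum "$inode"
```

Running the block prints the path of scratch.txt, confirming that the inode number maps back to the file name.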
Use these references to find step-by-step instructions for managing file systems.
| File System Management Task | For More Information |
|---|---|
| Create new file systems. | Chapter 18, Creating UFS, TMPFS, and LOFS File Systems (Tasks) and Chapter 20, Using The CacheFS File System (Tasks) |
| Make local and remote files available to users. | |
| Connect and configure new disk devices. | |
| Design and implement a backup schedule and restore files and file systems, as needed. | Chapter 24, Backing Up and Restoring UFS File Systems (Overview) |
| Check for and correct file system inconsistencies. | |
A file system is a structure of directories that is used to organize and store files. The term file system is used to describe the following:
A particular type of file system: disk-based, network-based, or virtual
The entire file tree, beginning with the root (/) directory
The data structure of a disk slice or other media storage device
A portion of a file tree structure that is attached to a mount point on the main file tree so that the files are accessible
Usually, you know from the context which meaning is intended.
The Solaris OS uses the virtual file system (VFS) architecture, which provides a standard interface for different file system types. The VFS architecture enables the kernel to handle basic operations, such as reading, writing, and listing files. The VFS architecture also makes it easier to add new file systems.
The Solaris OS supports three types of file systems:
Disk-based
Network-based
Virtual
To identify the file system type, see Determining a File System's Type.
Disk-based file systems are stored on physical media such as hard disks, CD-ROMs, and diskettes. Disk-based file systems can be written in different formats. The available formats are described in the following table.
| Disk-Based File System | Format Description |
|---|---|
| UFS | UNIX file system (based on the BSD Fast File System that was provided in the 4.3 Tahoe release). UFS is the default disk-based file system for the Solaris OS. Before you can create a UFS file system on a disk, you must format the disk and divide it into slices. For information on formatting disks and dividing disks into slices, see Chapter 10, Managing Disks (Overview). |
| ZFS | The ZFS file system is new in the Solaris Express release. For more information, see the Solaris ZFS Administration Guide. |
| HSFS | High Sierra, Rock Ridge, and ISO 9660 file system. High Sierra is the first CD-ROM file system. ISO 9660 is the official standard version of the High Sierra file system. The HSFS file system is used on CD-ROMs and is a read-only file system. Solaris HSFS supports Rock Ridge extensions to ISO 9660. When present on a CD-ROM, these extensions provide all UFS file system features and file types, except for writability and hard links. |
| PCFS | PC file system, which allows read and write access to data and programs on DOS-formatted disks that are written for DOS-based personal computers. |
| UDFS | The Universal Disk Format (UDF) file system, the industry-standard format for storing information on the optical media technology called DVD (Digital Versatile Disc or Digital Video Disc). |
Each type of disk-based file system is customarily associated with a particular media device, as follows:
UFS with hard disk
HSFS with CD-ROM
PCFS with diskette
UDF with DVD
However, these associations are not restrictive. For example, CD-ROMs and diskettes can have UFS file systems created on them.
For information about creating a UDFS file system on removable media, see How to Create a File System on Removable Media.
The UDF file system is the industry-standard format for storing information on DVD (Digital Versatile Disc or Digital Video Disc) optical media.
The UDF file system is provided as dynamically loadable 32-bit and 64-bit modules, with system administration utilities for creating, mounting, and checking the file system on both SPARC and x86 platforms. The Solaris UDF file system works with supported ATAPI and SCSI DVD drives, CD-ROM devices, and disk and diskette drives. In addition, the Solaris UDF file system is fully compliant with the UDF 1.50 specification.
The UDF file system provides the following features:
Ability to access the industry-standard CD-ROM and DVD-ROM media when they contain a UDF file system
Flexibility in exchanging information across platforms and operating systems
A mechanism for implementing new applications rich in broadcast-quality video, high-quality sound, and interactivity using the DVD video specification based on UDF format
The following features are not included in the UDF file system:
Support for write-once media (CD-RW), with either sequential disk-at-once recording or incremental recording
UFS components such as quotas, ACLs, transaction logging, file system locking, and file system threads, which are not part of the UDF 1.50 specification
The UDF file system requires the following:
At least the Solaris 7 11/99 release
Supported SPARC or x86 platform
Supported CD-ROM or DVD-ROM device
The Solaris UDF file system implementation provides the following:
Support for industry-standard read/write UDF version 1.50
Fully internationalized file system utilities
Network-based file systems can be accessed over the network. Typically, network-based file systems reside on one system, usually a server, and are accessed by other systems across the network.
With NFS, you can administer distributed resources (files or directories) by exporting them from a server and mounting them on individual clients. For more information, see The NFS Environment.
Virtual file systems are memory-based file systems that provide access to special kernel information and facilities. Most virtual file systems do not use file system disk space. However, the CacheFS file system uses a file system on the disk to contain the cache. Also, some virtual file systems, such as the temporary file system (TMPFS), use the swap space on a disk.
The CacheFS™ file system can be used to improve the performance of remote file systems or slow devices such as CD-ROM drives. When a file system is cached, the data that is read from the remote file system or CD-ROM is stored in a cache on the local system.
If you want to improve the performance and scalability of an NFS or CD-ROM file system, you should use the CacheFS file system. The CacheFS software is a general purpose caching mechanism for file systems that improves NFS server performance and scalability by reducing server and network load.
Designed as a layered file system, the CacheFS software provides the ability to cache one file system on another. In an NFS environment, CacheFS software increases the client per server ratio, reduces server and network loads, and improves performance for clients on slow links, such as Point-to-Point Protocol (PPP). You can also combine a CacheFS file system with the AutoFS service to help boost performance and scalability.
For detailed information about the CacheFS file system, see Chapter 20, Using The CacheFS File System (Tasks).
If both the CacheFS client and the CacheFS server are running NFS version 4, files are no longer cached in a front file system. All file access is provided by the back file system. Also, since no files are being cached in the front file system, CacheFS-specific mount options, which are meant to affect the front file system, are ignored. CacheFS-specific mount options do not apply to the back file system.
The first time you configure your system for NFS version 4, a warning appears on the console to indicate that caching is no longer performed.
If you want to implement your CacheFS mounts as in previous Solaris releases, then specify NFS version 3 in your CacheFS mount commands. For example:
```
mount -F cachefs -o backfstype=nfs,cachedir=/local/mycache,vers=3 \
starbug:/docs /docs
```
The temporary file system (TMPFS) uses local memory for file system reads and writes. Typically, using memory for file system reads and writes is much faster than using a UFS file system. Using TMPFS can improve system performance by saving the cost of reading and writing temporary files to a local disk or across the network. For example, temporary files are created when you compile a program. The OS generates much disk activity or network activity while manipulating these files. Using TMPFS to hold these temporary files can significantly speed up their creation, manipulation, and deletion.
Files in TMPFS file systems are not permanent. These files are deleted when the file system is unmounted and when the system is shut down or rebooted.
TMPFS is the default file system type for the /tmp directory in the Solaris OS. You can copy or move files into or out of the /tmp directory, just as you would in a UFS file system.
The TMPFS file system uses swap space as a temporary backing store. If a system with a TMPFS file system does not have adequate swap space, two problems can occur:
The TMPFS file system can run out of space, just as regular file systems do.
Because TMPFS allocates swap space to save file data (if necessary), some programs might not execute because of insufficient swap space.
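Because TMPFS draws on swap space, one way to guard against these problems is to cap the size of a TMPFS mount when you create it. The following sketch assumes the size option described in mount_tmpfs(1M); the mount point and size value are hypothetical:

```
# mkdir -p /export/scratch
# mount -F tmpfs -o size=512m swap /export/scratch
```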
For information about creating TMPFS file systems, see Chapter 18, Creating UFS, TMPFS, and LOFS File Systems (Tasks). For information about increasing swap space, see Chapter 21, Configuring Additional Swap Space (Tasks).
The loopback file system (LOFS) lets you create a new virtual file system so that you can access files by using an alternative path name. For example, you can create a loopback mount of the root (/) directory on /tmp/newroot. This loopback mount makes the entire file system hierarchy appear as if it were duplicated under /tmp/newroot, including any file systems mounted from NFS servers. All files will be accessible either with a path name starting from root (/) or with a path name that starts from /tmp/newroot.
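The /tmp/newroot example can be sketched as the following command sequence, run as superuser (output is omitted):

```
# mkdir /tmp/newroot
# mount -F lofs / /tmp/newroot
# ls /tmp/newroot    # lists the same entries as ls /
```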
For information on how to create LOFS file systems, see Chapter 18, Creating UFS, TMPFS, and LOFS File Systems (Tasks).
The process file system (PROCFS) resides in memory and contains a list of active processes, by process number, in the /proc directory. Information in the /proc directory is used by commands such as ps. Debuggers and other development tools can also access the address space of the processes by using file system calls.
Do not delete files in the /proc directory. The deletion of processes from the /proc directory does not kill them. /proc files do not use disk space, so there is no reason to delete files from this directory.
The /proc directory does not require administration.
These additional types of virtual file systems are listed for your information. They do not require administration.
The mount output on an x86 system might include a loopback mount of a libc_hwcap library, a hardware-optimized implementation of libc. This libc implementation is intended to optimize the performance of 32-bit applications.
This loopback mount requires no administration and consumes no disk space.
The UFS, NFS, and TMPFS file systems have been enhanced to include extended file attributes. Extended file attributes enable application developers to associate specific attributes to a file. For example, a developer of an application used to manage a windowing system might choose to associate a display icon with a file. Extended file attributes are logically represented as files within a hidden directory that is associated with the target file.
You can use the runat command to add attributes and execute shell commands in the extended attribute namespace. This namespace is a hidden attribute directory that is associated with the specified file.
To use the runat command to add attributes to a file, you first create the attribute file.
```
$ runat filea cp /tmp/attrdata attr.1
```
Then, use the runat command to list the attributes of the file.
```
$ runat filea ls -l
```
For more information, see the runat(1) man page.
Many Solaris file system commands have been modified to support file system attributes by providing an attribute-aware option. Use this option to query, copy, or find file attributes. For more information, see the specific man page for each file system command.
The Solaris OS uses some disk slices for temporary storage rather than for file systems. These slices are called swap slices, or swap space. Swap space is used for virtual memory storage areas when the system does not have enough physical memory to handle current processes.
Since many applications rely on swap space, you should know how to plan for, monitor, and add more swap space, when needed. For an overview about swap space and instructions for adding swap space, see Chapter 21, Configuring Additional Swap Space (Tasks).
Most commands for file system administration have both a generic component and a file system–specific component. Whenever possible, you should use the generic commands, which call the file system–specific component. The following table lists the generic commands for file system administration. These commands are located in the /usr/sbin directory.
Table 17–1 Generic Commands for File System Administration
| Command | Description | Man Page |
|---|---|---|
| clri | Clears inodes | clri(1M) |
| df | Reports the number of free disk blocks and files | df(1M) |
| ff | Lists file names and statistics for a file system | ff(1M) |
| fsck | Checks the integrity of a file system and repairs any damage found | fsck(1M) |
| fsdb | Debugs the file system | fsdb(1M) |
| fstyp | Determines the file system type | fstyp(1M) |
| labelit | Lists or provides labels for file systems when they are copied to tape (for use only by the volcopy command) | labelit(1M) |
| mkfs | Creates a new file system | mkfs(1M) |
| mount | Mounts local and remote file systems | mount(1M) |
| mountall | Mounts all file systems that are specified in the virtual file system table (/etc/vfstab) | mountall(1M) |
| ncheck | Generates a list of path names with their inode numbers | ncheck(1M) |
| umount | Unmounts local and remote file systems | umount(1M) |
| umountall | Unmounts all file systems that are specified in the virtual file system table (/etc/vfstab) | umountall(1M) |
| volcopy | Creates an image copy of a file system | volcopy(1M) |
The generic file system commands determine the file system type by following this sequence:
From the -F option, if supplied.
By matching a special device with an entry in the /etc/vfstab file (if the special device is supplied). For example, fsck first looks for a match against the fsck device field. If no match is found, the command then checks the special device field.
By using the default specified in the /etc/default/fs file for local file systems and in the /etc/dfs/fstypes file for remote file systems.
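For example, given the following hypothetical /etc/vfstab entry, running fsck /dev/rdsk/c0t0d0s7 matches the entry by its fsck device field, so the generic command selects the UFS-specific version of fsck:

```
#device            device              mount    FS    fsck  mount    mount
#to mount          to fsck             point    type  pass  at boot  options
/dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export  ufs   2     yes      -
```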
Both the generic commands and specific commands have manual pages in the man pages section 1M: System Administration Commands. The manual pages for the generic file system commands provide information about generic command options only. The manual page for a specific file system command has information about options for that file system. To look at a manual page for a specific file system, append an underscore and the abbreviation for the file system type to the generic command name. For example, to see the specific manual page for mounting a UFS file system, type the following:
```
$ man mount_ufs
```
The Solaris UFS file system is hierarchical, starting with the root directory (/) and continuing downwards through a number of directories. The Solaris installation process enables you to install a default set of directories and uses a set of conventions to group similar types of files together.
For a description of the contents of Solaris file systems and directories, see filesystem(5).
The following table provides a summary of the default Solaris file systems.
Table 17–2 The Default Solaris File Systems
The root (/) and /usr file systems are required to run a system. Some of the most basic commands in the /usr file system (like mount) are also included in the root (/) file system. As such, they are available when the system boots or is in single-user mode, and /usr is not mounted. For more detailed information on the default directories for the root (/) and /usr file systems, see Chapter 23, UFS File System (Reference).
See the following sections for details about the UFS file system.
UFS is the default disk-based file system in the Solaris OS. Most often, when you administer a disk-based file system, you are administering UFS file systems. UFS provides a range of features, which are described in the following sections.
For detailed information about the UFS file system structure, see Chapter 23, UFS File System (Reference).
When laying out file systems, you need to consider possible conflicting demands. Here are some suggestions:
Distribute the workload as evenly as possible among different I/O systems and disk drives. Distribute the /export/home file system and swap space evenly across disks.
Keep pieces of projects or members of groups within the same file system.
Use as few file systems per disk as possible. On the system (or boot) disk, you should have three file systems: root (/), /usr, and swap space. On other disks, create one or at most two file systems, with one file system preferably being additional swap space. Fewer, roomier file systems cause less file fragmentation than many small, overcrowded file systems. Higher-capacity tape drives and the ability of the ufsdump command to handle multiple volumes make it easier to back up larger file systems.
If you have some users who consistently create very small files, consider creating a separate file system with more inodes. However, most sites do not need to keep similar types of user files in the same file system.
For information on default file system parameters as well as procedures for creating new UFS file systems, see Chapter 18, Creating UFS, TMPFS, and LOFS File Systems (Tasks).
This Solaris release provides support for multiterabyte UFS file systems on systems that run a 64-bit Solaris kernel.
Previously, UFS file systems were limited to approximately 1 terabyte on both 64-bit and 32-bit systems. All UFS file system commands and utilities have been updated to support multiterabyte UFS file systems.
For example, the ufsdump command has been updated with a larger block size for dumping large UFS file systems:
```
# ufsdump 0f /dev/md/rdsk/d97 /dev/md/rdsk/d98
  DUMP: Date of this level 0 dump: Fri Oct 10 17:22:13 2008
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/rdsk/d97 to /dev/md/rdsk/d98
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 32 Kilobyte records
  DUMP: Estimated 17439410 blocks (8515.34MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
```
Administering UFS file systems that are less than 1 terabyte remains the same. No administration differences exist between UFS file systems that are less than one terabyte and file systems that are greater than 1 terabyte.
You can initially create a UFS file system that is less than 1 terabyte and specify that it can eventually be expanded into a multiterabyte file system by using the newfs -T option. This option sets the inode and fragment density to scale appropriately for a multiterabyte file system.
Using the newfs -T option when you create a UFS file system less than 1 terabyte on a system running a 32-bit kernel enables you to eventually expand this file system by using the growfs command when you boot this system under a 64-bit kernel. For more information, see newfs(1M).
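Putting these pieces together, creating an expandable file system and later growing it might look like the following sketch; the device name and mount point are hypothetical, and the growfs usage follows growfs(1M). The first command creates the file system with multiterabyte-ready inode and fragment density; after the underlying volume has been enlarged, the second command expands the mounted file system in place:

```
# newfs -T /dev/rdsk/c1t2d0s0
...
# growfs -M /export /dev/rdsk/c1t2d0s0
```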
You can use the fstyp -v command to identify whether a UFS file system has multiterabyte support by checking the following value in the magic column:
```
# /usr/sbin/fstyp -v /dev/md/rdsk/d3 | head -5
ufs
magic   decade  format  dynamic time    Thu Jul 17 11:15:36 2008
```
A UFS file system with no multiterabyte support has the following fstyp output:
```
# /usr/sbin/fstyp -v /dev/md/rdsk/d0 | head -5
ufs
magic   11954   format  dynamic time    Thu Jul 17 12:43:29 MDT 2008
```
You can use the growfs command to expand a UFS file system to the size of the slice or the volume without loss of service or data. For more information, see growfs(1M).
Two new related features are multiterabyte volume support with the EFI disk label and multiterabyte volume support with Solaris Volume Manager. For more information, see EFI Disk Label and the Solaris Volume Manager Administration Guide.
Multiterabyte UFS file systems include the following features:
Provides the ability to create a UFS file system up to 16 terabytes in size.
Provides the ability to create a file system less than 16 terabytes that can later be increased in size up to 16 terabytes.
Multiterabyte file systems can be created on physical disks, Solaris Volume Manager's logical volumes, and Veritas' VxVM logical volumes.
Multiterabyte file systems benefit from the performance improvements of having UFS logging enabled. Multiterabyte file systems also benefit from the availability of logging because the fsck command might not have to be run when logging is enabled.
When you create a partition for your multiterabyte UFS file system, the disk will be labeled automatically with an EFI disk label. For more information on EFI disk labels, see EFI Disk Label.
Provides the ability to snapshot a multiterabyte file system by creating multiple backing store files when a file system is over 512 Gbytes.
Limitations of multiterabyte UFS file systems are as follows:
This feature is not supported on 32-bit systems.
You cannot mount a file system greater than 1 terabyte on a system that is running a 32-bit Solaris kernel.
You cannot boot from a file system greater than 1 terabyte on a system that is running a 64-bit Solaris kernel. This limitation means that you cannot put a root (/) file system on a multiterabyte file system.
There is no support for individual files greater than 1 terabyte.
The maximum number of files is 1 million files per terabyte of a UFS file system. For example, a 4-terabyte file system can contain 4 million files.
This limit is intended to reduce the time it takes to check the file system with the fsck command.
The maximum quota that you can set on a multiterabyte UFS file system is 2 terabytes of 1024-byte blocks.
Use these references to find step-by-step instructions for working with multiterabyte UFS file systems.
| Multiterabyte UFS Task | For More Information |
|---|---|
| Create multiterabyte UFS file systems | How to Create a Multiterabyte UFS File System; How to Expand a Multiterabyte UFS File System; How to Expand a UFS File System to a Multiterabyte UFS File System |
| Create a multiterabyte UFS snapshot | |
| Troubleshoot multiterabyte UFS problems | |
UFS logging bundles the multiple metadata changes that comprise a complete UFS operation into a transaction. Sets of transactions are recorded in an on-disk log. Then, they are applied to the actual UFS file system's metadata.
At reboot, the system discards incomplete transactions but applies the transactions for completed operations. The file system remains consistent because only completed transactions are ever applied. This consistency holds even when a system crash interrupts system calls that would otherwise introduce inconsistencies into a UFS file system.
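The replay rule described above, apply only completed transactions and discard the rest, can be illustrated with a small shell sketch. The log format, file name, and record layout here are invented for illustration only and are not the actual UFS on-disk log format:

```shell
# Toy journal: each transaction is a BEGIN line, one or more WRITE
# lines, and a COMMIT line.  A crash can leave a trailing transaction
# with no COMMIT.
cat > /tmp/toy.log <<'EOF'
BEGIN 1
WRITE inode 42 size=100
COMMIT 1
BEGIN 2
WRITE inode 7 size=50
EOF

# Replay: buffer each transaction's writes; apply them only when the
# matching COMMIT is seen.  Transaction 2 never committed, so its
# WRITE is discarded, leaving the "file system" consistent.
awk '
  /^BEGIN/  { buf = "";             next }
  /^WRITE/  { buf = buf $0 "\n";    next }
  /^COMMIT/ { printf "%s", buf }
' /tmp/toy.log
```

Running the sketch applies only the committed write for inode 42; the uncommitted write for inode 7 never reaches the output, which is the same guarantee the UFS log provides at reboot.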
UFS logging provides two advantages:
If the file system is already consistent due to the transaction log, you might not have to run the fsck command after a system crash or an unclean shutdown. For more information on unclean shutdowns, see What the fsck Command Checks and Tries to Repair.
Starting in the Solaris 9 12/02 release, the performance of UFS logging matches or exceeds the performance of nonlogging file systems. This improvement can occur because a file system with logging enabled converts multiple updates to the same data into single updates, thus reducing the number of overhead disk operations required.
Logging is enabled by default for all UFS file systems, except under the following conditions:
When logging is explicitly disabled.
When there is insufficient file system space for the log.
In previous Solaris releases, you had to manually enable UFS logging.
Keep the following issues in mind when using UFS logging:
Ensure that you have enough disk space for your general system needs, such as for users and applications, and for UFS logging.
If you don't have enough disk space for logging data, a message similar to the following is displayed:
# mount /dev/dsk/c0t4d0s0 /mnt
/mnt: No space left on device
Could not enable logging for /mnt on /dev/dsk/c0t4d0s0.
#
However, the file system is still mounted. For example:
# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0t4d0s0      142M   142M     0K    100%   /mnt
#
A UFS file system with logging enabled, even one that is mostly empty, consumes some disk space for the log.
If you upgrade to this Solaris release from a previous Solaris release, your UFS file systems will have logging enabled, even if the logging option was not specified in the /etc/vfstab file. To disable logging, add the nologging option to the UFS file system entries in the /etc/vfstab file.
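As a sketch of what such an entry looks like, the following vfstab line places nologging in the mount options field. The device and mount-point names are illustrative only:

```
#device             device              mount          FS   fsck  mount    mount
#to mount           to fsck             point          type pass  at boot  options
/dev/dsk/c0t0d0s7   /dev/rdsk/c0t0d0s7  /export/home   ufs  2     yes      nologging
```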
The UFS transaction log has the following characteristics:
Is allocated from free blocks on the file system
Is sized at approximately 1 Mbyte per 1 Gbyte of file system space, up to 256 Mbytes. The log size might be larger, up to a maximum of 512 Mbytes, if the file system has a large number of cylinder groups.
Is continually flushed as it fills up
Is also flushed when the file system is unmounted or as a result of the lockfs command
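The sizing rule above, roughly 1 Mbyte of log per Gbyte of file system space with a 256-Mbyte cap in the common case, can be sketched as a small shell calculation. This approximates only the documented rule; it is not the actual kernel algorithm, and it does not model the larger 512-Mbyte cap for file systems with many cylinder groups:

```shell
# Approximate UFS transaction log size in Mbytes for a file system of
# the given size in Gbytes: 1 Mbyte per Gbyte, capped at 256 Mbytes.
log_size_mb() {
    fs_gb=$1
    if [ "$fs_gb" -gt 256 ]; then
        echo 256
    else
        echo "$fs_gb"
    fi
}

log_size_mb 4      # small file system: log scales with size
log_size_mb 1024   # 1-terabyte file system: log hits the cap
```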
If you need to enable UFS logging, specify the -o logging option with the mount command in the /etc/vfstab file or when you manually mount the file system. Logging can be enabled on any UFS file system, including the root (/) file system. Also, the fsdb command has new debugging commands to support UFS logging.
In some operating systems, a file system with logging enabled is known as a journaling file system.
You can use the fssnap command to create a read-only snapshot of a file system. A snapshot is a file system's temporary image that is intended for backup operations.
See Chapter 26, Using UFS Snapshots (Tasks) for more information.
Direct I/O is intended to boost bulk I/O operations. Bulk I/O operations use large buffer sizes to transfer large files (larger than 256 Kbytes).
Using UFS direct I/O might benefit applications, such as database engines, that do their own internal buffering. Starting with the Solaris 8 1/01 release, UFS direct I/O has been enhanced to allow the same kind of I/O concurrency that occurs when raw devices are accessed. Now you can get the benefit of file system naming and flexibility with very little performance penalty. Check with your database vendor to see if it can enable UFS direct I/O in its product configuration options.
Direct I/O can also be enabled on a file system by using the forcedirectio option to the mount command. Enabling direct I/O is a performance benefit only when a file system is transferring large amounts of sequential data.
When a file system is mounted with this option, data is transferred directly between a user's address space and the disk. When forced direct I/O is not enabled for a file system, data transferred between a user's address space and the disk is first buffered in the kernel address space.
The default behavior is no forced direct I/O on a UFS file system. For more information, see mount_ufs(1M).
Before you can access the files on a file system, you need to mount the file system. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. The root (/) file system is always mounted. Any other file system can be connected or disconnected from the root (/) file system.
When you mount a file system, any files or directories in the underlying mount point directory are unavailable as long as the file system is mounted. These files are not permanently affected by the mounting process, and they become available again when the file system is unmounted. However, mount point directories are typically empty because you usually do not want to obscure existing files.
For example, the following figure shows a local file system, starting with a root (/) file system and the sbin, etc, and opt subdirectories.
To access a local file system from the /opt file system that contains a set of unbundled products, you must do the following:
First, you must create a directory to use as a mount point for the file system you want to mount, for example, /opt/unbundled.
Once the mount point is created, you can mount the file system by using the mount command. This command makes all of the files and directories in /opt/unbundled available, as shown in the following figure.
For step-by-step instructions on how to mount file systems, see Chapter 19, Mounting and Unmounting File Systems (Tasks).
Whenever you mount or unmount a file system, the /etc/mnttab (mount table) file is modified with the list of currently mounted file systems. You can display the contents of this file by using the cat or more commands. However, you cannot edit this file. Here is an example of an /etc/mnttab file:
$ more /etc/mnttab
rpool/ROOT/zfs509BE  /                  zfs      dev=4010002                     0
/devices             /devices           devfs    dev=5000000                     1235087509
ctfs                 /system/contract   ctfs     dev=5040001                     1235087509
proc                 /proc              proc     dev=5080000                     1235087509
mnttab               /etc/mnttab        mntfs    dev=50c0001                     1235087509
swap                 /etc/svc/volatile  tmpfs    xattr,dev=5100001               1235087510
objfs                /system/object     objfs    dev=5140001                     1235087510
sharefs              /etc/dfs/sharetab  sharefs  dev=5180001                     1235087510
fd                   /dev/fd            fd       rw,dev=52c0001                  1235087527
swap                 /tmp               tmpfs    xattr,dev=5100002               1235087543
swap                 /var/run           tmpfs    xattr,dev=5100003               1235087543
rpool/export         /export            zfs      rw,devices,setuid,nonbmand,exec,xattr,...
rpool/export/home    /export/home       zfs      rw,devices,setuid,nonbmand,exec,...
rpool                /rpool             zfs      rw,devices,setuid,nonbmand,exec,xattr,atime,dev=4010005  1235087656
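Because each mnttab entry is a whitespace-delimited line (resource, mount point, file system type, options, time), you can summarize it with standard text tools. The sketch below runs against a small sample file in the same format rather than the live /etc/mnttab:

```shell
# Sample lines in mnttab format:
# special  mount-point  fstype  options  time
cat > /tmp/sample_mnttab <<'EOF'
rpool/ROOT/zfs509BE / zfs dev=4010002 0
/devices /devices devfs dev=5000000 1235087509
swap /tmp tmpfs xattr,dev=5100002 1235087543
EOF

# Print "mount-point fstype" for each mounted file system.
awk '{ print $2, $3 }' /tmp/sample_mnttab
```

The same one-line awk command works against the real /etc/mnttab, since the file is read-only and maintained by the kernel.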
Manually mounting file systems every time you wanted to access them would be time-consuming and error-prone. To avoid these problems, the virtual file system table (the /etc/vfstab file) provides a list of file systems and information on how to mount them.
The /etc/vfstab file provides two important features:
You can specify file systems to mount automatically when the system boots. ZFS file systems are automatically mounted at boot time by an SMF service without entries in the vfstab file.
You can mount file systems by using only the mount point name. The /etc/vfstab file contains the mapping between the mount point and the actual device slice name.
A default /etc/vfstab file is created when you install a system, depending on the selections during installation. However, you can edit the /etc/vfstab file on a system whenever you want. To add an entry, the information you need to specify is as follows:
The device where the file system resides
The file system mount point
File system type
Whether you want the file system to mount automatically when the system boots (by using the mountall command)
Any mount options
The following is an example of an /etc/vfstab file for a system that runs a UFS root file system. Comment lines begin with #. This example shows an /etc/vfstab file for a system with two disks (c0t0d0 and c0t3d0).
$ more /etc/vfstab
#device              device              mount              FS       fsck  mount    mount
#to mount            to fsck             point              type     pass  at boot  options
#
fd                   -                   /dev/fd            fd       -     no       -
/proc                -                   /proc              proc     -     no       -
/dev/dsk/c0t0d0s1    -                   -                  swap     -     no       -
/dev/dsk/c0t0d0s0    /dev/rdsk/c0t0d0s0  /                  ufs      1     no       -
/dev/dsk/c0t0d0s6    /dev/rdsk/c0t0d0s6  /usr               ufs      1     no       -
/dev/dsk/c0t0d0s7    /dev/rdsk/c0t0d0s7  /export/home       ufs      2     yes      -
/dev/dsk/c0t0d0s5    /dev/rdsk/c0t0d0s5  /opt               ufs      2     yes      -
/devices             -                   /devices           devfs    -     no       -
sharefs              -                   /etc/dfs/sharetab  sharefs  -     no       -
ctfs                 -                   /system/contract   ctfs     -     no       -
objfs                -                   /system/object     objfs    -     no       -
swap                 -                   /tmp               tmpfs    -     yes      -
In this example, the mount-at-boot field value for the root (/) and /usr file systems is no. These file systems are mounted by the kernel as part of the boot sequence before the mountall command is run.
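Because the mount-at-boot field is the sixth whitespace-delimited column of each vfstab entry, you can list the file systems that mountall would mount with a short awk command. The sketch below runs against a sample file in vfstab format, not the live /etc/vfstab:

```shell
# vfstab columns: device-to-mount  device-to-fsck  mount-point
#                 FS-type  fsck-pass  mount-at-boot  options
cat > /tmp/sample_vfstab <<'EOF'
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes -
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /opt ufs 2 yes -
EOF

# Print the mount point of each entry whose mount-at-boot field is "yes".
awk '$6 == "yes" { print $3 }' /tmp/sample_vfstab
```

For this sample, only /export/home and /opt are printed; the root (/) entry is skipped because its mount-at-boot field is no.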
The following vfstab example is from a system that runs a ZFS root file system.
# cat /etc/vfstab
#device                    device   mount              FS       fsck  mount    mount
#to mount                  to fsck  point              type     pass  at boot  options
#
fd                         -        /dev/fd            fd       -     no       -
/proc                      -        /proc              proc     -     no       -
/dev/zvol/dsk/rpool/swap   -        -                  swap     -     no       -
/devices                   -        /devices           devfs    -     no       -
sharefs                    -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                       -        /system/contract   ctfs     -     no       -
objfs                      -        /system/object     objfs    -     no       -
swap                       -        /tmp               tmpfs    -     yes      -
ZFS file systems are mounted automatically by the SMF service at boot time. You can mount ZFS file systems from the vfstab by using the legacy mount feature. For more information, see Solaris ZFS Administration Guide.
For descriptions of each /etc/vfstab field and information on how to edit and use the file, see Chapter 19, Mounting and Unmounting File Systems (Tasks).
NFS is a distributed file system service that can be used to share resources (files or directories) from one system, typically a server, with other systems on the network. For example, you might want to share third-party applications or source files with users on other systems.
NFS makes the actual physical location of the resource irrelevant to the user. Instead of placing copies of commonly used files on every system, NFS allows you to place one copy on one system's disk and let all other systems access it from the network. Under NFS, remote files are virtually indistinguishable from local files.
For more information, see Chapter 4, Managing Network File Systems (Overview), in System Administration Guide: Network Services.
A system becomes an NFS server if it has resources to share on the network. A server keeps a list of currently shared resources and their access restrictions (such as read/write or read-only access).
When you share a resource, you make it available for mounting by remote systems.
You can share a resource in these ways:
By adding an entry to the /etc/dfs/dfstab (distributed file system table) file and rebooting the system
For information on how to share resources, see Chapter 19, Mounting and Unmounting File Systems (Tasks). For a complete description of NFS, see Chapter 4, Managing Network File Systems (Overview), in System Administration Guide: Network Services.
Sun's implementation of the NFS version 4 distributed file access protocol is included in the Solaris release.
NFS version 4 integrates file access, file locking, and mount protocols into a single, unified protocol to ease traversal through a firewall and improve security. The Solaris implementation of NFS version 4 is fully integrated with Kerberos V5, also known as SEAM, thus providing authentication, integrity, and privacy. NFS version 4 also enables the negotiation of security flavors to be used between the client and the server. With NFS version 4, a server can offer different security flavors for different file systems.
For more information about NFS Version 4 features, see What’s New With the NFS Service in System Administration Guide: Network Services.
You can mount NFS file system resources by using a client-side service called automounting (or AutoFS). AutoFS enables a system to automatically mount and unmount NFS resources whenever you access them. The resource remains mounted as long as you remain in the directory and are using a file within that directory. If the resource is not accessed for a certain period of time, it is automatically unmounted.
AutoFS provides the following features:
NFS resources don't need to be mounted when the system boots, which saves booting time.
Users don't need to know the root password to mount and unmount NFS resources.
Network traffic might be reduced because NFS resources are mounted only when they are in use.
The AutoFS service is initialized by the automount utility, which runs automatically when a system is booted. The automountd daemon runs continuously and is responsible for the mounting and unmounting of NFS file systems on an as-needed basis. By default, the /home file system is mounted by the automountd daemon.
With AutoFS, you can specify multiple servers to provide the same file system. This way, if one of these servers is down, AutoFS can try to mount the file system from another machine.
For complete information on how to set up and administer AutoFS, see System Administration Guide: Network Services.
You can determine a file system's type by using one of the following:
This procedure works whether or not the file system is mounted.
Determine a file system's type by using the grep command.
$ grep mount-point fs-table
mount-point    Specifies the mount point name of the file system for which you want to know the file system type. For example, the /var directory.
fs-table       Specifies the absolute path to the file system table in which to search for the file system's type. If the file system is mounted, fs-table should be /etc/mnttab. If the file system isn't mounted, fs-table should be /etc/vfstab.
Information for the mount point is displayed.
If you have the raw device name of a disk slice, you can use the fstyp command to determine a file system's type (if the disk slice contains a file system). For more information, see fstyp(1M).
The following example uses the /etc/vfstab file to determine the file system type for the /export file system.
$ grep /export /etc/vfstab
/dev/dsk/c0t3d0s6 /dev/rdsk/c0t3d0s6 /export ufs 2 yes -
$
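To reduce the grep output to the file system type alone, you can pipe the matching line through awk, since the FS type is the fourth field of a vfstab entry. The sketch below uses a sample entry in the same format rather than the live file:

```shell
# A vfstab-format entry (sample data, mirroring the example above).
entry='/dev/dsk/c0t3d0s6 /dev/rdsk/c0t3d0s6 /export ufs 2 yes -'

# The FS-type field is the fourth whitespace-delimited column.
echo "$entry" | awk '{ print $4 }'
```

Against the real file this becomes, for example, `grep /export /etc/vfstab | awk '{ print $4 }'`.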
The following example uses the /etc/mnttab file to determine the file system type of the currently mounted diskette.
$ grep floppy /etc/mnttab
/dev/diskette0 /media/floppy ufs rw,nosuid,intr,largefiles,logging,xattr,onerror=panic,dev=900002 1165251037
The following example uses the fstyp command to determine the file system type.
# fstyp /dev/rdsk/c0t0d0s0
zfs