File systems such as the following are often the most heavily used on an NFS server:
/usr directory for diskless clients
Local tools and libraries
Read-only source code archives
The best way to improve performance for these file systems is to replicate them. A single NFS server is limited by disk bandwidth when handling requests for only one file system. Replicating the data increases the size of the aggregate "pipe" from NFS clients to the data. However, replication is not a viable strategy for writable data, such as a file system of home directories. Reserve replication for read-only data.
To replicate file systems, do the following:
Identify the file or file systems to be replicated.
If several individual files are candidates, consider merging them into a single file system. The potential performance decrease from combining heavily used files on one disk is more than offset by the performance gained through replication.
Use nfswatch to identify the most commonly used files and file systems in a group of NFS servers. Table A-1 in Appendix A, Using NFS Performance-Monitoring and Benchmarking Tools, lists performance-monitoring tools, including nfswatch, and explains how to obtain nfswatch.
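As a rough sketch, nfswatch can be pointed at a candidate server to see which exported file systems receive the most traffic. The host name below is an example, and option spellings vary between nfswatch versions, so check the nfswatch manual page on your system:

```shell
# Watch NFS traffic destined for one server; the interactive display
# breaks traffic down by exported file system, which helps identify
# replication candidates. "toolserver" is a placeholder host name.
nfswatch -dst toolserver
```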
Determine how clients will choose a replica.
Specify a server name in the /etc/vfstab file to create a permanent binding from an NFS client to the server. Alternatively, listing all server names in an automounter map entry allows completely dynamic binding, but may also lead to client imbalance across the NFS servers. Enforcing "workgroup" partitions, in which groups of clients have their own replicated NFS server, strikes a middle ground between the two extremes and often provides the most predictable performance.
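The two binding styles can be sketched as follows. The host names and paths are examples only; the vfstab field layout and the replicated-server automounter syntax are standard Solaris conventions:

```shell
# Permanent binding: an /etc/vfstab entry that always mounts the
# replica exported by one specific server, read-only:
#device to mount          device to fsck  mount point       FS   fsck  boot  options
toolserver1:/export/tools -               /usr/local/tools  nfs  -     yes   ro

# Dynamic binding: an automounter map entry listing every replica;
# the client selects one of the servers at mount time:
tools   -ro   toolserver1,toolserver2,toolserver3:/export/tools
```

Note that with the replicated map entry, a client that binds to a slow or distant server stays bound to it for the life of the mount, which is one source of the imbalance mentioned above.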
Choose an update schedule and method for distributing the new data.
The frequency of change of the read-only data determines the schedule and the method for distributing the new data. File systems that undergo a complete change in contents, for example, a flat file with historical data that is updated monthly, are best handled by copying data from the distribution media onto each machine, or by using a combination of ufsdump and restore. File systems with few changes can be handled using management tools such as rdist.
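Both distribution methods can be sketched briefly. The server names, paths, and dump device are placeholders, and the rdist stanza is the contents of a separate Distfile rather than shell input:

```shell
# Full refresh: dump the master copy and rebuild it on a replica.
# Assumes rsh access from the master to the replica host.
ufsdump 0f - /export/tools | rsh replica1 "cd /export/tools && restore rf -"

# Incremental updates: a minimal rdist Distfile that pushes only
# files that have changed since the last run:
#   HOSTS = ( replica1 replica2 )
#   FILES = ( /export/tools )
#   ${FILES} -> ${HOSTS}
#           install ;
```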
Evaluate what penalties, if any, are involved if users access old data on a replica that is not current. One possible way to automate the updates is with the Solaris 2.x JumpStartTM facilities in combination with cron.