A PFS I/O daemon must be running on each PFS I/O server, and a PFS proxy daemon must be running on each node that will access PFS file systems. This chapter describes the procedures for starting and stopping these daemons, as well as how to create and mount PFS file systems.
The PFS I/O daemons start automatically at boot time on nodes configured as PFS I/O servers. However, if you have just configured new PFS I/O servers, you can start their daemons manually without having to reboot. To do so, enter the following on each newly configured PFS I/O server node.
pfs-srv0# /etc/init.d/sunhpc.pfs_server start
pfs-srv1# /etc/init.d/sunhpc.pfs_server start
  :
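If remote execution is enabled between the nodes, you can also start the daemons from a single administrative node with a shell loop. The node names and the use of rsh below are illustrative assumptions; substitute the remote-execution method and host names appropriate to your cluster. The same approach works for the proxy daemons described below.

adm# for node in pfs-srv0 pfs-srv1; do
> rsh $node /etc/init.d/sunhpc.pfs_server start
> done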
Alternatively, you can launch both the PFS proxy and I/O daemons on all the nodes in the cluster. To do this, execute the following command once, on any node in the cluster.
# /opt/SUNWhpc/etc/pfs/pfsstart
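To confirm that the daemons are running on a given node, you can inspect its process list. The grep pattern shown here is an assumption; adjust it to match the actual daemon process names on your system.

node0# ps -ef | grep pfs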
The following command starts the proxy daemon. Execute it on each node that will need to access PFS file systems.
node0# /etc/init.d/sunhpc.pfs_client start
node1# /etc/init.d/sunhpc.pfs_client start
  :
The following command stops the I/O server daemons. Execute it on each I/O server node.
node0# /etc/init.d/sunhpc.pfs_server stop
node1# /etc/init.d/sunhpc.pfs_server stop
  :
The following command stops the proxy daemon. Execute it on each node that has a proxy daemon running.
node0# /etc/init.d/sunhpc.pfs_client stop
node1# /etc/init.d/sunhpc.pfs_client stop
  :
Alternatively, you can stop both the PFS proxy and I/O server daemons on every node in the cluster by executing the following command on any one node.
# /opt/SUNWhpc/etc/pfs/pfsstop
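Before stopping either daemon, it is prudent to unmount any PFS file systems that are still mounted. For example, assuming a file system is mounted at /pfs_demo0, as in the examples later in this chapter:

node0# umount /pfs_demo0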
As with UFS file systems, you can use the Solaris utilities mkfs and mount to create and mount PFS file systems. For example, the following command creates a 64-Kbyte PFS file system named pfs-demo0. Execute it on any server node.
adm# mkfs -F pfs pfs-demo0 64K
The -F option specifies the file system's type, which is pfs.
Next, mount the file system on each client node that has a PFS proxy daemon.
hpc-node0# mount -F pfs pfs-demo0 /pfs_demo0
hpc-node1# mount -F pfs pfs-demo0 /pfs_demo0
  :
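Note that mount requires the mount point directory to exist on each node. If /pfs_demo0 has not yet been created, make the directory first:

hpc-node0# mkdir /pfs_demo0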
Alternatively, you can execute the following command on a single node to mount the PFS file system on all nodes in the cluster.
# /opt/SUNWhpc/bin/pfsmount pfs-demo0 /pfs_demo0
You may also want to add an entry for each PFS file system in the file /etc/vfstab. This will make it unnecessary to include the -F option when making and mounting the file systems. Example 5-1 shows how a PFS file system entry might look in the file /etc/vfstab.
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
pfs-demo0       -               /pfs_demo0      pfs     -       no      -
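With this entry in place, the generic mount command can look up the file system type in /etc/vfstab, so you can mount the file system by naming only its mount point:

hpc-node0# mount /pfs_demo0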
Before anyone attempts to use a newly created PFS file system, it is a good idea to verify that it is correctly mounted. You can do this by invoking df -F pfs on any node that has a PFS proxy daemon. Example 5-2 illustrates this, with the PFS file system pfs-demo0 included in the df output.
hpc-node1# df -F pfs
/dev/pfs_pseudo    (pfs_pseudo):        0 blocks        0 files
Alternatively, if you execute the pfsmount command without any arguments, it will list every mounted PFS file system in the cluster. This is illustrated in Example 5-3.
hpc-node1# /opt/SUNWhpc/bin/pfsmount
Mounted PFS filesystems:
hpc-node4:   pfs-demo0 on pfs-demo0
hpc-node5:   pfs-demo0 on pfs-demo0
The PFS file system pfs-demo0 is now ready to use. You can use Solaris utilities to create and delete PFS files and directories in the directory /pfs_demo0, just as you would in any UFS file system. To achieve the best performance, however, applications should access PFS via MPI I/O calls.
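For example, you could exercise the new file system with ordinary commands such as the following; the directory name and the file being copied are arbitrary choices for illustration.

hpc-node1# mkdir /pfs_demo0/demo
hpc-node1# cp /etc/hosts /pfs_demo0/demo
hpc-node1# ls -l /pfs_demo0/demo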