Oracle Solaris Cluster Data Service for Network File System (NFS) Guide
1. Installing and Configuring HA for NFS
Overview of the Installation and Configuration Process for HA for NFS
Planning the HA for NFS Installation and Configuration
Service Management Facility Restrictions
Loopback File System Restrictions
Zettabyte File System (ZFS) Restrictions
Installing the HA for NFS Packages
How to Install the HA for NFS Packages
Registering and Configuring HA for NFS
Setting HA for NFS Extension Properties
Tools for Registering and Configuring HA for NFS
How to Register and Configure the Oracle Solaris Cluster HA for NFS by Using clsetup
How to Change Share Options on an NFS File System
How to Dynamically Update Shared Paths on an NFS File System
How to Tune HA for NFS Method Timeouts
Configuring SUNW.HAStoragePlus Resource Type
How to Set Up the HAStoragePlus Resource Type for an NFS-Exported Zettabyte File System
Tuning the HA for NFS Fault Monitor
Operations of HA for NFS Fault Monitor During a Probe
NFS System Fault Monitoring Process
NFS Resource Fault Monitoring Process
Upgrading the SUNW.nfs Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
You can secure HA for NFS with Kerberos V5 by configuring the Kerberos client. This configuration includes adding a Kerberos principal for NFS over the logical hostnames on all cluster nodes.
To configure the Kerberos client, perform the following procedures.
Prepare the nodes. See How to Prepare the Nodes.
Create Kerberos principals. See How to Create Kerberos Principals.
Enable the secured NFS. See Enabling Secure NFS.
Refer to the Solaris Kerberos/SEAM (Sun Enterprise Authentication Mechanism) documentation for details.
The KDC server must be time-synchronized with the cluster nodes and with any clients that will use the HA for NFS services from the cluster. Because the Network Time Protocol (NTP) applies time corrections with finer granularity than other methods, it provides the most reliable synchronization. Use NTP to synchronize time.
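One way to spot-check synchronization (a sketch; the millisecond threshold is an assumption, and Kerberos itself rejects requests outside its clock-skew window, 300 seconds by default) is to scan the peer offsets reported by ntpq -p:

```shell
# Flag any NTP peer whose absolute offset (column 9, in milliseconds)
# exceeds a threshold. Reads `ntpq -p` output on stdin.
check_ntp_offsets() {
  threshold_ms=${1:-1000}
  awk -v max="$threshold_ms" 'NR > 2 {
    off = $9 + 0
    if (off < 0) off = -off
    printf "%-20s offset=%sms %s\n", $1, $9, (off > max + 0) ? "DRIFTING" : "ok"
  }'
}
```

Run it on each node as ntpq -p | check_ntp_offsets 1000 and investigate any peer reported as DRIFTING.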
The DNS client configuration must be complete and working on all cluster nodes and on any NFS clients that will use secure NFS services from the cluster. Verify the DNS client configuration in the resolv.conf(4) file.
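A quick resolution check can be sketched as follows; the hostnames are the examples used in this guide, so substitute your own physical and logical hostnames:

```shell
# Verify that each hostname the cluster relies on is resolvable
# through the configured naming services.
resolve_ok() {
  getent hosts "$1" > /dev/null 2>&1
}

for h in phys-red-1.mydept.company.com relo-red-1.mydept.company.com; do
  if resolve_ok "$h"; then
    echo "$h: resolves"
  else
    echo "$h: NOT resolvable" >&2
  fi
done
```

Run the check on every cluster node and on each NFS client before enabling secure NFS.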
The DNS domain name must be made known to the Kerberos configuration by keeping a mapping in the domain_realm section of the krb5.conf(4) file.
The following example shows a mapping of DNS domain name mydept.company.com to Kerberos realm ACME.COM.
[domain_realm]
        .mydept.company.com = ACME.COM
The /etc/krb5/krb5.conf file must be configured the same on all the cluster nodes. In addition, the default Kerberos keytab file (service key table), /etc/krb5/krb5.keytab, must be configured the same on all the cluster nodes. Consistent configuration can be achieved by copying the files to all cluster nodes. Alternately, you can keep a single copy of each file on a global file system and install symbolic links to /etc/krb5/krb5.conf and /etc/krb5/krb5.keytab on all cluster nodes.
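The symbolic-link approach can be sketched as follows. This is a sketch, not part of the product: /global/kerberos is a hypothetical global file system path, and the second argument defaults to /etc/krb5.

```shell
# Link /etc/krb5/krb5.conf and /etc/krb5/krb5.keytab to master copies
# kept in a shared directory. Run on each cluster node.
setup_krb5_links() {
  src=$1              # directory holding the master copies
  dest=${2:-/etc/krb5}
  for f in krb5.conf krb5.keytab; do
    # Preserve any existing local copy, then link to the shared copy.
    [ -e "$dest/$f" ] && mv "$dest/$f" "$dest/$f.local"
    ln -s "$src/$f" "$dest/$f"
  done
}
```

For example, setup_krb5_links /global/kerberos on each node leaves every node reading the same pair of files.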
You can also use a failover file system to make files available to all cluster nodes. However, a failover file system is visible on only one node at a time. Therefore, if Oracle Solaris Cluster HA for NFS is being used in different resource groups, potentially mastered on different nodes, the files are not visible to all cluster nodes. In addition, this configuration complicates Kerberos client administrative tasks.
On all cluster nodes, as well as on any NFS clients that are configured to use secure NFS services from the cluster, all Kerberos-related entries in the file /etc/nfssec.conf must be uncommented. See nfssec.conf(4).
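A minimal check for commented-out Kerberos entries can be sketched as follows (commented entries in nfssec.conf begin with #):

```shell
# Report any krb5-related lines in nfssec.conf that are still
# commented out. Returns nonzero if any remain.
check_nfssec() {
  conf=${1:-/etc/nfssec.conf}
  bad=$(grep -c '^#.*krb5' "$conf")
  if [ "$bad" -eq 0 ]; then
    echo "all krb5 entries in $conf are uncommented"
  else
    echo "$bad commented krb5 entries remain in $conf" >&2
    return 1
  fi
}
```

Run check_nfssec on every cluster node and NFS client; uncomment any reported lines before proceeding.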
The following steps create the required Kerberos principals and keytab entries in the KDC database. The service principals that are created for each cluster node, and their corresponding keytab entries, depend on the version of Solaris that is running on that node.
The principal for the “nfs” service over the logical hostname is created on one node only and then added manually to the default Kerberos keytab file on each cluster node. The Kerberos configuration file krb5.conf and the keytab file krb5.keytab must be stored as individual copies on each cluster node and must not be shared on a global file system.
Principals must be created using the fully qualified domain names.
Add these entries to the default keytab file on each node. These steps can be greatly simplified by using the cluster console utilities (see cconsole(1M)).
The following example creates the root and host entries. Perform this step on all cluster nodes, substituting the physical hostname of each cluster node for the hostname in the example.
# kadmin -p username/admin
Enter Password:
kadmin: addprinc -randkey host/phys-red-1.mydept.company.com
Principal "host/phys-red-1.mydept.company.com@ACME.COM" created.
kadmin: addprinc -randkey root/phys-red-1.mydept.company.com
Principal "root/phys-red-1.mydept.company.com@ACME.COM" created.
kadmin: ktadd host/phys-red-1.mydept.company.com
Entry for principal host/phys-red-1.mydept.company.com with kvno 2, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: ktadd root/phys-red-1.mydept.company.com
Entry for principal root/phys-red-1.mydept.company.com with kvno 2, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: quit
#
Principals must be created using the fully qualified domain names. Perform this step on only one cluster node.
# kadmin -p username/admin
Enter Password:
kadmin: addprinc -randkey nfs/relo-red-1.mydept.company.com
Principal "nfs/relo-red-1.mydept.company.com@ACME.COM" created.
kadmin: ktadd -k /var/tmp/keytab.hanfs nfs/relo-red-1.mydept.company.com
Entry for principal nfs/relo-red-1.mydept.company.com with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/var/tmp/keytab.hanfs.
kadmin: quit
#
In the above example, relo-red-1 is the logical hostname used with HA for NFS.
Do not use insecure copying methods such as ftp or rcp. For additional security, you can copy the database over the cluster private interconnect.
The following example copies the database.
# scp /var/tmp/keytab.hanfs clusternode2-priv:/var/tmp/keytab.hanfs
# scp /var/tmp/keytab.hanfs clusternode3-priv:/var/tmp/keytab.hanfs
The following example uses the ktutil(1M) command to add the entry. Remove the temporary keytab file /var/tmp/keytab.hanfs on all cluster nodes after it has been added to the default keytab database /etc/krb5/krb5.keytab.
# ktutil
ktutil: rkt /etc/krb5/krb5.keytab
ktutil: rkt /var/tmp/keytab.hanfs
ktutil: wkt /etc/krb5/krb5.keytab
ktutil: quit
# rm /var/tmp/keytab.hanfs
List the default keytab entries on each cluster node and make sure that the key version number (KVNO) for the “nfs” service principal is the same on all cluster nodes.
# klist -k
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- ---------------------------------
   2 host/phys-red-1.mydept.company.com@ACME.COM
   2 root/phys-red-1.mydept.company.com@ACME.COM
   3 nfs/relo-red-1.mydept.company.com@ACME.COM
On all cluster nodes, the principal for the “nfs” service over the logical host must have the same KVNO. In the above example, that principal is nfs/relo-red-1.mydept.company.com@ACME.COM, and the KVNO is 3.
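The comparison across nodes can be sketched as follows: extract the KVNO for the “nfs” service principal from the klist -k output on each node and confirm the numbers match.

```shell
# Print the KVNO of any "nfs" service principal found in `klist -k`
# output read from stdin.
nfs_kvno() {
  awk '$2 ~ /^nfs\// { print $1 }'
}
```

Run klist -k | nfs_kvno on every cluster node; each node must print the same number.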
Build the user credential database by running the following command on all cluster nodes.
# gsscred -m kerberos_v5 -a
See the gsscred(1M) man page for details.
Note that the above approach builds the user credentials database only once. Another mechanism, such as cron(1M), must be used to keep the local copy of this database up to date as the user population changes.
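For example, a root crontab entry along the following lines could rebuild the table periodically; the nightly schedule is an assumption, not a product requirement:

```shell
# Hypothetical root crontab entry: rebuild the gsscred table at 02:00
# daily so local credentials track changes in the user population.
0 2 * * * /usr/sbin/gsscred -m kerberos_v5 -a
```

Choose a schedule that matches how often user accounts change at your site.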
This step is not necessary on Solaris 10.
Use the -o sec=option option of the share(1M) command in the dfstab.resource-name entry to share your file systems securely. See the nfssec(5) man page for details about specific option settings. If the HA for NFS resource is already configured and running, see How to Change Share Options on an NFS File System for information about updating the entries in the dfstab.resource-name file. Note that the sec=dh option is not supported in Solaris Cluster configurations.
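A dfstab.resource-name entry that shares a file system with Kerberos security might look as follows; this is a sketch, and the export path is an assumption, not a value from this guide:

```shell
# Hypothetical dfstab.resource-name entry: export a file system
# read-write with Kerberos V5 authentication (sec=krb5).
share -F nfs -o sec=krb5,rw /global/nfs/export
```

Stronger modes such as krb5i (integrity) and krb5p (privacy) can be substituted for krb5 where they are enabled in nfssec.conf; see nfssec(5).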