You can secure Sun Cluster HA for NFS with Kerberos V5 by configuring the Kerberos client. This configuration includes adding a Kerberos principal for NFS over the logical hostnames on all cluster nodes.
To configure the Kerberos client, perform the following procedures.
Prepare the nodes. See How to Prepare the Nodes.
Create Kerberos principals. See How to Create Kerberos Principals.
Enable secure NFS. See Enabling Secure NFS.
Configure the Key Distribution Center (KDC) server that will be used by the cluster nodes.
Refer to Solaris Kerberos/SEAM (Sun Enterprise Authentication Mechanism) documentation for details.
Set up time synchronization.
The KDC server must be time-synchronized with the cluster nodes, as well as with any clients that use Sun Cluster HA for NFS services from the cluster. The Network Time Protocol (NTP) performs time corrections with finer granularity than other methods, and therefore provides more reliable time synchronization. To benefit from this greater reliability, use NTP for time synchronization.
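As a sketch, a minimal NTP client configuration on each cluster node and NFS client might point at a common time source. The server name and paths below are hypothetical; adapt them to your site:

```
# /etc/inet/ntp.conf (fragment) -- time server name is a placeholder
server ntp1.mydept.company.com prefer
driftfile /var/ntp/ntp.drift
```

The same time source should be used by the KDC server so that clock skew stays within the limit that Kerberos tolerates.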
Verify the DNS client configuration.
The DNS client configuration must be complete and working on all cluster nodes as well as on any NFS clients that will be using secure NFS services from the cluster. Use resolv.conf(4) to verify the DNS client configuration.
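For illustration, a working resolv.conf on the cluster nodes and NFS clients names the DNS domain and at least one reachable name server. The addresses below are placeholders:

```
domain mydept.company.com
nameserver 192.0.2.10
nameserver 192.0.2.11
```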
The DNS domain name must be made known to the Kerberos configuration through a mapping in the domain_realm section of the krb5.conf(4) file.
The following example shows a mapping of DNS domain name mydept.company.com to Kerberos realm ACME.COM.
[domain_realm]
.mydept.company.com = ACME.COM
Ensure that the Master KDC server is up when the Kerberos client software is configured on the cluster nodes.
Ensure that the same configuration file and the same service key table file are available to all cluster nodes.
The /etc/krb5/krb5.conf file must be configured the same on all the cluster nodes. In addition, the default Kerberos keytab file (service key table), /etc/krb5/krb5.keytab, must be configured the same on all the cluster nodes. Consistent configuration can be achieved by copying the files to all cluster nodes. Alternately, you can keep a single copy of each file on a global file system and install symbolic links to /etc/krb5/krb5.conf and /etc/krb5/krb5.keytab on all cluster nodes.
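The symbolic-link alternative described above can be sketched as follows, assuming the shared copies are kept under a hypothetical global file system path /global/secure. Run these commands on each cluster node:

```
# mv /etc/krb5/krb5.conf /etc/krb5/krb5.conf.orig
# ln -s /global/secure/krb5.conf /etc/krb5/krb5.conf
# mv /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.orig
# ln -s /global/secure/krb5.keytab /etc/krb5/krb5.keytab
```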
You can also use a failover file system to make files available to all cluster nodes. However, a failover file system is visible on only one node at a time. Therefore, if Sun Cluster HA for NFS is being used in different resource groups, potentially mastered on different nodes, the files are not visible to all cluster nodes. In addition, this configuration complicates Kerberos client administrative tasks.
Ensure that all Kerberos-related entries in the file /etc/nfssec.conf are uncommented.
On all cluster nodes, as well as on any NFS clients that are configured to use secure NFS services from the cluster, all Kerberos-related entries in the file /etc/nfssec.conf must be uncommented. See nfssec.conf(4).
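For reference, the Kerberos-related entries in /etc/nfssec.conf look similar to the following when uncommented (the exact numbers and fields may differ by Solaris release; verify against your system's file):

```
krb5    390003  kerberos_v5  default -            # RPCSEC_GSS
krb5i   390004  kerberos_v5  default integrity    # RPCSEC_GSS
krb5p   390005  kerberos_v5  default privacy      # RPCSEC_GSS
```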
The following steps create the required Kerberos principals and keytab entries in the KDC database. For each cluster node, the service principals for which keytab entries must be created depend on the version of Solaris that is running on that node.
In Solaris 8, both the “root” and the “host” entries must be created.
In Solaris 9, only the “host” entry must be created.
The principal for the “nfs” service over the logical hostname is created on one node only and then added manually to the default Kerberos keytab file on each cluster node. The Kerberos configuration file krb5.conf and the keytab file krb5.keytab must be stored as individual copies on each cluster node and must not be shared on a global file system.
On each cluster node, log in to the KDC server as the administrator and create the host principal for each cluster node.
Note that, with Solaris 8, you must create both host and root principals for each cluster node.
Principals must be created using the fully qualified domain names.
Add these entries to the default keytab file on each node. These steps can be greatly simplified by using the cluster console utilities (see cconsole(1M)).
The following example creates the root and host entries. Perform this step on all cluster nodes, substituting the physical hostname of each cluster node for the hostname in the example.
# kadmin -p username/admin
Enter Password:
kadmin: addprinc -randkey host/phys-red-1.mydept.company.com
Principal "host/phys-red-1.mydept.company.com@ACME.COM" created.
kadmin: addprinc -randkey root/phys-red-1.mydept.company.com
Principal "root/phys-red-1.mydept.company.com@ACME.COM" created.
kadmin: ktadd host/phys-red-1.mydept.company.com
Entry for principal host/phys-red-1.mydept.company.com with kvno 2, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: ktadd root/phys-red-1.mydept.company.com
Entry for principal root/phys-red-1.mydept.company.com with kvno 2, encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: quit
#
On one cluster node, create the principal for the "nfs" service over each logical hostname that provides Sun Cluster HA for NFS service.
Principals must be created using the fully qualified domain names. Perform this step on only one cluster node.
# kadmin -p username/admin
Enter Password:
kadmin: addprinc -randkey nfs/relo-red-1.mydept.company.com
Principal "nfs/relo-red-1.mydept.company.com@ACME.COM" created.
kadmin: ktadd -k /var/tmp/keytab.hanfs nfs/relo-red-1.mydept.company.com
Entry for principal nfs/relo-red-1.mydept.company.com with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/var/tmp/keytab.hanfs.
kadmin: quit
#
In the above example, relo-red-1 is the logical hostname used with Sun Cluster HA for NFS.
Securely copy the keytab file /var/tmp/keytab.hanfs that you created in Step 2 to the remaining cluster nodes.
Do not use insecure copying methods such as regular ftp or rcp. For additional security, you can use the cluster private interconnect to copy the file.
The following example copies the database.
# scp /var/tmp/keytab.hanfs clusternode2-priv:/var/tmp/keytab.hanfs
# scp /var/tmp/keytab.hanfs clusternode3-priv:/var/tmp/keytab.hanfs
On all cluster nodes, add the keytab entry for the "nfs" service over the logical hostname to the local keytab database.
The following example uses the ktutil(1M) command to add the entry. Remove the temporary keytab file /var/tmp/keytab.hanfs on all cluster nodes after it has been added to the default keytab database /etc/krb5/krb5.keytab.
# ktutil
ktutil: rkt /etc/krb5/krb5.keytab
ktutil: rkt /var/tmp/keytab.hanfs
ktutil: wkt /etc/krb5/krb5.keytab
ktutil: quit
# rm /var/tmp/keytab.hanfs
Verify the Kerberos client configuration.
List the default keytab entries on each cluster node and make sure that the key version number (KVNO) for the “nfs” service principal is the same on all cluster nodes.
# klist -k
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- ---------------------------------------------
   2 host/phys-red-1.mydept.company.com@ACME.COM
   2 root/phys-red-1.mydept.company.com@ACME.COM
   3 nfs/relo-red-1.mydept.company.com@ACME.COM
On all cluster nodes, the principal for the "nfs" service over the logical host must have the same KVNO. In the above example, the principal for the "nfs" service over the logical host is nfs/relo-red-1.mydept.company.com@ACME.COM, and the KVNO is 3.
(Solaris 9 only) The user credentials database gsscred must be up-to-date for all users who access secure NFS services from the cluster.
Build the user credential database by running the following command on all cluster nodes.
# gsscred -m kerberos_v5 -a
See the gsscred(1M) man page for details.
Note that the above approach builds the user credentials database only once. You must employ some other mechanism, for example a cron(1M) job, to keep the local copy of this database up-to-date as the user population changes.
This step is not necessary for Solaris release 10.
Use the -o sec=option option of the share(1M) command in the dfstab.resource-name entry to share your file systems securely. See the nfssec(5) man page for details of specific option settings. If the Sun Cluster HA for NFS resource is already configured and running, see How to Change Share Options on an NFS File System for information about updating the entries in the dfstab.resource-name file. Note that the sec=dh option is not supported in Sun Cluster configurations.
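As a sketch, an entry in the dfstab.resource-name file that shares a file system with Kerberos authentication might look like the following. The file system path is a placeholder, and sec=krb5 is only one of the Kerberos security modes (krb5, krb5i, krb5p) listed in nfssec(5):

```
share -F nfs -o sec=krb5,rw /global/nfs/export
```

Choosing krb5i adds integrity protection, and krb5p adds privacy (encryption) for NFS traffic, at some performance cost.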