Oracle Solaris Cluster Data Service for Network File System (NFS) Guide

Document Information

Preface

1.  Installing and Configuring HA for NFS

Overview of the Installation and Configuration Process for HA for NFS

Planning the HA for NFS Installation and Configuration

Service Management Facility Restrictions

NFSv3 Restrictions

Loopback File System Restrictions

Zettabyte File System (ZFS) Restrictions

Installing the HA for NFS Packages

How to Install the HA for NFS Packages

Registering and Configuring HA for NFS

Setting HA for NFS Extension Properties

Tools for Registering and Configuring HA for NFS

How to Register and Configure the Oracle Solaris Cluster HA for NFS by Using clsetup

How to Register and Configure HA for NFS by Using the Oracle Solaris Cluster Command Line Interface (CLI)

How to Change Share Options on an NFS File System

How to Dynamically Update Shared Paths on an NFS File System

How to Tune HA for NFS Method Timeouts

Configuring SUNW.HAStoragePlus Resource Type

How to Set Up the HAStoragePlus Resource Type for an NFS-Exported UNIX File System Using the Command Line Interface

How to Set Up the HAStoragePlus Resource Type for an NFS-Exported Zettabyte File System

Securing HA for NFS With Kerberos V5

How to Prepare the Nodes

How to Create Kerberos Principals

Enabling Secure NFS

Tuning the HA for NFS Fault Monitor

Fault Monitor Startup

Fault Monitor Stop

Operations of HA for NFS Fault Monitor During a Probe

NFS System Fault Monitoring Process

NFS Resource Fault Monitoring Process

Monitoring of File Sharing

Upgrading the SUNW.nfs Resource Type

Information for Registering the New Resource Type Version

Information for Migrating Existing Instances of the Resource Type

A.  HA for NFS Extension Properties

Index

Chapter 1

Installing and Configuring HA for NFS

This chapter describes the steps to install and configure Oracle Solaris Cluster HA for Network File System (NFS) on your Oracle Solaris Cluster nodes.


Note - If you are using the Solaris 10 OS, install and configure this data service to run only in the global zone. At publication of this document, this data service is not supported in non-global zones. For updated information about supported configurations of this data service, contact your Oracle service representative.


This chapter contains the following sections.

Overview of the Installation and Configuration Process for HA for NFS

Planning the HA for NFS Installation and Configuration

Installing the HA for NFS Packages

Registering and Configuring HA for NFS

Securing HA for NFS With Kerberos V5

Tuning the HA for NFS Fault Monitor

Upgrading the SUNW.nfs Resource Type

You must configure Oracle Solaris Cluster HA for NFS as a failover data service. See Chapter 1, Planning for Oracle Solaris Cluster Data Services, in Oracle Solaris Cluster Data Services Planning and Administration Guide and the Oracle Solaris Cluster Concepts Guide for general information about data services, resource groups, resources, and other related topics.


Note - You can use the Oracle Solaris Cluster Manager to install and configure this data service. See the Oracle Solaris Cluster Manager online help for details.


Use the worksheets in Configuration Worksheets in Oracle Solaris Cluster Data Services Planning and Administration Guide to plan your resources and resource groups before you install and configure Oracle Solaris Cluster HA for NFS.

The NFS mount points that are placed under the control of the data service must be the same on all of the nodes that can master the disk device group that contains those file systems.
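For example, if the device group contains a file system that is mounted at /global/nfs, an identical vfstab entry would appear on every node that can master the group. The following fragment is a sketch only; the device paths and mount point are illustrative:

```shell
# /etc/vfstab fragment -- identical on every node that can master
# the disk device group (device paths and mount point are illustrative):
#
# device to mount          device to fsck            mount point  FS type  fsck  mount at boot  options
/dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs      2     no             global,logging
```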

Oracle Solaris Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
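On an NFS client, a hard mount is the default behavior; ensure that the soft option is not specified. As a sketch, with a hypothetical logical hostname and paths:

```shell
# Mount the exported file system as a hard mount (hard is the default;
# do NOT specify -o soft). Hostname and paths are illustrative.
mount -F nfs -o hard,intr schost-lh:/global/nfs /mnt/nfsdata
```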

No Oracle Solaris Cluster node may be an NFS client of a file system that is exported by Oracle Solaris Cluster HA for NFS and is being mastered on a node in the same cluster. Such cross-mounting of Oracle Solaris Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.

If Solaris Resource Manager is used to manage system resources that are allocated to NFS on a cluster, all Oracle Solaris Cluster HA for NFS resources that can fail over to a common cluster node must have the same Solaris Resource Manager project ID. Set this project ID with the Resource_project_name resource property.
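As a sketch, the property can be set with the clresource command after the resources are created. The resource and project names here are hypothetical:

```shell
# Give every HA for NFS resource that can fail over to the same node
# the same Solaris Resource Manager project ID (names are illustrative).
clresource set -p Resource_project_name=nfs-project nfs-rs-1
clresource set -p Resource_project_name=nfs-project nfs-rs-2
```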



Caution - If you use Veritas Volume Manager (available for use in SPARC based clusters only), you can avoid “stale file handle” errors on the client during NFS failover by ensuring that the vxio driver has identical pseudo-device major numbers on all of the cluster nodes. You can find this number in the /etc/name_to_major file after you complete the installation.
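As a sketch, the following shell function extracts a driver's major number from /etc/name_to_major-formatted input, so that you can compare the value reported by each node. The sample numbers are illustrative, not values from a real installation:

```shell
# Print the major number assigned to a driver in /etc/name_to_major-style
# input. Run it against each node's /etc/name_to_major and compare results.
major_of() {
  awk -v drv="$1" '$1 == drv { print $2 }'
}

# Example with illustrative sample data; on a real node you would run:
#   major_of vxio < /etc/name_to_major
printf 'vxio 327\nvxspec 328\n' | major_of vxio
```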