Configuring SUNW.HAStoragePlus Resource Type

HA for NFS is a disk-intensive data service. Therefore, you should configure the SUNW.HAStoragePlus resource type for use with this data service. For an overview of the SUNW.HAStoragePlus resource type, see Understanding HAStoragePlus in Oracle Solaris Cluster Data Services Planning and Administration Guide.

The procedure for configuring the SUNW.HAStoragePlus resource type depends on the type of file system that NFS is sharing. See the appropriate procedure in the following sections:

How to Set Up the HAStoragePlus Resource Type for an NFS-Exported UNIX File System Using the Command Line Interface

The HAStoragePlus resource type synchronizes the startups between resource groups and disk device groups. The HAStoragePlus resource type has an additional feature to make a local file system highly available. For background information about making a local file system highly available, see Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide. To use both of these features, set up the HAStoragePlus resource type.
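
Because HAStoragePlus coordinates resource-group startup with disk device groups, it can be useful to review which device groups exist on the cluster and which resource types are already registered before you begin. A minimal check with the standard cluster status commands (output omitted) might look like the following:

# cldevicegroup status
# clresourcetype list -v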


Note - These instructions explain how to use the HAStoragePlus resource type with the UNIX file system (UFS). For information about using the HAStoragePlus resource type with the Sun QFS file system, see your Sun QFS documentation.


The following example uses a simple NFS service that exports home directory data from the locally mounted directory /global/local-fs/nfs/export/home. The example uses the logical hostname log-nfs, the failover resource group nfs-rg, and the resources nfs-lh-rs, nfs-hastp-rs, and nfs-rs.
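
Because /global/local-fs/nfs is configured as a failover file system that HAStoragePlus mounts locally on whichever node hosts the resource group, each node that can master nfs-rg needs an /etc/vfstab entry for the file system with the mount-at-boot field set to no (see Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide). The following entry is only a sketch; the Solaris Volume Manager device names are hypothetical and depend on your storage configuration:

/dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/local-fs/nfs  ufs  2  no  logging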

  1. On a cluster node, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
  2. Determine whether the HAStoragePlus resource type and the SUNW.nfs resource type are registered.

    The following command prints a list of registered resource types.

    # clresourcetype show | egrep Type
  3. If necessary, register the HAStoragePlus resource type and the SUNW.nfs resource type.
    # clresourcetype register SUNW.HAStoragePlus
    # clresourcetype register SUNW.nfs
  4. Create the failover resource group nfs-rg.
    # clresourcegroup create -p PathPrefix=/global/local-fs/nfs nfs-rg
  5. Create a logical host resource of type SUNW.LogicalHostname.
    # clreslogicalhostname create -g nfs-rg -h log-nfs nfs-lh-rs

    Note - If you require a fully qualified hostname, you must specify it with the -h option; you cannot use the fully qualified form in the resource name.


  6. Create the resource nfs-hastp-rs of type HAStoragePlus.
    # clresource create -g nfs-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/local-fs/nfs \
    -p AffinityOn=True nfs-hastp-rs

    The resource is created in the enabled state.


    Note - You can use the FilesystemMountPoints extension property to specify a list of one or more mount points for file systems. This list can consist of mount points for both local file systems and global file systems. The mount at boot flag is ignored by HAStoragePlus for global file systems.


  7. Bring online the resource group nfs-rg on a cluster node.

    The node or zone where the resource group is brought online becomes the primary node for the /global/local-fs/nfs file system's underlying global device partition. The file system /global/local-fs/nfs is then mounted on this node or zone.

    # clresourcegroup online -M nfs-rg
  8. Create the resource nfs-rs of type SUNW.nfs and specify its resource dependency on the resource nfs-hastp-rs.

    The file dfstab.nfs-rs must be present in /global/local-fs/nfs/SUNW.nfs before you create the resource. A sketch of this file appears after this procedure.

    # clresource create -g nfs-rg -t SUNW.nfs \
    -p Resource_dependencies_offline_restart=nfs-hastp-rs nfs-rs

    The resource is created in the enabled state.


    Note - Before you can set the dependency in the nfs-rs resource, the nfs-hastp-rs resource must be online.


  9. Take offline the resource group nfs-rg.
    # clresourcegroup offline nfs-rg
  10. Bring online the nfs-rg group on a cluster node or zone.
    # clresourcegroup online -M nfs-rg

    Caution - Ensure that you switch only the resource group. Do not attempt to switch the device group. If you attempt to switch the device group, the states of the resource group and the device group become inconsistent, causing the resource group to fail over.


    Whenever the service is migrated to a new node, the primary I/O path for /global/local-fs/nfs will always be online and colocated with the NFS servers. The file system /global/local-fs/nfs is locally mounted before the NFS server is started.
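
The dfstab.nfs-rs file that Step 8 requires is a list of standard share commands, one share per line. One way to create it, after the file system is mounted in Step 7, is shown below; the share options are illustrative only:

# mkdir -p /global/local-fs/nfs/SUNW.nfs
# cat > /global/local-fs/nfs/SUNW.nfs/dfstab.nfs-rs <<EOF
share -F nfs -o rw -d "home dirs" /global/local-fs/nfs/export/home
EOF

After the resource group is back online in Step 10, you can confirm the state of the configuration with the standard status commands, for example:

# clresourcegroup status nfs-rg
# clresource status -g nfs-rg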

How to Set Up the HAStoragePlus Resource Type for an NFS-Exported Zettabyte File System

The following procedure uses a simple NFS service.

See Creating a ZFS Storage Pool in Solaris ZFS Administration Guide for information about how to create a ZFS pool. See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in that ZFS pool.
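
If you have not yet created the pool and the file system, the following sketch shows one possible layout. The pool must reside on shared storage that is accessible from every node that can master the resource group; the pool name matches Example 1-2 below, and the disk names are hypothetical:

# zpool create nfszpool mirror c1t1d0 c2t1d0
# zfs create nfszpool/nfs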

  1. On a cluster node, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
  2. Determine whether the HAStoragePlus resource type and the SUNW.nfs resource type are registered.

    The following command prints a list of registered resource types.

    # clresourcetype list
  3. If necessary, register the HAStoragePlus resource type and the SUNW.nfs resource type.
    # clresourcetype register SUNW.HAStoragePlus SUNW.nfs
  4. Create the failover resource group.
    # clresourcegroup create -p PathPrefix=path resource-group
  5. Create a logical host resource of type SUNW.LogicalHostname.
    # clreslogicalhostname create -g resource-group \
    -h logical-hostname logicalhost-resource

    Note - If you require a fully qualified hostname, you must specify it with the -h option; you cannot use the fully qualified form in the resource name.


  6. Create the ZFS file system resource of type HAStoragePlus.
    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools=zpool HASP-resource

    The resource is created in the enabled state.


    Note - You can specify a list of one or more ZFS pools for the Zpools extension property.


  7. Bring online the resource group on a cluster node in a managed state.

    The node on which the resource group is brought online becomes the primary node for the ZFS file system. The ZFS pool zpool is imported on this node. The ZFS file system is consequently mounted locally on this node.

    # clresourcegroup online -M resource-group
  8. Create the resource of type SUNW.nfs and specify its resource dependency on the resource of type SUNW.HAStoragePlus.

    The file dfstab.NFS-resource (named after the SUNW.nfs resource that you are creating) must be present in the SUNW.nfs subdirectory of the PathPrefix directory, for example zpool/nfs/SUNW.nfs. A sketch of this file appears after this procedure.

    # clresource create -g resource-group -t SUNW.nfs \
    -p Resource_dependencies=HASP-resource NFS-resource

    The resource is created in the enabled state.


    Note - Before you can set the dependency in the NFS-resource resource, the HASP-resource resource must be online.


  9. Bring online the resource-group group on a cluster node in a managed state.
    # clresourcegroup online -M resource-group
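
The dfstab file that Step 8 requires is a list of standard share commands, as in the UNIX file system procedure. A minimal sketch, assuming the resource name nfs-rs and the PathPrefix directory /nfszpool/nfs from Example 1-2 below; the exported path and share options are illustrative:

# mkdir -p /nfszpool/nfs/SUNW.nfs
# cat > /nfszpool/nfs/SUNW.nfs/dfstab.nfs-rs <<EOF
share -F nfs -o rw /nfszpool/nfs/export
EOF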

Example 1-2 Setting Up the HAStoragePlus Resource Type for an NFS-Exported ZFS File System

The following example uses a simple NFS service. The example uses the ZFS pool nfszpool, the failover resource group nfs-rg, the logical hostname log-nfs, and the resources nfs-lh-rs, nfs-hastp-rs, and nfs-rs.

phys-schost-1% su
Password: 
# clresourcetype list
SUNW.LogicalHostname:2
SUNW.SharedAddress:2
# clresourcetype register SUNW.HAStoragePlus SUNW.nfs
# clresourcegroup create -p PathPrefix=/nfszpool/nfs nfs-rg
# clreslogicalhostname create -g nfs-rg -h log-nfs nfs-lh-rs
# clresource create -g nfs-rg -t SUNW.HAStoragePlus \
                    -p Zpools=nfszpool nfs-hastp-rs
# clresourcegroup online -M nfs-rg
# clresource create -g nfs-rg -t SUNW.nfs \
                    -p Resource_dependencies=nfs-hastp-rs nfs-rs
# clresourcegroup online -M nfs-rg
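
After the final clresourcegroup online command in this example, checks along the following lines can confirm that the pool is imported and that the resources are online. The commands are standard; their output is omitted here. Note that showmount lists only the paths that dfstab.nfs-rs actually shares, and it must be run from a host that can resolve the log-nfs logical hostname:

# clresourcegroup status nfs-rg
# clresource status -g nfs-rg
# zpool list nfszpool
# showmount -e log-nfs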