Oracle® Solaris Cluster With Network-Attached Storage Device Manual

Updated: February 2017

How to Add Oracle ZFS Storage Appliance Directories and Projects to a Cluster

Before You Begin

An NFS file system or directory on the Oracle ZFS Storage Appliance is already created in a project, which in turn resides in one of the storage pools of the device. For a directory (for example, an NFS file system) to be used by the cluster, you must perform the configuration at the project level, as described below.

To perform this procedure, assume the root role or a role that provides solaris.cluster.read and solaris.cluster.modify authorization.

  1. Use the Oracle ZFS Storage Appliance GUI to identify the project associated with the NFS file systems for use by the cluster.

    After you have identified the appropriate project, click Edit for that project.

  2. If read/write access to the project has not been configured, set up read/write access to the project for the cluster nodes.
    1. Access the NFS properties for the project.

      In the Oracle ZFS Storage Appliance GUI, select the Protocols tab in the Edit Project page.

    2. Set the Share Mode for the project to None, Read/Write or Read only, depending on the desired access rights for nonclustered systems.

      In the Protocols tab of the project, you can set the Share Mode to None, Read/Write, or Read only. Although all three share modes are supported, as a best practice use None rather than the Read/Write mode.

      The Share Mode can be set to Read/Write if the project must be world-writable, but this setting is not recommended.

    3. Add a read/write NFS Exception for each cluster node by performing the following steps.

      Note -  Adding exceptions for the cluster nodes enables the cluster software to fence and unfence the nodes when they leave or join the cluster.
      • Under NFS Exceptions, click +.
      • Use the pull-down menu to select a Type.

        Note -  If you are using AK 2013.1.6 or a later version, you must select IPv4 Subnet as the Type. If you are using a version earlier than AK 2013.1.6, select Network as the Type.
      • As the Entity, enter the public IP address that the cluster node will use to access the appliance.

        Use a CIDR mask of /32. For example, 192.168.254.254/32.

      • Select Read/Write as the Access Mode.
      • If desired, select Root Access.

        Root Access is required when configuring applications, such as Oracle RAC or HA for Oracle Database.

      • Add exceptions for all cluster nodes.
      • Click Apply after the exceptions have been added for all IP addresses.
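      After you apply the exceptions, you can optionally verify from each cluster node that the appliance exports are visible to that node. The following check is a minimal sketch that assumes the appliance host name device1.us.example.com used in the examples later in this procedure; the showmount command lists the file systems that the appliance exports:

      # showmount -e device1.us.example.com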
  3. Ensure that the directory being added is set to inherit its NFS properties from its parent project.
    1. Navigate to the Shares tab in the Oracle ZFS Storage Appliance GUI.
    2. Click Edit Entry to the right of the Share that will have fencing enabled.
    3. Navigate to the Protocols tab for that share, and ensure that the Inherit from project property is set in the NFS section.

    If you are adding multiple directories within the same project, verify that each directory that needs to be protected by cluster fencing has the Inherit from project property set.

  4. If the project has not already been configured with the cluster, add the project to the cluster configuration.
    1. Use the clnasdevice show -v command to determine whether the project has already been configured with the cluster.
      # clnasdevice show -v
      
      === NAS Devices ===
      Nas Device:                  device1.us.example.com
      Type:                       sun_uss
      userid:                     osc_agent
      nodeIPs{node1}                  10.111.11.111
      nodeIPs{node2}                  10.111.11.112
      nodeIPs{node3}                  10.111.11.113
      nodeIPs{node4}                  10.111.11.114
      Project:                    pool-0/local/projecta
      Project:                    pool-0/local/projectb
    2. If you need to add a project to the cluster configuration, perform this command from any cluster node:
      # clnasdevice add-dir -d project1,project2 myfiler
      -d project1,project2

      Specifies the project or projects that you are adding. Specify the full path name of the project, including the pool. For example, pool-0/local/projecta.

      myfiler

      Specifies the name of the NAS device containing the projects.

      To identify the projects on the device that are available but not yet configured in the cluster, use the clnasdevice find-dir -v command:

      # clnasdevice find-dir -v
      === NAS Devices ===
      
      Nas Device:                      device1.us.example.com
      Type:                          sun_uss
      Unconfigured Project:            pool-0/local/projecta
      File System:                       /export/projecta/filesystem-1
      File System:                       /export/projecta/filesystem-2
      Unconfigured Project:            pool-0/local/projectb
      File System:                       /export/projectb/filesystem-1

      For example, to add these projects to the cluster configuration:

      # clnasdevice add-dir -d pool-0/local/projecta device1.us.example.com
      # clnasdevice add-dir -d pool-0/local/projectb device1.us.example.com

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


      Note -  If you want to add the project from an Oracle ZFS Storage Appliance to a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice add-dir -d project1,project2 -Z zcname myfiler
      -Z zcname

      Specifies the name of the zone cluster where the NAS projects are being added.


  5. Confirm that the directory and project have been configured.

    Perform this command from any cluster node:

    # clnasdevice show -v -d all

    For example:

    # clnasdevice show -v -d all
    
    === NAS Devices ===
    Nas Device:                  device1.us.example.com
    Type:                       sun_uss
    nodeIPs{node1}                  10.111.11.111
    nodeIPs{node2}                  10.111.11.112
    nodeIPs{node3}                  10.111.11.113
    nodeIPs{node4}                  10.111.11.114
    userid:                     osc_agent
    Project:                    pool-0/local/projecta
    File System:                   /export/projecta/filesystem-1
    File System:                   /export/projecta/filesystem-2
    Project:                    pool-0/local/projectb
    File System:                   /export/projectb/filesystem-1

    Note -  If you want to check the projects for a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
    # clnasdevice show -v -Z zcname

    You can also perform zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


    After you confirm that a project name is associated with the desired NFS file system, use that project name in the configuration command.

  6. If you do not use the automounter, mount the directories manually.
    1. On each node in the cluster, create a mount-point directory for each Oracle ZFS Storage Appliance NAS project that you added.
      # mkdir -p /path-to-mountpoint
      path-to-mountpoint

      Name of the directory on which to mount the project.
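      For example, assuming the hypothetical mount point /global/projecta/fs1 that is also used in the sample vfstab entry later in this procedure:

      # mkdir -p /global/projecta/fs1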

    2. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

      If you are using your Oracle ZFS Storage Appliance NAS device for Oracle RAC or HA for Oracle Database, consult your Oracle Database guide or log into My Oracle Support for a current list of supported files and mount options. After you log into My Oracle Support, click the Knowledge tab and search for Bulletin 359515.1.

      When mounting Oracle ZFS Storage Appliance NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Oracle Solaris Cluster places no additional restrictions or requirements on the options that you use.
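      For example, a minimal /etc/vfstab entry for one of the file systems shown earlier in this procedure might look like the following line. The mount point /global/projecta/fs1 and the rw,hard mount options are placeholders only; substitute the mount point that you created and the mount options that your applications require. If the file system will be managed by a SUNW.ScalMountPoint resource (see the next step), the mount-at-boot field is typically set to no.

      device1.us.example.com:/export/projecta/filesystem-1  -  /global/projecta/fs1  nfs  -  no  rw,hard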

  7. To enable file system monitoring, configure a resource of type SUNW.ScalMountPoint for the file systems.

    For more information, see Configuring Failover and Scalable Data Services on Shared File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
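    The following commands are a minimal sketch of such a configuration. The resource group name scal-mnt-rg, the resource name projecta-fs1-rs, and the mount point /global/projecta/fs1 are placeholders that match the earlier examples in this procedure; see the data services guide referenced above for the complete set of required properties and mount options.

    # clresourcetype register SUNW.ScalMountPoint
    # clresourcegroup create -S scal-mnt-rg
    # clresource create -g scal-mnt-rg -t SUNW.ScalMountPoint \
    -p FileSystemType=nas \
    -p TargetFileSystem=device1.us.example.com:/export/projecta/filesystem-1 \
    -p MountPointDir=/global/projecta/fs1 projecta-fs1-rs
    # clresourcegroup online -M scal-mnt-rg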