Administering an Oracle® Solaris Cluster 4.4 Configuration

Updated: November 2019

How to Configure a zpool for Globally Mounted ZFS File Systems Without HAStoragePlus

This procedure describes how to configure a ZFS storage pool (zpool) for globally mounted ZFS file systems without configuring an HAStoragePlus resource. This approach is useful when the global file systems must be mounted on every node at boot time and no data service has an explicit dependency on the file systems. For information about using HAStoragePlus to manage ZFS pools for global access by data services, see Configuring an HAStoragePlus Resource for Cluster File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

  1. List the DID mappings and identify the shared device to use.

    To configure a globally accessible ZFS pool, choose one or more multi-hosted devices from the output of the cldevice show command. Make a note of both the cNtXdY device name and the /dev/did/rdsk/dN DID device name for the chosen device(s). The chosen device(s) must have connectivity to all cluster nodes.

    phys-schost-1# cldevice show | grep Device

    In the following example, the entries for DID devices /dev/did/rdsk/d1 and /dev/did/rdsk/d2 show that those drives are connected only to phys-schost-1, while /dev/did/rdsk/d3 is accessible by both nodes of this two-node cluster, phys-schost-1 and phys-schost-2. In this example, DID device /dev/did/rdsk/d3 with device name c1t1d0 will be used for global access by both nodes.

    === DID Device Instances ===
    DID Device Name: /dev/did/rdsk/d1
    Full Device Path: phys-schost-1:/dev/rdsk/c0t0d0
    DID Device Name: /dev/did/rdsk/d2
    Full Device Path: phys-schost-1:/dev/rdsk/c0t6d0
    DID Device Name: /dev/did/rdsk/d3
    Full Device Path: phys-schost-1:/dev/rdsk/c1t1d0
    Full Device Path: phys-schost-2:/dev/rdsk/c1t1d0
    ...
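
    If you prefer a more compact listing, you can also use the cldevice list command with the -v option, which prints each DID device name together with its full device paths. This is an optional alternative to the grep pipeline shown above.

    phys-schost-1# cldevice list -v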
  2. Create a ZFS pool for the DID device(s) that you chose.

    This example uses the pool name gpool. Although ZFS can use individual slices or partitions, the zpool(8) man page recommends the use of a whole disk.

    phys-schost-1# zpool create gpool c1t1d0
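
    As an optional sanity check that is not part of the original procedure, you can confirm that the pool was created and is healthy before placing it under cluster control:

    phys-schost-1# zpool status gpool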
  3. Create a named device group.

    The device group must have the same name as the pool, in this case gpool. The poolaccess property is set to global to indicate that the file systems of this pool will be globally accessible across the nodes of the cluster.

    phys-schost-1# cldevicegroup create -p poolaccess=global \
    -n phys-schost-1,phys-schost-2 -t zpool gpool
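
    To verify the device group's configuration, including the poolaccess property, you can optionally display it with the cldevicegroup show command:

    phys-schost-1# cldevicegroup show gpool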
  4. (Optional) Create one or more ZFS datasets.
    phys-schost-1# zfs create gpool/myfilesystem

    All file system datasets in the pool will be mounted globally and accessible cluster-wide.
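
    To see where the new dataset will be mounted, you can optionally list the pool's datasets and their mountpoints. This example assumes the default ZFS mountpoint layout, under which gpool/myfilesystem mounts at /gpool/myfilesystem.

    phys-schost-1# zfs list -r -o name,mountpoint gpool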

  5. Bring the device group online.
    phys-schost-1# cldevicegroup online gpool
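
    If you later need to move the device group's primary to another node, for example before planned maintenance, you can switch it manually. This is general cldevicegroup usage rather than a required step of this procedure:

    phys-schost-1# cldevicegroup switch -n phys-schost-2 gpool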
  6. Verify the new zpool.

    The following zpool list command must be executed on the device group primary node. To determine the primary node, execute the cldevicegroup status command.

    phys-schost-1# cldevicegroup status gpool
    
    === Cluster Device Groups ===
    
    --- Device Group Status ---
    
    Device Group Name   Primary        Secondary      Status
    -----------------   -------        ---------      ------
    gpool               phys-schost-1  phys-schost-2  Online
    
    phys-schost-1# zpool list
    NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    gpool    49.8G  2.16G  47.6G   4%  1.00x  ONLINE  /
    ...
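
    Because the pool's file systems are mounted globally, you can also confirm access from a node other than the primary. This optional check assumes the default mountpoint /gpool/myfilesystem for the dataset created earlier:

    phys-schost-2# df -h /gpool/myfilesystem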