Editing mcf Files for a Local Failover File System

The lines that define a particular file system must be identical in the mcf files on all host systems that support the file system. Only one mcf file can reside on a host, and that file can also define other file systems, so the mcf files on different hosts might not be identical.


Note - If you update a metadata server's mcf file at any time after the shared file system is mounted, you must also update the mcf files as necessary on all hosts that can access that shared file system.
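
How you distribute mcf changes is site-specific. As a minimal sketch, assuming the hosts can copy files to one another over ssh and that SAM-QFS is installed in the default /opt/SUNWsamfs location, you could push the updated file to another host (elm, one of the cluster nodes used in the examples below) and have that host reread its configuration:

    # scp /etc/opt/SUNWsamfs/mcf elm:/etc/opt/SUNWsamfs/mcf
    # ssh elm /opt/SUNWsamfs/sbin/samd config

The samd config command tells the SAM-QFS daemons to reread the configuration files; run it on each host whose mcf file you change.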


How to Prepare to Create a Local Sun QFS File System and Create an mcf File

  1. Log in to the Oracle Solaris Cluster node.
  2. Become superuser.
  3. Use the format utility to lay out the partitions on /dev/global/dsk/d4.
    # format /dev/global/rdsk/d4s2
    format> partition
    [ output deleted ]
    partition> print
    Current partition table (original):
    Total disk cylinders available: 34530 + 2 (reserved cylinders)

    Part      Tag     Flag      Cylinders         Size            Blocks
     0   unassigned    wm       1 -  3543        20.76GB    (3543/0/0)   43536384
     1   unassigned    wm    3544 - 34529       181.56GB    (30986/0/0) 380755968
     2   backup        wu       0 - 34529       202.32GB    (34530/0/0) 424304640
     3   unassigned    wu       0                 0         (0/0/0)             0
     4   unassigned    wu       0                 0         (0/0/0)             0
     5   unassigned    wu       0                 0         (0/0/0)             0
     6   unassigned    wu       0                 0         (0/0/0)             0
     7   unassigned    wu       0                 0         (0/0/0)             0

    NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

    Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.

  4. Replicate the global device d4 partitioning to global devices d5 through d7.

    This example shows the command for global device d5. A loop that covers d5 through d7 is sketched after this procedure.

    # prtvtoc /dev/global/rdsk/d4s2 | fmthard -s - /dev/global/rdsk/d5s2
  5. On all nodes that are potential hosts of the file system, perform the following:
    1. Configure the eight partitions (four global devices, with two partitions each) into a Sun QFS file system by adding a new file system entry to the mcf file.
      # cat >> /etc/opt/SUNWsamfs/mcf <<EOF
      #
      # StorageTek QFS file system configurations
      #
      # Equipment              Equipment  Equipment  Family   Device  Additional
      # Identifier             Ordinal    Type       Set      State   Parameters
      # ---------------------  ---------  ---------  -------  ------  ----------
      qfsnfs1                  100        ma         qfsnfs1  on
      /dev/global/dsk/d4s0     101        mm         qfsnfs1
      /dev/global/dsk/d5s0     102        mm         qfsnfs1
      /dev/global/dsk/d6s0     103        mm         qfsnfs1
      /dev/global/dsk/d7s0     104        mm         qfsnfs1
      /dev/global/dsk/d4s1     105        mr         qfsnfs1
      /dev/global/dsk/d5s1     106        mr         qfsnfs1
      /dev/global/dsk/d6s1     107        mr         qfsnfs1
      /dev/global/dsk/d7s1     108        mr         qfsnfs1
      EOF
    2. Validate that the configuration information you added to the mcf file is correct, and fix any errors in the mcf file before proceeding.

      It is important to complete this step before you configure the Sun QFS file system under the HAStoragePlus resource type.

      # /opt/SUNWsamfs/sbin/sam-fsd
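
The replication in step 4 can also be done in one pass. The following is a minimal sketch as a single shell loop, assuming the d5 through d7 global device names used in this example and that every device should receive the same label as d4:

    # for d in d5 d6 d7; do prtvtoc /dev/global/rdsk/d4s2 | fmthard -s - /dev/global/rdsk/${d}s2; done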

How to Configure a Failover File System as a SUNW.HAStoragePlus Resource

Perform this task if you are configuring a local file system to fail over on an Oracle Solaris Cluster platform.
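
The detailed steps are not reproduced in this excerpt. As a minimal sketch only, assuming the qfsnfs1 file system defined in the previous procedure, a mount point of /global/qfsnfs1, and the qfs-rg resource group and qfs-res resource names used in the verification procedure that follows, the configuration might proceed along these lines (verify the exact commands and options against your Oracle Solaris Cluster documentation):

    # /opt/SUNWsamfs/sbin/sammkfs qfsnfs1
    # mkdir -p /global/qfsnfs1

On every node that can host the file system, an /etc/vfstab entry would keep the file system from mounting at boot, for example:

    qfsnfs1   -   /global/qfsnfs1   samfs   -   no   sync_meta=1

Then register the SUNW.HAStoragePlus resource type, create the resource group and resource, and bring the group online:

    # scrgadm -a -t SUNW.HAStoragePlus
    # scrgadm -a -g qfs-rg
    # scrgadm -a -g qfs-rg -j qfs-res -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/global/qfsnfs1 \
      -x FilesystemCheckCommand=/bin/true
    # scswitch -Z -g qfs-rg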

How to Verify the Resource Group on All Nodes

This task ensures that the file system can move from node to node when the Oracle Solaris Cluster software performs a failover.

Perform these steps for each node in the cluster, with a final return to the original server.

  1. From any node in the Oracle Solaris Cluster environment, use the scswitch command to move the file system resource from one node to another.

    For example:

    server# scswitch -z -g qfs-rg -h elm
  2. Use the scstat command to verify that the file system resource was moved successfully.

    For example:

    server# scstat
    -- Resources --

              Resource Name   Node Name   State     Status Message
              -------------   ---------   -----     --------------
    Resource: qfs-res         ash         Offline   Offline
    Resource: qfs-res         elm         Online    Online
    Resource: qfs-res         oak         Offline   Offline
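
After you have verified every node, return the resource group to the node on which it originally ran. For example, if ash was the original server:

    server# scswitch -z -g qfs-rg -h ash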