Performance Considerations

For optimal file system performance, the metadata and file data should be accessible through multiple interconnects and multiple disk controllers. In addition, plan to write file data to separate, redundant, highly available disk devices.

Plan to write your file system's metadata to RAID-1 disks. You can write file data to either RAID-1 or RAID-5 disks.

If you are configuring a Sun QFS highly available local file system and you are using a volume manager, you get the best performance when the file system stripes data across all controllers and disks rather than when the volume manager performs the striping. Use a volume manager only to provide redundancy.
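As a rough illustration of this layout, the following mcf file sketch separates metadata from file data. The family set name qfs1, the equipment ordinals, and the global device names are placeholders chosen for this illustration, not devices taken from the example later in this chapter; your devices will differ.

# Equipment              Eq  Eq   Family  Device Additional
# Identifier             Ord Type Set     State  Parameters
#
# Metadata on a RAID-1 global device; file data on two RAID-5
# global devices that are attached to different controllers,
# so that the file system itself stripes data across both controllers.
qfs1                     10  ma   qfs1    on
/dev/global/rdsk/d10s0   11  mm   qfs1    on
/dev/global/rdsk/d11s0   12  mr   qfs1    on
/dev/global/rdsk/d12s0   13  mr   qfs1    on

With more than one mr device in the family set, the file system stripes file data across the devices; the stripe width can be adjusted with the stripe= mount option.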

Example - Verifying Devices and Device Redundancy

This example shows how to use output from the cldevice command to find the devices in the Oracle Solaris Cluster environment, determine which devices are highly available, and then determine which devices are redundant.

Determining High Availability

The following example shows the use of the Oracle Solaris Cluster cldevice show command to list the paths of the devices in the DID configuration file for all nodes. In the output from the cldevice show command, look for a device that is visible from two or more nodes and that has the same World Wide Name. These are global devices.

The example uses Oracle's StorageTek T3 arrays in a RAID-5 configuration. The output shows that you can use devices 3 and 4 for configuring the disk cache for a file system.

Example 2-1 cldevice Command Example

ash# cldevice show | grep Device

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                               ash:/dev/rdsk/c0t0d0
DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                               ash:/dev/rdsk/c0t6d0
DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                               ash:/dev/rdsk/c6t50020F2300004921d1
  Full Device Path:                               elm:/dev/rdsk/c6t50020F2300004921d1
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                               ash:/dev/rdsk/c6t50020F2300004921d0
  Full Device Path:                               elm:/dev/rdsk/c6t50020F2300004921d0
...

# The preceding output indicates that both ash and elm can access
# DID devices d3 and d4. These disks are highly available.


ash# format /dev/did/rdsk/d4s2
selecting /dev/did/rdsk/d4s2
[disk formatted]


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name

        <cmd>     - execute <cmd>, then return
        quit
format> verify

Primary label contents:

Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 192 sec 64>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =  192
nsect       =   64
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264      101.16GB    (17265/0/0) 212152320
  1        usr    wm   17265 - 34529      101.16GB    (17265/0/0) 212152320
  2     backup    wu       0 - 34529      202.32GB    (34530/0/0) 424304640
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0

Analyzing the Output From the Commands

The cldevice show command in this example lists device /dev/rdsk/c6t50020F2300004921d0, which is DID device /dev/did/rdsk/d4 or global device /dev/global/rdsk/d4. This device has two partitions (0 and 1), each of which yields 212152320 blocks for use by a Sun QFS highly available local file system as /dev/global/rdsk/d4s0 and /dev/global/rdsk/d4s1.

Issue the cldevice show and format commands for every device that you plan to configure for use by the Sun QFS highly available local file system.
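If many devices must be checked, one way to script the label check is with the Solaris prtvtoc command, which reads the slice layout from the disk label without entering the interactive format menu. This is only a sketch; the device list is taken from the preceding example.

ash# for dev in d3 d4
> do
>         echo "=== /dev/did/rdsk/${dev}s2 ==="
>         prtvtoc /dev/did/rdsk/${dev}s2
> done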

For more information about configuring devices that are on redundant storage, see the Oracle Solaris Cluster software installation documentation.