Hardware Certification Test Suite 2.3 User's Guide

Storage Controller Certification

The HCTS Storage Controller certification enables you to test the following technologies:

  * RAID controllers
  * SCSI controllers
  * Fibre Channel controllers
  * USB hard drives

The testing procedure is the same for certifying RAID, SCSI, and Fibre Channel controllers. See How to Certify RAID, SCSI, and Fibre Channel Controllers for more information. The certification process for USB Hard Drives is slightly different. See How to Certify a USB Hard Drive for information.

For specific information about each step and values that can be entered, see the HCTS online help.

Before certifying storage controllers, you must run the make.slices utility to prepare the correct slice layout. See Running make.slices Before Running Storage Tests for information about this process.

Running make.slices Before Running Storage Tests

Before running the storage controller certification in HCTS, you must run the make.slices utility to prepare the correct slice layout.

Procedure: How to Run the make.slices Utility

Steps
  1. Become superuser.

    $ su

  2. Change directories to /opt/SUNWstaf/config-util.

    # cd /opt/SUNWstaf/config-util

  3. Run the make.slices utility.

    # ./make.slices

    The make.slices utility runs the config.s+h program. For information about config.s+h, see Configuring a System to Run HCTS Storage Tests by Running config.s+h.

  4. Continue running HCTS to certify your storage controller.
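As a sketch, the steps above can be combined into one guarded script. The directory and utility name are taken from this procedure; the guard checks and status messages are illustrative additions so the script reports cleanly on a system where HCTS is not installed.

```shell
# Sketch of the make.slices procedure above.
# Guard checks and messages are illustrative, not part of HCTS.
UTIL_DIR=/opt/SUNWstaf/config-util

if [ ! -x "$UTIL_DIR/make.slices" ]; then
  status="make.slices not found in $UTIL_DIR"
elif [ "$(id -u)" -ne 0 ]; then
  # Step 1: the utility must be run as superuser.
  status="must be run as superuser (see step 1: su)"
else
  # Steps 2 and 3: change to the config-util directory and run the utility.
  ( cd "$UTIL_DIR" && ./make.slices ) && status="make.slices completed"
fi
echo "$status"
```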

Configuring a System to Run HCTS Storage Tests by Running config.s+h

This utility configures a system to be used for stress testing by the HCTS storage tests. The usage for this utility is as follows:

Usage: config.s+h [ -d ] [ -v ]

where -d enables debug mode and -v enables verbose mode.
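A minimal invocation sketch follows. The install path is an assumption carried over from the make.slices procedure in this guide, and the existence check is an illustrative addition so the snippet does not fail on a system without HCTS.

```shell
# Sketch: run config.s+h in verbose mode if it is installed.
# Path assumed from the make.slices procedure; the check is illustrative.
UTIL=/opt/SUNWstaf/config-util/config.s+h

if [ -x "$UTIL" ]; then
  "$UTIL" -v
  result="config.s+h ran in verbose mode"
else
  result="config.s+h not installed at $UTIL"
fi
echo "$result"
```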

To use the config.s+h utility, the system must have at least one disk that does not contain system files. This disk must have an fdisk table, a Solaris partition, and existing device nodes in /dev. No limit exists on the number of disks that can be used for testing. Configuration is done according to information supplied to this program in the user-created ./config.s+h.info file (known as the info-file). This file has entries in the following format:

device [slices] [type] [mount]

where

device

The device name, in cntndn or cndn format.

slices

The number of slices to create on the disk. This field is optional. The default is MAXSLICE, which is currently 15.

type

The type of slices to create: f for a file system, r for a raw slice, or e for an exported file system, as shown in the sample entries that follow. When multiple letters are given, the types alternate across the slices (for example, fr alternates file system and raw). This field is optional. The default is raw.

mount

The mount-point prefix for file systems. This field is optional. The default is MOUNT, which is currently /slice.

The following sample commented entries illustrate the use of the info-file.

c0t1d0 8 f test 
# Put eight (8) slices on /dev/rdsk/c0t1d0.
# Build file systems on all eight slices.
# Mount the file systems on /test.c0t1d0s1
# through /test.c0t1d0s8. Create /etc/vfstab
# entries for all file systems.

c0d0 10 r
# Put ten (10) slices on /dev/rdsk/c0d0.
# Use all of them as raw devices.

c2t0d0 6 fr tst 
# Put six (6) slices on /dev/rdsk/c2t0d0.
# Build a file system on slice 1, use slice 2 as
# a raw device, build a file system on slice 3,
# use slice 4 as a raw device, etc. Mount the
# three file systems on /tst.c2t0d0s1,
# /tst.c2t0d0s3 and /tst.c2t0d0s5

c2t0d0 6 er tst 
# Same as above, but export the file systems
# and create /etc/dfs/dfstab entries for them.

c2t0d0
# Put MAXSLICE (15) slices on /dev/rdsk/c2t0d0.
# Use all of them as raw devices.

c2t0d0 e
# Put MAXSLICE (15) slices on /dev/rdsk/c2t0d0.
# Build file systems on all slices. Mount the
# file systems on /slice.c2t0d0s0 through
# /slice.c2t0d0s14. Create /etc/vfstab entries
# for all file systems. Export all file systems
# and create /etc/dfs/dfstab entries for them.
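The defaults described above can be illustrated with a small helper. The parse_entry function below is hypothetical (it is not part of config.s+h); it only expands an info-file entry to its effective values using the documented defaults (MAXSLICE of 15, raw type, and the /slice mount-point prefix).

```shell
# Hypothetical helper (not part of HCTS): expand an info-file entry
# to its effective values by applying the documented defaults.
parse_entry() {
  device=$1
  slices=${2:-15}       # default MAXSLICE, currently 15
  slice_type=${3:-r}    # default type is raw
  mount=${4:-/slice}    # default MOUNT, currently /slice
  echo "$device: $slices slices, type=$slice_type, mount-prefix=$mount"
}

# The sample entries above, expanded:
parse_entry c0t1d0 8 f test
parse_entry c0d0 10 r
parse_entry c2t0d0
```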