Solstice DiskSuite 4.2.1 User's Guide

Checking Status of DiskSuite Objects

This section describes how to check the status of DiskSuite objects, including state database replicas, metadevices, hot spares, and disksets. Check an object's status before performing any of the maintenance tasks described later in this chapter:

Using DiskSuite Tool to Check Status

DiskSuite Tool gives you three ways to check the status of a DiskSuite object:

Using the Command Line to Check Status

Two commands, metadb(1M) and metastat(1M), check the status of DiskSuite objects.


# metadb [-s setname] [-i]

In this command,

-s setname    Specifies the name of the diskset on which the metadb command will work.
-i            Displays a legend that describes the status flags.
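
For example, to check the replicas in a diskset, name the set with -s. The diskset name below (relo-red) is the one used in the diskset examples later in this chapter; omit -s to check the local set:

# metadb -s relo-red -i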


# metastat [-s setname] [-p] [-t] [object]

In this command,

-s setname    Specifies the name of the diskset on which metastat will work.
-p            Displays the status in a format similar to that of the md.tab file.
-t            Displays the time of the last state change.
object        The name of the stripe, concatenation, concatenated stripe, mirror, RAID5 metadevice, trans metadevice, or hot spare pool. If you omit a specific object, the status of all metadevices and hot spare pools is displayed.
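
For example, the -p option prints each metadevice as a one-line md.tab-style entry. The following is an illustrative sketch for a two-way mirror such as the d0 example shown later in this chapter (the slice for d2 is assumed):

# metastat -p
d0 -m d1 d2 1
d1 1 1 c0t2d0s7
d2 1 1 c0t3d0s7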

How to Check the Status of State Database Replicas (DiskSuite Tool)

  1. Make sure you have met the prerequisites ("Prerequisites for Maintaining DiskSuite Objects").

  2. Check the status of the MetaDB object by displaying the object's Information window.

    For other ways of checking status, see "Using DiskSuite Tool to Check Status".

  3. Refer to Table 3-1 for explanations of the Status fields of the MetaDB object, and possible actions to take.

    Table 3-1 MetaDB Object Status Keywords

    Keyword: OK
    Meaning: The MetaDB object (state database) has no errors and is functioning correctly.
    Action:  None.

    Keyword: Attention
    Meaning: The number of good state database replicas is less than three, or at least one replica is broken. This status is also displayed if the metadevice state database replicas have been created on fewer than three different controllers.
    Action:  Add more replicas, preferably spread across different controllers, or fix the broken replicas. If possible, add another controller and create state database replicas on drives attached to the new controller. See "How to Create Additional State Database Replicas (DiskSuite Tool)" to add more state database replicas. See "How to Enable a State Database Replica (DiskSuite Tool)" to fix broken replicas.

    Keyword: Urgent
    Meaning: The number of good state database replicas is less than two, or one or more state database replicas are broken.
    Action:  Add more replicas, preferably spread across different controllers, or fix the broken replicas. See "How to Create Additional State Database Replicas (DiskSuite Tool)" to add more state database replicas. See "How to Enable a State Database Replica (DiskSuite Tool)" to fix broken replicas.

    Keyword: Critical
    Meaning: There are no good state database replicas.
    Action:  Create at least three state database replicas from scratch before rebooting. Otherwise the system will not boot properly. See "How to Create Initial State Database Replicas From Scratch (DiskSuite Tool)".

How to Check the Status of State Database Replicas (Command Line)

After checking the prerequisites ("Prerequisites for Maintaining DiskSuite Objects"), use the metadb(1M) command with the -i option to view the status of state database replicas. Refer to the metadb(1M) man page for more information.

Example -- Checking Status of All State Database Replicas


# metadb -i
        flags           first blk       block count
     a        u         16              1034            /dev/dsk/c4t3d0s2
     a        u         16              1034            /dev/dsk/c3t3d0s2
     a        u         16              1034            /dev/dsk/c2t3d0s2
 o - state database replica active prior to last mddb configuration change
 u - state database replica is up to date
 l - locator for this state database replica was read successfully
 c - state database replica's location was in /etc/opt/SUNWmd/mddb.cf
 p - state database replica's location was patched in kernel
 m - state database replica is master, this is state database replica selected as input
 W - state database replica has device write errors
 a - state database replica is active, commits are occurring to this state database replica
 M - state database replica had problem with master blocks
 D - state database replica had problem with data blocks
 F - state database replica had format problems
 S - state database replica is too small to hold current data base
 R - state database replica had device read errors

The characters in front of the device name represent the status. All of the state database replicas in this example are active, as indicated by the a flag. A legend of all the flags follows the status.

Uppercase letters indicate a problem status. Lowercase letters indicate an "Okay" status.
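
If a replica shows an uppercase (problem) flag such as W, one common recovery, once the underlying disk is healthy again, is to delete the replica and re-create it on the same slice. The slice name below is taken from the example above; this is a sketch, not a substitute for the recovery procedures referenced earlier:

# metadb -d c4t3d0s2
# metadb -a c4t3d0s2
# metadb -i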

How to Check the Status of Metadevices and Hot Spare Pools (DiskSuite Tool)

Use this procedure to view and interpret metadevice and hot spare pool status information.

  1. Make sure you have met the prerequisites ("Prerequisites for Maintaining DiskSuite Objects").

  2. Check the status of a metadevice or hot spare pool by displaying the object's Information window.

    For other ways of checking status, see "Using DiskSuite Tool to Check Status".

  3. Refer to Table 3-2 for explanations of the status keywords used by metadevices and hot spare pools.

    Table 3-2 General Status Keywords

    Keyword: OK
    Meaning: The metadevice or hot spare pool has no errors and is functioning correctly.
    Used By: All metadevice types and hot spare pools

    Keyword: Attention
    Meaning: The metadevice or hot spare pool has a problem, but there is no immediate danger of losing data.
    Used By: All metadevice types and hot spare pools

    Keyword: Urgent
    Meaning: The metadevice is only one failure away from losing data.
    Used By: Mirrors/submirrors, RAID5 metadevices, and trans metadevices

    Keyword: Critical
    Meaning: Data potentially has been corrupted. For example, all submirrors in a mirror have errors, or a RAID5 metadevice has errors on more than one slice. Template objects, except the hot spare pool template, also show a Critical status if the metadevice configuration is invalid.
    Used By: Mirrors/submirrors, RAID5 metadevices, trans metadevices, and all template objects


    Note -

    If the fan fails on a SPARCstorage Array, all metadevices and slices on that SPARCstorage Array are marked "Critical."


  4. Refer to the sections that follow for the status keywords of a specific DiskSuite object and possible actions to take.

Stripe and Concatenation Status (DiskSuite Tool)

DiskSuite does not report a state change for a concatenation or stripe that experiences errors, unless the concatenation or stripe is used as a submirror. If there is a slice error or other device problem, DiskSuite returns an error to the requesting application and writes a message to the console, such as:


WARNING: md d4: read error on /dev/dsk/c1t3d0s6

Note -

DiskSuite can send SNMP trap data (alerts), such as the message above, to any network management console capable of receiving SNMP messages. Refer to "How to Configure DiskSuite SNMP Support (Command Line)", for more information.


Because concatenations and stripes do not contain replicated data, to recover from slice errors on simple metadevices you must replace the physical disk, recreate the metadevice, and restore data from backup. Refer to "How to Recreate a Stripe or Concatenation After Slice Failure (DiskSuite Tool)", or "How to Recreate a Stripe or Concatenation After Slice Failure (Command Line)".
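
As a rough command-line sketch, recovery from the error shown above might look like the following, assuming the failed disk has been physically replaced and d4 held a UFS file system (the metainit parameters are illustrative):

# metaclear d4
# metainit d4 1 1 c1t3d0s6
d4: Concat/Stripe is setup
# newfs /dev/md/rdsk/d4

After the new file system is mounted, restore its contents from backup.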

Mirror and Submirror Status (DiskSuite Tool)

A Mirror object has two kinds of Status fields: one for the mirror device itself, and one for each submirror. The Status field for a mirror, as explained in Table 3-3, gives a high-level status.

Table 3-3 Mirror Status Keywords

Keyword: OK
Meaning: The mirror has no errors and is functioning correctly.

Keyword: Attention
Meaning: A submirror has a problem, but there is no immediate danger of losing data. There are still two copies of the data (the mirror is a three-way mirror and only one submirror failed), or a hot spare has kicked in.

Keyword: Urgent
Meaning: The mirror contains only a single good submirror, providing only one copy of the data. The mirror is only one failure away from losing data.

Keyword: Critical
Meaning: All submirrors have errors and data has potentially been corrupted.

Table 3-4 shows the Status fields of submirrors, and possible actions to take.

Table 3-4 Submirror Status Keywords

Keyword: OK
Meaning: The submirror has no errors and is functioning correctly.
Action:  None.

Keyword: Resyncing
Meaning: The submirror is actively being resynced.
Action:  None. An error has occurred and been corrected, the submirror has just been brought back online, or a new submirror has been added.

Keyword: Component Resyncing
Meaning: A slice in the submirror is actively being resynced.
Action:  None. Either a hot spare slice or another slice has replaced an errored slice in the submirror.

Keyword: Attaching
Meaning: The submirror is being attached.
Action:  None.

Keyword: Attached (resyncing)
Meaning: The entire submirror is being resynced after the attach occurred.
Action:  None.

Keyword: Online (scheduled)
Meaning: The submirror will be brought online the next time you click Commit.
Action:  Click the Commit button to enable the submirror.

Keyword: Offline (scheduled)
Meaning: The submirror will be brought offline the next time you click Commit.
Action:  Click the Commit button to offline the submirror.

Keyword: Offlined
Meaning: The submirror is offline.
Action:  When appropriate, bring the submirror back online, for example, after performing maintenance. See "How to Place a Submirror Offline and Online (DiskSuite Tool)".

Keyword: Maintenance
Meaning: The submirror has an error.
Action:  Repair the submirror. You can fix submirrors in the "Maintenance" state in any order. See "How to Enable a Slice in a Submirror (DiskSuite Tool)", or "How to Replace a Slice in a Submirror (DiskSuite Tool)".

Keyword: Last Erred
Meaning: The submirror has errors, and data for the mirror has potentially been corrupted.
Action:  Fix submirrors in the "Maintenance" state first, then fix the submirror in the "Last Erred" state. See "How to Enable a Slice in a Submirror (DiskSuite Tool)", or "How to Replace a Slice in a Submirror (DiskSuite Tool)". After fixing the error, validate the data.


Note -

DiskSuite does not retain state and hot spare information for simple metadevices that are not submirrors.


RAID5 Metadevice Status (DiskSuite Tool)

Table 3-5 explains the keywords in the Status fields of RAID5 objects, and possible actions to take.

Table 3-5 RAID5 Status Keywords

Keyword: OK
Meaning: The RAID5 metadevice has no errors and is functioning correctly.
Action:  None.

Keyword: Attached/initialize (resyncing)
Meaning: The RAID5 metadevice is being resynced after an attach occurred, or after being created.
Action:  Normally none. During the initialization of a new RAID5 metadevice, if an I/O error occurs, the device goes into the "Maintenance" state. If the initialization fails, the metadevice is in the "Initialization Failed" state and the slice is in the "Maintenance" state. If this happens, clear the metadevice and recreate it.

Keyword: Attention
Meaning: There is a problem with the RAID5 metadevice, but there is no immediate danger of losing data.
Action:  Continue to monitor the status of the device.

Keyword: Urgent
Meaning: The RAID5 metadevice has a slice error and you are only one failure away from losing data.
Action:  Fix the errored slice. See "How to Enable a Slice in a RAID5 Metadevice (DiskSuite Tool)", or "How to Replace a RAID5 Slice (DiskSuite Tool)".

Keyword: Critical
Meaning: The RAID5 metadevice has more than one slice with an error. Data has potentially been corrupted.
Action:  To fix the errored slices, see "How to Enable a Slice in a RAID5 Metadevice (DiskSuite Tool)", or "How to Replace a RAID5 Slice (DiskSuite Tool)". You may need to restore data from backup.

Trans Metadevice Status (DiskSuite Tool)

Table 3-6 explains the keywords in the Status fields of Trans Metadevice objects, and possible actions to take.

Table 3-6 Trans Metadevice Status Keywords

Keyword: OK
Meaning: The device is functioning properly. If mounted, the file system is logging and will not be checked by fsck at boot.
Action:  None.

Keyword: Detach Log (in progress)
Meaning: The trans metadevice log will be detached when the trans metadevice is unmounted or at the next reboot.
Action:  None.

Keyword: Detach Log (scheduled)
Meaning: The trans metadevice log will be detached the next time you click the Commit button.
Action:  Click Commit to detach the log. The detach takes place at the next reboot, or when the file system is unmounted and remounted.

Keyword: Attention
Meaning: There is a problem with the trans metadevice, but there is no immediate danger of losing data.
Action:  Continue to monitor the status of the trans metadevice.

Keyword: Urgent
Meaning: There is a problem with the trans metadevice and it is only one failure away from losing data. This state can only exist if the trans metadevice contains a RAID5 metadevice or mirror.
Action:  Fix the errored mirror or RAID5 master device. See "Overview of Replacing and Enabling Slices in Mirrors and RAID5 Metadevices".

Keyword: Critical (log missing)
Meaning: The trans metadevice does not have a logging device attached.
Action:  Attach a logging device. Logging for the file system cannot start until a logging device is attached.

Keyword: Critical (log hard error)
Meaning: A device error or file system panic has occurred while the device was in use. An I/O error is returned for every read or write until the device is closed or unmounted. The first open causes the device to transition to the Error state.
Action:  Fix the trans metadevice. See "How to Recover a Trans Metadevice With a File System Panic (Command Line)", or "How to Recover a Trans Metadevice With Hard Errors (Command Line)".

Keyword: Critical (error)
Meaning: The device can be read and written. The file system can be mounted read-only. However, an I/O error is returned for every read or write that actually gets a device error. The device does not transition back to the Hard Error state, even when a later device error or file system panic occurs.
Action:  Fix the trans metadevice. See "How to Recover a Trans Metadevice With a File System Panic (Command Line)", or "How to Recover a Trans Metadevice With Hard Errors (Command Line)". Successfully completing fsck(1M) or newfs(1M) transitions the device into the Okay state. When the device is in the Hard Error or Error state, fsck automatically checks and repairs the file system at boot time. newfs destroys whatever data may be on the device.

Hot Spare Pool and Hot Spare Status (DiskSuite Tool)

Table 3-7 explains the keywords in the Status fields of Hot Spare Pool objects, and possible actions to take.

Table 3-7 Hot Spare Pool Status Keywords

Keyword: OK
Meaning: The hot spares are running and ready to accept data, but are not currently being written to or read from.
Action:  None.

Keyword: In-use
Meaning: Hot spares are currently being written to and read from.
Action:  Diagnose how the hot spares are being used. Then repair the slice in the metadevice for which the hot spare is being used.

Keyword: Attention
Meaning: There is a problem with a hot spare or hot spare pool, but there is no immediate danger of losing data. This status is also displayed if there are no hot spares in the hot spare pool, or if all the hot spares are in use or any are broken.
Action:  Diagnose how the hot spares are being used or why they are broken. You can add more hot spares to the hot spare pool if necessary.

How to Check the Status of Metadevices and Hot Spare Pools (Command Line)

Make sure you have met the prerequisites ("Prerequisites for Maintaining DiskSuite Objects"). Use the metastat(1M) command to view metadevice or hot spare pool status. Refer to the metastat(1M) man page for more information.

Refer to the sections that follow for an explanation of the command-line output and possible actions to take.


Note -

Refer to Table 3-2 for an explanation of DiskSuite's general status keywords.


Stripe and Concatenation Status (Command Line)

DiskSuite does not report a state change for a concatenation or a stripe, unless the concatenation or stripe is used as a submirror. Refer to "Stripe and Concatenation Status (DiskSuite Tool)" for more information.
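
For reference, metastat output for a simple concatenation resembles the following. This is an illustrative sketch; the device and slice names echo the console warning example earlier in this chapter, and the size shown is hypothetical:

# metastat d4
d4: Concat/Stripe
    Size: 2006130 blocks
    Stripe 0:
        Device              Start Block  Dbase
        c1t3d0s6                   0     No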

Mirror and Submirror Status (Command Line)

Running metastat(1M) on a mirror displays the state of each submirror, the pass number, the read option, the write option, and the total size of the mirror in blocks. Refer to "How to Change a Mirror's Options (Command Line)" to change a mirror's pass number, read option, or write option.

Here is sample mirror output from metastat.


# metastat
d0: Mirror
    Submirror 0: d1
      State: Okay        
    Submirror 1: d2
      State: Okay        
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 5600 blocks
 
d1: Submirror of d0
    State: Okay        
    Size: 5600 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Hot Spare
        c0t2d0s7                   0     No    Okay        
 
...

For each submirror in the mirror, metastat shows the state, an "invoke" line if there is an error, the assigned hot spare pool (if any), size in blocks, and information about each slice in the submirror.

Table 3-8 explains submirror states.

Table 3-8 Submirror States (Command Line)

State:   Okay
Meaning: The submirror has no errors and is functioning correctly.

State:   Resyncing
Meaning: The submirror is actively being resynced. An error has occurred and been corrected, the submirror has just been brought back online, or a new submirror has been added.

State:   Needs Maintenance
Meaning: A slice (or slices) in the submirror has encountered an I/O error or an open error. All reads and writes to and from this slice in the submirror have been discontinued.

Additionally, for each stripe in a submirror, metastat shows the "Device" (device name of the slice in the stripe); "Start Block" on which the slice begins; "Dbase" to show if the slice contains a state database replica; "State" of the slice; and "Hot Spare" to show the slice being used to hot spare a failed slice.

The slice state is perhaps the most important information when troubleshooting mirror errors. The submirror state only provides general status information, such as "Okay" or "Needs Maintenance." If the submirror reports a "Needs Maintenance" state, refer to the slice state. You take a different recovery action depending on whether the slice is in the "Maintenance" or the "Last Erred" state. If you only have slices in the "Maintenance" state, they can be repaired in any order. If you have slices in the "Maintenance" state and a slice in the "Last Erred" state, you must fix the slices in the "Maintenance" state first, then the "Last Erred" slice. Refer to "Overview of Replacing and Enabling Slices in Mirrors and RAID5 Metadevices".

Table 3-9 explains the slice states for submirrors and possible actions to take.

Table 3-9 Submirror Slice States (Command Line)

State:   Okay
Meaning: The slice has no errors and is functioning correctly.
Action:  None.

State:   Resyncing
Meaning: The slice is actively being resynced. An error has occurred and been corrected, the submirror has just been brought back online, or a new submirror has been added.
Action:  If desired, monitor the submirror status until the resync is done.

State:   Maintenance
Meaning: The slice has encountered an I/O error or an open error. All reads and writes to and from this slice have been discontinued.
Action:  Enable or replace the errored slice. See "How to Enable a Slice in a Submirror (Command Line)", or "How to Replace a Slice in a Submirror (Command Line)". Note: The metastat(1M) command will show an invoke recovery message with the appropriate action to take with the metareplace(1M) command. You can also use the metareplace -e command.

State:   Last Erred
Meaning: The slice has encountered an I/O error or an open error. However, the data is not replicated elsewhere due to another slice failure. I/O is still performed on the slice. If I/O errors result, the mirror I/O will fail.
Action:  First, enable or replace slices in the "Maintenance" state. See "How to Enable a Slice in a Submirror (Command Line)", or "How to Replace a Slice in a Submirror (Command Line)". Usually, this error results in some data loss, so validate the mirror after it is fixed. For a file system, use the fsck(1M) command to validate the "metadata" then check the user data. An application or database must have its own method of validating the metadata.
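
As a rough sketch (output abbreviated and illustrative, using the d0 mirror from the earlier example), a submirror slice in the "Maintenance" state might be re-enabled in place as follows. metastat prints the suggested metareplace invocation in its "Invoke" line:

# metastat d0
d0: Mirror
    Submirror 0: d1
      State: Needs maintenance
...
d1: Submirror of d0
    State: Needs maintenance
    Invoke: metareplace d0 c0t2d0s7 <new device>
...
# metareplace -e d0 c0t2d0s7
d0: device c0t2d0s7 is enabled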

RAID5 Metadevice Status (Command Line)

Running the metastat(1M) command on a RAID5 metadevice shows the status of the metadevice. Additionally, for each slice in the RAID5 metadevice, metastat shows the "Device" (device name of the slice in the stripe); "Start Block" on which the slice begins; "Dbase" to show if the slice contains a state database replica; "State" of the slice; and "Hot Spare" to show the slice being used to hot spare a failed slice.

Here is sample RAID5 metadevice output from metastat.


# metastat
d10: RAID
    State: Okay        
    Interlace: 32 blocks
    Size: 10080 blocks
Original device:
    Size: 10496 blocks
        Device              Start Block  Dbase State        Hot Spare
        c0t0d0s1                 330     No    Okay        
        c1t2d0s1                 330     No    Okay        
        c2t3d0s1                 330     No    Okay 

Table 3-10 explains RAID5 metadevice states.

Table 3-10 RAID5 States (Command Line)

State:   Initializing
Meaning: Slices are in the process of having all disk blocks zeroed. This is necessary due to the nature of RAID5 metadevices with respect to data and parity interlace striping. Once the state changes to "Okay," the initialization process is complete and you are able to open the device. Up to this point, applications receive error messages.

State:   Okay
Meaning: The device is ready for use and is currently free from errors.

State:   Maintenance
Meaning: A single slice has been marked as errored due to I/O or open errors encountered during a read or write operation.

The slice state is perhaps the most important information when troubleshooting RAID5 metadevice errors. The RAID5 state only provides general status information, such as "Okay" or "Needs Maintenance." If the RAID5 metadevice reports a "Needs Maintenance" state, refer to the slice state. You take a different recovery action depending on whether the slice is in the "Maintenance" or the "Last Erred" state. If you only have a slice in the "Maintenance" state, it can be repaired without loss of data. If you have a slice in the "Maintenance" state and a slice in the "Last Erred" state, data has probably been corrupted. You must fix the slice in the "Maintenance" state first, then the "Last Erred" slice. Refer to "Overview of Replacing and Enabling Slices in Mirrors and RAID5 Metadevices".

Table 3-11 explains the slice states for a RAID5 metadevice and possible actions to take.

Table 3-11 RAID5 Slice States (Command Line)

State:   Initializing
Meaning: Slices are in the process of having all disk blocks zeroed. This is necessary due to the nature of RAID5 metadevices with respect to data and parity interlace striping.
Action:  Normally none. If an I/O error occurs during this process, the device goes into the "Maintenance" state. If the initialization fails, the metadevice is in the "Initialization Failed" state and the slice is in the "Maintenance" state. If this happens, clear the metadevice and recreate it.

State:   Okay
Meaning: The device is ready for use and is currently free from errors.
Action:  None. Slices may be added or replaced, if necessary.

State:   Resyncing
Meaning: The slice is actively being resynced. An error has occurred and been corrected, a slice has been enabled, or a slice has been added.
Action:  If desired, monitor the RAID5 metadevice status until the resync is done.

State:   Maintenance
Meaning: A single slice has been marked as errored due to I/O or open errors encountered during a read or write operation.
Action:  Enable or replace the errored slice. See "How to Enable a Slice in a RAID5 Metadevice (Command Line)", or "How to Replace a RAID5 Slice (Command Line)". Note: The metastat(1M) command will show an invoke recovery message with the appropriate action to take with the metareplace(1M) command.

State:   Maintenance/Last Erred
Meaning: Multiple slices have encountered errors. The state of the errored slices is either "Maintenance" or "Last Erred." In this state, no I/O is attempted on the slice that is in the "Maintenance" state, but I/O is attempted to the slice marked "Last Erred" with the outcome being the overall status of the I/O request.
Action:  Enable or replace the errored slices. See "How to Enable a Slice in a RAID5 Metadevice (Command Line)", or "How to Replace a RAID5 Slice (Command Line)". Note: The metastat(1M) command will show an invoke recovery message with the appropriate action to take with the metareplace(1M) command, which must be run with the -f flag. This indicates that data might be fabricated due to multiple errored slices.
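
For example (illustrative, using the d10 metadevice from the sample output above), a RAID5 slice in the "Maintenance" state could be re-enabled in place as follows. If a slice is in the "Last Erred" state, the same command requires the -f flag, as noted above:

# metareplace -e d10 c2t3d0s1
d10: device c2t3d0s1 is enabled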

Trans Metadevice Status (Command Line)

Running the metastat(1M) command on a trans metadevice shows the status of the metadevice.

Here is sample trans metadevice output from metastat:


# metastat
d20: Trans
    State: Okay        
    Size: 102816 blocks
    Master Device: c0t3d0s4
    Logging Device: c0t2d0s3
 
        Master Device       Start Block  Dbase
        c0t3d0s4                   0     No  
 
c0t2d0s3: Logging device for d20
    State: Okay        
    Size: 5350 blocks
 
        Logging Device      Start Block  Dbase
        c0t2d0s3                 250     No 

The metastat command also shows master and logging devices. For each device, the following information is displayed: the "Device" (device name of the slice or metadevice); "Start Block" on which the device begins; "Dbase" to show if the device contains a state database replica; and for the logging device, the "State."

Table 3-12 explains trans metadevice states and possible actions to take.

Table 3-12 Trans Metadevice States (Command Line)

State:   Okay
Meaning: The device is functioning properly. If mounted, the file system is logging and will not be checked at boot.
Action:  None.

State:   Attaching
Meaning: The logging device will be attached to the trans metadevice when the trans is closed or unmounted. When this occurs, the device transitions to the Okay state.
Action:  Refer to the metattach(1M) man page.

State:   Detached
Meaning: The trans metadevice does not have a logging device. All benefits from UFS logging are disabled.
Action:  fsck(1M) automatically checks the device at boot time. Refer to the metadetach(1M) man page.

State:   Detaching
Meaning: The logging device will be detached from the trans metadevice when the trans is closed or unmounted. When this occurs, the device transitions to the Detached state.
Action:  Refer to the metadetach(1M) man page.

State:   Hard Error
Meaning: A device error or file system panic has occurred while the device was in use. An I/O error is returned for every read or write until the device is closed or unmounted. The first open causes the device to transition to the Error state.
Action:  Fix the trans metadevice. See "How to Recover a Trans Metadevice With a File System Panic (Command Line)", or "How to Recover a Trans Metadevice With Hard Errors (Command Line)".

State:   Error
Meaning: The device can be read and written. The file system can be mounted read-only. However, an I/O error is returned for every read or write that actually gets a device error. The device does not transition back to the Hard Error state, even when a later device error or file system panic occurs.
Action:  Fix the trans metadevice. See "How to Recover a Trans Metadevice With a File System Panic (Command Line)", or "How to Recover a Trans Metadevice With Hard Errors (Command Line)". Successfully completing fsck(1M) or newfs(1M) transitions the device into the Okay state. When the device is in the Hard Error or Error state, fsck automatically checks and repairs the file system at boot time. newfs destroys whatever data may be on the device.
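
As a brief sketch (device names taken from the sample output above), a logging device can be attached to, or detached from, the trans metadevice with the following commands. For a mounted trans metadevice, the change takes effect when the device is closed or unmounted:

# metattach d20 c0t2d0s3
# metadetach d20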

Hot Spare Pool and Hot Spare Status (Command Line)

Running the metastat(1M) command on a hot spare pool shows the status of the hot spare pool and its hot spares.

Here is sample hot spare pool output from metastat.


# metastat hsp001
hsp001: 1 hot spare
        c1t3d0s2                Available       16800 blocks

Table 3-13 explains hot spare pool states and possible actions to take.

Table 3-13 Hot Spare Pool States (Command Line)

State:   Available
Meaning: The hot spares are running and ready to accept data, but are not currently being written to or read from.
Action:  None.

State:   In-use
Meaning: Hot spares are currently being written to and read from.
Action:  Diagnose how the hot spares are being used. Then repair the slice in the metadevice for which the hot spare is being used.

State:   Attention
Meaning: There is a problem with a hot spare or hot spare pool, but there is no immediate danger of losing data. This status is also displayed if there are no hot spares in the hot spare pool, or if all the hot spares are in use or any are broken.
Action:  Diagnose how the hot spares are being used or why they are broken. You can add more hot spares to the hot spare pool if desired.
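
For example, if the pool shows "Attention" because all of its hot spares are in use, an additional hot spare can be added from the command line with metahs(1M). The slice name below is hypothetical and the output is abbreviated:

# metahs -a hsp001 c3t0d0s2
hsp001: Hotspare is added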

How to Check the Status of a Diskset (Command Line)

Make sure you have met the prerequisites ("Prerequisites for Maintaining DiskSuite Objects"). Use the metaset(1M) command to view diskset status. Refer to the metaset(1M) man page for more information.


Note -

Diskset ownership is only shown on the owning host.


Example -- Checking Status of a Specified Diskset


red# metaset -s relo-red
Set name = relo-red, Set number = 1
 
Host                Owner
  red                Yes
  blue
 
Drive               Dbase
  c1t2d0             Yes
  c1t3d0             Yes
  c2t2d0             Yes
  c2t3d0             Yes
  c2t4d0             Yes
  c2t5d0             Yes

The metaset(1M) command with the -s option followed by the name of the relo-red diskset displays status information for that diskset. Because the command was issued from the owning host, red, the Owner column confirms that red owns the diskset. The metaset command also displays the drives in the diskset.

Example -- Checking Status of All Disksets


red# metaset
Set name = relo-red, Set number = 1
 
Host                Owner
  red                Yes
  blue
 
Drive               Dbase
  c1t2d0             Yes
  c1t3d0             Yes
  c2t2d0             Yes
  c2t3d0             Yes
  c2t4d0             Yes
  c2t5d0             Yes
 
Set name = relo-blue, Set number = 2
 
Host                Owner
  red
  blue
 
Drive               Dbase
  c3t2d0             Yes
  c3t3d0             Yes
  c3t4d0             Yes
  c3t5d0             Yes
 
Set name = rimtic, Set number = 3
 
Host                Owner
  red
  blue
 
Drive               Dbase
  c4t2d0             Yes
  c4t3d0             Yes
  c4t4d0             Yes
  c4t5d0             Yes

The metaset command by itself displays the status of all disksets. In this example, three disksets named relo-red, relo-blue, and rimtic are configured. Because host red owns the relo-red diskset, metaset shows red as the owner. Host blue owns the other two disksets, relo-blue and rimtic. This could only be determined if metaset were run from host blue.
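
For example (illustrative output, based on the configuration shown above), running metaset on host blue for the relo-blue diskset would show blue as the owner:

blue# metaset -s relo-blue
Set name = relo-blue, Set number = 2
 
Host                Owner
  red
  blue               Yes
 
Drive               Dbase
  c3t2d0             Yes
  c3t3d0             Yes
  c3t4d0             Yes
  c3t5d0             Yes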