Sun Cluster 2.2 Cluster Volume Manager Guide

Chapter 3 Tutorial

This chapter provides step-by-step examples of initializing the disks in two or more SPARCstorage Array units. By following these examples, you can create shared volumes for running Oracle7 or Oracle8 Parallel Server (OPS) software on a Sun Cluster system.

3.1 Assumptions

For this tutorial, make the following assumptions. The actual conditions for your system may vary.

  1. The system configuration consists of:

    • Two Enterprise server systems

    • Two SPARCstorage Array units, each containing 30 1-Gbyte disk drives

  2. You want to set up areas for the database in the following sizes. Each area is mirrored, so there are two of each type.

    Table 3-1 Example Database Specifications

    Area                               Disks
    Account table                      2
    History table                      2
    Administration and system areas    2
    log_node1                          2
    log_node2                          2

  3. The root disk group, rootdg, has already been created during the system installation. Therefore, CVM recognizes rootdg.

  4. None of the disks in the SPARCstorage Arrays is currently under CVM control. Therefore, CVM does not currently recognize these disks.

  5. The 60 SPARCstorage Array disks are arranged by controller, target, and disk numbers (c#, t#, and d#, respectively) as shown in Figure 3-1.

    Figure 3-1 Example Disks


  6. The operating environment recognizes the 60 disks by both the disk (dsk) and raw disk (rdsk) names. Table 3-2 lists examples of the disk and raw disk names used in this tutorial. Note that these names include the slice number (where s2 signifies the entire disk), but the CVM names do not. Both Node 0 and Node 1 use the same name for each disk, because the two array units are connected to the same physical locations on both systems.

    Table 3-2 Disk Names

    Disk #   Node 0 Disk name       Node 0 Raw disk name     Node 1 Disk name       Node 1 Raw disk name
    1        /dev/dsk/c1t0d0s2      /dev/rdsk/c1t0d0s2       /dev/dsk/c1t0d0s2      /dev/rdsk/c1t0d0s2
    2        /dev/dsk/c1t0d1s2      /dev/rdsk/c1t0d1s2       /dev/dsk/c1t0d1s2      /dev/rdsk/c1t0d1s2
    3        /dev/dsk/c1t0d2s2      /dev/rdsk/c1t0d2s2       /dev/dsk/c1t0d2s2      /dev/rdsk/c1t0d2s2
    ...      ...                    ...                      ...                    ...
    60       /dev/dsk/c2t5d4s2      /dev/rdsk/c2t5d4s2       /dev/dsk/c2t5d4s2      /dev/rdsk/c2t5d4s2

3.2 Initializing Disks Through CVM

Exercise 1: Initialize all of the disks that are to be managed by CVM.

You may use either the vxva tool (the graphical user interface or GUI) or the command line interface (CLI).

For the CLI, the command takes the general form:


# /etc/vx/bin/vxdisksetup -i devname

The term devname consists of the controller, target, and disk numbers. For example, devname is c1t0d0 for the first disk.

Repeat the command for each disk that you would like to place under CVM control.
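
If you have many disks to initialize, you can wrap the command in a Bourne shell loop. The following is a minimal sketch; the six device names are examples only, so substitute the devices you identified in Figure 3-1:


# for d in c1t0d0 c1t0d1 c1t0d2 c2t0d0 c2t0d1 c2t0d2
> do
>     /etc/vx/bin/vxdisksetup -i $d
> done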

3.3 Starting the Sun Cluster Software

Exercise 2: Bring up the Sun Cluster software.


Note -

Run the Sun Cluster software on one node only.


Use the command:


# scadmin startcluster nodename cluster_name

If you do not specify a cluster name, the default name will be used. After you execute this command, there should be one node (the master node) in the cluster.
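
For example, on a node named node0 in a cluster named sc-cluster (both names are placeholders for this sketch; use your own node and cluster names), the command might look like:


# scadmin startcluster node0 sc-cluster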

3.4 Creating Shared Disk Groups

Exercise 3: Create a shared disk group.

The command syntax is:


# vxdg -s init group_name device_name(s)

You can specify multiple device names in a single command. For example, you can add the two disks (one disk and its mirror) to the acct disk group at one time:


# vxdg -s init acct c1t0d0 c2t0d0

The disks listed here were selected for convenience, because they occupy the same relative locations in the two SPARCstorage Array units.


Note -

If you select some other combination of locations, consider how this may complicate disk organization and maintenance.


You can check the results:


# vxdisk list

The acct disk group is a shared disk group. The term shared means that after another node joins the cluster, it will automatically import the acct disk group. The volumes can be accessed from both nodes.

The second set of disks on Controller c2 will be used as the mirror disks.

As an alternative to listing all devices in one command, you can start with a single device in a shareable disk group, then add more disks to that disk group. For example, to initialize a single device in a shareable disk group named acct:


# vxdg -s init acct c1t0d0

To add a second disk to the acct disk group:


# vxdg -g acct adddisk c2t0d0

Exercise 4: Continue by creating disk groups named history, admin_system, log_node1, and log_node2. Table 3-3 lists the total number of disks that you might use for each group. For simplicity in your exercises, use full disks (not partial disks).

Table 3-3 Disk Totals

Group Name      First Group   Mirrored Group   Total
acct            1 disk        1 disk           2 disks
history         1 disk        1 disk           2 disks
admin_system    1 disk        1 disk           2 disks
log_node1       1 disk        1 disk           2 disks
log_node2       1 disk        1 disk           2 disks
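
Continuing the example, the remaining shared disk groups from Table 3-3 might be created with commands like the following. The device names are illustrative only; choose disks from Figure 3-1, keeping each mirror pair on different controllers:


# vxdg -s init history c1t1d0 c2t1d0
# vxdg -s init admin_system c1t2d0 c2t2d0
# vxdg -s init log_node1 c1t3d0 c2t3d0
# vxdg -s init log_node2 c1t4d0 c2t4d0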

To list the disk groups on your system, use vxdg list.


# vxdg list
rootdg         enabled
acct           enabled   shared   nnnnnnnnn.nnnn.node
admin_system   enabled   shared   nnnnnnnnn.nnnn.node
history        enabled   shared   nnnnnnnnn.nnnn.node
log_node1      enabled   shared   nnnnnnnnn.nnnn.node
log_node2      enabled   shared   nnnnnnnnn.nnnn.node

When you have created the disk groups, display the CVM configuration records.


# vxprint

This command lists all of the volume records on the system.

To examine a specific group, use the -g option.


# vxprint -g group_name

3.5 Creating Volumes

3.5.1 Creating a Volume

Exercise 5: Create volumes in each disk group.

The syntax for the command is:


# vxassist -g group_name -U gen make volume_name volume_size disk_name

For example, the command for vol01, the first volume in the acct disk group, is:


# vxassist -g acct -U gen make vol01 500m c1t0d0

3.5.2 Adding a Mirror to a Volume

Exercise 6: Create an associated mirror for each volume.

The syntax for the command is:


# vxassist -g group_name mirror volume_name disk_name

For example, the command for vol01 is:


# vxassist -g acct mirror vol01 c2t0d0

The examples above use a volume size of 500 Mbytes (500m) for simplicity of calculations. The actual size of a CVM device is slightly less than the full disk drive size. CVM reserves a small amount of space for private use, called the private region.
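
If you want to check how much space is actually available before choosing a volume size, vxassist can report the largest volume it could create in a disk group. This is a minimal sketch, assuming your version of vxassist supports the maxsize operation:


# vxassist -g acct maxsize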

Exercise 7: Repeat the procedure for each volume in the disk group.

Create one volume in each disk group. When you have finished, you should have these mirrored volumes:

Table 3-4 Volume Totals

Group           Volume Name
acct            vol01
history         vol01
admin_system    vol01
log_node1       vol01
log_node2       vol01

Total: 5 mirrored volumes (one vol01 in each disk group)


Note -

In the example, the five volumes all use the same name, vol01. The use of the same name is allowed if the volumes belong to different groups.
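
As a sketch of Exercise 7, the remaining volumes might be created and mirrored with commands like the following. The disk names are illustrative only; use the disks that belong to each disk group:


# vxassist -g history -U gen make vol01 500m c1t1d0
# vxassist -g history mirror vol01 c2t1d0
# vxassist -g admin_system -U gen make vol01 500m c1t2d0
# vxassist -g admin_system mirror vol01 c2t2d0
# vxassist -g log_node1 -U gen make vol01 500m c1t3d0
# vxassist -g log_node1 mirror vol01 c2t3d0
# vxassist -g log_node2 -U gen make vol01 500m c1t4d0
# vxassist -g log_node2 mirror vol01 c2t4d0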


3.5.3 Creating a Log File for an Existing Volume

Exercise 8: Create a log for an existing volume.

The syntax for the command is:


# vxassist -g group_name addlog volume_name disk_name

For example, the command for vol01 is:


# vxassist -g acct addlog vol01 c2t0d0

3.6 Bringing Up the Second Node

Exercise 9: Bring up the Sun Cluster software on the other node(s) by entering:


# scadmin startnode

Alternatively, you can specify the cluster name.


# scadmin startnode cluster_name

After the Sun Cluster software completes the cluster configuration process, you should be able to see the same volume configuration from any node in the cluster.

To review information about the volumes, use vxprint.

CVM allows the use of private disk groups. These are limited to individual systems, and are not discussed in this tutorial.

3.7 Reorganizing Disks and Disk Groups


Note -

There are several mirrored configurations that are not recommended for the Sun Cluster environment. For example, mirroring within the same disk or within the same tray is unsafe.


3.7.1 Disk Group Expansion

The following procedure is one method of adding disks to a disk group.


Note -

You may find that the CVM software offers many ways to produce the same result. Choose the method with which you feel the most comfortable.


There are now five disk groups in your cluster system, with two disks in each of the disk groups.

To inventory your disk resources, enter:


# vxdisk list

For example, assume that the command shows two free disks, c1t0d1 and c2t0d1.
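
Free disks appear in the vxdisk list output with no disk media name and no disk group assigned. The excerpt below is a sketch only, following the vxdisk list format shown later in this chapter, and assumes the two disks have already been initialized:


    # vxdisk list
    DEVICE       TYPE      DISK         GROUP        STATUS
    ...
    c1t0d1s2     sliced    -            -            online
    c2t0d1s2     sliced    -            -            online
    ...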

  1. Initialize the disks.


    # /etc/vx/bin/vxdisksetup -i c1t0d1
    # /etc/vx/bin/vxdisksetup -i c2t0d1
    
  2. Add disks into an existing disk group:

    The command syntax is:


    # vxdg -g disk_group_name adddisk devices ...
    

    For the present example, the command is:


    # vxdg -g acct adddisk c1t0d1 c2t0d1
    
  3. View the new expanded disk group.

    The command syntax is:


    # vxprint -g disk_group_name 
    

    For the present example, the command would be:


    # vxprint -g acct
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    dg acct         acct         -        -        -        -        -       -
    
    dm c1t0d0       c1t0d0s2     -        2050272  -        -        -       -
    dm c1t0d1       c1t0d1s2     -        2050272  -        -        -       -
    dm c2t0d0       c2t0d0s2     -        2050272  -        -        -       -
    dm c2t0d1       c2t0d1s2     -        2050272  -        -        -       -
    
    v  vol01        gen          ENABLED  1024000  -        ACTIVE   -       -
    pl vol01-01     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c1t0d0-01    vol01-01     ENABLED  1024128  0        -        -       -
    pl vol01-02     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c2t0d0-01    vol01-02     ENABLED  1024128  0        -        -       -
  4. Create mirrored volumes.

    The command syntax is:


    # vxassist -g disk_group_name make vol_name length disk_name
    # vxassist -g disk_group_name mirror vol_name disk_name
    

    For the present example:


    # vxassist -g acct make newvol 100m c1t0d1
    # vxassist -g acct mirror newvol c2t0d1
    

3.7.2 Moving a VM Disk to a Different Disk Group

To move a disk between disk groups, remove the disk from one disk group and add it to the other.

For example, to move the physical disk c1t0d1 from disk group acct to disk group log_node1:

  1. Determine if the disk is in use:


    # vxprint -g acct
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    dg acct         acct         -        -        -        -        -       -
    
    dm c1t0d0       c1t0d0s2     -        2050272  -        -        -       -
    dm c1t0d1       c1t0d1s2     -        2050272  -        -        -       -
    dm c2t0d0       c2t0d0s2     -        2050272  -        -        -       -
    dm c2t0d1       c2t0d1s2     -        2050272  -        -        -       -
    
    v  newvol       gen          ENABLED  204800   -        ACTIVE   -       -
    pl newvol-01    newvol       ENABLED  205632   -        ACTIVE   -       -
    sd c1t0d1-01    newvol-01    ENABLED  205632   0        -        -       -
    pl newvol-02    newvol       ENABLED  205632   -        ACTIVE   -       -
    sd c2t0d1-01    newvol-02    ENABLED  205632   0        -        -       -
    
    v  vol01        gen          ENABLED  1024000  -        ACTIVE   -       -
    pl vol01-01     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c1t0d0-01    vol01-01     ENABLED  1024128  0        -        -       -
    pl vol01-02     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c2t0d0-01    vol01-02     ENABLED  1024128  0        -        -       -
  2. Remove the volume to free up the c1t0d1 disk:


    # vxedit -g acct -fr rm newvol
    

    The -f option forces an operation. The -r option makes the operation recursive.

  3. Remove the c1t0d1 disk from the acct disk group:


    # vxdg -g acct rmdisk c1t0d1
    
  4. Add the c1t0d1 disk to the log_node1 disk group:


    # vxdg -g log_node1 adddisk c1t0d1
    

    Caution -

    This procedure does NOT save the configuration or data on the disk.


    This is the acct disk group after c1t0d1 is removed.


    # vxprint -g acct
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    dg acct         acct         -        -        -        -        -       -
    
    dm c1t0d0       c1t0d0s2     -        2050272  -        -        -       -
    dm c2t0d0       c2t0d0s2     -        2050272  -        -        -       -
    dm c2t0d1       c2t0d1s2     -        2050272  -        -        -       -
    
    v  vol01        gen          ENABLED  1024000  -        ACTIVE   -       -
    pl vol01-01     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c1t0d0-01    vol01-01     ENABLED  1024128  0        -        -       -
    pl vol01-02     vol01        ENABLED  1024128  -        ACTIVE   -       -
    sd c2t0d0-01    vol01-02     ENABLED  1024128  0        -        -       -

    This is the log_node1 disk group after c1t0d1 is added.


    # vxprint -g log_node1
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    dg log_node1    log_node1    -        -        -        -        -       -
    
    dm c1t0d1       c1t0d1s2     -        2050272  -        -        -       -
    dm c1t3d0       c1t3d0s2     -        2050272  -        -        -       -
    dm c2t3d0       c2t3d0s2     -        2050272  -        -        -       -
    # 

    To change permissions or ownership of volumes, you must use the vxedit command.


    Caution -

    Do not use chmod or chgrp. The permissions and/or ownership set by chmod or chgrp are automatically reset to root during a reboot.


    Here is an example of the permissions and ownership of the volumes vol01 and vol02 in the directory /dev/vx/rdsk before a change.


    # ls -l
    crw-------   1 root     root    nnn,nnnnn   date time   vol01
    crw-------   1 root     root    nnn,nnnnn   date time   vol02
    ...

    This is an example of changing the permissions and ownership of vol01.


    # vxedit -g group_name set mode=755 user=oracle vol01
    

    After the edit, note how the permissions and ownership have changed.


    # ls -l
    crwxr-xr-x   1 oracle   root    nnn,nnnnn   date time   vol01
    crw-------   1 root     root    nnn,nnnnn   date time   vol02
    ...

3.8 Mirroring the System Boot Disk

This section describes how to convert an existing Sun Cluster system that already has a defined rootdg to full boot-disk encapsulation, and how to mirror the encapsulated boot disk.

There are two alternative procedures. The first (Procedure A) uses a single disk slice for temporary CVM data during the encapsulation. The second (Procedure B) uses an entire disk for the temporary CVM data.

3.8.1 Mirroring the System Boot Disk on an Existing Node

Use one of the following procedures (Procedure A or Procedure B) on an existing Sun Cluster node to mirror the boot disk.

Both procedures make these assumptions:

  1. The boot disk has at least two free partitions.

  2. The beginning or end of the disk has two cylinders of free space available.

  3. rootdg was created from a simple partition.

  4. An extra disk or partition can be used as temporary spare space. This space should be on a disk other than the boot disk you are encapsulating.

3.8.1.1 Procedure A: Using a Disk Slice

Use this procedure to encapsulate and mirror the boot disk if only one partition is available for temporary storage. If an entire disk is available for temporary storage, see "3.8.1.2 Procedure B: Using an Entire Disk".

Configuration

Procedure A

  1. Stop the Sun Cluster software on the current node:


    # scadmin stopnode 
    
  2. If the simple partition for rootdg is on the boot disk, find a separate partition (two cylinders in size) that is not on the boot disk. If the simple partition for rootdg is not on the boot disk, proceed directly to Step 10.

  3. Use the format command to reserve and label the new partition:


    # format c0t3d0
    

    Make c0t3d0s7 the new partition, two cylinders in size.

  4. Define the new partition to CVM:


    # vxdisk -f init c0t3d0s7 
    
  5. Add the new partition to rootdg:


    # vxdg adddisk c0t3d0s7 
    
  6. Add the new rootdg partition to the volboot file:


    # vxdctl add disk c0t3d0s7 
    
  7. Use the format command to free up the old disk partition:


    # format c0t0d0 
    

    This frees the old rootdg partition, c0t0d0s5.

  8. Remove the original disk from rootdg:


    # vxdg rmdisk c0t0d0s5
    # vxdisk rm c0t0d0s5 
    
  9. Clean up the old partition of rootdg in the volboot file:


    # vxdctl rm disk c0t0d0s5 
    
  10. Enter vxdiskadm to encapsulate the boot disk (c0t0d0 in this example):


    # vxdiskadm
    -Select an operation to perform: 2
    -Select disk devices to encapsulate:
    [,all,list,q,?] c0t0d0 
    -Continue operation? [y,n,q,?] (default: y) y 
    -Which disk group [,list,q,?] (default: rootdg) rootdg 
    -Use a default disk name for the disk? [y,n,q,?] (default: y) n 
    -Continue with operation? [y,n,q,?] (default: y) y 
    -Continue with encapsulation? [y,n,q,?] (default: y) y 
    -Enter disk name for  [,q,?] (default: disk01) disk01 
    -Encapsulate other disks? [y,n,q,?] (default: n) n 
    -Select an operation to perform: q
    

    Verify each step carefully.

  11. Reboot the system:


    # shutdown -g0 -y -i6 
    

    The system will reboot one more time to complete the process.

    Your boot disk is now encapsulated and managed by CVM.

  12. To verify the encapsulation:

    1. Invoke vxva and open up the icon of the rootdg disk group.

      You should see the encapsulated boot disk and the four volumes that have been created from it.

    2. Verify that the /etc/vfstab file now refers to device files in the /dev/vx/dsk directory rather than the /dev/dsk directory.

  13. To mirror the encapsulated disk, choose a disk (for example, c0t2d0) and a media name (for example, mirrorroot), and enter:


    # /etc/vx/bin/vxdisksetup -i c0t2d0
    # /usr/sbin/vxdg adddisk mirrorroot=c0t2d0 
    

    You should now see the new disk in the rootdg disk group of vxva.

  14. Mirror the encapsulated boot disk on mirrorroot:


    # /etc/vx/bin/vxmirror disk01 mirrorroot 
    

    The vxmirror command prints a series of commands corresponding to the mirroring of volumes in the encapsulated boot disk.

  15. After the vxmirror command is completed, verify that the process was successful by looking for the mirrored volumes in the rootdg disk group of vxva.

    It is now possible to boot from the mirror of the boot disk.

  16. Remove the two-cylinder simple disk in the rootdg disk group:


    # vxdg rmdisk c0t3d0s7
    # vxdisk rm c0t3d0s7
    # vxdctl rm disk c0t3d0s7 
    
  17. Start the Sun Cluster software:


    # scadmin startnode 
    
  18. To mirror the boot disk on the other node, repeat Procedure A on that node.

  19. Un-encapsulate the boot disk.

    When you upgrade your system or the volume manager, you must un-encapsulate the boot disk first. Use the upgrade_start script provided under /CD_path/CVM/scripts to convert the file systems on volumes back to regular disk partitions automatically. Reboot the system to complete the conversion.
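
    For example, assuming the product CD is mounted at the path shown (CD_path is a placeholder for the actual mount point), the un-encapsulation might look like this:


    # /CD_path/CVM/scripts/upgrade_start
    # shutdown -g0 -y -i6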

3.8.1.2 Procedure B: Using an Entire Disk

This procedure contains step-by-step instructions for encapsulating and mirroring the boot disk when you are using an entire disk for temporary storage. If an entire disk is not available, see "3.8.1.1 Procedure A: Using a Disk Slice".

Configuration

Procedure B

  1. Stop the Sun Cluster software on the current node:


    # scadmin stopnode 
    
  2. Initialize the spare disk for use with CVM:


    # /etc/vx/bin/vxdisksetup -i c1t0d0 
    
  3. Add the new disk to rootdg:


    # vxdg adddisk c1t0d0 
    
  4. Use the format command to release the old rootdg partition, which is on the boot disk:


    # format c0t0d0 
    
  5. Remove the original disk from rootdg:


    # vxdg rmdisk c0t0d0s5
    # vxdisk rm c0t0d0s5 
    
  6. Clean up the old partition of rootdg in the volboot file:


    # vxdctl rm disk c0t0d0s5 
    
  7. Use the vxdiskadm command to encapsulate the boot disk (for example, c0t0d0):


    # vxdiskadm
    -Select an operation to perform: 2
    -Select disk devices to encapsulate:                                  
    [,all,list,q,?] c0t0d0
    -Continue operation? [y,n,q,?] (default: y) y
    -Which disk group [,list,q,?] (default: rootdg) rootdg
    -Use a default disk name for the disk? [y,n,q,?] (default: y) n
    -Continue with operation? [y,n,q,?] (default: y) y
    -Continue with encapsulation? [y,n,q,?] (default: y) y
    -Enter disk name for [,q,?] (default: disk01) disk01
    -Encapsulate other disks? [y,n,q,?] (default: n) n
    -Select an operation to perform: q
    
  8. Reboot the system:


    # shutdown -g0 -y -i6 
    

    The system will reboot a second time to complete the process.

    The boot disk is now encapsulated and managed by CVM.

  9. To verify the encapsulation:

    1. Invoke vxva and open up the icon of the rootdg disk group.

      You should see the encapsulated boot disk and the four volumes that have been created from it.

    2. Verify that the /etc/vfstab file now refers to device files in the /dev/vx/dsk directory rather than the /dev/dsk directory.

  10. To mirror the encapsulated disk, choose a disk (for example, c0t2d0) and a media name (for example, mirrorroot), and enter:


    # /etc/vx/bin/vxdisksetup -i c0t2d0
    # /usr/sbin/vxdg adddisk mirrorroot=c0t2d0 
    

    You should see the new disk in the rootdg disk group of vxva.

  11. Mirror the encapsulated boot disk on mirrorroot:


    # /etc/vx/bin/vxmirror disk01 mirrorroot 
    

    vxmirror displays a series of commands corresponding to the mirroring of volumes in the encapsulated boot disk.

  12. After vxmirror is completed, verify that the process was successful by looking for the mirrored volumes in the rootdg disk group of vxva.

    It is now possible to boot from the mirror of the boot disk.

  13. Remove the temporary disk from the rootdg disk group:


    # vxdg rmdisk c1t0d0
    # vxdisk rm c1t0d0 
    
  14. Start the Sun Cluster software:


    # scadmin startnode 
    
  15. To mirror the boot disk on the other node, repeat Procedure B on that node.

3.8.2 Description of vxconfigd

The vxconfigd daemon takes requests from other utilities for volume and disk configuration changes, communicates those changes to the kernel, and modifies the configuration information stored on disk. vxconfigd is also responsible for initializing CVM when the system is booted.
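
A quick way to confirm that vxconfigd is running and enabled on a node is the vxdctl mode command. This is a minimal sketch; the exact output wording can vary between releases:


# vxdctl mode
mode: enabled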

3.8.3 Error Handling

The vxconfigd command may exit if serious errors occur. By default, vxconfigd issues errors to the console. However, vxconfigd can be configured to log errors to a log file with various parameters for debugging. (See the man page for further information.)

This is an example that enables debugging parameters and logs messages to a file:


# vxdctl stop
# vxconfigd -x 1 -x logfile=filename -x mstimestamp > /dev/null 2>&1 &

In this example, -x 1 sets the debugging level, -x logfile=filename directs the messages to the specified log file, and -x mstimestamp adds a timestamp to each logged message.

3.9 Moving the Quorum Controller

When a quorum controller fails, you must select a new quorum controller.


Caution -

Select a new quorum controller before taking a node down. If you fail to do this first, the entire cluster will shut down.


  1. Obtain the current SPARCstorage Array Sun Cluster configuration:


    # scconf cluster_name -p 
    

    Look for the quorum controller serial numbers. The first controller should be the quorum controller (check the Quorum Controller: line in the display).

  2. On both nodes, change the quorum controller by selecting a new quorum controller serial number (validate against the SPARCstorage Array LCD display) and enter:


    # scconf cluster_name -q -m host1 host2 ctlr
    

    Caution -

    If you do not make the change on both nodes, the database may be destroyed.


  3. Verify that the selected SPARCstorage Array (ctrl2 in this example) is now the quorum controller:


    # scconf cluster_name -p
    

3.10 Replacing a Bad Disk in a SPARCstorage Array Tray

It is possible to replace a SPARCstorage Array disk without halting system operations.


Note -

The following procedure is used to replace a failed disk in a SPARCstorage Array 100 Series. When removable storage media (RSM) disks are used, follow the procedure in the Sun Cluster 2.2 System Administration Guide and the applicable hardware service manual.


  1. Identify all the volumes and corresponding plexes on the disks in the tray which contains the faulty disk.

    1. From the physical device address cNtNdN, obtain the controller number and the target number.

      For example, if the device address is c3t2d0, the controller number is 3 and the target is 2.

    2. Identify devices from a vxdisk list output:

      If the target is 0 or 1, identify all devices with physical addresses beginning with cNt0 and cNt1. If the target is 2 or 3, identify all devices with physical addresses beginning with cNt2 and cNt3. If the target is 4 or 5, identify all devices with physical addresses beginning with cNt4 and cNt5.

      For example:


      # vxdisk -g diskgroup -q list | egrep c3t2\|c3t3 | nawk '{print $3}'
      

      Record the volume media name for the faulty disk from the output of the command. The volume media name is the variable device_media_name used in Step 8.

    3. Identify all plexes on the above devices by using the appropriate version (csh, ksh, or Bourne shell) of the following command:


      PLLIST=`vxprint -ptq -g diskgroup -e '(aslist.sd_dm_name in
      ("c3t2d0","c3t3d0","c3t3d1")) && (pl_kstate=ENABLED)' | nawk '{print $2}'`
      

      For csh, the syntax is set PLLIST .... For ksh, the syntax is export PLLIST= .... The Bourne shell requires the command export PLLIST after the variable is set.

  2. After you have set the variable, detach all plexes in the tray:


    # vxplex det ${PLLIST}
    

    An alternate command for detaching each plex in a tray is:


    # vxplex -g diskgroup -v volume det plex
    

    Note -

    The volumes will still be active because the other mirror is still available.


  3. Spin down the disks in the tray:


    # ssaadm stop -t tray controller
    
  4. Replace the faulty disk.

  5. Spin up the drives:


    # ssaadm start -t tray controller
    
  6. Initialize the replacement disk:


    # vxdisksetup -i devicename
    
  7. Scan the current disk configuration again:

    Enter the following commands on both nodes in the cluster.


    # vxdctl enable
    # vxdisk -a online
    
  8. Add the new disk to the disk group:


    # vxdg -g diskgroup -k adddisk device_media_name=device_name
    
  9. Resynchronize the volumes:


    # vxrecover -b -o iosize=192K
    

3.11 Reattaching a Disk Drive

Disk drives sometimes become detached due to power failures, cable or controller problems, and so on. A detached disk drive causes its plexes to become detached and thus unavailable. The remaining plex(es) in a mirrored volume are still available, so the volume remains active. It is possible to reattach the disk drives and recover from this condition without halting either node.

The following example configuration consists of five mirrored volumes. SPARCstorage Array c1 has been powered off. The mirrors are broken, but the volumes remain active, because the remaining plex of each volume is still working.


# vxprint -g toi
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg toi          toi          -        -        -        -        -       -

dm c1t5d0       -            -        -        -        NODEVICE -       -
dm c1t5d1       -            -        -        -        NODEVICE -       -
dm c1t5d2       -            -        -        -        NODEVICE -       -
dm c1t5d3       -            -        -        -        NODEVICE -       -
dm c1t5d4       -            -        -        -        NODEVICE -       -
dm c2t5d0       c2t5d0s2     -        2050272  -        -        -       -
dm c2t5d1       c2t5d1s2     -        2050272  -        -        -       -
dm c2t5d2       c2t5d2s2     -        2050272  -        -        -       -
dm c2t5d3       c2t5d3s2     -        2050272  -        -        -       -
dm c2t5d4       c2t5d4s2     -        2050272  -        -        -       -

v  toi-1        gen          ENABLED  61440    -        ACTIVE   -       -
pl toi-1-01     toi-1        DISABLED 65840    -        NODEVICE -       -
sd c1t5d0-01    toi-1-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d1-01    toi-1-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d2-01    toi-1-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d3-01    toi-1-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d4-01    toi-1-01     DISABLED 13104    0        NODEVICE -       -
pl toi-1-02     toi-1        ENABLED  65840    -        ACTIVE   -       -
sd c2t5d0-01    toi-1-02     ENABLED  13104    0        -        -       -
sd c2t5d1-01    toi-1-02     ENABLED  13104    0        -        -       -
sd c2t5d2-01    toi-1-02     ENABLED  13104    0        -        -       -
sd c2t5d3-01    toi-1-02     ENABLED  13104    0        -        -       -
sd c2t5d4-01    toi-1-02     ENABLED  13104    0        -        -       -

v  toi-2        gen          ENABLED  61440    -        ACTIVE   -       -
pl toi-2-01     toi-2        DISABLED 65840    -        NODEVICE -       -
sd c1t5d0-02    toi-2-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d1-02    toi-2-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d2-02    toi-2-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d3-02    toi-2-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d4-02    toi-2-01     DISABLED 13104    0        NODEVICE -       -
pl toi-2-02     toi-2        ENABLED  65840    -        ACTIVE   -       -
sd c2t5d0-02    toi-2-02     ENABLED  13104    0        -        -       -
sd c2t5d1-02    toi-2-02     ENABLED  13104    0        -        -       -
sd c2t5d2-02    toi-2-02     ENABLED  13104    0        -        -       -
sd c2t5d3-02    toi-2-02     ENABLED  13104    0        -        -       -
sd c2t5d4-02    toi-2-02     ENABLED  13104    0        -        -       -

v  toi-3        gen          ENABLED  61440    -        ACTIVE   -       -
pl toi-3-01     toi-3        DISABLED 65840    -        NODEVICE -       -
sd c1t5d0-03    toi-3-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d1-03    toi-3-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d2-03    toi-3-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d3-03    toi-3-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d4-03    toi-3-01     DISABLED 13104    0        NODEVICE -       -
pl toi-3-02     toi-3        ENABLED  65840    -        ACTIVE   -       -
sd c2t5d0-03    toi-3-02     ENABLED  13104    0        -        -       -
sd c2t5d1-03    toi-3-02     ENABLED  13104    0        -        -       -
sd c2t5d2-03    toi-3-02     ENABLED  13104    0        -        -       -
sd c2t5d3-03    toi-3-02     ENABLED  13104    0        -        -       -
sd c2t5d4-03    toi-3-02     ENABLED  13104    0        -        -       -

v  toi-4        gen          ENABLED  61440    -        ACTIVE   -       -
pl toi-4-01     toi-4        DISABLED 65840    -        NODEVICE -       -
sd c1t5d0-04    toi-4-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d1-04    toi-4-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d2-04    toi-4-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d3-04    toi-4-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d4-04    toi-4-01     DISABLED 13104    0        NODEVICE -       -
pl toi-4-02     toi-4        ENABLED  65840    -        ACTIVE   -       -
sd c2t5d0-04    toi-4-02     ENABLED  13104    0        -        -       -
sd c2t5d1-04    toi-4-02     ENABLED  13104    0        -        -       -
sd c2t5d2-04    toi-4-02     ENABLED  13104    0        -        -       -
sd c2t5d3-04    toi-4-02     ENABLED  13104    0        -        -       -
sd c2t5d4-04    toi-4-02     ENABLED  13104    0        -        -       -

v  toi-5        gen          ENABLED  61440    -        ACTIVE   -       -
pl toi-5-01     toi-5        DISABLED 65840    -        NODEVICE -       -
sd c1t5d0-05    toi-5-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d1-05    toi-5-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d2-05    toi-5-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d3-05    toi-5-01     DISABLED 13104    0        NODEVICE -       -
sd c1t5d4-05    toi-5-01     DISABLED 13104    0        NODEVICE -       -
pl toi-5-02     toi-5        ENABLED  65840    -        ACTIVE   -       -
sd c2t5d0-05    toi-5-02     ENABLED  13104    0        -        -       -
sd c2t5d1-05    toi-5-02     ENABLED  13104    0        -        -       -
sd c2t5d2-05    toi-5-02     ENABLED  13104    0        -        -       -
sd c2t5d3-05    toi-5-02     ENABLED  13104    0        -        -       -
sd c2t5d4-05    toi-5-02     ENABLED  13104    0        -        -       -
# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s4     simple    c0t0d0s4     rootdg       online
c1t5d0s2     sliced    -            -            online
c1t5d1s2     sliced    -            -            online
c1t5d2s2     sliced    -            -            online
c1t5d3s2     sliced    -            -            online
c1t5d4s2     sliced    -            -            online
c2t5d0s2     sliced    c2t5d0       toi          online shared
c2t5d1s2     sliced    c2t5d1       toi          online shared
c2t5d2s2     sliced    c2t5d2       toi          online shared
c2t5d3s2     sliced    c2t5d3       toi          online shared
c2t5d4s2     sliced    c2t5d4       toi          online shared
-            -         c1t5d0       toi          failed was:c1t5d0s2
-            -         c1t5d1       toi          failed was:c1t5d1s2
-            -         c1t5d2       toi          failed was:c1t5d2s2
-            -         c1t5d3       toi          failed was:c1t5d3s2
-            -         c1t5d4       toi          failed was:c1t5d4s2

To reattach a detached disk, complete the following steps.

  1. Fix the condition that resulted in the problem.

    Be sure that the disks are spun up before proceeding further.

  2. Enter the following commands on both nodes in the cluster.

    In some cases, the drive(s) must be rediscovered by the node(s).


    # drvconfig
    # disks
    
  3. Enter the following commands on both nodes in the cluster.

    CVM must scan the current disk configuration again.


    # vxdctl enable
    # vxdisk -a online
    
  4. From the master node, repeat the following command for each disk that has been disconnected.

    The physical disk and the CVM access name for that disk must be reconnected.


    # vxdg -g disk_group_name -k adddisk medianame=accessname
    

    Note -

    The values for medianame and accessname can be obtained from the end of the vxdisk list command output.


    For the example configuration:


    # vxdg -g toi -k adddisk c1t5d0=c1t5d0s2
    # vxdg -g toi -k adddisk c1t5d1=c1t5d1s2
    # vxdg -g toi -k adddisk c1t5d2=c1t5d2s2
    # vxdg -g toi -k adddisk c1t5d3=c1t5d3s2
    # vxdg -g toi -k adddisk c1t5d4=c1t5d4s2
    
  5. From the master node, start volume recovery.


    # vxrecover -svc
    

    For the example configuration:


    # vxrecover -svc
    job 028125 dg toi volume toi-1: reattach plex toi-1-01
    job 028125 done status=0
    job 028126 dg toi volume toi-2: reattach plex toi-2-01
    job 028126 done status=0
    job 028127 dg toi volume toi-3: reattach plex toi-3-01
    job 028127 done status=0
    job 028129 dg toi volume toi-4: reattach plex toi-4-01
    job 028129 done status=0
    job 028130 dg toi volume toi-5: reattach plex toi-5-01
    job 028130 done status=0
  6. (Optional) Enter the vxprint -g command to see the changes.


    # vxprint -g toi
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    dg toi          toi          -        -        -        -        -       -
    
    dm c1t5d0       c1t5d0s2     -        2050272  -        -        -       -
    dm c1t5d1       c1t5d1s2     -        2050272  -        -        -       -
    dm c1t5d2       c1t5d2s2     -        2050272  -        -        -       -
    dm c1t5d3       c1t5d3s2     -        2050272  -        -        -       -
    dm c1t5d4       c1t5d4s2     -        2050272  -        -        -       -
    dm c2t5d0       c2t5d0s2     -        2050272  -        -        -       -
    dm c2t5d1       c2t5d1s2     -        2050272  -        -        -       -
    dm c2t5d2       c2t5d2s2     -        2050272  -        -        -       -
    dm c2t5d3       c2t5d3s2     -        2050272  -        -        -       -
    dm c2t5d4       c2t5d4s2     -        2050272  -        -        -       -
    
    v  toi-1        gen          ENABLED  61440    -        ACTIVE   -       -
    pl toi-1-01     toi-1        ENABLED  65840    -        ACTIVE   -       -
    sd c1t5d0-01    toi-1-01     ENABLED  13104    0        -        -       -
    sd c1t5d1-01    toi-1-01     ENABLED  13104    0        -        -       -
    sd c1t5d2-01    toi-1-01     ENABLED  13104    0        -        -       -
    sd c1t5d3-01    toi-1-01     ENABLED  13104    0        -        -       -
    sd c1t5d4-01    toi-1-01     ENABLED  13104    0        -        -       -
    pl toi-1-02     toi-1        ENABLED  65840    -        ACTIVE   -       -
    sd c2t5d0-01    toi-1-02     ENABLED  13104    0        -        -       -
    sd c2t5d1-01    toi-1-02     ENABLED  13104    0        -        -       -
    sd c2t5d2-01    toi-1-02     ENABLED  13104    0        -        -       -
    sd c2t5d3-01    toi-1-02     ENABLED  13104    0        -        -       -
    sd c2t5d4-01    toi-1-02     ENABLED  13104    0        -        -       -
    
    v  toi-2        gen          ENABLED  61440    -        ACTIVE   -       -
    pl toi-2-01     toi-2        ENABLED  65840    -        ACTIVE   -       -
    sd c1t5d0-02    toi-2-01     ENABLED  13104    0        -        -       -
    sd c1t5d1-02    toi-2-01     ENABLED  13104    0        -        -       -
    sd c1t5d2-02    toi-2-01     ENABLED  13104    0        -        -       -
    sd c1t5d3-02    toi-2-01     ENABLED  13104    0        -        -       -
    sd c1t5d4-02    toi-2-01     ENABLED  13104    0        -        -       -
    pl toi-2-02     toi-2        ENABLED  65840    -        ACTIVE   -       -
    sd c2t5d0-02    toi-2-02     ENABLED  13104    0        -        -       -
    sd c2t5d1-02    toi-2-02     ENABLED  13104    0        -        -       -
    sd c2t5d2-02    toi-2-02     ENABLED  13104    0        -        -       -
    sd c2t5d3-02    toi-2-02     ENABLED  13104    0        -        -       -
    sd c2t5d4-02    toi-2-02     ENABLED  13104    0        -        -       -
    
    v  toi-3        gen          ENABLED  61440    -        ACTIVE   -       -
    pl toi-3-01     toi-3        ENABLED  65840    -        ACTIVE   -       -
    sd c1t5d0-03    toi-3-01     ENABLED  13104    0        -        -       -
    sd c1t5d1-03    toi-3-01     ENABLED  13104    0        -        -       -
    sd c1t5d2-03    toi-3-01     ENABLED  13104    0        -        -       -
    sd c1t5d3-03    toi-3-01     ENABLED  13104    0        -        -       -
    sd c1t5d4-03    toi-3-01     ENABLED  13104    0        -        -       -
    pl toi-3-02     toi-3        ENABLED  65840    -        ACTIVE   -       -
    sd c2t5d0-03    toi-3-02     ENABLED  13104    0        -        -       -
    sd c2t5d1-03    toi-3-02     ENABLED  13104    0        -        -       -
    sd c2t5d2-03    toi-3-02     ENABLED  13104    0        -        -       -
    sd c2t5d3-03    toi-3-02     ENABLED  13104    0        -        -       -
    sd c2t5d4-03    toi-3-02     ENABLED  13104    0        -        -       -
    
    v  toi-4        gen          ENABLED  61440    -        ACTIVE   -       -
    pl toi-4-01     toi-4        ENABLED  65840    -        ACTIVE   -       -
    sd c1t5d0-04    toi-4-01     ENABLED  13104    0        -        -       -
    sd c1t5d1-04    toi-4-01     ENABLED  13104    0        -        -       -
    sd c1t5d2-04    toi-4-01     ENABLED  13104    0        -        -       -
    sd c1t5d3-04    toi-4-01     ENABLED  13104    0        -        -       -
    sd c1t5d4-04    toi-4-01     ENABLED  13104    0        -        -       -
    pl toi-4-02     toi-4        ENABLED  65840    -        ACTIVE   -       -
    sd c2t5d0-04    toi-4-02     ENABLED  13104    0        -        -       -
    sd c2t5d1-04    toi-4-02     ENABLED  13104    0        -        -       -
    sd c2t5d2-04    toi-4-02     ENABLED  13104    0        -        -       -
    sd c2t5d3-04    toi-4-02     ENABLED  13104    0        -        -       -
    sd c2t5d4-04    toi-4-02     ENABLED  13104    0        -        -       -
    
    v  toi-5        gen          ENABLED  61440    -        ACTIVE   -       -
    pl toi-5-01     toi-5        ENABLED  65840    -        ACTIVE   -       -
    sd c1t5d0-05    toi-5-01     ENABLED  13104    0        -        -       -
    sd c1t5d1-05    toi-5-01     ENABLED  13104    0        -        -       -
    sd c1t5d2-05    toi-5-01     ENABLED  13104    0        -        -       -
    sd c1t5d3-05    toi-5-01     ENABLED  13104    0        -        -       -
    sd c1t5d4-05    toi-5-01     ENABLED  13104    0        -        -       -
    pl toi-5-02     toi-5        ENABLED  65840    -        ACTIVE   -       -
    sd c2t5d0-05    toi-5-02     ENABLED  13104    0        -        -       -
    sd c2t5d1-05    toi-5-02     ENABLED  13104    0        -        -       -
    sd c2t5d2-05    toi-5-02     ENABLED  13104    0        -        -       -
    sd c2t5d3-05    toi-5-02     ENABLED  13104    0        -        -       -
    sd c2t5d4-05    toi-5-02     ENABLED  13104    0        -        -       -

3.12 Changing Permission or Ownership

To run applications such as Oracle Parallel Server, it may be necessary to change read/write permissions and ownership of the volumes. To change the permissions or ownership, use the vxedit command. vxedit will set the necessary fields in a CVM record.
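
For example, a hedged sketch of granting an Oracle instance access to the acct volume might look like the following. The user, group, and mode values are illustrative only; use the values your installation requires:


# vxedit -g acct set user=oracle group=dba mode=660 vol01

Unlike changes made with chmod or chgrp, the fields set by vxedit persist across reboots.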