Sun Cluster 3.0 Installation Guide

Setting Up a rootdg Disk Group Overview

Each cluster node requires the creation of a rootdg disk group, which VxVM uses to store configuration information. This disk group is local to each node and is subject to several restrictions; refer to your VxVM installation documentation for more information.

Sun Cluster software supports several methods for configuring the rootdg disk group. This section describes how to configure it by encapsulating the node's root disk.

How to Encapsulate the Root Disk

Use this procedure to create a rootdg disk group by encapsulating the root disk.

  1. Have available the VERITAS Volume Manager (VxVM) license keys.

  2. Become superuser on a node in the cluster.

  3. Ensure that the root disk has at least two slices with 0 cylinders and one or more free cylinders at the end or beginning of the disk.

    If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice. If slice 7 was reserved for volume manager use, formatting slice 7 also frees the extra space needed at the end of the disk.
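
    One way to inspect the existing slice layout before running format(1M) is to print the disk's VTOC, as in the following sketch. The device name c0t0d0 is only an example; substitute your node's root disk.


    # prtvtoc /dev/rdsk/c0t0d0s2

    Slices with no space assigned are typically omitted from the partition map, and any remaining free space is reported as unallocated space in the output header.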

  4. Start the vxinstall(1M) utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • Choose Custom Installation.

    • Encapsulate the root disk.

    • Choose a name for the root disk that is unique throughout the cluster. A simple way to name the root disk is to add an extra letter to the default name.

      For example, if the default name given is rootdisk, name the root disk rootdiska on one node, rootdiskb on the next node, and so forth.

    • Do not add any other disks to the rootdg disk group.

    • For any other controllers, choose 4 (Leave these disks alone).

    • Do not accept the shutdown and reboot that vxinstall offers; the cluster is shut down later in this procedure.

    Refer to the VxVM installation documentation for details.


    Note -

    Because Dynamic Multipathing (DMP) is disabled, an error message similar to the following might be generated. You can safely ignore it.



    vxvm:vxdmpadm: ERROR: vxdmp module is not loaded on the system. Command invalid.
  5. Edit the device names in the /etc/vfstab entry for the /global/.devices/node@nodeid file system.


    Note -

    You need to make this modification in order for VxVM to recognize that the /global/.devices/node@nodeid file system is on the root disk.


    Replace the existing device names with the names used in the /globaldevices entry, which is commented out. For example, consider the following /etc/vfstab file entries for /globaldevices and /global/.devices/node@2.


    #device            device             mount         FS   fsck  mount   mount
    #to mount          to fsck            point         type pass  at boot options
    ...
    #/dev/dsk/c1t3d0s3 /dev/rdsk/c1t3d0s3 /globaldevices ufs 2     yes     -
    ...
    /dev/did/dsk/d4s3  /dev/did/rdsk/d4s3 /global/.devices/node@2 ufs 2 no global

    You would change the /global/.devices/node@2 entry to the following.


    #device            device             mount         FS   fsck  mount   mount
    #to mount          to fsck            point         type pass  at boot options
    ...
    #/dev/dsk/c1t3d0s3 /dev/rdsk/c1t3d0s3 /globaldevices ufs 2     yes     -
    ...
    /dev/dsk/c1t3d0s3  /dev/rdsk/c1t3d0s3 /global/.devices/node@2 ufs 2 no global
  6. Repeat Step 2 through Step 5 on each node of the cluster.

  7. From one node, use the scshutdown(1M) command to shut down the cluster.


    # scshutdown
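
    If you want to shut down immediately and skip the confirmation prompt, scshutdown(1M) accepts options similar to those of shutdown(1M); the zero-second grace period shown here is only an example.

    # scshutdown -g0 -y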
    
  8. Reboot each node in non-cluster mode.

    1. Run the following command on each node to reboot in non-cluster mode.


      ok boot -x
      

      Note -

      Do not reboot the node in cluster mode.


    2. If a node displays a message similar to the following, press Control-D to continue the boot.

      Ignore the instruction to run fsck manually. Instead, press Control-D to continue with the boot and complete the remaining root disk encapsulation procedures.


      WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
      Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdiska3vol). Exit
      the shell when done to continue the boot process.
       
      Type control-d to proceed with normal startup,
      (or give root password for system maintenance): 

      The /global/.devices/node@nodeid file system still requires additional changes before the cluster can globally mount it on each node. Because of this requirement, all but one node will fail to mount the /global/.devices/node@nodeid file system during this reboot, resulting in a warning message.

    VxVM encapsulates the root disk and updates the /etc/vfstab entries.
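
    To confirm that a node's root disk was encapsulated, you can check that the boot entries in /etc/vfstab now reference VxVM volumes; this quick check is a sketch and is not part of the original procedure.

    # grep /dev/vx /etc/vfstab

    The entries for the root (/) file system, swap, and the /global/.devices/node@nodeid file system should now reference /dev/vx device paths such as /dev/vx/dsk/rootvol.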

  9. Unmount the /global/.devices/node@nodeid file system that successfully mounted in Step 8.


    # umount /global/.devices/node@nodeid
    

    Unmounting this file system enables you to reminor the disk group during Step 10 without needing to reboot the node twice to initialize the change. This file system is automatically remounted when you reboot during Step 14.
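
    If you are not sure which node has the file system mounted, a quick way to check from each node is to search the list of mounted file systems; the grep pattern is only an illustration.

    # mount | grep /global/.devices

    Run the umount command on the node where the file system appears in this output.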

  10. Re-minor the rootdg disk group on each node of the cluster.

    Specify a rootdg minor number that is unique throughout the cluster and smaller than 1000 to prevent minor number conflicts with shared disk groups. An effective re-minoring scheme is to assign 100 on the first node, 200 on the second, and so forth.


    # vxdg reminor rootdg n
    

    n

    Specifies the rootdg minor number

    After executing this command, warning messages similar to the following might be displayed. You can safely ignore them.


    vxvm:vxdg: WARNING: Volume swapvol: Device is open, will renumber on reboot

    The new minor number is applied to the root disk volumes. The swap volume is renumbered after you reboot.


    # ls -l /dev/vx/dsk/rootdg
    total 0
    brw------- 1 root       root    55,100 Apr  4 10:48 rootdiska3vol
    brw------- 1 root       root    55,101 Apr  4 10:48 rootdiska7vol
    brw------- 1 root       root    55,  0 Mar 30 16:37 rootvol
    brw------- 1 root       root    55,  7 Mar 30 16:37 swapvol
  11. On each node of the cluster, if the /usr file system is not collocated with the root (/) file system on the root disk, manually update the device nodes for the /usr volume.

    1. Remove existing /usr device nodes.


      # rm /dev/vx/dsk/usr
      # rm /dev/vx/dsk/rootdg/usr
      # rm /dev/vx/rdsk/usr
      # rm /dev/vx/rdsk/rootdg/usr
      
    2. Determine the new minor number assigned to the /usr file system.


      # vxprint -l -v usrvol
      Disk group: rootdg Volume:   usrvol
      ...
      device:   minor=102 bdev=55/102 cdev=55/102 path=/dev/vx/dsk/rootdg/usrvol
    3. Create new /usr device nodes by using the new minor number.


      # mknod /dev/vx/dsk/usr b major_number new-minor-number
      # mknod /dev/vx/dsk/rootdg/usr b major_number new-minor-number
      # mknod /dev/vx/rdsk/usr c major_number new-minor-number
      # mknod /dev/vx/rdsk/rootdg/usr c major_number new-minor-number
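
      The major_number placeholder is the major device number of the VxVM vxio driver, which also appears as the first number in the ls -l output from Step 10 (55 in that example). If you need to confirm it, one way is to look it up in /etc/name_to_major, as in the following sketch; the value shown is only an example.

      # grep vxio /etc/name_to_major
      vxio 55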
      
  12. On each node of the cluster, if the /var file system is not collocated with the root (/) file system on the root disk, manually update the device nodes for the /var volume.

    1. Remove existing /var device nodes.


      # rm /dev/vx/dsk/var
      # rm /dev/vx/dsk/rootdg/var
      # rm /dev/vx/rdsk/var
      # rm /dev/vx/rdsk/rootdg/var
      
    2. Determine the new minor number assigned to the /var file system.


      # vxprint -l -v varvol
      Disk group: rootdg Volume:   varvol
      ...
      device:   minor=103 bdev=55/103 cdev=55/103 path=/dev/vx/dsk/rootdg/varvol
    3. Create new /var device nodes by using the new minor number.


      # mknod /dev/vx/dsk/var b major_number new-minor-number
      # mknod /dev/vx/dsk/rootdg/var b major_number new-minor-number
      # mknod /dev/vx/rdsk/var c major_number new-minor-number
      # mknod /dev/vx/rdsk/rootdg/var c major_number new-minor-number
      
  13. From one node, shut down the cluster.


    # scshutdown
    
  14. Reboot each node into cluster mode.


    ok boot
    
  15. (Optional) Mirror the root disk on each node of the cluster.

    Refer to your VxVM documentation for instructions on mirroring root.

  16. If you mirrored the root disk, on each node of the cluster enable the localonly property of the raw disk device group associated with the disk used to mirror that node's root disk.

    For each node, configure a different raw disk device group, which will be used exclusively by that node to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.


    # scconf -c -D name=rawdisk_groupname,localonly=true
    
    -D name=rawdisk_groupname

    Specifies the cluster-unique name of the raw disk device group

    Use the scdidadm -L command to display the full device ID (DID) pseudo-driver name of the raw disk device group. In the following example, the raw disk device group name dsk/d1 is extracted from the third column of output, which is the full DID pseudo-driver name. The scconf command then configures the dsk/d1 raw disk device group to be used exclusively by the node phys-schost-3 to mirror its root disk.


    # scdidadm -L
    ...
    1         phys-schost-3:/dev/rdsk/c0t0d0     /dev/did/rdsk/d1
    phys-schost-3# scconf -c -D name=dsk/d1,localonly=true
    

    For more information about the localonly property, refer to the scconf_dg_rawdisk(1M) man page.
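
    To verify the setting, you can list the verbose cluster configuration and search for the device group; the group name dsk/d1 is only an example, and the exact output format might differ.

    # scconf -pvv | grep dsk/d1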

Where to Go From Here

To create shared disk groups, go to "How to Create and Register a Shared Disk Group".