Restoring Cluster Files

The ufsrestore(1M) command copies files to disk, relative to the current working directory, from backups created by using the ufsdump(1M) command. You can use ufsrestore to reload an entire file system hierarchy from a level 0 dump and the incremental dumps that follow it, or to restore one or more single files from any dump tape. If ufsrestore is run by superuser or by a user who has assumed an equivalent role, files are restored with their original owner, last modification time, and mode (permissions).
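
A minimal sketch of this dump/restore pairing follows, assuming a tape drive at /dev/rmt/0 and a root slice at /dev/rdsk/c0t0d0s0 (placeholder names; substitute your own devices):

[Create a level 0 dump of the root slice on tape:]
# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c0t0d0s0
[Restore files from that dump interactively:]
# ufsrestore ivf /dev/rmt/0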

Before you start to restore files or file systems, identify which procedure applies to your configuration. The following task map lists the restore procedures in this section.

Table 12-2 Task Map: Restoring Cluster Files

Task: For Solaris Volume Manager, restore files interactively
Instructions: How to Restore Individual Files Interactively (Solaris Volume Manager)

Task: For Solaris Volume Manager, restore the root (/) file system
Instructions: How to Restore the Root (/) File System (Solaris Volume Manager); How to Restore a Root (/) File System That Was on a Solaris Volume Manager Volume

Task: For Veritas Volume Manager, restore a nonencapsulated root (/) file system
Instructions: How to Restore a Nonencapsulated Root (/) File System (Veritas Volume Manager)

Task: For Veritas Volume Manager, restore an encapsulated root (/) file system
Instructions: How to Restore an Encapsulated Root (/) File System (Veritas Volume Manager)

How to Restore Individual Files Interactively (Solaris Volume Manager)

Use this procedure to restore one or more individual files. Ensure that the cluster is running without errors before performing the restore procedure.

  1. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node you are restoring.
  2. Stop all the data services that are using the files to be restored.
    # clresourcegroup offline resource-group
  3. Restore the files.
    # ufsrestore
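
The following sketch shows these steps with placeholder names (a resource group nfs-rg, a backup tape on /dev/rmt/0, and one file to recover); bringing the resource group back online afterward is implied, although not listed as a step above.

[Take the resource group offline:]
# clresourcegroup offline nfs-rg
[Change to the directory where the restored files belong:]
# cd /export
[Restore the file interactively from the dump:]
# ufsrestore ivf /dev/rmt/0
ufsrestore > add ./home/data.txt
ufsrestore > extract
ufsrestore > quit
[Bring the resource group back online:]
# clresourcegroup online nfs-rg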

How to Restore the Root (/) File System (Solaris Volume Manager)

Use this procedure to restore the root (/) file system to a new disk, such as after replacing a bad root disk. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note - Because you must partition the new disk by using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.
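
One way to capture that scheme, assuming the layout survives on the other half of the root mirror (the device names c1t0d0 and c0t0d0 are placeholders):

[Save the partition table from the surviving mirror half:]
# prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/root-disk.vtoc
[Replay the saved partition table onto the replacement disk:]
# fmthard -s /tmp/root-disk.vtoc /dev/rdsk/c0t0d0s2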


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
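
For example, the following two commands are identical; clrg is the short form of clresourcegroup:

# clresourcegroup offline resource-group
# clrg offline resource-group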

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the disksets to which the node to be restored is also attached.

    Use a node other than the node that you are restoring.

  2. Remove the hostname of the node being restored from all metasets.

    Run this command from a node in the metaset other than the node that you are removing. Because the recovering node is offline, the system will display an RPC: Rpcbind failure - RPC: Timed out error. Ignore this error and continue to the next step.

    # metaset -s setname -f -d -h nodelist
    -s setname

    Specifies the disk set name.

    -f

    Deletes the last host from the disk set.

    -d

    Deletes from the disk set.

    -h nodelist

    Specifies the name of the node to delete from the disk set.

  3. Restore the root (/) and /usr file systems.

    To restore the root and /usr file systems, follow the procedure in Chapter 26, Restoring UFS Files and File Systems (Tasks), in System Administration Guide: Devices and File Systems. Omit the step in the Oracle Solaris OS procedure to reboot the system.


    Note - Ensure that you create the /global/.devices/node@nodeid file system.


  4. Reboot the node in multiuser mode.
    # reboot
  5. Replace the device ID.
    # cldevice repair rootdisk
  6. Use the metadb(1M) command to recreate the state database replicas.
    # metadb -c copies -af raw-disk-device
    -c copies

    Specifies the number of replicas to create.

    -f raw-disk-device

    Raw disk device on which to create replicas.

    -a

    Adds replicas.

  7. From a cluster node other than the restored node, add the restored node to all disk sets.
    phys-schost-2# metaset -s setname -a -h nodelist
    -a

    Creates and adds the host to the disk set.

    The node is rebooted into cluster mode. The cluster is ready to use.

Example 12-6 Restoring the Root (/) File System (Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the disk set schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.

[Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node
    other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
Restore the root (/) and /usr file systems using the procedure in the Solaris system
    administration documentation
[Reboot:]
# reboot
[Replace the disk ID:]
# cldevice repair /dev/dsk/c0t0d0
[Re-create state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

How to Restore a Root (/) File System That Was on a Solaris Volume Manager Volume

Use this procedure to restore a root (/) file system that was on a Solaris Volume Manager volume when the backups were performed. Perform this procedure under circumstances such as when a root disk is corrupted and replaced with a new disk. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note - Because you must partition the new disk by using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the disk set, other than the node that you are restoring.

    Use a node other than the node that you are restoring.

  2. Remove the hostname of the node being restored from all disksets to which it is attached. Execute the following command once for each diskset.
    # metaset -s setname -d -h hostname
    -s setname

    Specifies the disk set name.

    -d

    Deletes from the disk set.

    -h hostname

    Specifies the name of the node to delete from the disk set.

  3. If the node is a dual-string mediator host, remove the mediator. Execute the following command once for each diskset to which the node is attached.
    # metaset -s setname -d -m hostname
    -m hostname

    Specifies the name of the mediator host to delete from the disk set.
  4. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  5. Boot the node that you are restoring. The repaired node is booted into single user mode from the CD-ROM, so Solaris Volume Manager is not running on the node.
    • If you are using the Oracle Solaris OS CD, note the following:

      • SPARC: Type:

        ok boot cdrom -s
      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. In the Current Boot Parameters screen, type b or i.

                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
    • If you are using a Solaris JumpStart server, note the following:

      • SPARC: Type:

        ok boot net -s
      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. In the Current Boot Parameters screen, type b or i.

                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
  6. Create all the partitions and swap space on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  7. Create the root (/) file system and other file systems as appropriate, by using the newfs command (a sketch follows the note below).

    Re-create the original file systems that were on the failed disk.


    Note - Ensure that you create the /global/.devices/node@nodeid file system.
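
    As a sketch, assuming root (/) was on slice 0 and /global/.devices/node@nodeid was on slice 6 of the boot disk (match the slices to the original layout):

    # newfs /dev/rdsk/c0t0d0s0
    # newfs /dev/rdsk/c0t0d0s6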


  8. Mount the root (/) file system on a temporary mount point.
    # mount device temp-mountpoint
  9. Use the following commands to restore the root (/) file system.
    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable

    The restoresymtable file, which ufsrestore creates to pass information between incremental restores, is no longer needed once the restore is complete.
  10. Install a new boot block on the new disk.
    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
  11. Remove the lines in the /temp-mountpoint/etc/system file for MDD root information.
    * Begin MDD root info (do not edit)
    forceload: misc/md_trans
    forceload: misc/md_raid
    forceload: misc/md_mirror
    forceload: misc/md_hotspares
    forceload: misc/md_stripe
    forceload: drv/pcipsy
    forceload: drv/glm
    forceload: drv/sd
    rootdev:/pseudo/md@0:0,10,blk
    * End MDD root info (do not edit)
  12. Edit the /temp-mountpoint/etc/vfstab file to change the root entry from a Solaris Volume Manager volume to a corresponding normal slice for each file system on the root disk that is part of the metadevice or volume.
    Example: 
    Change from—
    /dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
  13. Unmount the temporary file system, and check the raw disk device.
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
  14. Reboot the node in multiuser mode.
    # reboot
  15. Replace the device ID.
    # cldevice repair rootdisk
  16. Use the metadb command to recreate the state database replicas.
    # metadb -c copies -af raw-disk-device
    -c copies

    Specifies the number of replicas to create.

    -af raw-disk-device

    Creates initial state database replicas on the named raw disk device.

  17. From a cluster node other than the restored node, add the restored node to all disksets.
    phys-schost-2# metaset -s setname -a -h nodelist
    -a

    Adds the host to the disk set.

    Set up the volume/mirror for root (/) according to the documentation.

    The node is rebooted into cluster mode.

  18. If the node was a dual-string mediator host, re-add the mediator.
    phys-schost-2# metaset -s setname -a -m hostname 

Example 12-7 Restoring a Root (/) File System That Was on a Solaris Volume Manager Volume

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the metaset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.

[Become superuser or assume a role that provides solaris.cluster.modify RBAC
   authorization on a cluster node with access to the metaset, other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -d -h phys-schost-1
[Replace the failed disk and boot the node:]

Boot the node from the Oracle Solaris OS CD:

[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

[Remove the lines in the /temp-mountpoint/etc/system file for MDD root information:]
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
[Edit the /temp-mountpoint/etc/vfstab file]
Example: 
Change from—
/dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
[Unmount the temporary file system and check the raw disk device:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Reboot:]
# reboot
[Replace the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0
[Re-create state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

How to Restore a Nonencapsulated Root (/) File System (Veritas Volume Manager)

Use this procedure to restore a nonencapsulated root (/) file system to a node. The node being restored should not be booted. Ensure the cluster is running without errors before performing the restore procedure.


Note - Because you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  2. Boot the node that you are restoring.
    • If you are using the Oracle Solaris OS CD, at the OpenBoot PROM ok prompt, type the following command:

      ok boot cdrom -s
    • If you are using a Solaris JumpStart server, at the OpenBoot PROM ok prompt, type the following command:

      ok boot net -s
  3. Create all the partitions and swap space on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs command.

    Re-create the original file systems that were on the failed disk.


    Note - Ensure that you create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.
    # mount device temp-mountpoint
  6. Restore the root (/) file system from backup, and unmount and check the file system.
    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device

    The file system is now restored.

  7. Install a new boot block on the new disk.
    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
  8. Reboot the node in multiuser mode.
    # reboot
  9. Update the device ID.
    # cldevice repair /dev/rdsk/disk-device
  10. Press Control-d to resume in multiuser mode.

    The node reboots into cluster mode. The cluster is ready to use.

Example 12-8 Restoring a Nonencapsulated Root (/) File System (Veritas Volume Manager)

The following example shows a nonencapsulated root (/) file system that is restored to the node phys-schost-1 from the tape device /dev/rmt/0.

[Replace the failed disk and boot the node:]

Boot the node from the Oracle Solaris OS CD. At the OpenBoot PROM ok prompt, type the following command:

ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

[Reboot:]
# reboot
[Update the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0

How to Restore an Encapsulated Root (/) File System (Veritas Volume Manager)

Use this procedure to restore an encapsulated root (/) file system to a node. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note - Because you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  2. Boot the node that you are restoring.
    • If you are using the Oracle Solaris OS CD, at the OpenBoot PROM ok prompt, type the following command:

      ok boot cdrom -s
    • If you are using a Solaris JumpStart server, at the OpenBoot PROM ok prompt, type the following command:

      ok boot net -s
  3. Create all the partitions and swap space on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, by using the newfs command.

    Re-create the original file systems that were on the failed disk.


    Note - Ensure that you create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.
    # mount device temp-mountpoint
  6. Restore the root (/) file system from backup.
    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
  7. Create an empty install-db file.

    This file puts the node in VxVM installation mode at the next reboot.

    # touch \
    /temp-mountpoint/etc/vx/reconfig.d/state.d/install-db
  8. Remove the following entries from the /temp-mountpoint/etc/system file.
    * rootdev:/pseudo/vxio@0:0
    * set vxio:vol_rootdev_is_volume=1
  9. Edit the /temp-mountpoint/etc/vfstab file and replace all VxVM mount points with the standard disk devices for the root disk, such as /dev/dsk/c0t0d0s0.
    Example: 
    Change from—
    /dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
  10. Unmount the temporary file system and check the file system.
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
  11. Install the boot block on the new disk.
    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
  12. Reboot the node in multiuser mode.
    # reboot
  13. Update the device ID by using the cldevice command.
    # cldevice repair /dev/rdsk/c0t0d0
  14. Run the clvxvm encapsulate command to encapsulate the disk and reboot.
  15. If a conflict in minor number occurs with any other system, unmount the global devices and re-minor the disk group.
    • Unmount the global devices file system on the cluster node.

      # umount /global/.devices/node@nodeid
    • Re-minor the rootdg disk group on the cluster node.

      # vxdg reminor rootdg 100
  16. Shut down and reboot the node in cluster mode.
    # shutdown -g0 -i6 -y

Example 12-9 Restoring an Encapsulated Root (/) File System (Veritas Volume Manager)

The following example shows an encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.

[Replace the failed disk and boot the node:]

Boot the node from the Oracle Solaris OS CD. At the OpenBoot PROM ok prompt, type the following command:

ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Create an empty install-db file:]
# touch /a/etc/vx/reconfig.d/state.d/install-db
[Edit /etc/system on the temporary file system and 
remove or comment out the following entries:]
    # rootdev:/pseudo/vxio@0:0
    # set vxio:vol_rootdev_is_volume=1
[Edit /etc/vfstab on the temporary file system:]
Example: 
Change from—
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no       -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
[Unmount the temporary file system, then check the file system:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot:]
# reboot
[Update the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0
[Encapsulate the disk:]
# vxinstall
Choose to encapsulate the root disk.
[If a conflict in minor number occurs, reminor the rootdg disk group:]
# umount /global/.devices/node@nodeid
# vxdg reminor rootdg 100
# shutdown -g0 -i6 -y

See Also

For instructions about how to mirror the encapsulated root disk, see the Oracle Solaris Cluster Software Installation Guide.