Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 3.3 3/13)
12. Backing Up and Restoring a Cluster
Table 12-1 Task Map: Backing Up Cluster Files
Use this procedure to determine the names of the file systems that you want to back up.
You do not need to be superuser or assume an equivalent role to run this command.
# more /etc/vfstab
Look in the mount-point column for the name of the file system that you want to back up. Use this name when you back up the file system.
Example 12-1 Finding File System Names to Back Up
The following example displays the names of available file systems that are listed in the /etc/vfstab file.
# more /etc/vfstab
#device             device              mount    FS     fsck  mount    mount
#to mount           to fsck             point    type   pass  at boot  options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2    /usr     ufs    1     yes      -
f                   -                   /dev/fd  fd     -     no       -
/proc               -                   /proc    proc   -     no       -
/dev/dsk/c1t6d0s1   -                   -        swap   -     no       -
/dev/dsk/c1t6d0s0   /dev/rdsk/c1t6d0s0  /        ufs    1     no       -
/dev/dsk/c1t6d0s3   /dev/rdsk/c1t6d0s3  /cache   ufs    2     yes      -
swap                -                   /tmp     tmpfs  -     yes      -
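The mount-point lookup described above can also be scripted. The following sketch uses awk to print the mount points of UFS file systems from a vfstab-style file; the sample file path /tmp/vfstab.sample is an assumption for illustration, and its entries are copied from Example 12-1 (on a live system you would read /etc/vfstab directly).

```shell
#!/bin/sh
# List the mount points of UFS file systems from a vfstab-style file.
# The sample below reproduces entries from Example 12-1; this is an
# illustrative stand-in for reading /etc/vfstab on a live system.
cat > /tmp/vfstab.sample <<'EOF'
#device         device          mount   FS      fsck    mount   mount
/dev/dsk/c1t6d0s0 /dev/rdsk/c1t6d0s0 / ufs 1 no -
/dev/dsk/c1t6d0s3 /dev/rdsk/c1t6d0s3 /cache ufs 2 yes -
swap - /tmp tmpfs - yes -
EOF

# Field 3 is the mount point, field 4 the file-system type;
# skip comment lines and keep only ufs entries.
awk '$1 !~ /^#/ && $4 == "ufs" { print $3 }' /tmp/vfstab.sample
```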
Use this procedure to calculate the number of tapes that you need to back up a file system.
# ufsdump S filesystem

S
Displays the estimated number of bytes needed to perform the backup.

filesystem
Specifies the name of the file system that you want to back up.
Example 12-2 Determining the Number of Tapes Needed
In the following example, the file system size of 905,881,620 bytes easily fits on a 4-Gbyte tape (905,881,620 ÷ 4,000,000,000).
# ufsdump S /global/phys-schost-1
905881620
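The tape-count arithmetic above can be scripted as a ceiling division of the ufsdump estimate by the tape capacity. In the following sketch, the byte count is the figure from Example 12-2 and the 4-Gbyte tape capacity is an assumption; on a live system you would substitute the output of ufsdump S for the file system in question.

```shell
#!/bin/sh
# Estimate how many tapes a full backup needs.
# BYTES would normally come from `ufsdump S filesystem`; the value
# below is the estimate shown in Example 12-2 (illustrative).
BYTES=905881620
TAPE_CAPACITY=4000000000   # 4-Gbyte tape; assumed capacity

# Integer ceiling division: (BYTES + CAPACITY - 1) / CAPACITY
TAPES=$(( (BYTES + TAPE_CAPACITY - 1) / TAPE_CAPACITY ))
echo "Tapes needed: $TAPES"
```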
Use this procedure to back up the root (/) file system of a cluster node. Ensure that the cluster is running without errors before performing the backup procedure.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clnode evacuate node

node
Specifies the node from which you are switching resource groups and device groups.
# shutdown -g0 -y -i0
On SPARC based systems, run the following command.
ok boot -xs
On x86 based systems, run the following commands.
phys-schost# shutdown -g0 -y -i0
Press any key to continue
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Verify that the file system has adequate disk space for the backing-store file.
# df -k
Verify that a backing-store file of the same name and location does not already exist.
# ls /backing-store-file
Create the snapshot of the file system.
# fssnap -F ufs -o bs=/backing-store-file /file-system
Verify that the snapshot has been created.
# /usr/lib/fs/ufs/fssnap -i /file-system
Back up the file-system snapshot.
# ufsdump 0ucf /dev/rmt/0 snapshot-name
For example:
# ufsdump 0ucf /dev/rmt/0 /dev/rfssnap/1
Verify that the snapshot is backed up.
# ufsrestore ta /dev/rmt/0
Reboot the node in cluster mode.
# init 6
Example 12-3 Backing Up the Root (/) File System
In the following example, a snapshot of the root (/) file system is saved to /scratch/usr.back.file in the /usr directory.
# fssnap -F ufs -o bs=/scratch/usr.back.file /usr
/dev/fssnap/1
A mirrored Solaris Volume Manager volume can be backed up without unmounting it or taking the entire mirror offline. One of the submirrors must be taken offline temporarily, thus losing mirroring, but it can be placed online and resynchronized as soon as the backup is complete, without halting the system or denying user access to the data. Using mirrors to perform online backups creates a backup that is a “snapshot” of an active file system.
A problem might occur if a program writes data onto the volume immediately before the lockfs command is run. To prevent this problem, temporarily stop all the services running on this node. Also, ensure the cluster is running without errors before performing the backup procedure.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Determine which node owns the mirrored volume.

# metaset -s setname

-s setname
Specifies the disk set name.

Lock the file system from writes.

# lockfs -w mountpoint
Note - You must lock the file system only if a UFS file system resides on the mirror. For example, if the Solaris Volume Manager volume is set up as a raw device for database management software or some other specific application, you do not need to use the lockfs command. You might, however, run the appropriate vendor-dependent utility to flush any buffers and lock access.
List the submirrors.

# metastat -s setname -p

-p
Displays the status in a format similar to the md.tab file.

Take one submirror offline.

# metadetach -s setname mirror submirror
Note - Reads continue to be made from the other submirrors. However, the offline submirror is unsynchronized as soon as the first write is made to the mirror. This inconsistency is corrected when the offline submirror is brought back online. You do not need to run fsck.
Unlock the file system.

# lockfs -u mountpoint

Check the file system.

# fsck /dev/md/diskset/rdsk/submirror
Use the ufsdump(1M) command or the backup utility that you usually use.
# ufsdump 0ucf dump-device submirror
Note - Use the raw device (/rdsk) name for the submirror, rather than the block device (/dsk) name.
Bring the submirror back online.

# metattach -s setname mirror submirror
When the metadevice or volume is placed online, it is automatically resynchronized with the mirror.
Verify the progress of the resynchronization.

# metastat -s setname mirror
Example 12-4 Performing Online Backups for Mirrors (Solaris Volume Manager)
In the following example, the cluster node phys-schost-1 is the owner of the metaset schost-1; therefore, the backup procedure is performed from phys-schost-1. The mirror /dev/md/schost-1/dsk/d0 consists of the submirrors d10, d20, and d30.
[Determine the owner of the metaset:]
# metaset -s schost-1
Set name = schost-1, Set number = 1

Host                Owner
  phys-schost-1     Yes
...
[Lock the file system from writes:]
# lockfs -w /global/schost-1
[List the submirrors:]
# metastat -s schost-1 -p
schost-1/d0 -m schost-1/d10 schost-1/d20 schost-1/d30 1
schost-1/d10 1 1 d4s0
schost-1/d20 1 1 d6s0
schost-1/d30 1 1 d8s0
[Take a submirror offline:]
# metadetach -s schost-1 d0 d30
[Unlock the file system:]
# lockfs -u /
[Check the file system:]
# fsck /dev/md/schost-1/rdsk/d30
[Copy the submirror to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/md/schost-1/rdsk/d30
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/schost-1/rdsk/d30 to /dev/rdsk/c1t9d0s0.
  ...
  DUMP: DUMP IS DONE
[Bring the submirror back online:]
# metattach -s schost-1 d0 d30
schost-1/d0: submirror schost-1/d30 is attached
[Resynchronize the submirror:]
# metastat -s schost-1 d0
schost-1/d0: Mirror
    Submirror 0: schost-0/d10
      State: Okay
    Submirror 1: schost-0/d20
      State: Okay
    Submirror 2: schost-0/d30
      State: Resyncing
    Resync in progress: 42% done
    Pass: 1
    Read option: roundrobin (default)
...
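The command sequence in Example 12-4 can be collected into a script. The following sketch is illustrative only: the set, mirror, submirror, and mount-point names are assumptions taken from the example, and the DRYRUN flag echoes each command instead of running it, since metadetach and related commands require a live Solaris Volume Manager configuration on a cluster node.

```shell
#!/bin/sh
# Sketch of the online-mirror-backup sequence from this section.
# SET, MIRROR, SUBMIRROR, MOUNTPOINT, and DUMPDEV are illustrative
# assumptions matching Example 12-4; adjust them for your configuration.
SET=schost-1
MIRROR=d0
SUBMIRROR=d30
MOUNTPOINT=/global/schost-1
DUMPDEV=/dev/rmt/0
DRYRUN=1    # set to 0 on a real cluster node

# In dry-run mode, print the command; otherwise run it and stop on error.
run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "+ $*"
    else
        "$@" || exit 1
    fi
}

run metaset -s "$SET"                                       # confirm ownership
run lockfs -w "$MOUNTPOINT"                                 # lock writes
run metadetach -s "$SET" "$MIRROR" "$SUBMIRROR"             # submirror offline
run lockfs -u "$MOUNTPOINT"                                 # unlock writes
run fsck "/dev/md/$SET/rdsk/$SUBMIRROR"                     # check file system
run ufsdump 0ucf "$DUMPDEV" "/dev/md/$SET/rdsk/$SUBMIRROR"  # back up (raw device)
run metattach -s "$SET" "$MIRROR" "$SUBMIRROR"              # back online
```

Note that the ufsdump step uses the raw (/rdsk) device name for the submirror, as the procedure above requires.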
To ensure that your cluster configuration is archived and to facilitate easy recovery of your cluster configuration, periodically back up your cluster configuration. Oracle Solaris Cluster provides the ability to export your cluster configuration to an eXtensible Markup Language (XML) file.
Export the cluster configuration information to an XML file.

# /usr/cluster/bin/cluster export -o configfile

configfile
The name of the XML configuration file that the cluster command is exporting the cluster configuration information to. For information about the XML configuration file, see clconfiguration(5CL).
View the configuration file to verify its contents.

# vi configfile
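Periodic backups of the exported configuration can be automated. The sketch below is a minimal example under stated assumptions: the backup directory, file-name pattern, and retention count are all illustrative, and the cluster export line is commented out (replaced by a placeholder file) so the date-stamping and rotation logic can be exercised off-cluster.

```shell
#!/bin/sh
# Sketch: keep date-stamped copies of the exported cluster configuration.
# BACKUP_DIR and KEEP are illustrative assumptions.
BACKUP_DIR=/tmp/clconfig-backups
KEEP=5
mkdir -p "$BACKUP_DIR"
OUTFILE="$BACKUP_DIR/cluster-config-$(date +%Y%m%d%H%M%S).xml"

# On a cluster node you would run the export itself:
# /usr/cluster/bin/cluster export -o "$OUTFILE"
: > "$OUTFILE"    # placeholder file so the rotation below can be tested

# Delete all but the $KEEP newest backups.
ls -1t "$BACKUP_DIR"/cluster-config-*.xml 2>/dev/null |
    tail -n +$(( KEEP + 1 )) |
    while read -r old; do rm -f "$old"; done
```

Running this from cron (for example, weekly) keeps a bounded history of configuration snapshots to restore from.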