Oracle Solaris Cluster Software Installation Guide
1. Planning the Oracle Solaris Cluster Configuration
2. Installing Software on Global-Cluster Nodes
3. Establishing the Global Cluster
4. Configuring Solaris Volume Manager Software
5. Installing and Configuring Veritas Volume Manager
Installing and Configuring VxVM Software
Setting Up a Root Disk Group Overview
How to Install Veritas Volume Manager Software
SPARC: How to Encapsulate the Root Disk
How to Create a Root Disk Group on a Nonroot Disk
How to Mirror the Encapsulated Root Disk
Creating Disk Groups in a Cluster
How to Assign a New Minor Number to a Device Group
How to Verify the Disk Group Configuration
6. Creating a Cluster File System
7. Creating Non-Global Zones and Zone Clusters
8. Installing the Oracle Solaris Cluster Module to Sun Management Center
9. Uninstalling Software From the Cluster
A. Oracle Solaris Cluster Installation and Configuration Worksheets
How to Unencapsulate the Root Disk
Perform this procedure to unencapsulate the root disk in an Oracle Solaris Cluster configuration.
Before You Begin
Perform the following tasks:
Ensure that only Solaris root file systems are present on the root disk. The Solaris root file systems are root (/), swap, the global devices namespace, /usr, /var, /opt, and /home.
Back up and remove any file systems other than Solaris root file systems that reside on the root disk.
Move all resource groups and device groups off the node:
phys-schost# clnode evacuate from-node
from-node
Specifies the name of the node from which to move resource or device groups.
Determine the node ID number of the node:
phys-schost# clinfo -n
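The node ID number returned by clinfo -n is the N value used in the /global/.devices/node@N paths throughout this procedure. As a hypothetical sketch (the node ID 3 is an example value; on a cluster node you would substitute the clinfo -n output):

```shell
# Hypothetical sketch: build the global-devices mount point from the node ID.
# On a cluster node you would capture the real value with: N=$(clinfo -n)
N=3                                       # example node ID
MOUNT_POINT="/global/.devices/node@${N}"
echo "${MOUNT_POINT}"
```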
Unmount the global-devices file system, /global/.devices/node@N, where N is the node ID number:
phys-schost# umount /global/.devices/node@N
In the /etc/vfstab file, locate the entry for the global-devices file system. The entry was commented out when the root disk was encapsulated:
phys-schost# vi /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
#NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated
#partition cNtXdYsZ
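The "#partition" comment that encapsulation leaves in /etc/vfstab records the original root-disk slice, which is needed again in later steps. A minimal sketch, assuming the comment format shown above, that extracts it:

```shell
# Hypothetical sketch: pull the original root-disk slice name out of the
# "#partition cNtXdYsZ" comment that encapsulation added to /etc/vfstab.
awk '/^#partition/ { print $2 }' /etc/vfstab
```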
Remove the VxVM volume that corresponds to the global-devices namespace:
phys-schost# vxedit -g rootdiskgroup -rf rm rootdiskxNvol
rootdiskgroup
Specifies the name of the root disk group.
rootdiskxNvol
Specifies the name of the VxVM volume that contains the global-devices namespace.
Unencapsulate the root disk.
Note - Do not accept the shutdown request from the command.
phys-schost# /etc/vx/bin/vxunroot
See your VxVM documentation for details.
Create a file system on the slice to use for the global-devices file system.
Tip - Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.
phys-schost# newfs /dev/rdsk/cNtXdYsZ
Determine the DID name of the root disk:
phys-schost# cldevice list cNtXdY
dN
In the /etc/vfstab file, replace the VxVM device paths in the /global/.devices/node@N entry with the corresponding DID paths. The original entry would look similar to the following:
phys-schost# vi /etc/vfstab
/dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global
The revised entry that uses the DID path would look similar to the following:
/dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global
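The vfstab edit above can also be scripted. A hedged sketch, where node ID 3, volume name rootdisk3vol, and DID slice d4s3 are example values to be replaced with the clinfo -n and cldevice list output for your node (Solaris sed has no in-place option, so the result is written to a new file for review):

```shell
# Hypothetical sketch: rewrite the global-devices vfstab entry to use the DID
# paths. rootdisk3vol and d4s3 are example values; substitute your own.
sed 's|/dev/vx/dsk/rootdisk3vol /dev/vx/rdsk/rootdisk3vol|/dev/did/dsk/d4s3 /dev/did/rdsk/d4s3|' \
    /etc/vfstab > /etc/vfstab.new
# Review /etc/vfstab.new, then move it into place:
# mv /etc/vfstab.new /etc/vfstab
```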
Mount the global-devices file system:
phys-schost# mount /global/.devices/node@N
Populate the global-devices namespace:
phys-schost# cldevice populate
VxVM devices are recreated during the next reboot.
The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
phys-schost# ps -ef | grep scgdevs
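The check above can be wrapped in a simple polling loop. A hypothetical sketch, not part of the documented procedure, that waits on one node until no scgdevs process remains:

```shell
# Hypothetical sketch: poll until no scgdevs process is running, which
# indicates that cldevice populate has finished processing on this node.
while pgrep scgdevs > /dev/null 2>&1; do
    sleep 5
done
echo "scgdevs has completed on this node"
```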
After the cldevice populate command has completed processing on all nodes, reboot the node:
phys-schost# shutdown -g0 -y -i6