Oracle Solaris Cluster Software Installation Guide
This section describes how to unencapsulate the root disk in an Oracle Solaris Cluster configuration. Perform this procedure to remove Veritas Volume Manager (VxVM) encapsulation from the root disk.
Before You Begin
Perform the following tasks:
Ensure that only Solaris root file systems are present on the root disk. The Solaris root file systems are root (/), swap, the global devices namespace, /usr, /var, /opt, and /home.
Back up and remove any file systems other than Solaris root file systems that reside on the root disk.
Evacuate all resource groups and device groups from the node:

phys-schost# clnode evacuate from-node

from-node    Specifies the name of the node from which to move resource groups or device groups.
Determine the node ID number of the node:

phys-schost# clinfo -n
Unmount the global-devices file system, where N is the node ID number returned by the clinfo -n command:

phys-schost# umount /global/.devices/node@N
In the /etc/vfstab file, comment out the global-devices entry that corresponds to the root-disk encapsulation:

phys-schost# vi /etc/vfstab
#device         device          mount   FS      fsck    mount   mount
#to mount       to fsck         point   type    pass    at boot options
#
#NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated
#partition cNtXdYsZ
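The same comment-out edit can be scripted instead of done in vi. This is a minimal sketch against a stand-in file; the volume name rootdisk2vol and node number 2 are hypothetical examples, and on a real node you would operate on /etc/vfstab itself after backing it up.

```shell
# Stand-in for /etc/vfstab; rootdisk2vol and node@2 are hypothetical.
vfstab=$(mktemp)
printf '%s\n' \
  '/dev/vx/dsk/rootdisk2vol /dev/vx/rdsk/rootdisk2vol /global/.devices/node@2 ufs 2 no global' \
  > "$vfstab"
# Prefix the encapsulated global-devices line with '#' so it is
# skipped at boot.
sed 's|^/dev/vx/dsk/rootdisk2vol|#&|' "$vfstab" > "$vfstab.new" \
  && mv "$vfstab.new" "$vfstab"
cat "$vfstab"
```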
Remove the rootdiskxNvol volume from the root disk group:

phys-schost# vxedit -g rootdiskgroup -rf rm rootdiskxNvol
Unencapsulate the root disk.

Note - Do not accept a shutdown request from the command.

See your VxVM documentation for details.
Create a file system on a slice of the root disk to use for the global-devices file system.

Tip - Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.

phys-schost# newfs /dev/rdsk/cNtXdYsZ
Determine the DID name of the root disk:

phys-schost# cldevice list cNtXdY
dN
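If you want to capture the DID name in a script, the cldevice list output can be parsed with awk. In this sketch the printf line stands in for real cldevice output, and c1t0d0 and d3 are hypothetical values:

```shell
# Extract the DID device name (second column) from stand-in
# cldevice list output; c1t0d0 and d3 are hypothetical.
did=$(printf 'c1t0d0  d3\n' | awk '{print $2}')
echo "DID device: $did"
```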
The original entry would look similar to the following.
phys-schost# vi /etc/vfstab
/dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global
The revised entry that uses the DID path would look similar to the following.
/dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global
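The path substitution above can also be expressed as a pair of sed edits. This sketch runs against a stand-in file; the DID device d3, slice s3, volume name rootdisk2vol, and node number 2 are hypothetical, so substitute the values that cldevice list reported for your root disk:

```shell
# Stand-in for /etc/vfstab; rootdisk2vol, d3s3, and node@2 are
# hypothetical example values.
vfstab=$(mktemp)
printf '%s\n' \
  '/dev/vx/dsk/rootdisk2vol /dev/vx/rdsk/rootdisk2vol /global/.devices/node@2 ufs 2 no global' \
  > "$vfstab"
# Replace the block and raw VxVM volume paths with the DID paths.
sed -e 's|/dev/vx/dsk/rootdisk2vol|/dev/did/dsk/d3s3|' \
    -e 's|/dev/vx/rdsk/rootdisk2vol|/dev/did/rdsk/d3s3|' \
    "$vfstab" > "$vfstab.new" && mv "$vfstab.new" "$vfstab"
cat "$vfstab"
```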
Remount the global-devices file system:

phys-schost# mount /global/.devices/node@N
From one node of the cluster, repopulate the global-devices namespace:

phys-schost# cldevice populate

VxVM devices are recreated during the next reboot.
The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
phys-schost# ps -ef | grep scgdevs
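Rather than repeating the ps | grep check by hand, the wait can be scripted. This sketch swaps in pgrep for the ps pipeline and polls until no scgdevs process remains; the 5-second interval is an arbitrary choice:

```shell
# Poll until the named process has exited; returns immediately if
# it is not running.
wait_for_exit() {   # usage: wait_for_exit <process-name>
    while pgrep -x "$1" >/dev/null 2>&1; do
        sleep 5
    done
}
wait_for_exit scgdevs && echo 'scgdevs processing complete'
```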
After scgdevs processing is finished on each node, reboot the node:

phys-schost# shutdown -g0 -y -i6