This chapter discusses live migration of domains to other, identical computers. Live migration requires identical computers: the computer make and model number must be the same.
To perform live migration of domains, you must create a shared virtual disk before you perform the migration.
If you want to perform live migration of domains to other, identical computers, you must create a shared virtual disk to be used during the live migration. You can set up a shared virtual disk in any of the following configurations:
OCFS2 (Oracle Cluster File System) using the iSCSI (Internet SCSI) network protocol
OCFS2 using SAN (Storage Area Network)
NFS (Network File System)
You must make sure all Oracle VM Servers in the server pool:
Use the same shared storage.
Are in the same OCFS2 or NFS cluster.
This section discusses creating a shared virtual disk to use for live migration.
To create a shared virtual disk using OCFS2 on iSCSI:
Install the iscsi-initiator-utils RPM on the Oracle VM Server. The iscsi-initiator-utils RPM is available on the Oracle VM Server CDROM or ISO file.
# rpm -Uvh iscsi-initiator-utils-version.el5.i386.rpm
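You can optionally verify that the package installed correctly by querying the RPM database (this check is not part of the original procedure):
# rpm -q iscsi-initiator-utils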
Start the iSCSI service:
# service iscsi start
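If you also want the iSCSI service to start automatically at boot, on Enterprise Linux 5 systems this is typically done with chkconfig (an optional step, not required by this procedure):
# chkconfig iscsi on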
Run discovery on the iSCSI target. In this example, the target is 10.1.1.1:
# iscsiadm -m discovery -t sendtargets -p 10.1.1.1
This command returns output similar to:
10.1.1.1:3260,5 iqn.1992-04.com.emc:cx.apm00070202838.a2
10.1.1.1:3260,6 iqn.1992-04.com.emc:cx.apm00070202838.a3
10.2.1.250:3260,4 iqn.1992-04.com.emc:cx.apm00070202838.b1
10.1.0.249:3260,1 iqn.1992-04.com.emc:cx.apm00070202838.a0
10.1.1.249:3260,2 iqn.1992-04.com.emc:cx.apm00070202838.a1
10.2.0.250:3260,3 iqn.1992-04.com.emc:cx.apm00070202838.b0
Delete entries that you do not want to use, for example:
# iscsiadm -m node -p 10.2.0.250:3260,3 -T iqn.1992-04.com.emc:cx.apm00070202838.b0 -o delete
# iscsiadm -m node -p 10.1.0.249:3260,1 -T iqn.1992-04.com.emc:cx.apm00070202838.a0 -o delete
# iscsiadm -m node -p 10.2.1.250:3260,4 -T iqn.1992-04.com.emc:cx.apm00070202838.b1 -o delete
# iscsiadm -m node -p 10.1.1.249:3260,2 -T iqn.1992-04.com.emc:cx.apm00070202838.a1 -o delete
# iscsiadm -m node -p 10.1.1.1:3260,5 -T iqn.1992-04.com.emc:cx.apm00070202838.a2 -o delete
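If discovery returns many records, deleting them one at a time is tedious. The following is a rough sketch, not part of the original guide, of a loop that removes every discovered record except one target you choose to keep. The KEEP value is taken from the sample discovery output above, and the portal/target output format is the one shown by iscsiadm -m node:
KEEP=iqn.1992-04.com.emc:cx.apm00070202838.a3
iscsiadm -m node | while read portal target; do
    # Delete every record whose target name is not the one we keep.
    [ "$target" = "$KEEP" ] || iscsiadm -m node -p "$portal" -T "$target" -o delete
done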
Verify that only the iSCSI targets you want to use for the server pool are visible:
# iscsiadm -m node
Review the partitions by checking /proc/partitions:
# cat /proc/partitions
major minor  #blocks  name
   8     0  71687372  sda
   8     1    104391  sda1
   8     2  71577607  sda2
 253     0  70516736  dm-0
 253     1   1048576  dm-1
Restart the iSCSI service:
# service iscsi restart
Review the partitions by checking /proc/partitions. A new device is listed.
# cat /proc/partitions
major minor  #blocks  name
   8     0  71687372  sda
   8     1    104391  sda1
   8     2  71577607  sda2
 253     0  70516736  dm-0
 253     1   1048576  dm-1
   8    16   1048576  sdb
The new device, /dev/sdb in this example, can now be used. Review it with fdisk:
# fdisk -l /dev/sdb
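The format step later in this procedure operates on /dev/sdb1, so create a partition on the new device first. The following one-liner is a sketch that creates a single primary partition spanning the disk by piping answers to fdisk: n creates a new partition, p selects primary, 1 is the partition number, the two blank lines accept the default start and end cylinders, and w writes the table and exits. Run fdisk interactively instead if you prefer:
# printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb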
Create a new directory named /etc/ocfs2:
# mkdir /etc/ocfs2
Create the OCFS2 configuration file as /etc/ocfs2/cluster.conf. The file must be identical on every Oracle VM Server in the cluster. The following is a sample cluster.conf file:
node:
        ip_port = 7777
        ip_address = 10.1.1.1
        number = 0
        name = example1.com
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.1.1.2
        number = 1
        name = example2.com
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
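Because every node needs the same cluster.conf, copy the file to the other Oracle VM Servers in the cluster. For example, using the second node name from the sample file above:
# scp /etc/ocfs2/cluster.conf root@example2.com:/etc/ocfs2/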
Review the status of the OCFS2 cluster service:
# service o2cb status
Load the OCFS2 module:
# service o2cb load
Set the OCFS2 service to be online:
# service o2cb online
Configure the OCFS2 service to start automatically when the computer boots:
# service o2cb configure
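The configure command prompts you interactively for settings such as whether to load the driver on boot. As an alternative not described in this guide, o2cb is a standard init script, so it can also be enabled at boot with chkconfig:
# chkconfig o2cb on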
Start the OCFS2 service:
# service o2cb start
Format the shared virtual disk from any of the Oracle VM Servers in the cluster:
# mkfs.ocfs2 /dev/sdb1
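mkfs.ocfs2 also accepts options not shown in this guide; for example, -L assigns a volume label and -N sets the number of node slots. The values below are illustrative: the label ovsdisk is an arbitrary example, and the slot count matches the two-node sample cluster.conf:
# mkfs.ocfs2 -L ovsdisk -N 2 /dev/sdb1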
Mount the shared virtual disk from all the Oracle VM Servers in the cluster on /OVS/remote:
# mount -t ocfs2 /dev/sdb1 /OVS/remote
Add an entry to the /etc/fstab file so that the shared virtual disk is mounted at boot:
/dev/sdb1 /OVS/remote ocfs2 defaults 1 0
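To check the entry without rebooting, you can ask mount to process /etc/fstab and then confirm the mount point (an optional check, not part of the original procedure; filesystems that are already mounted are skipped):
# mount -a
# df -h /OVS/remote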
To create a shared virtual disk using OCFS2 on SAN:
Review the partitions by checking /proc/partitions:
# cat /proc/partitions
major minor  #blocks  name
   8     0  71687372  sda
   8     1    104391  sda1
   8     2  71577607  sda2
 253     0  70516736  dm-0
 253     1   1048576  dm-1
   8    16   1048576  sdb
Determine the shared disk volume you want to use. This example uses the new device, /dev/sdb.
Create a new directory named /etc/ocfs2:
# mkdir /etc/ocfs2
Create the OCFS2 configuration file as /etc/ocfs2/cluster.conf. The file must be identical on every Oracle VM Server in the cluster. The following is a sample cluster.conf file:
node:
        ip_port = 7777
        ip_address = 10.1.1.1
        number = 0
        name = example1.com
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.1.1.2
        number = 1
        name = example2.com
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
Review the status of the OCFS2 cluster service:
# service o2cb status
Load the OCFS2 module:
# service o2cb load
Set the OCFS2 service to be online:
# service o2cb online
Configure the OCFS2 service to start automatically when the computer boots:
# service o2cb configure
Start the OCFS2 service:
# service o2cb start
Format the shared virtual disk from any of the Oracle VM Servers in the cluster:
# mkfs.ocfs2 /dev/sdb
Mount the shared virtual disk from all the Oracle VM Servers in the cluster on /OVS/remote:
# mount -t ocfs2 /dev/sdb /OVS/remote
Add an entry to the /etc/fstab file so that the shared virtual disk is mounted at boot:
/dev/sdb /OVS/remote ocfs2 defaults 1 0
To add a shared virtual disk using NFS:
Find an NFS mount point to use. This example uses the mount point:
myfileserver:/vol/vol1/data/ovs
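Before adding the mount point to /etc/fstab, you can optionally confirm that the file server exports the volume. The showmount command ships with the NFS utilities and is not part of the original procedure:
# showmount -e myfileserver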
Add the following entry to the /etc/fstab file:
myfileserver:/vol/vol1/data/ovs /OVS/remote nfs defaults 0 0
Mount the shared virtual disk:
# mount /OVS/remote
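You can confirm the disk mounted correctly by listing the current mounts (an optional check):
# mount | grep /OVS/remote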
To migrate a domain from one computer to another identical computer:
Create a shared virtual disk to use during the domain migration. See Section 6.1, "Creating a Shared Virtual Disk for Live Migration". Each computer involved with the domain migration must have access to the shared virtual disk in the same way, either as an NFS or a SAN virtual disk.
On the Oracle VM Server that contains the existing domain, migrate the domain to the remote computer with the following command:
# xm migrate mydomain myremotecomputer
The domain is migrated to the remote computer.
To perform live migration of the domain, so that it keeps running while it moves, use the -l (live) option:
# xm migrate -l mydomain myremotecomputer
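You can confirm the migration completed by listing the domain on the remote computer; mydomain is the example domain name used above:
# xm list mydomain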