In this procedure, several vdisks are added to the target guest domain to support the incoming zones and the application data.
As you perform this procedure, take into account the state of your target system and adjust or omit steps as needed.
This example adds a single vdisk; repeat the commands for each of the disks that were provisioned in Prepare the Target System.
root@TargetControlDom# ldm add-vdsdev /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFC90007d0s2 solaris10-vol6@ovmt-vds0
root@TargetControlDom# ldm add-vdisk vdisk6 solaris10-vol6@ovmt-vds0 solaris10
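The same add-vdsdev/add-vdisk pair is needed for every provisioned disk. The following is a sketch of a dry-run loop that prints the command pairs so they can be reviewed before running on the control domain; the index/device pairs in the here-document are illustrative placeholders from this example, not the full set.

```shell
#!/bin/sh
# Dry-run sketch: print one ldm add-vdsdev / ldm add-vdisk pair per volume.
# Replace the sample index/device pairs with the full provisioned set,
# then pipe the output to sh (or remove the echoes) to execute.
while read n dev; do
  echo "ldm add-vdsdev $dev solaris10-vol$n@ovmt-vds0"
  echo "ldm add-vdisk vdisk$n solaris10-vol$n@ovmt-vds0 solaris10"
done <<'EOF'
6 /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFC90007d0s2
7 /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFE60008d0s2
EOF
```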
root@TargetControlDom# ldm list -o disk primary
NAME
primary

VDS
    NAME           VOLUME           OPTIONS    MPGROUP    DEVICE
    ovmt-vds0      solaris10-vol0                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4A8500003d0s2
                   solaris10-vol1                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4A90F0004d0s2
                   solaris10-vol2                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4A9A90005d0s2
                   solaris10-vol3                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4BF670006d0s2
                   solaris10-vol4                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4C4BB000Dd0s2
                   solaris10-vol5                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4C4E4000Ed0s2
                   solaris10-vol6                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFC90007d0s2
                   solaris10-vol7                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFE60008d0s2
                   solaris10-vol8                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4C0410009d0s2
                   solaris10-vol9                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4C06B000Ad0s2
                   solaris10-vol10                        /dev/rdsk/c0t600144F09F2C0BFD00005BE4C3EC000Bd0s2
                   solaris10-vol11                        /dev/rdsk/c0t600144F09F2C0BFD00005BE4C424000Cd0s2
The vdisks assigned to the target guest domain are vdisk0 through vdisk11; however, when you display the disks with the format command, they are listed as c0d0 through c0d11.
root@TargetControlDom# ldm ls -o disk solaris10
NAME
solaris10

DISK
    NAME       VOLUME                       TOUT  ID   DEVICE    SERVER    MPGROUP
    vdisk0     solaris10-vol0@ovmt-vds0            0   disk@0    primary
    vdisk1     solaris10-vol1@ovmt-vds0            1   disk@1    primary
    vdisk2     solaris10-vol2@ovmt-vds0            2   disk@2    primary
    vdisk3     solaris10-vol3@ovmt-vds0            3   disk@3    primary
    vdisk4     solaris10-vol4@ovmt-vds0            4   disk@4    primary
    vdisk5     solaris10-vol5@ovmt-vds0            5   disk@5    primary
    vdisk6     solaris10-vol6@ovmt-vds0            6   disk@6    primary
    vdisk7     solaris10-vol7@ovmt-vds0            7   disk@7    primary
    vdisk8     solaris10-vol8@ovmt-vds0            8   disk@8    primary
    vdisk9     solaris10-vol9@ovmt-vds0            9   disk@9    primary
    vdisk10    solaris10-vol10@ovmt-vds0          10   disk@10   primary
    vdisk11    solaris10-vol11@ovmt-vds0          11   disk@11   primary
root@TargetGuestDom# echo | format
Searching for disks...done

c0d1: configured with capacity of 599.92GB
c0d2: configured with capacity of 299.91GB
c0d3: configured with capacity of 299.91GB
c0d4: configured with capacity of 149.91GB
c0d5: configured with capacity of 149.91GB
c0d6: configured with capacity of 199.93GB
c0d7: configured with capacity of 199.93GB
c0d8: configured with capacity of 249.92GB
c0d9: configured with capacity of 249.92GB
c0d10: configured with capacity of 1023.75MB
c0d11: configured with capacity of 1023.75MB


AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-16GB cyl 17064 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c0d1 <SUN-ZFS Storage 7355-1.0 cyl 19501 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@1
       2. c0d2 <SUN-ZFS Storage 7355-1.0 cyl 9749 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@2
       3. c0d3 <SUN-ZFS Storage 7355-1.0 cyl 9749 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@3
       4. c0d4 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@4
       5. c0d5 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@5
       6. c0d6 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@6
       7. c0d7 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@7
       8. c0d8 <SUN-ZFS Storage 7355-1.0 cyl 8124 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@8
       9. c0d9 <SUN-ZFS Storage 7355-1.0 cyl 8124 alt 2 hd 254 sec 254>
          /virtual-devices@100/channel-devices@200/disk@9
      10. c0d10 <SUN-ZFS Storage 7355-1.0 cyl 8190 alt 2 hd 8 sec 32>
          /virtual-devices@100/channel-devices@200/disk@a
      11. c0d11 <SUN-ZFS Storage 7355-1.0 cyl 8190 alt 2 hd 8 sec 32>
          /virtual-devices@100/channel-devices@200/disk@b
Specify disk (enter its number): Specify disk (enter its number):
In this example, the rpool disk is c0d0s0.
root@TargetGuestDom# zpool status rpool
  pool: rpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0d0s0    ONLINE       0     0     0

errors: No known data errors
Values for this example are shown in parentheses.
Sectors per track (768)
Tracks per cylinder (96)
Accessible cylinders (17064)
These values are used to configure the disk geometry of the rpool mirror disk so that the two disks match.
root@TargetGuestDom# prtvtoc /dev/rdsk/c0d0s2
* /dev/rdsk/c0d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     768 sectors/track
*      96 tracks/cylinder
*   73728 sectors/cylinder
*   17066 cylinders
*   17064 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First       Sector      Last
* Partition  Tag  Flags    Sector      Count       Sector     Mount Directory
       0      2    00          0  1258094592  1258094591
       2      5    00          0  1258094592  1258094591
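These geometry values are internally consistent: sectors/track multiplied by tracks/cylinder gives the sectors/cylinder figure, and multiplying that by the accessible cylinder count reproduces the sector count of the whole-disk slice (partition 2) in the prtvtoc output. A quick shell arithmetic cross-check using the example's values:

```shell
# Cross-check the prtvtoc geometry values from this example.
sec_per_trk=768      # sectors/track
trk_per_cyl=96       # tracks/cylinder
acc_cyl=17064        # accessible cylinders
sec_per_cyl=$((sec_per_trk * trk_per_cyl))
total=$((sec_per_cyl * acc_cyl))
echo "$sec_per_cyl sectors/cylinder, $total total sectors"
# prints: 73728 sectors/cylinder, 1258094592 total sectors
```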
root@TargetGuestDom# format c0d1
selecting c0d1
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> type

AVAILABLE DRIVE TYPES:
        0. Auto configure
        1. Quantum ProDrive 80S
        2. Quantum ProDrive 105S
        3. CDC Wren IV 94171-344
        4. SUN0104
        5. SUN0207
        6. SUN0327
        7. SUN0340
        8. SUN0424
        9. SUN0535
        10. SUN0669
        11. SUN1.0G
        12. SUN1.05
        13. SUN1.3G
        14. SUN2.1G
        15. SUN2.9G
        16. Zip 100
        17. Zip 250
        18. Peerless 10GB
        19. SUN-ZFS Storage 7355-1.0
        20. other
Specify disk type (enter its number)[19]: 20
Enter number of data cylinders: 17064
Enter number of alternate cylinders[2]:
Enter number of physical cylinders[17066]:
Enter number of heads: 96
Enter physical number of heads[default]:
Enter number of data sectors/track: 768
Enter number of physical sectors/track[default]:
Enter rpm of drive[3600]:
Enter format time[default]:
Enter cylinder skew[default]:
Enter track skew[default]:
Enter tracks per zone[default]:
Enter alternate tracks[default]:
Enter alternate sectors[default]:
Enter cache control[default]:
Enter prefetch threshold[default]:
Enter minimum prefetch[default]:
Enter maximum prefetch[default]:
Enter disk type name (remember quotes): "600GB disk for guest second root disk"
selecting c0d1
[disk formatted]
format> l
Ready to label disk, continue? y
format> q
root@TargetGuestDom# prtvtoc /dev/rdsk/c0d1s2
* /dev/rdsk/c0d1s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     768 sectors/track
*      96 tracks/cylinder
*   73728 sectors/cylinder
*   17066 cylinders
*   17064 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First       Sector      Last
* Partition  Tag  Flags    Sector      Count       Sector     Mount Directory
       0      2    00          0      294912      294911
       1      3    01     294912      294912      589823
       2      5    01          0  1258094592  1258094591
       6      4    00     589824  1257504768  1258094591
root@TargetGuestDom# prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c0d1s2
fmthard: New volume table of contents now in place.
These substeps describe how to configure the ASM target disks with the same disk label type and partition table as the ASM disks on the source system.
root@SourceGlobal# prtvtoc /dev/rdsk/c2t600144F0E635D8C700005AC56B080015d0s2 > /ovas1/dbzone_asm_disk1.txt
In the Dimensions list, SMI labels describe the disk geometry with cylinders. EFI labels do not specify cylinders, as shown in the following example.
root@TargetGuestDom# cat /ovas1/dbzone_asm_disk1.txt
* /dev/rdsk/c2t600144F0E635D8C700005AC56B080015d0s2 partition map
*
* Dimensions:
*       512 bytes/sector
* 419430400 sectors
* 419430333 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First       Sector      Last
* Partition  Tag  Flags    Sector      Count       Sector     Mount Directory
       0      4    00         34   419413949   419413982
       8     11    00  419413983       16384   419430366
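The presence or absence of cylinder counts in the Dimensions block is enough to tell the two label types apart in a saved prtvtoc dump. The following sketch shows one way to check; the `classify_label` helper and the abbreviated sample file are illustrative, not part of the procedure.

```shell
#!/bin/sh
# Sketch: classify a saved prtvtoc dump as SMI or EFI. SMI-labeled disks
# report geometry in cylinders; EFI dumps report only sector counts.
classify_label() {
  if grep -q 'accessible cylinders' "$1"; then
    echo SMI
  else
    echo EFI
  fi
}

# Abbreviated sample shaped like the dbzone_asm_disk1.txt dump above.
cat > /tmp/asm_disk_sample.txt <<'EOF'
* Dimensions:
*       512 bytes/sector
* 419430400 sectors
* 419430333 accessible sectors
EOF
classify_label /tmp/asm_disk_sample.txt   # prints: EFI
```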
root@TargetControlDom# ldm ls -o disk | egrep "c0t600144F09F2C0BFD00005BE4BFC90007d0|c0t600144F09F2C0BFD00005BE4BFE60008d0"
                   solaris10-vol6                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFC90007d0s2
                   solaris10-vol7                         /dev/rdsk/c0t600144F09F2C0BFD00005BE4BFE60008d0s2
root@TargetControlDom# ldm ls -o disk solaris10 | egrep "solaris10-vol6|solaris10-vol7"
    vdisk6     solaris10-vol6@ovmt-vds0            6   disk@6    primary
    vdisk7     solaris10-vol7@ovmt-vds0            7   disk@7    primary
root@TargetGuestDom# format -e c0d6
selecting c0d6
...
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 1

root@TargetGuestDom# format -e c0d7
selecting c0d7
...
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 1
Because the target disks for ASM are provisioned with the same size as the source disks, the source disk's partition table can be transferred to the target disks with the fmthard command.
root@TargetGuestDom# fmthard -s /ovas1/dbzone_asm_disk1.txt /dev/rdsk/c0d6s2
root@TargetGuestDom# fmthard -s /ovas1/dbzone_asm_disk1.txt /dev/rdsk/c0d7s2
root@TargetGuestDom# zpool attach rpool c0d0s0 c0d1s0
Make sure to wait until resilver is done before rebooting.
The source dbzone uses two separate UFS file systems on SVM metadevices for the redo and archive logs, so the same metadevices are created on the target guest domain. One metadevice, d20, is created with 10 GB of storage for the Redo log file system. Another metadevice, d30, is created with 200 GB of storage for the Archive Redo log file system.
root@TargetGuestDom# metadb -a -c 3 -f /dev/dsk/c0d10s4
root@TargetGuestDom# metadb -a -c 3 -f /dev/dsk/c0d11s4
root@TargetGuestDom# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0d10s4
     a        u         8208            8192            /dev/dsk/c0d10s4
     a        u         16400           8192            /dev/dsk/c0d10s4
     a        u         16              8192            /dev/dsk/c0d11s4
     a        u         8208            8192            /dev/dsk/c0d11s4
     a        u         16400           8192            /dev/dsk/c0d11s4
root@TargetGuestDom# metainit d11 1 1 c0d8s0
d11: Concat/Stripe is setup
root@TargetGuestDom# metainit d12 1 1 c0d9s0
d12: Concat/Stripe is setup
root@TargetGuestDom# metainit d0 -m d11 d12
metainit: d0: WARNING: This form of metainit is not recommended.
The submirrors may not have the same data.
Please see ERRORS in metainit(1M) for additional information.
d0: Mirror is setup
root@TargetGuestDom# metainit d20 -p d0 10g
d20: Soft Partition is setup
root@TargetGuestDom# metainit d30 -p d0 200g
d30: Soft Partition is setup
root@TargetGuestDom# metastat
d30: Soft Partition
    Device: d0
    State: Okay
    Size: 419430400 blocks (200 GB)
        Extent              Start Block              Block count
             0                 21036096                419430400

d0: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 524127984 blocks (249 GB)

d11: Submirror of d0
    State: Okay
    Size: 524127984 blocks (249 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0d8s0            0     No            Okay   Yes

d12: Submirror of d0
    State: Okay
    Size: 524127984 blocks (249 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c0d9s0            0     No            Okay   Yes

d20: Soft Partition
    Device: d0
    State: Okay
    Size: 20971520 blocks (10 GB)
        Extent              Start Block              Block count
             0                    64544                 20971520

Device Relocation Information:
Device   Reloc  Device ID
c0d9     Yes    id1,vdc@n600144f09f2c0bfd00005be4c06b000a
c0d8     Yes    id1,vdc@n600144f09f2c0bfd00005be4c0410009
root@TargetGuestDom# newfs /dev/md/rdsk/d20
newfs: construct a new file system /dev/md/rdsk/d20: (y/n)? y
Warning: 4096 sector(s) in last cylinder unallocated
/dev/md/rdsk/d20:       20971520 sectors in 3414 cylinders of 48 tracks, 128 sectors
        10240.0MB in 214 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
 20055584, 20154016, 20252448, 20350880, 20449312, 20547744, 20646176,
 20744608, 20843040, 20941472
root@TargetGuestDom# newfs /dev/md/rdsk/d30
newfs: construct a new file system /dev/md/rdsk/d30: (y/n)? y
Warning: 2048 sector(s) in last cylinder unallocated
/dev/md/rdsk/d30:       419430400 sectors in 68267 cylinders of 48 tracks, 128 sectors
        204800.0MB in 4267 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups: ...............................................................................
super-block backups for last 10 cylinder groups at:
 418484384, 418582816, 418681248, 418779680, 418878112, 418976544, 419074976,
 419173408, 419271840, 419370272
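The sizes that metastat reports can be sanity-checked: SVM counts in 512-byte blocks, so the 10 GB and 200 GB soft partitions should come out to 20971520 and 419430400 blocks, matching the metastat output above. A quick arithmetic check:

```shell
# Convert a size in GB (binary GiB, as SVM interprets the "g" suffix)
# to 512-byte blocks, the unit metastat reports sizes in.
gb_to_blocks() {
  echo $(($1 * 1024 * 1024 * 1024 / 512))
}
gb_to_blocks 10    # prints: 20971520   (d20, redo log)
gb_to_blocks 200   # prints: 419430400  (d30, archive redo log)
```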
root@TargetGuestDom# mkdir -p /zones/dbzone
root@TargetGuestDom# zpool create -m /zones/dbzone dbzone mirror c0d2 c0d3
root@TargetGuestDom# zpool create -f dbzone_db_binary mirror c0d4 c0d5
root@TargetGuestDom# zpool status
  pool: dbzone
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dbzone      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d2    ONLINE       0     0     0
            c0d3    ONLINE       0     0     0

errors: No known data errors

  pool: dbzone_db_binary
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        dbzone_db_binary      ONLINE       0     0     0
          mirror-0            ONLINE       0     0     0
            c0d4              ONLINE       0     0     0
            c0d5              ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: resilvered 11.5G in 0h1m with 0 errors on Mon Jul 30 15:02:20 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d0s0  ONLINE       0     0     0
            c0d1s0  ONLINE       0     0     0

errors: No known data errors