This section provides examples for deploying Oracle Solaris Zones on shared storage resources.
Example 14-6 Oracle Solaris Zones Using iSCSI-Based Shared Storage Devices

This exercise sets up a sample configuration in which an Oracle Solaris 11 server provides shared storage through an iSCSI target. We then configure and install a zone on a second server running Oracle Solaris, using those iSCSI-based shared storage resources to host the zone.
First, install the required package on the target server using one of the following pkg install commands. The first command installs the entire multi-protocol storage-server group package. The second command installs only the target support for iSCSI within the common multi-protocol SCSI target (COMSTAR) framework, as described in the itadm(1M) and stmfadm(1M) man pages.
root@target:~# pkg install group/feature/storage-server
root@target:~# pkg install system/storage/iscsi/iscsi-target
Next, create the backing store for the iSCSI targets that this server will export. Use the zfs command to create three 10 GB ZFS volumes, one for each iSCSI target logical unit, in the target server's rpool/export dataset.
root@target:~# zfs create -V 10G rpool/export/zonevol1
root@target:~# zfs create -V 10G rpool/export/zonevol2
root@target:~# zfs create -V 10G rpool/export/zonevol3
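If you want to verify the volumes before proceeding, you can list them with the zfs command. This optional check is not part of the original procedure, and its output will reflect your own pool layout:

root@target:~# zfs list -t volume -r rpool/export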
After setting up the backing store, use the stmfadm command to create a target logical unit for each ZFS volume. The command returns the device ID (WWN) of each logical unit, which is used later in the storage URI for iSCSI target discovery on the client host.
root@target:~# stmfadm create-lu /dev/zvol/rdsk/rpool/export/zonevol1
Logical unit created: 600144F035FF8500000050C884E50001
root@target:~# stmfadm create-lu /dev/zvol/rdsk/rpool/export/zonevol2
Logical unit created: 600144F035FF8500000050C884E80002
root@target:~# stmfadm create-lu /dev/zvol/rdsk/rpool/export/zonevol3
Logical unit created: 600144F035FF8500000050C884EC0003
You can view the configured logical units with the stmfadm list-lu command.
root@target:~# stmfadm list-lu
LU Name: 600144F035FF8500000050C884E50001
LU Name: 600144F035FF8500000050C884E80002
LU Name: 600144F035FF8500000050C884EC0003
You can query details about the configured logical units with the stmfadm list-lu -v command.
root@target:~# stmfadm list-lu -v
LU Name: 600144F035FF8500000050C884E50001
    Operational Status     : Online
    Provider Name          : sbd
    Alias                  : /dev/zvol/rdsk/rpool/export/zonevol1
    View Entry Count       : 0
    Data File              : /dev/zvol/rdsk/rpool/export/zonevol1
    Meta File              : not set
    Size                   : 10737418240
    Block Size             : 512
    Management URL         : not set
    Software ID            : not set
    Vendor ID              : SUN
    Product ID             : COMSTAR
    Serial Num             : not set
    Write Protect          : Disabled
    Write Cache Mode Select: Enabled
    Writeback Cache        : Enabled
    Access State           : Active
To make the logical units available to iSCSI initiators, add a view for each logical unit on the target server with the stmfadm add-view command.
root@target:~# stmfadm add-view 600144F035FF8500000050C884E50001
root@target:~# stmfadm add-view 600144F035FF8500000050C884E80002
root@target:~# stmfadm add-view 600144F035FF8500000050C884EC0003
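Optionally, you can confirm that each logical unit now has a view entry by querying it with stmfadm list-view, shown here for the first logical unit only; the output will vary by system:

root@target:~# stmfadm list-view -l 600144F035FF8500000050C884E50001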
Now we configure the iSCSI target on the target server. First, enable the iSCSI target SMF service with svcadm enable.
root@target:~# svcadm enable -r svc:/network/iscsi/target:default
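As an optional check, you can verify that the service came online before creating the target; the output will vary by system:

root@target:~# svcs -l svc:/network/iscsi/target:default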
Then, create the iSCSI target itself using itadm create-target.
root@target:~# itadm create-target
Target iqn.1986-03.com.sun:02:b62a8291-b89e-41ba-9aef-e93836ad0d6a successfully created
You can query for the details about configured iSCSI targets using either itadm list-target or stmfadm list-target.
root@target:~# itadm list-target -v
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:b62a8291-b89e-41ba-9aef-e93836ad0d6a  online   0
        alias:                  -
        auth:                   none (defaults)
        targetchapuser:         -
        targetchapsecret:       unset
        tpg-tags:               default
root@target:~# stmfadm list-target -v
Target: iqn.1986-03.com.sun:02:b62a8291-b89e-41ba-9aef-e93836ad0d6a
    Operational Status : Online
    Provider Name      : iscsit
    Alias              : -
    Protocol           : iSCSI
    Sessions           : 0
The last step on the target server is to use suriadm(1M) to obtain the storage URIs that will be used in the zone configuration on the second server. A local device path entry has been created in /dev for each logical unit; pass each path to suriadm lookup-uri to obtain the corresponding iSCSI storage URI.
root@target:~# suriadm lookup-uri -t iscsi /dev/dsk/c0t600144F035FF8500000050C884E50001d0
iscsi://target/luname.naa.600144f035ff8500000050c884e50001
root@target:~# suriadm lookup-uri -t iscsi /dev/dsk/c0t600144F035FF8500000050C884E80002d0
iscsi://target/luname.naa.600144f035ff8500000050c884e80002
root@target:~# suriadm lookup-uri -t iscsi /dev/dsk/c0t600144F035FF8500000050C884EC0003d0
iscsi://target/luname.naa.600144f035ff8500000050c884ec0003
This completes all of the tasks required on the sample server providing the iSCSI target storage.
We can now move on to configuring and installing a zone on the second server using this shared storage provided over iSCSI.
The first step is to install the iSCSI initiator package on the client server that will host the zone.
root@initiator:~# pkg install pkg:/system/storage/iscsi/iscsi-initiator
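If you want to confirm that the initiator software is in place before configuring the zone, an optional check might look like the following. The SMF service name shown is assumed to be the default instance delivered by this package, and the output will vary by system:

root@initiator:~# pkg list iscsi-initiator
root@initiator:~# svcs svc:/network/iscsi/initiator:default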
Next, we use the zonecfg command to configure a zone with a rootzpool and a zpool resource. The three iSCSI target logical units configured earlier serve as the shared storage resources hosting the zone, referenced by the iSCSI storage URIs obtained previously with suriadm on the target server.
root@initiator:~# zonecfg -z iscsi
Use 'create' to begin configuring a new zone.
zonecfg:iscsi> create
create: Using system default template 'SYSdefault'
zonecfg:iscsi> set zonepath=/iscsi
zonecfg:iscsi> add rootzpool
zonecfg:iscsi:rootzpool> add storage iscsi://target/luname.naa.600144F035FF8500000050C884E50001
zonecfg:iscsi:rootzpool> end
zonecfg:iscsi> add zpool
zonecfg:iscsi:zpool> set name=data
zonecfg:iscsi:zpool> add storage iscsi://target/luname.naa.600144F035FF8500000050C884E80002
zonecfg:iscsi:zpool> add storage iscsi://target/luname.naa.600144F035FF8500000050C884EC0003
zonecfg:iscsi:zpool> end
zonecfg:iscsi> commit
zonecfg:iscsi> exit
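Before installing, you can review the resulting configuration to confirm that the storage URIs were recorded correctly. This is an optional step; the output simply mirrors the settings entered above:

root@initiator:~# zonecfg -z iscsi info rootzpool
root@initiator:~# zonecfg -z iscsi info zpool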
We are now ready to install the zone using zoneadm install.
root@initiator:~# zoneadm -z iscsi install
Configured zone storage resource(s) from:
        iscsi://target/luname.naa.600144F035FF8500000050C884E50001
Created zone zpool: iscsi_rpool
Configured zone storage resource(s) from:
        iscsi://target/luname.naa.600144F035FF8500000050C884E80002
        iscsi://target/luname.naa.600144F035FF8500000050C884EC0003
Created zone zpool: iscsi_data
Progress being logged to /var/log/zones/zoneadm.20130125T112209Z.iscsi.install
       Image: Preparing at /iscsi/root.
 AI Manifest: /tmp/manifest.xml.pmai7h
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: iscsi
Installation: Starting ...
              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                              PKGS         FILES    XFER (MB)   SPEED
Completed                          183/183   33556/33556  222.2/222.2  3.4M/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 266.487 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /iscsi/root/var/log/zones/zoneadm.20130125T112209Z.iscsi.install
root@initiator:~#
With the zone installation completed, we verify that the zone has been properly installed with zoneadm(1M) list.
root@initiator:~# zoneadm list -cp
0:global:running:/::solaris:shared:-:none
-:iscsi:installed:/iscsi:a0a4ba0d-9d6d-cf2c-cc42-f123a5e3ee11:solaris:excl:-:
Finally, we can observe the newly created ZFS storage pools associated with this zone by using the zpool command.
root@initiator:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
iscsi_data   9.94G  83.5K  9.94G   0%  1.00x  ONLINE  -
iscsi_rpool  9.94G   436M  9.51G   4%  1.00x  ONLINE  -
root@initiator:~# zpool status -v iscsi_rpool
  pool: iscsi_rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        iscsi_rpool                              ONLINE       0     0     0
          c0t600144F035FF8500000050C884E50001d0  ONLINE       0     0     0
root@initiator:~# zpool status -v iscsi_data
  pool: iscsi_data
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        iscsi_data                                 ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600144F035FF8500000050C884E80002d0  ONLINE       0     0     0
            c0t600144F035FF8500000050C884EC0003d0  ONLINE       0     0     0
The zone installation is entirely contained within the iscsi_rpool storage pool. The ZFS dataset layout for this zone follows.
root@initiator:~# zfs list -t all|grep iscsi
iscsi_data                                   83.5K  9.78G    31K  /iscsi_data
iscsi_rpool                                   436M  9.36G    32K  /iscsi
iscsi_rpool/rpool                             436M  9.36G    31K  /rpool
iscsi_rpool/rpool/ROOT                        436M  9.36G    31K  legacy
iscsi_rpool/rpool/ROOT/solaris                436M  9.36G   390M  /iscsi/root
iscsi_rpool/rpool/ROOT/solaris@install         64K      -   390M  -
iscsi_rpool/rpool/ROOT/solaris/var           46.1M  9.36G  45.4M  /iscsi/root/var
iscsi_rpool/rpool/ROOT/solaris/var@install    644K      -  45.4M  -
iscsi_rpool/rpool/VARSHARE                     31K  9.36G    31K  /var/share
iscsi_rpool/rpool/export                       62K  9.36G    31K  /export
iscsi_rpool/rpool/export/home                  31K  9.36G    31K  /export/home
The new zone hosted on iSCSI-based shared storage resources has been successfully installed and can now be booted using zoneadm(1M) boot.
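For example, following the next steps suggested by the installer, the boot and console login commands would be:

root@initiator:~# zoneadm -z iscsi boot
root@initiator:~# zlogin -C iscsi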
After the zone has been booted, the zone administrator observes virtualized ZFS datasets and storage pools from within the zone.
root@iscsi:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
data   9.94G    85K  9.94G   0%  1.00x  ONLINE  -
rpool  9.94G   449M  9.50G   4%  1.00x  ONLINE  -
root@iscsi:~# zpool status -v
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        data                                       ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600144F035FF8500000050C884E80002d0  ONLINE       0     0     0
            c0t600144F035FF8500000050C884EC0003d0  ONLINE       0     0     0

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        rpool                                    ONLINE       0     0     0
          c0t600144F035FF8500000050C884E50001d0  ONLINE       0     0     0
root@iscsi:~# zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
data                              85K  9.78G    31K  /data
rpool                            464M  9.33G    31K  /rpool
rpool/ROOT                       464M  9.33G    31K  legacy
rpool/ROOT/solaris               464M  9.33G   416M  /
rpool/ROOT/solaris@install      1.83M      -   390M  -
rpool/ROOT/solaris/var          46.2M  9.33G  45.6M  /var
rpool/ROOT/solaris/var@install   674K      -  45.4M  -
rpool/VARSHARE                    39K  9.33G    39K  /var/share
rpool/export                    96.5K  9.33G    32K  /export
rpool/export/home               64.5K  9.33G    32K  /export/home
rpool/export/home/user          32.5K  9.33G  32.5K  /export/home/user

Example 14-7 Oracle Solaris Zones Using DAS Storage Devices
This exercise uses direct attached local storage devices to configure and install a zone on Oracle Solaris. Note that this configuration is usually not portable across different hosts because the devices are local to a single system.
First, discover the available local disks with the format command, and then use suriadm lookup-uri to construct the corresponding storage URIs for use in the zone configuration.
root@host:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       1. c4t1d0 <SEAGATE-ST336704LSUN36G-0326-33.92GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@1,0
       2. c4t2d0 <FUJITSU-MAT3073NC-0104-68.49GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@2,0
       3. c4t3d0 <SEAGATE-ST336704LSUN36G-0326-33.92GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@3,0
       4. c4t4d0 <FUJITSU-MAW3073NC-0103-68.49GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@4,0
root@host:~# suriadm lookup-uri -t dev /dev/dsk/c4t1d0
dev:dsk/c4t1d0
root@host:~# suriadm lookup-uri -t dev /dev/dsk/c4t2d0
dev:dsk/c4t2d0
root@host:~# suriadm lookup-uri -t dev /dev/dsk/c4t3d0
dev:dsk/c4t3d0
root@host:~# suriadm lookup-uri -t dev /dev/dsk/c4t4d0
dev:dsk/c4t4d0
Using those storage URIs, we configure a zone with a rootzpool and a zpool resource, both representing mirrored ZFS storage pools.
root@host:~# zonecfg -z disk
Use 'create' to begin configuring a new zone.
zonecfg:disk> create
create: Using system default template 'SYSdefault'
zonecfg:disk> set zonepath=/disk
zonecfg:disk> add rootzpool
zonecfg:disk:rootzpool> add storage dev:dsk/c4t1d0
zonecfg:disk:rootzpool> add storage dev:dsk/c4t3d0
zonecfg:disk:rootzpool> end
zonecfg:disk> add zpool
zonecfg:disk:zpool> set name=dpool
zonecfg:disk:zpool> add storage dev:dsk/c4t2d0
zonecfg:disk:zpool> add storage dev:dsk/c4t4d0
zonecfg:disk:zpool> end
zonecfg:disk> commit
zonecfg:disk> exit
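As in the iSCSI example, you can optionally export the configuration to double-check the device storage URIs before installing; the output simply mirrors the settings entered above:

root@host:~# zonecfg -z disk export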
Now install the zone.
root@host:~# zoneadm -z disk install
Created zone zpool: disk_rpool
Created zone zpool: disk_dpool
Progress being logged to /var/log/zones/zoneadm.20130213T132236Z.disk.install
       Image: Preparing at /disk/root.
 AI Manifest: /tmp/manifest.xml.rOaOhe
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: disk
Installation: Starting ...
              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                              PKGS         FILES    XFER (MB)   SPEED
Completed                          183/183   33556/33556  222.2/222.2  2.0M/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 308.358 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /disk/root/var/log/zones/zoneadm.20130213T132236Z.disk.install
root@host:~#
After zone installation, the following two new ZFS storage pools will be online.
root@host:/# zpool list
NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
disk_dpool    68G  83.5K  68.0G   0%  1.00x  ONLINE  -
disk_rpool  33.8G   434M  33.3G   1%  1.00x  ONLINE  -
root@host:/# zpool status -v disk_rpool
  pool: disk_rpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        disk_rpool    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t1d0    ONLINE       0     0     0
            c4t3d0    ONLINE       0     0     0
root@host:/# zpool status -v disk_dpool
  pool: disk_dpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        disk_dpool    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t2d0    ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0
The zone installation is entirely contained within the disk_rpool storage pool. The zone has the following ZFS dataset layout.
root@host:~# zfs list -t all|grep disk
disk_dpool                                  83.5K  66.9G    31K  /disk_dpool
disk_rpool                                   434M  32.8G    32K  /disk
disk_rpool/rpool                             433M  32.8G    31K  /rpool
disk_rpool/rpool/ROOT                        433M  32.8G    31K  legacy
disk_rpool/rpool/ROOT/solaris                433M  32.8G   389M  /disk/root
disk_rpool/rpool/ROOT/solaris@install         63K      -   389M  -
disk_rpool/rpool/ROOT/solaris/var           43.8M  32.8G  43.2M  /disk/root/var
disk_rpool/rpool/ROOT/solaris/var@install    584K      -  43.2M  -
disk_rpool/rpool/VARSHARE                     31K  32.8G    31K  /var/share
disk_rpool/rpool/export                       62K  32.8G    31K  /export
disk_rpool/rpool/export/home                  31K  32.8G    31K  /export/home
The new zone hosted on local device storage resources has been successfully installed and can now be booted using the zoneadm boot command.
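Following the installer's suggested next steps, the boot and console login commands would be:

root@host:~# zoneadm -z disk boot
root@host:~# zlogin -C disk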
After the zone has been booted, the zone administrator can observe virtualized ZFS datasets and storage pools from inside the zone.
root@disk:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dpool    68G  83.5K  68.0G   0%  1.00x  ONLINE  -
rpool  33.8G   472M  33.3G   1%  1.00x  ONLINE  -
root@disk:~# zpool status -v
  pool: dpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        dpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t2d0    ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t1d0    ONLINE       0     0     0
            c4t3d0    ONLINE       0     0     0
root@disk:~# zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
dpool                           83.5K  66.9G    31K  /dpool
rpool                            465M  32.8G    31K  /rpool
rpool/ROOT                       465M  32.8G    31K  legacy
rpool/ROOT/solaris               465M  32.8G   416M  /
rpool/ROOT/solaris@install      5.60M      -   389M  -
rpool/ROOT/solaris/var          43.9M  32.8G  43.3M  /var
rpool/ROOT/solaris/var@install   618K      -  43.2M  -
rpool/VARSHARE                    39K  32.8G    39K  /var/share
rpool/export                    96.5K  32.8G    32K  /export
rpool/export/home               64.5K  32.8G    32K  /export/home
rpool/export/home/user          32.5K  32.8G  32.5K  /export/home/user

Example 14-8 Oracle Solaris Zones Using Fibre Channel-Based Storage Devices
This exercise uses a shared storage device provided over Fibre Channel to configure and install a zone on Oracle Solaris.
First, discover the Fibre Channel logical units currently visible to our host by using the fcinfo lu command.
root@host:~# fcinfo lu -v
OS Device Name: /dev/rdsk/c0t600144F0DBF8AF190000510979640005d0s2
        HBA Port WWN: 10000000c9991d8c
                Remote Port WWN: 21000024ff3ee89f
                        LUN: 5
        Vendor: SUN
        Product: ZFS Storage 7120
        Device Type: Disk Device
Use suriadm lookup-uri to construct storage URIs based on the device path. Remove the slice portion of the device name before running the query so that the returned URIs represent the entire logical unit.
root@host:~# suriadm lookup-uri /dev/dsk/c0t600144F0DBF8AF190000510979640005d0
lu:luname.naa.600144f0dbf8af190000510979640005
lu:initiator.naa.10000000c9991d8c,target.naa.21000024ff3ee89f,luname.naa.600144f0dbf8af190000510979640005
dev:dsk/c0t600144F0DBF8AF190000510979640005d0
From the three URIs displayed, we select the luname-only form of the logical unit storage URI for use in the zone configuration.
root@host:~# zonecfg -z fc
Use 'create' to begin configuring a new zone.
zonecfg:fc> create
create: Using system default template 'SYSdefault'
zonecfg:fc> set zonepath=/fc
zonecfg:fc> add rootzpool
zonecfg:fc:rootzpool> add storage lu:luname.naa.600144f0dbf8af190000510979640005
zonecfg:fc:rootzpool> end
zonecfg:fc> commit
zonecfg:fc> exit
We are now ready to install the zone.
root@host:~# zoneadm -z fc install
Created zone zpool: fc_rpool
Progress being logged to /var/log/zones/zoneadm.20130214T045957Z.fc.install
       Image: Preparing at /fc/root.
 AI Manifest: /tmp/manifest.xml.K9aaow
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: fc
Installation: Starting ...
              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                              PKGS         FILES    XFER (MB)   SPEED
Completed                          190/190   34246/34246  231.3/231.3  7.2M/s

PHASE                                          ITEMS
Installing new actions                   48231/48231
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 104.318 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /fc/root/var/log/zones/zoneadm.20130214T045957Z.fc.install
root@host:~#
After zone installation, the following new ZFS storage pool will be online.
root@host:~# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
fc_rpool  39.8G   441M  39.3G   1%  1.00x  ONLINE  -
root@host:~# zpool status -v fc_rpool
  pool: fc_rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        fc_rpool                                 ONLINE       0     0     0
          c0t600144F0DBF8AF190000510979640005d0  ONLINE       0     0     0
The zone installation is entirely contained within the fc_rpool storage pool. The zone has the following ZFS dataset layout.
root@host:~# zfs list -t all|grep fc
fc_rpool                                  440M  38.7G    32K  /fc
fc_rpool/rpool                            440M  38.7G    31K  /rpool
fc_rpool/rpool/ROOT                       440M  38.7G    31K  legacy
fc_rpool/rpool/ROOT/solaris               440M  38.7G   405M  /fc/root
fc_rpool/rpool/ROOT/solaris@install        67K      -   405M  -
fc_rpool/rpool/ROOT/solaris/var          34.3M  38.7G  33.6M  /fc/root/var
fc_rpool/rpool/ROOT/solaris/var@install   665K      -  33.6M  -
fc_rpool/rpool/VARSHARE                    31K  38.7G    31K  /var/share
fc_rpool/rpool/export                      62K  38.7G    31K  /export
fc_rpool/rpool/export/home                 31K  38.7G    31K  /export/home
The new zone hosted on shared storage provided from a Fibre Channel target has been successfully installed. This zone can now be booted using zoneadm boot.
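As in the previous examples, the boot and console login commands would be:

root@host:~# zoneadm -z fc boot
root@host:~# zlogin -C fc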
After the zone has been booted, the zone administrator can observe virtualized ZFS datasets and storage pools from inside the zone.
root@fc:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  39.8G   451M  39.3G   1%  1.00x  ONLINE  -
root@fc:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        rpool                                    ONLINE       0     0     0
          c0t600144F0DBF8AF190000510979640005d0  ONLINE       0     0     0
root@fc:~# zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                            467M  38.7G    31K  /rpool
rpool/ROOT                       467M  38.7G    31K  legacy
rpool/ROOT/solaris               467M  38.7G   430M  /
rpool/ROOT/solaris@install      1.90M      -   405M  -
rpool/ROOT/solaris/var          34.4M  38.7G  33.7M  /var
rpool/ROOT/solaris/var@install   703K      -  33.6M  -
rpool/VARSHARE                    39K  38.7G    39K  /var/share
rpool/export                    96.5K  38.7G    32K  /export
rpool/export/home               64.5K  38.7G    32K  /export/home
rpool/export/home/user          32.5K  38.7G  32.5K  /export/home/user