F Managing Solaris Zones on Exalogic

Oracle Solaris zones are an integral part of the Oracle Solaris operating system. Zones isolate software applications and services using flexible software-defined boundaries. This appendix describes how to manage Solaris zones on Exalogic.

This appendix contains the following sections:

  • Section F.1, "Requirements"

  • Section F.2, "Terminology"

  • Section F.3, "Creating a Solaris Zone"

  • Section F.4, "Migrating a Zone to a New Host"

F.1 Requirements

Creating zones on an Exalogic machine has the following requirements:

  • An Exalogic machine imaged to release 2.0.4.0 running Oracle Solaris.

  • An Exalogic machine patched to the April 2013 Patch Set Update, described in My Oracle Support document ID 1545364.1.

  • An Exalogic machine patched with the Solaris patch for Zones on Shared Storage (ZOSS) over iSCSI, available as patch 16514816 from My Oracle Support.

F.2 Terminology

Table F-1 describes the terms used in this appendix.

Table F-1 Terminology

Term Description

Logical Unit

A logical unit is a component of a storage system. Each logical unit is uniquely numbered, giving it a Logical Unit Number (LUN). The storage appliance can contain many LUNs. A LUN, when associated with one or more SCSI targets, forms a unique SCSI device that can be accessed by one or more SCSI initiators.

Initiator

An initiator is an application or production system end-point that is capable of initiating a SCSI session and sending SCSI commands and I/O requests. Initiators are identified by unique addressing methods.

Initiator Group

A set of initiators. When an initiator group is associated with a LUN, only initiators from that group may access the LUN.

Target

A target is an end-point that provides a service of processing SCSI commands and I/O requests from an initiator. A target, once configured, consists of zero or more logical units.

Target Group

A set of targets. LUNs are exported over all the targets in one specific target group.


F.3 Creating a Solaris Zone

This section describes how to create a Solaris zone. It contains the following topics:

  • Section F.3.1, "Prerequisites"

  • Section F.3.2, "Setting Up a Solaris Zone"

F.3.1 Prerequisites

Before creating a Solaris zone, you should perform the following tasks:

F.3.1.1 Creating an iSCSI Target

You can create an iSCSI target by doing the following:

  1. Log in to the storage appliance BUI as the root user.

  2. Click the Configuration tab.

  3. Click SAN.

  4. Click iSCSI Targets.

  5. To create a new iSCSI target, click the plus button next to iSCSI Targets.

    The New iSCSI Target dialog box appears.

  6. For the Target IQN, select the Auto-assign option.

  7. In the Alias field, enter a name for your iSCSI target.

  8. For the Initiator authentication mode, select the authentication mode you are using for communication between the compute node and storage appliance. By default, no authentication is used.

    Note:

    For more information on setting up CHAP authentication between the compute node and storage, see the "Setting Up CHAP Authentication" topic in the following document: http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/iscsi-quickstart-v1-2-051512-1641594.pdf

  9. From the Network interfaces list, select the interface that corresponds to your InfiniBand partition. You can identify the interface by logging in to the storage appliance and running configuration net interfaces show. If there are no partitions defined, identify the interface for the label IB_Interface.

  10. Click OK.

  11. You can add the iSCSI target to an iSCSI target group by dragging and dropping the target onto an iSCSI target group in the iSCSI Target Groups panel on the right. If required, you can create a new iSCSI target group by dragging and dropping the target onto the top of the iSCSI Target Groups panel.

F.3.1.2 Creating an iSCSI Initiator

You can create an iSCSI initiator by doing the following:

  1. Before you can create an iSCSI initiator, you must identify the initiator IQN, a unique identifier associated with a specific compute node. To find the initiator IQN for a compute node, do the following:

    1. Log in to an Exalogic compute node.

    2. Run the iscsiadm list initiator-node command as follows:

      # iscsiadm list initiator-node
      Initiator node name: iqn.1986-03.com.sun:01:e00000000000.51891a8b
      Initiator node alias: el01cn01
              Login Parameters (Default/Configured):
                      Header Digest: NONE/-
                      Data Digest: NONE/-
                      Max Connections: 65535/-
              Authentication Type: NONE
              RADIUS Server: NONE
              RADIUS Access: disabled
              Tunable Parameters (Default/Configured):
                      Session Login Response Time: 60/-
                      Maximum Connection Retry Time: 180/-
                      Login Retry Time Interval: 60/-
              Configured Sessions: 1
      

      In this example, the initiator IQN is:

      iqn.1986-03.com.sun:01:e00000000000.51891a8b

  2. Log in to the storage appliance BUI as the root user.

  3. Click the Configuration tab.

  4. Click SAN.

  5. Click Initiators.

  6. Click iSCSI Initiators.

  7. Click the plus button next to iSCSI Initiators to create a new iSCSI initiator.

  8. In the Initiator IQN field, enter the initiator IQN you identified in step 1.

  9. In the Alias field, enter a name for the iSCSI initiator you are creating.

  10. If you are using CHAP authentication, select the Use CHAP check box and fill in the Initiator CHAP name and Initiator CHAP secret fields as you did in Section F.3.1.1, "Creating an iSCSI Target."

    Note:

    For more information on setting up CHAP authentication between the compute node and storage appliance, see the "Setting Up CHAP Authentication" topic in the following document: http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/iscsi-quickstart-v1-2-051512-1641594.pdf

  11. Click OK.

  12. Add the iSCSI initiator to an iSCSI initiator group by dragging and dropping the initiator onto a group in the iSCSI Initiator Groups panel on the right.

    If required, you can create a new iSCSI initiator group by dragging and dropping the initiator onto the top of the panel.
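When preparing several compute nodes, the IQN lookup in step 1 can be scripted. The sketch below extracts the initiator IQN from `iscsiadm list initiator-node` output; the sample text stands in for the live command, whose full output format is shown above.

```shell
# Extract the initiator IQN from `iscsiadm list initiator-node` output.
# The sample text below stands in for the live command; on a compute
# node you would pipe the command itself into awk.
sample_output='Initiator node name: iqn.1986-03.com.sun:01:e00000000000.51891a8b
Initiator node alias: el01cn01'

INITIATOR_IQN=$(printf '%s\n' "$sample_output" \
  | awk -F': ' '/^Initiator node name:/ { print $2 }')
echo "$INITIATOR_IQN"
```

The `-F': '` separator splits only on colon-space, so the colons inside the IQN itself are left intact.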

F.3.1.3 Creating the Project and LUN

You can create the project and LUN by doing the following:

  1. Create a project as described in Section 8.5, "Creating Custom Projects."

  2. You can create the LUN by doing the following:

    1. Next to the Project, click Shares.

    2. Click LUNs.

      The list of LUNs appears.

    3. Click the plus button next to LUNs.

    4. In the Project field, select the project you created in step 1.

    5. In the Name field, enter a name for the LUN.

    6. Enter the size of the volume in GB.

    7. Select Thin provisioned.

    8. Set the Volume block size as 32k.

    9. In the Target Group field, select the target group you used in Section F.3.1.1, "Creating an iSCSI Target."

    10. In the Initiator Group field, select the initiator group you used in Section F.3.1.2, "Creating an iSCSI Initiator."

    11. Click Apply.

    12. Note the GUID of the LUN you created in the list of LUNs. For example, g600144f09c96cca900005190bfc4000a.

    Note:

    After creating the LUN, ensure that the Write cache enabled check box is deselected. You can find this check box in the Protocols tab of the LUN.

F.3.1.4 Disabling the Write Cache

You must disable the write cache on the LUN permanently by doing the following:

  1. Log in to the compute node for which you identified the initiator node name as described in Section F.3.1.2, "Creating an iSCSI Initiator."

  2. Edit the /kernel/drv/sd.conf file.

  3. Add the following to the sd.conf file:

    sd-config-list="SUN     ZFS Storage 7120","write-cache-disable",
                   "SUN     ZFS Storage 7320","write-cache-disable",
                   "SUN     ZFS Storage 7420","write-cache-disable";
    write-cache-disable=1,0x00010,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0;
    
  4. Restart the compute node by running the reboot command.
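Before rebooting, you can sanity-check that the stanza was added correctly. This sketch writes the entries to a temporary file and counts them; on a real compute node you would grep /kernel/drv/sd.conf itself.

```shell
# Sanity check (sketch): count the write-cache-disable entries added in
# the previous step. A temporary file stands in for /kernel/drv/sd.conf.
SDCONF=$(mktemp)
cat >> "$SDCONF" <<'EOF'
sd-config-list="SUN     ZFS Storage 7120","write-cache-disable",
               "SUN     ZFS Storage 7320","write-cache-disable",
               "SUN     ZFS Storage 7420","write-cache-disable";
write-cache-disable=1,0x00010,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0;
EOF
MATCHES=$(grep -c 'write-cache-disable' "$SDCONF")  # one match per line
echo "$MATCHES"
```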

F.3.1.5 Formatting the LUN

Before the compute node can use the LUN, you must format the LUN. You can format the LUN by doing the following:

  1. Log in to a compute node as the root user.

  2. Run the iscsiadm commands to discover the iSCSI targets from the compute node:

    # iscsiadm add discovery-address IPoIB_address_of_the_storage_appliance
    # iscsiadm modify discovery -t enable
    

    In this example, IPoIB_address_of_the_storage_appliance is the IP address of the storage appliance on the IPoIB network.

  3. Run the following command to load drivers, attach device instances, create logical links to device nodes, and load the device policy for iSCSI:

    # devfsadm -c iscsi
    
  4. Identify the disk you should format and label by running echo | format as follows:

    # echo | format
    Searching for disks...done
     
     
    AVAILABLE DISK SELECTIONS:
           0. c0t600144F09C96CCA90000518CDEB10005d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
              /scsi_vhci/disk@g600144f09c96cca90000518cdeb10005
           1. c0t600144F09C96CCA90000518CDF100006d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
              /scsi_vhci/disk@g600144f09c96cca90000518cdf100006
           2. c0t600144F09C96CCA90000518CDFB60007d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
              /scsi_vhci/disk@g600144f09c96cca90000518cdfb60007
           3. c0t600144F09C96CCA900005190BFC4000Ad0 <SUN-ZFS Storage 7320-1.0 cyl 8352 alt 2 hd 255 sec 63>
              /scsi_vhci/disk@g600144f09c96cca900005190bfc4000a
           4. c7t0d0 <LSI-MR9261-8i-2.12-28.87GB>
              /pci@0,0/pci8086,340a@3/pci1000,9263@0/sd@0,0
    Specify disk (enter its number): Specify disk (enter its number):
    

    The value after /scsi_vhci/disk@g is the GUID of the LUN you created in Section F.3.1.3, "Creating the Project and LUN." In this example, the disk c0t600144F09C96CCA900005190BFC4000Ad0 with the GUID g600144f09c96cca900005190bfc4000a should be formatted and labelled.

  5. Format the disk by doing the following:

    1. Run the format command to start formatting the disk as follows:

      # format -e c0t600144F09C96CCA900005190BFC4000Ad0
      selecting c0t600144F09C96CCA900005190BFC4000Ad0
      [disk formatted]
      

      The format prompt appears.

    2. Enter fdisk to manipulate the partition tables as follows:

      format> fdisk
      No fdisk table exists. The default partition for the disk is:
       
        a 100% "SOLARIS System" partition
      
    3. When prompted, enter n to edit the partition table.

      Type "y" to accept the default partition, otherwise type "n" to edit the
      partition table. n
      
    4. Enter 1 to set the partition type.

    5. Enter f to set the partition type as EFI (Protective) as follows:

      Select the partition type to create:
         1=SOLARIS2   2=UNIX      3=PCIXOS     4=Other        5=DOS12
         6=DOS16      7=DOSEXT    8=DOSBIG     9=DOS16LBA     A=x86 Boot
         B=Diagnostic C=FAT32     D=FAT32LBA   E=DOSEXTLBA    F=EFI (Protective)
         G=EFI_SYS    0=Exit? f
      
  6. Label the LUN by doing the following:

    1. Enter 6 to update the disk configuration and exit fdisk.

      The format prompt appears.

    2. Enter label to label the disk as follows:

      format> label
      

      The list of label types appears.

    3. Enter 1 to specify the label type as an EFI label as follows:

      [0] SMI Label
      [1] EFI Label
      Specify Label type[1]: 1
      

      A confirmation message appears.

    4. Enter y to continue.

    5. Enter quit to exit the format prompt.

    6. You can run the format command to confirm that the disk is available and is the size you specified in the storage appliance BUI, as follows:

      # format
      Searching for disks...done
       
       
      AVAILABLE DISK SELECTIONS:
             0. c0t600144F09C96CCA90000518CDEB10005d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
                /scsi_vhci/disk@g600144f09c96cca90000518cdeb10005
             1. c0t600144F09C96CCA90000518CDF100006d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
                /scsi_vhci/disk@g600144f09c96cca90000518cdf100006
             2. c0t600144F09C96CCA90000518CDFB60007d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
                /scsi_vhci/disk@g600144f09c96cca90000518cdfb60007
             3. c0t600144F09C96CCA900005190BFC4000Ad0 <SUN-ZFS Storage 7320-1.0-64.00GB>
                /scsi_vhci/disk@g600144f09c96cca900005190bfc4000a
             4. c7t0d0 <LSI-MR9261-8i-2.12-28.87GB>
                /pci@0,0/pci8086,340a@3/pci1000,9263@0/sd@0,0
      Specify disk (enter its number):
      
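Picking the correct disk out of the format listing in step 4 can also be scripted. The sketch below maps a LUN GUID to its cNtNdN device name; the heredoc is a trimmed sample of `echo | format` output, and on a live compute node you would pipe the real command into awk instead.

```shell
# Map a LUN GUID to its cNtNdN device name (sketch). The heredoc is a
# trimmed sample of `echo | format` output.
GUID=g600144f09c96cca900005190bfc4000a
DEVICE=$(awk -v guid="$GUID" '
    /^[[:space:]]*[0-9]+\./      { dev = $2 }   # remember the device name
    index($0, "disk@" guid) > 0  { print dev }  # GUID line follows it
' <<'EOF'
       0. c0t600144F09C96CCA90000518CDEB10005d0 <SUN-ZFS Storage 7320-1.0-64.00GB>
          /scsi_vhci/disk@g600144f09c96cca90000518cdeb10005
       3. c0t600144F09C96CCA900005190BFC4000Ad0 <SUN-ZFS Storage 7320-1.0 cyl 8352 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f09c96cca900005190bfc4000a
EOF
)
echo "$DEVICE"
```

The awk script remembers the device name from each numbered selection line, then prints it when the following path line contains the GUID.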

F.3.1.6 Setting Up the Exclusive 10 GbE Network for the Zone

The zone you want to create should be given access to an exclusive network. You should create the necessary VNICs for the zone by doing the following:

  1. Create a VLAN and VNIC on the switch by running steps 1 - 11 in Section 10.4, "Setting Up Ethernet Over InfiniBand (EoIB) on Oracle Solaris 11.1."

  2. Log in to the compute node as the root user.

  3. Run the dladm show-phys command to identify the physical links of the EoIB devices as in the following example:

    # dladm show-phys 
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net6              Infiniband           up         32000  unknown   ibp1
    net0              Ethernet             up         1000   full      igb0
    net1              Ethernet             unknown    0      unknown   igb1
    net3              Ethernet             unknown    0      unknown   igb3
    net4              Ethernet             up         10     full      usbecm0
    net8              Ethernet             up         10000  full      eoib1
    net2              Ethernet             unknown    0      unknown   igb2
    net5              Infiniband           up         32000  unknown   ibp0
    net9              Ethernet             up         10000  full      eoib0
    

    In this example, net8 and net9 are the physical links of the EoIB devices.

  4. Create a VNIC on the compute node for the first physical link using the dladm create-vnic command as follows:

    # dladm create-vnic -l link_of_eoib0 -v VLAN_ID vnic1_name
    

    Example:

    # dladm create-vnic -l net9 -v 1706 vnic3_1706
    
  5. Create a VNIC for the second physical link using the dladm create-vnic command as follows:

    # dladm create-vnic -l link_of_eoib1 -v VLAN_ID vnic2_name
    

    Example:

    # dladm create-vnic -l net8 -v 1706 vnic2_1706
    
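The link discovery in step 3 and the VNIC creation in steps 4 and 5 can be combined as in the following sketch. The dladm output is a sample, the commands are printed rather than executed, and the VLAN ID and the vnic_<link>_<vlan> naming scheme are assumptions; adjust them for your environment.

```shell
# Find the EoIB links in `dladm show-phys` output and print the matching
# `dladm create-vnic` commands (sketch). The heredoc is sample output.
VLAN_ID=1706
EOIB_LINKS=$(awk '$NF ~ /^eoib/ { print $1 }' <<'EOF'
net8              Ethernet             up         10000  full      eoib1
net9              Ethernet             up         10000  full      eoib0
EOF
)
CMDS=$(for link in $EOIB_LINKS; do
    printf 'dladm create-vnic -l %s -v %s vnic_%s_%s\n' \
        "$link" "$VLAN_ID" "$link" "$VLAN_ID"
done)
printf '%s\n' "$CMDS"
```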

F.3.2 Setting Up a Solaris Zone

With the storage appliance prepared, you can store the zone on the storage appliance and set up an additional bonded network on the 10 GbE Exalogic client network exclusively for the zone.

You can set up the Solaris zone by doing the following:

  1. Creating a Zone

  2. Installing and Booting Up the Zone

F.3.2.1 Creating a Zone

You can create a zone by doing the following:

  1. Log in to the compute node as the root user.

  2. Run the zonecfg command to configure the zone as follows:

    # zonecfg -z zone_name
    

    Example:

    # zonecfg -z zone04
    Use 'create' to begin configuring a new zone.
    

    In this example, the name of the zone you are creating is zone04.

  3. Enter create to begin configuring the zone:

    zonecfg:zone04> create
    create: Using system default template 'SYSdefault'
    
  4. Create the zone by running the following commands:

    zonecfg:zone04> set zonepath=/zones/zone04
    zonecfg:zone04> add rootzpool 
    zonecfg:zone04:rootzpool> add storage iscsi://IPoIB_Address_of_the_storage_Appliance/luname.naa.LUNGUID
    zonecfg:zone04:rootzpool> end
    zonecfg:zone04> remove anet
    zonecfg:zone04> add net
    zonecfg:zone04:net> set physical=vnic1_name
    zonecfg:zone04:net> end
    zonecfg:zone04> add net
    zonecfg:zone04:net> set physical=vnic2_name
    zonecfg:zone04:net> end
    zonecfg:zone04> verify
    zonecfg:zone04> commit
    

    Example:

    zonecfg:zone04> set zonepath=/zones/zone04
    zonecfg:zone04> add rootzpool 
    zonecfg:zone04:rootzpool> add storage iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a
    zonecfg:zone04:rootzpool> end
    zonecfg:zone04> remove anet
    zonecfg:zone04> add net
    zonecfg:zone04:net> set physical=vnic2_1706
    zonecfg:zone04:net> end
    zonecfg:zone04> add net
    zonecfg:zone04:net> set physical=vnic3_1706
    zonecfg:zone04:net> end
    zonecfg:zone04> verify
    zonecfg:zone04> commit
    
  5. You can verify the details of the zone by running the info command as follows:

    zonecfg:zone04> info
    zonename: zone04
    zonepath: /zones/zone04
    brand: solaris
    autoboot: false
    bootargs: 
    file-mac-profile: 
    pool: 
    limitpriv: 
    scheduling-class: 
    ip-type: exclusive
    hostid: 
    fs-allowed: 
    net:
        address not specified
        allowed-address not specified
        configure-allowed-address: true
        physical: vnic1_name
        defrouter not specified
    net:
        address not specified
        allowed-address not specified
        configure-allowed-address: true
        physical: vnic2_name
        defrouter not specified
    rootzpool:
        storage: iscsi://IPoIB_Address_of_the_Storage_Appliance/luname.naa.LUNGUID
    zonecfg:zone04>
    
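The interactive session in steps 2 through 4 can also be driven non-interactively by feeding zonecfg a command file. The sketch below writes such a file; the values match the example above, and the final zonecfg invocation is shown as a comment because it only applies on the compute node.

```shell
# Sketch: script the zonecfg session above with a command file. Values
# match the example; adjust the zonepath, LUN address, and VNIC names.
CMDFILE=$(mktemp)
cat > "$CMDFILE" <<'EOF'
create
set zonepath=/zones/zone04
add rootzpool
add storage iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a
end
remove anet
add net
set physical=vnic2_1706
end
add net
set physical=vnic3_1706
end
verify
commit
EOF
# On the compute node you would then run:
#   zonecfg -z zone04 -f "$CMDFILE"
```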

F.3.2.2 Installing and Booting Up the Zone

Before installing the zone, ensure that a package repository for the Solaris installation is set up and stored on the storage appliance. The zone installation uses this repository to obtain the operating system files for the zone.

  1. Install the zone by running the command as follows:

    # zoneadm -z zone04 install
    
    Configured zone storage resource(s) from:
        iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a
    Created zone zpool: zone04_rpool
    Progress being logged to /var/log/zones/zoneadm.20130513T104657Z.zone04.install
           Image: Preparing at /zones/zone04/root.
     
     AI Manifest: /tmp/manifest.xml.lPaGVo
      SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename: zone04
    Installation: Starting ...
     
                  Creating IPS image
    Startup linked: 1/1 done
                  Installing packages from:
                      exa-family
                        origin:                                     http://localhost:1008/exa-family/acbd22da328c302a86fb9f23d43f5d10f13cf5a6/
                      solaris
                          origin:  http://install1/release/solaris/
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                            185/185   34345/34345  229.7/229.7 10.6M/s
     
    PHASE                                          ITEMS
    Installing new actions                   48269/48269
    Updating package state database                 Done 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Installation: Succeeded
     
            Note: Man pages can be obtained by installing pkg:/system/manual
     
     done.
     
            Done: Installation completed in 81.509 seconds.
     
     
      Next Steps: Boot the zone, then log into the zone console (zlogin -C)
     
                  to complete the configuration process.
     
    Log saved in non-global zone as /zones/zone04/root/var/log/zones/zoneadm.20130513T104657Z.zone04.install
    
  2. Boot up the zone by running the following command:

    # zoneadm -z zone04 boot
    
  3. Once the zone has booted up, log in to the zone using the zlogin command as follows:

    # zlogin zone04
    [Connected to zone 'zone04' pts/7]
    
  4. Bond the VNICs you created as described in step 18 of Section 10.4, "Setting Up Ethernet Over InfiniBand (EoIB) on Oracle Solaris 11.1."

  5. Run the following command to display the bond you created in the previous step:

    root@zone04:~# ipadm show-addr
    ADDROBJ           TYPE     STATE        ADDR
    lo0/v4            static   ok           127.0.0.1/8
    bond1/v4          static   ok           138.3.51.2/22
    lo0/v6            static   ok           ::1/128
    

    Note the IP address of the bond you created.

  6. Run netstat -rn to display the routing table as in the following example:

    root@zone04:~# netstat -rn
     
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref     Use     Interface 
    -------------------- -------------------- ----- ----- ---------- --------- 
    127.0.0.1            127.0.0.1            UH        2          0 lo0       
    138.3.48.0           138.3.51.2           U         2          0 bond1     
     
    Routing Table: IPv6
      Destination/Mask            Gateway                   Flags Ref   Use    If   
    --------------------------- --------------------------- ----- --- ------- ----- 
    ::1                         ::1                         UH      2       0 lo0
    
  7. Add a default route through the gateway of the bond's subnet by running the following command:

    root@zone04:~# route -p add default gateway_IP_address
    

    Example:

    root@zone04:~# route -p add default 138.3.48.1
    add net default: gateway 138.3.48.1
    add persistent net default: gateway 138.3.48.1
    
  8. Display the routing table again to verify that the default route was added, as in the following example:

    root@zone04:~# netstat -rn
     
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref     Use     Interface 
    -------------------- -------------------- ----- ----- ---------- --------- 
    default              138.3.48.1           UG        1          0           
    127.0.0.1            127.0.0.1            UH        2          0 lo0       
    138.3.48.0           138.3.51.2           U         2          0 bond1     
     
    Routing Table: IPv6
      Destination/Mask            Gateway                   Flags Ref   Use    If   
    --------------------------- --------------------------- ----- --- ------- ----- 
    ::1                         ::1                         UH      2       0 lo0
    

F.4 Migrating a Zone to a New Host

You can migrate a zone from one physical host to another by running the following procedure:

Note:

The zone is shut down during the migration process. If you require high availability, ensure you use a clustered software solution.

  1. Log in to the compute node hosting the zone as the root user.

  2. Shut down the zone by running the following command:

    # zoneadm -z name_of_zone shutdown
    

    Example:

    # zoneadm -z zone04 shutdown 
    
  3. Detach the zone by running the following command:

    # zoneadm -z name_of_zone detach
    

    Example:

    # zoneadm -z zone04 detach
    zoneadm: zone 'zone04': warning(s) occured during processing URI: 'iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a'
    Could not remove one or more iSCSI discovery addresses because logical unit is in use
    Exported zone zpool: zone04_rpool
    Unconfigured zone storage resource(s) from:
            iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a
    # 
    
  4. Create a directory on the storage appliance to which you can export the configuration of the zone:

    # mkdir -p directory
    

    Example:

    # mkdir -p /u01/common/general/zone04
    
  5. Export the configuration of the zone by running the following command:

    # zonecfg -z name_of_zone export > directory/name_of_zone.cfg
    
    

    Example:

    # zonecfg -z zone04 export > /u01/common/general/zone04/zone04.cfg
    
  6. Log in to the compute node you want to migrate the zone to as the root user.

  7. Import the zone from the configuration file you created in the previous step by running the following command:

    # zonecfg -z name_of_zone -f directory/name_of_zone.cfg
    

    Example:

    # zonecfg -z zone04 -f /u01/common/general/zone04/zone04.cfg
    
  8. Attach the zone by running the following command:

    # zoneadm -z name_of_zone attach
    

    Example:

    # zoneadm -z zone04 attach 
    Configured zone storage resource(s) from:
        iscsi://192.168.14.133/luname.naa.600144f09c96cca900005190bfc4000a
    Imported zone zpool: zone04_rpool
    Progress being logged to /var/log/zones/zoneadm.20130513T135704Z.zone04.attach
        Installing: Using existing zone boot environment
          Zone BE root dataset: zone04_rpool/rpool/ROOT/solaris
                         Cache: Using /var/pkg/publisher.
      Updating non-global zone: Linking to image /.
    Processing linked: 1/1 done
      Updating non-global zone: Auditing packages.
    No updates necessary for this image.
     
      Updating non-global zone: Zone updated.
                        Result: Attach Succeeded.
    Log saved in non-global zone as /zones/zone04/root/var/log/zones/zoneadm.20130513T135704Z.zone04.attach
    
  9. Boot up the zone by running the following command:

    # zoneadm -z name_of_zone boot
    

    Example:

    # zoneadm -z zone04 boot
    
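The migration steps above can be collected into two small shell functions, one per host, as in the following sketch. The zoneadm and zonecfg invocations are only echoed so the sketch is safe to dry-run anywhere; remove the echo prefixes on real compute nodes. The shared directory path follows the example above.

```shell
# Sketch: steps 2-5 run on the source host, steps 7-9 on the target.
# Commands are echoed rather than executed.
ZONE=zone04
CFGDIR=/u01/common/general/$ZONE

export_zone() {    # run on the source compute node
    echo zoneadm -z "$ZONE" shutdown
    echo zoneadm -z "$ZONE" detach
    echo mkdir -p "$CFGDIR"
    echo "zonecfg -z $ZONE export > $CFGDIR/$ZONE.cfg"
}

import_zone() {    # run on the target compute node
    echo zonecfg -z "$ZONE" -f "$CFGDIR/$ZONE.cfg"
    echo zoneadm -z "$ZONE" attach
    echo zoneadm -z "$ZONE" boot
}

export_zone
import_zone
```

Because the configuration file lives on shared storage, both hosts see the same path, which is what makes the export/import handoff work.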

Note:

In some situations, the process of detaching and attaching can cause the server to boot up with the system configuration wizard running.

You can resolve this issue by logging in to the zone console and completing the wizard. You can use the following command to log in to the zone console:

# zlogin -C name_of_zone