CHAPTER 5

Other Information and Tasks

This chapter contains information and tasks about using the Logical Domains software that are not described in the preceding chapters.


Restrictions on Entering Names in the CLI

The following sections describe the restrictions on entering names in the Logical Domains Manager CLI.

File Names (file) and Variable Names (var_name)

Virtual Disk Server file|device and Virtual Switch device Names

Configuration Name (config_name)

The logical domain configuration name (config_name) that you assign to a configuration stored on the system controller must have no more than 64 characters.

All Other Names

The remainder of the names, such as the logical domain name (ldom), service names (vswitch_name, service_name, vdpcs_service_name, and vcc_name), virtual network name (if_name), and virtual disk name (disk_name), must be in the following format:


Using ldm list Subcommands

This section shows the syntax usage for the ldm subcommands, defines some output terms, such as flags and utilization statistics, and provides examples of the output.

Machine-Readable Output

If you are creating scripts that use ldm list command output, always use the -p option to produce the machine-readable form of the output. See To Generate a Parseable, Machine-Readable List (-p) for more information.
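
For example, the following sketch parses that output with awk to print each domain's name and state; it assumes the VERSION 1.0 field layout shown in EXAMPLE 5-6, where each DOMAIN line carries |-separated name=value fields.

    primary$ ldm list -p | awk -F'|' '
    /^DOMAIN/ {
        name = ""; state = ""
        for (i = 2; i <= NF; i++) {
            split($i, kv, "=")
            if (kv[1] == "name")  name = kv[2]
            if (kv[1] == "state") state = kv[2]
        }
        print name, state
    }'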

procedure icon  To Show Syntax Usage for ldm Subcommands

  •   To look at syntax usage for all ldm subcommands, do the following.


EXAMPLE 5-1   Syntax Usage for All ldm Subcommands  
primary# ldm --help
 
Usage:
 ldm [--help] command [options] [properties] operands
 
Command(s) for each resource (aliases in parens):
 
     bindings
         list-bindings [-e] [-p] [<ldom>...]
 
     services
         list-services [-e] [-p] [<ldom>...]
 
     constraints
         list-constraints ([-x] | [-e] [-p]) [<ldom>...]
 
     devices
         list-devices [-a] [-p] [cpu] [mau] [memory] [io]
 
     domain      ( dom )
         add-domain (-i <file> | mac-addr=<num> <ldom> | <ldom>...)
         remove-domain (-a | <ldom>...)
         list-domain [-e] [-l] [-p] [<ldom>...]
         start-domain start-domain (-a | -i <file> | <ldom>...)
         stop-domain stop-domain [-f] (-a | <ldom>...)
         bind-domain (-i <file> | <ldom>)
         unbind-domain <ldom>
         panic-domain <ldom>
 
     io
         add-io [bypass=on] <bus> <ldom>
         remove-io <bus> <ldom>
 
     mau
         add-mau <number> <ldom>
         set-mau <number> <ldom>
         remove-mau <number> <ldom>
 
     memory      ( mem )
         add-memory <number>[GMK] <ldom>
         set-memory <number>[GMK] <ldom>
         remove-memory <number>[GMK] <ldom>
 
     reconf
         remove-reconf <ldom>
 
     spconfig      ( config )
         add-spconfig <config_name>
         set-spconfig <config_name>
         remove-spconfig <config_name>
         list-spconfig
 
     variable    ( var ) 
         add-variable <var_name>=<value> <ldom>
         set-variable <var_name>=<value> <ldom>
         remove-variable <var_name> <ldom>
         list-variable [<var_name>...] <ldom>
 
     vconscon    ( vcc )
         add-vconscon port-range=<x>-<y> <vcc_name> <ldom>
         set-vconscon port-range=<x>-<y> <vcc_name>
         remove-vconscon [-f] <vcc_name>
 
     vconsole    ( vcons ) 
         set-vcons [port=[<port-num>]] [group=<group>] [service=<vcc_server>] <ldom>
 
     vcpu
         add-vcpu <number> <ldom>
         set-vcpu <number> <ldom>
         remove-vcpu <number> <ldom>
 
     vdisk
         add-vdisk [timeout=<seconds>] <disk_name>  <volume_name>@<service_name> <ldom>
         remove-vdisk [-f] <disk_name> <ldom>
 
     vdiskserver ( vds )
         add-vdiskserver <service_name> <ldom>
         remove-vdiskserver [-f] <service_name>
 
     vdpcc       ( ndpsldcc )
         add-vdpcc <vdpcc_name> <service_name> <ldom>
         remove-vdpcc [-f] <vdpcc_name> <ldom>
 
     vdpcs       ( ndpsldcs )
         add-vdpcs <vdpcs_name> <ldom>
         remove-vdpcs [-f] <vdpcs_name>
 
     vdiskserverdevice   ( vdsdev )
         add-vdiskserverdevice [options=<opts>] <file|device>  <volume_name>@<service_name>
         remove-vdiskserverdevice [-f] <volume_name>@<service_name>
 
     vnet
         add-vnet [mac-addr=<num>] <if_name> <vswitch_name> <ldom>
         set-vnet [mac-addr=<num>] [vswitch=<vswitch_name>] <if_name>  <ldom>
         remove-vnet [-f] <if_name> <ldom>
 
     vswitch     ( vsw )
         add-vswitch [mac-addr=<num>] [net-dev=<device>]  <vswitch_name> <ldom>
         set-vswitch [mac-addr=<num>] [net-dev=<device>] <vswitch_name>
         remove-vswitch [-f] <vswitch_name>
 
Verb aliases:
         Alias          Verb
         -----          -------
         rm             remove
         ls             list
 
Command aliases:
         Alias          Command
         -----          -------
         create         add-domain
         destroy        remove-domain
         cancel-reconf  remove-reconf
         start          start-domain
         stop           stop-domain
         bind           bind-domain
         unbind         unbind-domain
         panic          panic-domain

Flag Definitions

The following flags can be shown in the output for a domain:

-    placeholder
c    control domain
d    delayed reconfiguration
n    normal
s    starting or stopping
t    transition
v    virtual I/O domain

If you use the long (-l) option for the command, the flags are spelled out. If not, you see the letter abbreviation.

The list flag values are position dependent. Following are the values that can appear in each of the five columns from left to right:

Column 1: s or -
Column 2: n or t
Column 3: d or -
Column 4: c or -
Column 5: v or -

Utilization Statistic Definition

The per virtual CPU utilization statistic (UTIL) is shown on the long (-l) option of the ldm list command. The statistic is the percentage of time, since the last statistics display, that the virtual CPU spent executing on behalf of the guest operating system. A virtual CPU is considered to be executing on behalf of the guest operating system except when it has been yielded to the hypervisor. If the guest operating system does not yield virtual CPUs to the hypervisor, the utilization of CPUs in the guest operating system will always show as 100%.

The utilization statistic reported for a logical domain is the average of the virtual CPU utilizations for the virtual CPUs in the domain.

Examples of Various Lists

procedure icon  To Show Software Versions (-V)

  •   To view the currently installed software versions, do the following. You receive a listing similar to the following example.


EXAMPLE 5-2   Software Versions Installed 
primary$ ldm -V
 
Logical Domain Manager (v 1.0.2)
   Hypervisor control protocol v 1.0
 
System PROM:
   Hypervisor   v. 1.5.2           @(#)Hypervisor 1.5.2 2007/09/25 08:39/015
   OpenBoot     v. 4.27.2          @(#)OBP 4.27.2 2007/09/24 16:28

procedure icon  To Generate a Short List

  •   To generate a short list for all domains, do the following.


EXAMPLE 5-3   Short List for All Domains 
primary$ ldm list
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active   -t-cv           4     1G       0.5%  3d 21h 7m
ldg1             active   -t---   5000    8     1G        23%  2m

procedure icon  To Generate a Long List (-l)

  •   To generate a long list for all domains, do the following.


EXAMPLE 5-4   Long List for All Domains  
primary$ ldm list -l
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active   -t-cv           1     768M     0.0%  0s
 
VCPU
    VID    PID    UTIL STRAND
    0      0      0.0%   100%
 
MEMORY
    RA               PA               SIZE
    0x4000000        0x4000000        768M
 
IO
    DEVICE           PSEUDONYM        OPTIONS
    pci@780          bus_a
    pci@7c0          bus_b            bypass=on
 
VCC
    NAME             PORT-RANGE
    vcc0             5000-5100
 
VSW
    NAME             MAC               NET-DEV   DEVICE    MODE
    vsw0             08:00:20:aa:bb:e0 e1000g0   switch@0  prog,promisc
    vsw1             08:00:20:aa:bb:e1                     routed
 
VDS
    NAME             VOLUME         OPTIONS          DEVICE
    vds0             myvol-a        slice            /disk/a
                     myvol-b                         /disk/b
                     myvol-c        ro,slice,excl    /disk/c
    vds1             myvol-d                         /disk/d
 
VDPCS
    NAME
    vdpcs0
    vdpcs1
 
------------------------------------------------------------------------------
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound    -----   5000    1     512M 
 
VCPU
    VID    PID    UTIL STRAND
    0      1             100%
 
MEMORY
    RA               PA               SIZE
    0x4000000        0x34000000       512M
 
NETWORK
    NAME         SERVICE                     DEVICE       MAC
    mynet-b      vsw0@primary                network@0    08:00:20:ab:9a:12
    mynet-a      vsw0@primary                network@1    08:00:20:ab:9a:11
 
DISK
    NAME             VOLUME                      DEVICE     SERVER
    mydisk-a         myvol-a@vds0                disk@0     primary
    mydisk-b         myvol-b@vds0                disk@1     primary
 
VDPCC
    NAME             SERVICE
    myvdpcc-a        vdpcs0@primary
    myvdpcc-b        vdpcs0@primary
 
VCONS
    NAME             SERVICE                     PORT
    mygroup          vcc0@primary                5000 

procedure icon  To Generate an Extended List (-e)

  •   To generate an extended list of all domains, do the following.


EXAMPLE 5-5   Extended List for All Domains
primary$ ldm list -e
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active   -t-cv           1     768M     0.0%  0s
 
VCPU
    VID    PID    UTIL STRAND
    0      0      0.0%   100%
 
MEMORY
    RA               PA               SIZE
    0x4000000        0x4000000        768M
 
IO
    DEVICE           PSEUDONYM        OPTIONS
    pci@780          bus_a
    pci@7c0          bus_b            bypass=on
 
VLDC
    NAME
    primary
 
VCC
    NAME             PORT-RANGE
    vcc0             5000-5100
 
VSW
    NAME             MAC               NET-DEV   DEVICE    MODE
    vsw0             08:00:20:aa:bb:e0 e1000g0   switch@0  prog,promisc
    vsw1             08:00:20:aa:bb:e1                     routed
 
VDS
    NAME             VOLUME         OPTIONS          DEVICE
    vds0             myvol-a        slice            /disk/a
                     myvol-b                         /disk/b
                     myvol-c        ro,slice,excl    /disk/c
    vds1             myvol-d                         /disk/d
 
VDPCS
    NAME
    vdpcs0
    vdpcs1
 
VLDCC
    NAME             SERVICE                     DESC
    hvctl            primary@primary             hvctl 
    vldcc0           primary@primary             ds 
 
------------------------------------------------------------------------------
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound    -----   5000    1     512M 
 
VCPU
    VID    PID    UTIL STRAND
    0      1             100%
 
MEMORY
    RA               PA               SIZE 
    0x4000000        0x34000000       512M
 
VLDCC
 NAME             SERVICE                     DESC
 vldcc0           primary@primary             ds 
 
NETWORK
    NAME         SERVICE                     DEVICE       MAC
    mynet-b      vsw0@primary                network@0    08:00:20:ab:9a:12
    mynet-a      vsw0@primary                network@1    08:00:20:ab:9a:11
 
DISK
    NAME             VOLUME                      DEVICE     SERVER 
    mydisk-a         myvol-a@vds0                disk@0     primary 
    mydisk-b         myvol-b@vds0                disk@1     primary 
 
VDPCC
    NAME             SERVICE 
    myvdpcc-a        vdpcs0@primary 
    myvdpcc-b        vdpcs0@primary 
 
VCONS
    NAME             SERVICE                     PORT
    mygroup          vcc0@primary                5000

procedure icon  To Generate a Parseable, Machine-Readable List (-p)

  •   To generate a parseable, machine-readable list of all domains, do the following.


EXAMPLE 5-6   Machine-Readable List 
primary$ ldm list -p
VERSION 1.0
DOMAIN|name=primary|state=active|flags=-t-cv|cons=|ncpu=1|mem=805306368|util=0.0|uptime=0
DOMAIN|name=ldg1|state=bound|flags=-----|cons=5000|ncpu=1|mem=536870912|util=|uptime=

procedure icon  To Show the Status of a Domain

  •   To look at the status of a domain (for example, guest domain ldg1), do the following.


EXAMPLE 5-7   Domain Status
primary# ldm list-domain ldg1
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             active   -t---   5000    8     1G       0.3%  2m

procedure icon  To List a Variable

  •   To list a variable (for example, boot-device) for a domain (for example, ldg1), do the following.


EXAMPLE 5-8   Variable List for a Domain 
primary$ ldm list-variable boot-device ldg1
boot-device=/virtual-devices@100/channel-devices@200/disk@0:a

procedure icon  To List Bindings

  •   To list resources that are bound for a domain (for example, ldg1) do the following.


EXAMPLE 5-9   Bindings List for a Domain  
primary$ ldm list-bindings ldg1
NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound    -----   5000    1     512M 
 
VCPU
    VID    PID    UTIL STRAND
    0      1             100%
 
MEMORY
    RA               PA               SIZE
    0x4000000        0x34000000       512M
 
NETWORK
    NAME             SERVICE                   DEVICE     MAC
    mynet-b          vsw0@primary              network@0  08:00:20:ab:9a:12
        PEER                        MAC
        vsw0@primary                08:00:20:aa:bb:e0
        mynet-a@ldg1                08:00:20:ab:9a:11
        mynet-c@ldg2                08:00:20:ab:9a:22
    NAME             SERVICE                   DEVICE     MAC
    mynet-a          vsw0@primary              network@1  08:00:20:ab:9a:11
        PEER                        MAC
        vsw0@primary                08:00:20:aa:bb:e0
        mynet-b@ldg1                08:00:20:ab:9a:12
        mynet-c@ldg2                08:00:20:ab:9a:22
 
DISK
    NAME             VOLUME                      DEVICE     SERVER
    mydisk-a         myvol-a@vds0                disk@0     primary
    mydisk-b         myvol-b@vds0                disk@1     primary
 
VDPCC
    NAME             SERVICE
    myvdpcc-a        vdpcs0@primary
    myvdpcc-b        vdpcs0@primary
 
VCONS
    NAME             SERVICE                     PORT
    mygroup          vcc0@primary                5000

procedure icon  To List Configurations

  •   To list logical domain configurations that have been stored on the SC, do the following.


EXAMPLE 5-10   Configurations List 
primary$ ldm list-config
factory-default [current]
initial [next]

Meaning of Labels

The labels to the right of the configuration name mean the following:

  • current - configuration currently being used

  • next - configuration to be used at the next power cycle

procedure icon  To List Devices

  •   To list all server resources, bound and unbound, do the following.


EXAMPLE 5-11   List of All Server Resources  
primary$ ldm list-devices -a
VCPU
    PID  %FREE
    0       0
    1       0
    2       0
    3       0
    4       100
    5       100
    6       100
    7       100
    8       100
    9       100
    10      100
    11      100
    12      100
    13      100
    14      100
    15      100
    16      100
    17      100
    18      100
    19      100
    20      100
    21      100
    22      100
    23      100
    24      100
    25      100
    26      100
    27      100
    28      100
    29      100
    30      100
    31      100
 
MAU
    CPUSET                                  BOUND
    (0, 1, 2, 3)                            ldg2
    (4, 5, 6, 7)
    (8, 9, 10, 11)
    (12, 13, 14, 15)
    (16, 17, 18, 19)
    (20, 21, 22, 23)
    (24, 25, 26, 27)
    (28, 29, 30, 31)
 
MEMORY
    PA                   SIZE            BOUND
    0x0                  512K            _sys_
    0x80000              1536K           _sys_
    0x200000             62M             _sys_
    0x4000000            768M            primary
    0x34000000           512M            ldg1
    0x54000000           8M              _sys_
    0x54800000           2G              ldg2
    0xd4800000           29368M
 
IO
    DEVICE           PSEUDONYM        BOUND   OPTIONS
    pci@780          bus_a            yes 
    pci@7c0          bus_b            yes     bypass=on

procedure icon  To List Services

  •   To list the services that are available, do the following.


EXAMPLE 5-12   Services List
primary$ ldm list-services
VDS
    NAME             VOLUME         OPTIONS          DEVICE
    primary-vds0
VCC
    NAME             PORT-RANGE
    primary-vcc0     5000-5100
VSW
   NAME         MAC               NET-DEV  DEVICE   MODE        
   primary-vsw0 00:14:4f:f9:68:d0 e1000g0  switch@0 prog,promisc

Listing Constraints

To the Logical Domains Manager, constraints are one or more resources that you want to have assigned to a particular domain. Depending on the available resources, you either receive all the resources that you ask to be added to a domain, or you receive none of them. The list-constraints subcommand lists the resources that you have requested to be assigned to the domain.

procedure icon  To List Constraints for One Domain

  •   To list constraints for one domain (for example, ldg1) do the following.


EXAMPLE 5-13   Constraints List for One Domain 
primary$ ldm list-constraints ldg1
DOMAIN
ldg1
 
VCPU
    COUNT
    1
 
MEMORY
    SIZE
    512M
 
NETWORK
    NAME         SERVICE                     DEVICE       MAC
    mynet-b      vsw0                        network@0    08:00:20:ab:9a:12
    mynet-a      vsw0                        network@1    08:00:20:ab:9a:11
 
DISK
    NAME             VOLUME
    mydisk-a         myvol-a@vds0
    mydisk-b         myvol-b@vds0
 
VDPCC
    NAME             SERVICE
    myvdpcc-a        vdpcs0@primary
    myvdpcc-b        vdpcs0@primary
 
VCONS
    NAME             SERVICE
    mygroup          vcc0

procedure icon  To List Constraints in XML Format

  •   To list constraints in XML format for a particular domain (for example, ldg1), do the following.


EXAMPLE 5-14   Constraints for a Domain in XML Format  
primary$ ldm list-constraints -x ldg1
<?xml version="1.0"?>
<LDM_interface version="1.0">
  <data version="2.0">
    <ldom>
      <ldom_info>
        <ldom_name>ldg1</ldom_name>
      </ldom_info>
      <cpu>
        <number>8</number>
      </cpu>
      <memory>
        <size>1G</size>
      </memory>
      <network>
        <vnet_name>vnet0</vnet_name>
        <service_name>primary-vsw0</service_name>
        <mac_address>01:14:4f:fa:0f:55</mac_address>
      </network>
      <disk>
        <vdisk_name>vdisk0</vdisk_name>
        <service_name>primary-vds0</service_name>
        <vol_name>vol0</vol_name>
      </disk>
      <var>
        <name>boot-device</name>
        <value>/virtual-devices@100/channel-devices@200/disk@0:a</value>
      </var>
      <var>
        <name>nvramrc</name>
        <value>devalias vnet0 /virtual-devices@100/channel-devices@200/network@0</value>
      </var>
      <var>
        <name>use-nvramrc?</name>
        <value>true</value>
      </var>
    </ldom>
  </data>
</LDM_interface>
 

procedure icon  To List Constraints in a Machine-Readable Format

  •   To list constraints for all domains in a parseable format, do the following.


EXAMPLE 5-15   Constraints for All Domains in a Machine-Readable Format  
primary$ ldm list-constraints -p
VERSION 1.0
DOMAIN|name=primary
MAC|mac-addr=00:03:ba:d8:b1:46
VCPU|count=4
MEMORY|size=805306368
IO
|dev=pci@780|alias=
|dev=pci@7c0|alias=
VDS|name=primary-vds0
|vol=disk-ldg2|opts=|dev=/ldoms/nv72-ldg2/disk
|vol=vol0|opts=|dev=/ldoms/nv72-ldg1/disk
VCC|name=primary-vcc0|port-range=5000-5100
VSW|name=primary-vsw0|mac-addr=|net-dev=e1000g0|dev=switch@0
DOMAIN|name=ldg1
VCPU|count=8
MEMORY|size=1073741824
VARIABLES
|boot-device=/virtual-devices@100/channel-devices@200/disk@0:a
|nvramrc=devalias vnet0 /virtual-devices@100/channel-devices@200/network@0
|use-nvramrc?=true
VNET|name=vnet0|dev=network@0|service=primary-vsw0|mac-addr=01:14:4f:fa:0f:55
VDISK|name=vdisk0|vol=vol0@primary-vds0


The ldm stop-domain Command Can Time Out If the Domain Is Heavily Loaded

An ldm stop-domain command can time out before the domain completes shutting down. When this happens, an error similar to the following is returned by the Logical Domains Manager:


LDom ldg8 stop notification failed

However, the domain could still be processing the shutdown request. Use the ldm list-domain command to verify the status of the domain. For example:


# ldm list-domain ldg8
NAME         STATE   FLAGS  CONS   VCPU MEMORY  UTIL UPTIME
ldg8         active  s----  5000   22   3328M   0.3% 1d 14h 31m

The preceding list shows the domain as active, but the s flag indicates that the domain is in the process of stopping. This should be a transitory state.

The following example shows the domain has now stopped:


# ldm list-domain ldg8
NAME         STATE   FLAGS  CONS   VCPU MEMORY  UTIL UPTIME
ldg8         bound   -----  5000   22   3328M


Determining the Solaris Network Interface Name Corresponding to a Virtual Network Device

There is no way to determine directly from the output of the ldm list-* commands which Solaris OS network interface name on a guest corresponds to a given virtual network device. However, you can determine it by combining the output of the ldm list -l command with the entries under /devices on the Solaris OS guest.

procedure icon  To Find Solaris OS Network Interface Name

In this example, guest domain ldg1 contains two virtual network devices, net-a and net-c. To find the Solaris OS network interface name in ldg1 that corresponds to net-c, do the following.

  1. Use the ldm command to find the virtual network device instance for net-c.


    # ldm list -l ldg1
    ...
    NETWORK
    NAME         SERVICE                     DEVICE       MAC
    net-a        primary-vsw0@primary        network@0    00:14:4f:f8:91:4f
    net-c        primary-vsw0@primary        network@2    00:14:4f:f8:dd:68
    ...
    #
    

    The virtual network device instance for net-c is network@2.

  2. To find the corresponding network interface on ldg1, log into ldg1 and find the entry for this instance under /devices.


    # uname -n
    ldg1
    # find /devices/virtual-devices@100 -type c -name network@2\*
    /devices/virtual-devices@100/channel-devices@200/network@2:vnet1
    #
    

    The network interface name is the part of the entry after the colon; that is, vnet1.

  3. Plumb vnet1 to see that it has the MAC address 00:14:4f:f8:dd:68 as shown in the ldm list -l output for net-c in Step 1.


    # ifconfig vnet1
    vnet1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
              inet 0.0.0.0 netmask 0
              ether 0:14:4f:f8:dd:68
    #
    


Assigning MAC Addresses Automatically or Manually

You must have enough media access control (MAC) addresses to assign to the number of logical domains, virtual switches, and virtual networks you are going to use. You can have the Logical Domains Manager automatically assign MAC addresses to a logical domain, a virtual network (vnet), and a virtual switch (vswitch), or you can manually assign MAC addresses from your own pool of assigned MAC addresses. The ldm subcommands that set MAC addresses are add-domain, add-vsw, set-vsw, add-vnet, and set-vnet. If you do not specify a MAC address in these subcommands, the Logical Domains Manager assigns one automatically.

The advantage to having the Logical Domains Manager assign the MAC addresses is that it utilizes the block of MAC addresses dedicated for use with logical domains. Also, the Logical Domains Manager detects and prevents MAC address collisions with other Logical Domains Manager instances on the same subnet. This frees you from having to manually manage your pool of MAC addresses.

MAC address assignment happens as soon as a logical domain is created or a network device is configured into a domain. In addition, the assignment is persistent until the device, or the logical domain itself, is removed.

The following topics are addressed in this section:

Range of MAC Addresses Assigned to Logical Domains Software

Logical domains have been assigned the following block of 512K MAC addresses:

00:14:4F:F8:00:00 - 00:14:4F:FF:FF:FF

The lower 256K addresses are used by the Logical Domains Manager for automatic MAC address allocation, and you cannot manually request an address in this range:

00:14:4F:F8:00:00 - 00:14:4F:FB:FF:FF

You can use the upper half of this range for manual MAC address allocation:

00:14:4F:FC:00:00 - 00:14:4F:FF:FF:FF
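
For example, a hypothetical manual assignment from this range when adding a virtual network device (the names vnet1, primary-vsw0, and ldg1 are placeholders):

    primary# ldm add-vnet mac-addr=00:14:4f:fc:00:01 vnet1 primary-vsw0 ldg1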

Automatic Assignment Algorithm

When you do not specify a MAC address when creating a logical domain or a network device, the Logical Domains Manager automatically allocates and assigns a MAC address to that logical domain or network device. To obtain this MAC address, the Logical Domains Manager iteratively attempts to select an address and then checks for potential collisions.

Before selecting a potential address, the Logical Domains Manager first looks to see if it has a recently freed, automatically assigned address saved in a database for this purpose (see Freed MAC Addresses). If so, the Logical Domains Manager selects its candidate address from the database.

If no recently freed addresses are available, the MAC address is randomly selected from the 256K range of addresses set aside for this purpose. The MAC address is selected randomly to lessen the chance of a duplicate MAC address being selected as a candidate.

The address selected is then checked against other Logical Domains Managers on other systems to prevent duplicate MAC addresses from actually being assigned. The algorithm employed is described in Duplicate MAC Address Detection. If the address is already assigned, the Logical Domains Manager iterates, choosing another address, and again checking for collisions. This continues until a MAC address is found that is not already allocated, or a time limit of 30 seconds has elapsed. If the time limit is reached, then the creation of the device fails, and an error message similar to the following is shown:


Automatic MAC allocation failed.  Please set the vnet MAC address manually.

Duplicate MAC Address Detection

To prevent the same MAC address from being allocated to different devices, one Logical Domains Manager checks with the Logical Domains Managers on other systems by sending a multicast message over the control domain's default network interface, including the address that it wants to assign to the device. The Logical Domains Manager attempting to assign the MAC address waits one second for a response. If a different device on another LDoms-enabled system has already been assigned that MAC address, the Logical Domains Manager on that system sends back a response containing the MAC address in question. If the requesting Logical Domains Manager receives a response, it knows the chosen MAC address has already been allocated, chooses another, and iterates.

By default, these multicast messages are sent only to other managers on the same subnet; the default time-to-live (TTL) is 1. The TTL can be configured using the Service Management Facility (SMF) property ldmd/hops.
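
For example, a sketch of raising the TTL to 4, assuming the Logical Domains Manager runs under the SMF service svc:/ldoms/ldmd:default:

    primary# svccfg -s ldmd setprop ldmd/hops = 4
    primary# svcadm refresh ldmd
    primary# svcadm restart ldmd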

Each Logical Domains Manager is responsible for:

If the Logical Domains Manager on a system is shut down for any reason, duplicate MAC addresses could occur while the Logical Domains Manager is down.

Automatic MAC allocation occurs at the time the logical domain or network device is created and persists until the device or the logical domain is removed.

Freed MAC Addresses

When a logical domain or a device associated with an automatic MAC address is removed, that MAC address is saved in a database of recently freed MAC addresses for possible later use on that system. These MAC addresses are saved to prevent the exhaustion of Internet Protocol (IP) addresses from a Dynamic Host Configuration Protocol (DHCP) server. When DHCP servers allocate IP addresses, they do so for a period of time (the lease time). The lease duration is often configured to be quite long, generally hours or days. If network devices are created and removed at a high rate without the Logical Domains Manager reusing automatically allocated MAC addresses, the number of MAC addresses allocated could soon overwhelm a typically configured DHCP server.

When a Logical Domains Manager is requested to automatically obtain a MAC address for a logical domain or network device, it first looks to the freed MAC address database to see whether there is a previously assigned MAC address it can reuse. If a MAC address is available from this database, the duplicate MAC address detection algorithm is run. If the MAC address has not been assigned to someone else since it was freed, it is reused and removed from the database. If a collision is detected, the address is simply removed from the database. The Logical Domains Manager then either tries the next address in the database or, if none is available, randomly picks a new MAC address.

Manual Allocation of MAC Addresses

The following procedure tells you how to create a manual MAC address.

procedure icon  To Allocate a MAC Address Manually

  1. Convert the subnet portion of the IP address of the physical host into hexadecimal format and save the result.


    # grep $hostname /etc/hosts | awk '{print $1}' | awk -F. '{printf("%x",$4)}'
    27
    

  2. Determine the number of domains present excluding the control domain.


    # /opt/SUNWldm/bin/ldm list-domain
    NAME          STATE   FLAGS  CONS   VCPU  MEMORY  UTIL  UPTIME
    primary       active  -n-cv  SP     4     768M    0.3%  4h 54m
    myldom1       active  -n---  5000   2     512M    1.9%  1h 12m
    

    There is one guest domain, and you need to include the domain you want to create, so the domain count is 2.

  3. Append the converted IP address (27) to the vendor string (0x08020ab) followed by 10 plus the number of logical domains (2 in this example), which equals 12.


    0x08020ab and 27 and 12 = 0x08020ab2712 or 8:0:20:ab:27:12
    


CPU and Memory Address Mapping

The Solaris Fault Management Architecture (FMA) reports CPU errors in terms of physical CPU numbers and memory errors in terms of physical memory addresses.

If you want to determine within which logical domain an error occurred and the corresponding virtual CPU number or real memory address within the domain, then you must perform a mapping.

CPU Mapping

The domain and the virtual CPU number within the domain, which correspond to a given physical CPU number, can be determined with the following procedures.

procedure icon  To Determine the CPU Number

  1. Generate a long parseable list for all domains.


    primary$ ldm ls -l -p
    

  2. Look for the entry in the list’s VCPU sections that has a pid field equal to the physical CPU number.

    1. If you find such an entry, the CPU is in the domain the entry is listed under, and the virtual CPU number within the domain is given by the entry’s vid field.

    2. If you do not find such an entry, the CPU is not in any domain.
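
For example, a quick sketch that narrows the parseable list to the entries of interest for physical CPU 5; because the pid field is delimited by | characters on both sides, the pattern matches that CPU only:

    primary$ ldm ls -l -p | egrep '^DOMAIN|[|]pid=5[|]'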

Memory Mapping

The domain and the real memory address within the domain, which correspond to a given physical memory address (PA), can be determined as follows.

procedure icon  To Determine the Real Memory Address

  1. Generate a long parseable list for all domains.


    primary$ ldm ls -l -p
    

  2. Look for the line in the list's MEMORY sections where the PA falls within the inclusive range pa to (pa + size - 1): that is, pa <= PA <= (pa + size - 1).

    Here pa and size refer to the values in the corresponding fields of the line.

    1. If you find such an entry, the PA is in the domain the entry is listed under and the corresponding real address within the domain is given by ra + (PA - pa).

    2. If you do not find such an entry, the PA is not in any domain.

Examples of CPU and Memory Mapping

Suppose you have a logical domain configuration as shown in EXAMPLE 5-16, and you want to determine the domain and the virtual CPU corresponding to physical CPU number 5, and the domain and the real address corresponding to physical address 0x7e816000.

Looking through the VCPU entries in the list for the one with the pid field equal to 5, you can find the following entry under logical domain ldg1:


|vid=1|pid=5|util=29|strand=100

Hence, physical CPU number 5 is in domain ldg1, and within that domain it has virtual CPU number 1.

Looking through the MEMORY entries in the list, you can find the following entry under domain ldg2:


ra=0x8000000|pa=0x78000000|size=1073741824

Here, 0x78000000 <= 0x7e816000 <= (0x78000000 + 1073741824 - 1); that is, pa <= PA <= (pa + size - 1). Hence, the PA is in domain ldg2, and the corresponding real address is 0x8000000 + (0x7e816000 - 0x78000000) = 0xe816000.


EXAMPLE 5-16   Long Parseable List of Logical Domains Configurations 
primary$ ldm ls -l -p
VERSION 1.0
DOMAIN|name=primary|state=active|flags=normal,control,vio-service|cons=SP|ncpu=4|mem=1073741824|util=0.6|uptime=64801|softstate=Solaris running
VCPU
|vid=0|pid=0|util=0.9|strand=100
|vid=1|pid=1|util=0.5|strand=100
|vid=2|pid=2|util=0.6|strand=100
|vid=3|pid=3|util=0.6|strand=100
MEMORY
|ra=0x8000000|pa=0x8000000|size=1073741824
IO
|dev=pci@780|alias=bus_a
|dev=pci@7c0|alias=bus_b
VDS|name=primary-vds0|nclients=2
|vol=disk-ldg1|opts=|dev=/opt/ldoms/testdisk.1
|vol=disk-ldg2|opts=|dev=/opt/ldoms/testdisk.2
VCC|name=primary-vcc0|nclients=2|port-range=5000-5100
VSW|name=primary-vsw0|nclients=2|mac-addr=00:14:4f:fb:42:5c|net-dev=e1000g0|dev=switch@0|mode=prog,promisc
VCONS|type=SP
DOMAIN|name=ldg1|state=active|flags=normal|cons=5000|ncpu=2|mem=805306368|util=29|uptime=903|softstate=Solaris running
VCPU
|vid=0|pid=4|util=29|strand=100
|vid=1|pid=5|util=29|strand=100
MEMORY
|ra=0x8000000|pa=0x48000000|size=805306368
VARIABLES
|auto-boot?=true
|boot-device=/virtual-devices@100/channel-devices@200/disk@0
VNET|name=net|dev=network@0|service=primary-vsw0@primary|mac-addr=00:14:4f:f9:8f:e6
VDISK|name=vdisk-1|vol=disk-ldg1@primary-vds0|dev=disk@0|server=primary
VCONS|group=group1|service=primary-vcc0@primary|port=5000
DOMAIN|name=ldg2|state=active|flags=normal|cons=5001|ncpu=3|mem=1073741824|util=35|uptime=775|softstate=Solaris running
VCPU
|vid=0|pid=6|util=35|strand=100
|vid=1|pid=7|util=34|strand=100
|vid=2|pid=8|util=35|strand=100
MEMORY
|ra=0x8000000|pa=0x78000000|size=1073741824
VARIABLES
|auto-boot?=true
|boot-device=/virtual-devices@100/channel-devices@200/disk@0
VNET|name=net|dev=network@0|service=primary-vsw0@primary|mac-addr=00:14:4f:f9:8f:e7
VDISK|name=vdisk-2|vol=disk-ldg2@primary-vds0|dev=disk@0|server=primary
VCONS|group=group2|service=primary-vcc0@primary|port=5000


Configuring Split PCI Express Bus to Use Multiple Logical Domains



Note - For Sun UltraSPARC T2-based servers, such as the Sun SPARC Enterprise T5120 and T5220 servers, you would assign a Network Interface Unit (NIU) to the logical domain rather than use this procedure.



The PCI Express (PCI-E) bus on a Sun UltraSPARC T1-based server consists of two ports with various leaf devices attached to them. These are identified on a server with the names pci@780 (bus_a) and pci@7c0 (bus_b). In a multidomain environment, the PCI-E bus can be programmed to assign each leaf to a separate domain using the Logical Domains Manager. Thus, you can enable more than one domain with direct access to physical devices instead of using I/O virtualization.

When the Logical Domains system is powered on, the control (primary) domain uses all the physical device resources, so the primary domain owns both the PCI-E bus leaves.



caution icon

Caution - All internal disks on the supported servers are connected to a single leaf. If a control domain is booted from an internal disk, do not remove that leaf from the domain. Also, ensure that you are not removing the leaf with the primary network port. If you remove the wrong leaf from the control or service domain, that domain would not be able to access required devices and would become unusable. If the primary network port is on a different bus than the system disk, then move the network cable to an onboard network port and use the Logical Domains Manager to reconfigure the virtual switch (vsw) to reflect this change.



procedure icon  To Create a Split PCI Configuration

The example shown here is for a Sun Fire T2000 server. This procedure also can be used on other Sun UltraSPARC T1-based servers, such as a Sun Fire T1000 server or a Netra T2000 server. The instructions for different servers might vary slightly from these, but you can obtain the basic principles from the example. Mainly, you need to retain the leaf that has the boot disk, remove the other leaf from the primary domain, and assign it to another domain.

  1. Verify that the primary domain owns both leaves of the PCI Express bus.


    primary# ldm list-bindings primary
    ...
    IO
        DEVICE           PSEUDONYM        OPTIONS
        pci@780          bus_a
        pci@7c0          bus_b
    ...
    

  2. Determine the device path of the boot disk, which needs to be retained.


    primary# df /
    /                  (/dev/dsk/c1t0d0s0 ): 1309384 blocks   457028 files
    

  3. Determine the physical device to which the block device c1t0d0s0 is linked.


    primary# ls -l /dev/dsk/c1t0d0s0
    lrwxrwxrwx   1 root     root          65 Feb  2 17:19 /dev/dsk/c1t0d0s0 -> ../
    ../devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0:a
    

    In this example, the physical device for the boot disk for domain primary is under the leaf pci@7c0, which corresponds to our earlier listing of bus_b. This means that we can assign bus_a (pci@780) of the PCI-Express bus to another domain.

  4. Check /etc/path_to_inst to find the physical path of the onboard network ports.


    primary# grep e1000g /etc/path_to_inst
    

  5. Remove the leaf that does not contain the boot disk (pci@780 in this example) from the primary domain.


    primary# ldm remove-io pci@780 primary
    

  6. Add this split PCI configuration (split-cfg in this example) to the system controller.


    primary# ldm add-config split-cfg
    

    This configuration (split-cfg) is also set as the next configuration to be used after the reboot.



    Note - Currently, there is a limit of 8 configurations that can be saved on the SC, not including the factory-default configuration.



  7. Reboot the primary domain so that the change takes effect.


    primary# shutdown -i6 -g0 -y
    

  8. Add the leaf (pci@780 in this example) to the domain (ldg1 in this example) that needs direct access.


    primary# ldm add-io pci@780 ldg1
    Notice: the LDom Manager is running in configuration mode. Any
    configuration changes made will only take effect after the machine
    configuration is downloaded to the system controller and the
    host is reset.
    

    If you have an Infiniband card, you might need to enable the bypass mode on the pci@780 bus. See Enabling the I/O MMU Bypass Mode on a PCI Bus for information on whether you need to enable the bypass mode.

  9. Reboot domain ldg1 so that the change takes effect.

    All domains must be inactive for this reboot. If you are configuring this domain for the first time, the domain will be inactive.


    ldg1# shutdown -i6 -g0 -y
    

  10. Confirm that the correct leaf is still assigned to the primary domain and the correct leaf is assigned to domain ldg1.


    primary# ldm list-bindings primary
    NAME          STATE   FLAGS  CONS   VCPU  MEMORY  UTIL  UPTIME
    primary       active  -n-cv  SP     4     4G      0.4%  18h 25m
    ...
    IO
        DEVICE           PSEUDONYM        OPTIONS
        pci@7c0          bus_b
    ...
    ----------------------------------------------------------------
    NAME          STATE   FLAGS  CONS   VCPU  MEMORY  UTIL  UPTIME
    ldg1          active  -n---  5000   4     2G      10%   35m
    ...
    IO
        DEVICE           PSEUDONYM        OPTIONS
        pci@780          bus_a
    ...
    

    This output confirms that the PCI-E leaf bus_b and the devices below it are assigned to domain primary, and bus_a and its devices are assigned to ldg1.


Enabling the I/O MMU Bypass Mode on a PCI Bus

If you have an InfiniBand host channel adapter (HCA) card, you might need to turn the I/O memory management unit (MMU) bypass mode on. By default, Logical Domains software controls PCI-E transactions so that a given I/O device or PCI-E option can access only the physical memory assigned within the I/O domain. Any attempt to access memory of another guest domain is prevented by the I/O MMU. This provides a higher level of security between the I/O domain and all other domains. However, in the rare case where a PCI-E or PCI-X option card does not load or operate with the I/O MMU bypass mode off, this option allows you to turn the I/O MMU bypass mode on. However, if you turn the bypass mode on, there is no longer hardware-enforced protection of memory accesses from the I/O domain.

The bypass=on option turns on the I/O MMU bypass mode. This bypass mode should be enabled only if the respective I/O domain and I/O devices within that I/O domain are trusted by all guest domains. This example turns on the bypass mode.


primary# ldm add-io bypass=on pci@780 ldg1

The output shows bypass=on under OPTIONS.


Using Console Groups

The virtual network terminal server daemon, vntsd(1M), enables you to provide access for multiple domain consoles using a single TCP port. At the time of domain creation, the Logical Domains Manager assigns a unique TCP port to each console by creating a new default group for that domain’s console. The TCP port is then assigned to the console group as opposed to the console itself. The console can be bound to an existing group using the set-vcons subcommand.

procedure icon  To Combine Multiple Consoles Into One Group

  1. Bind the consoles for the domains into one group.

    The following example shows binding the console for three different domains (ldg1, ldg2, and ldg3) to the same console group (group1).


    primary# ldm set-vcons group=group1 service=primary-vcc0 ldg1
    primary# ldm set-vcons group=group1 service=primary-vcc0 ldg2
    primary# ldm set-vcons group=group1 service=primary-vcc0 ldg3
    

  2. Connect to the associated TCP port (localhost at port 5000 in this example).


    # telnet localhost 5000
    primary-vnts-group1: h, l, c{id}, n{name}, q:
    

    You are prompted to select one of the domain consoles.

  3. List the domains within the group by selecting l (list).


    primary-vnts-group1: h, l, c{id}, n{name}, q: l
    DOMAIN ID           DOMAIN NAME                   DOMAIN STATE
    0                   ldg1                          online
    1                   ldg2                          online
    2                   ldg3                          online
    



    Note - To re-assign the console to a different group or vcc instance, the domain must be unbound; that is, it has to be in the inactive state. Refer to the Solaris 10 OS vntsd(1M) man page for more information on configuring and using SMF to manage vntsd and using console groups.




Moving a Logical Domain From One Server to Another

You can move a logical domain that is not running from one server to another. If you set up the same domain on both servers before the move, the domain is easier to move. In fact, you do not have to move the domain itself; you only have to unbind and stop the domain on one server and then bind and start the domain on the other server.

procedure icon  To Set Up Domains to Move

  1. Create a domain with the same name on two servers; for example, create domainA1 on serverA and serverB.

  2. Add a virtual disk server device and a virtual disk to both servers. The virtual disk server opens the underlying device for export as part of the bind.

  3. Bind the domain only on one server; for example, serverA. Leave the domain inactive on the other server.

procedure icon  To Move the Domain

  1. Unbind and stop the domain on serverA.

  2. Bind and start the domain on serverB.
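
For example, a sketch of these two steps using the domain name from the setup procedure (domainA1):

    serverA# ldm stop-domain domainA1
    serverA# ldm unbind-domain domainA1

    serverB# ldm bind-domain domainA1
    serverB# ldm start-domain domainA1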

Bind the Domain

Note - No resources are used until you bind the domain.




Removing Logical Domains

This section describes how to remove all guest domains and revert to a single OS instance that controls the whole server.

procedure icon  To Remove All Guest Logical Domains

  1. List all the logical domain configurations on the system controller.


    primary# ldm ls-config
    

  2. Remove all configurations (config_name) previously saved to the system controller (SC). Use the following command for each such configuration.


    primary# ldm rm-config config_name
    

    Once you remove all the configurations previously saved to the SC, the factory-default configuration is the next one used when the control domain (primary) is rebooted.

  3. Stop all guest domains using the -a option.


    primary# ldm stop-domain -a
    

  4. List all domains to see all the resources attached to guest domains.


    primary# ldm ls
    

  5. Release all the resources attached to guest domains. To do this, use the ldm unbind-domain command for each guest domain (ldom) configured in your system.



    Note - You might not be able to unbind an I/O domain in a split-PCI configuration if it is providing services required by the control domain. In this situation, skip this step.




    primary# ldm unbind-domain ldom
    

  6. Stop the control domain.


    primary# shutdown -i1 -g0 -y
    

  7. Power-cycle the system controller so that the factory-default configuration is reloaded.


    sc> poweroff
    sc> poweron
    


Operating the Solaris OS With Logical Domains

This section describes the changes in behavior in using the Solaris OS that occur once a configuration created by the Logical Domains Manager is instantiated; that is, domaining is enabled.



Note - Any discussion about whether domaining is enabled pertains only to Sun UltraSPARC T1–based platforms. Otherwise, domaining is always enabled.



OpenBoot Firmware Not Available After Solaris OS Has Started If Domaining Is Enabled

If domaining is enabled, the OpenBoot firmware is not available after the Solaris OS has started, because it is removed from memory.

To reach the ok prompt from the Solaris OS, you must halt the domain. You can use the Solaris OS halt command to halt the domain.

Power-Cycling a Server

Whenever performing any maintenance on a system running LDoms software that requires power-cycling the server, you must save your current logical domain configurations to the SC first.

procedure icon  To Save Your Current Logical Domain Configurations to the SC

  •   Use the following command.


    # ldm add-config config_name
    

Result of an OpenBoot power-off Command

The OpenBoot power-off command does not power down a system. To power down a system while in OpenBoot firmware, use your system controller's or service processor's poweroff command. The OpenBoot power-off command displays the following message:


NOTICE: power-off command is not supported, use appropriate
NOTICE: command on System Controller to turn power off.

Result of Solaris OS Breaks

If domaining is not enabled, the Solaris OS normally goes to the OpenBoot prompt after a break is issued. The behavior described in this section is seen in two situations:

  1. You press the L1-A key sequence when the input device is set to keyboard.

  2. You enter the send break command when the virtual console is at the telnet prompt.

If domaining is enabled, you receive the following prompt after these types of breaks.


c)ontinue, s)ync, r)eboot, h)alt?

Type the letter that represents what you want the system to do after these types of breaks.

Results from Halting or Rebooting the Control Domain

The following table shows the expected behavior of halting or rebooting the control (primary) domain.



Note - The question in TABLE 5-1 regarding whether domaining is enabled pertains only to the Sun UltraSPARC T1 processors. Otherwise, domaining is always enabled.




TABLE 5-1   Expected Behavior of Halting or Rebooting the Control (primary) Domain  
Command: halt

  Domaining disabled (other domain configured: N/A)
    For Sun UltraSPARC T1 Processors: Drops to the ok prompt.

  Domaining enabled, no other domain configured
    For Sun UltraSPARC T1 Processors: System either resets and goes to the
    OpenBoot ok prompt or goes to the following prompt:
    r)eboot, o)k prompt, or h)alt?
    For Sun UltraSPARC T2 Processors: Host powered off and stays off until
    powered on at the SC.

  Domaining enabled, other domain configured
    Soft resets and boots up if the variable auto-boot?=true. Soft resets and
    halts at the ok prompt if the variable auto-boot?=false.

Command: reboot

  Domaining disabled (other domain configured: N/A)
    For Sun UltraSPARC T1 Processors: Powers off and powers on the host.

  Domaining enabled, no other domain configured
    For Sun UltraSPARC T1 Processors: Powers off and powers on the host.
    For Sun UltraSPARC T2 Processors: Reboots the host; no power off.

  Domaining enabled, other domain configured
    For Sun UltraSPARC T1 Processors: Powers off and powers on the host.
    For Sun UltraSPARC T2 Processors: Reboots the host; no power off.

Command: shutdown -i 5

  Domaining disabled (other domain configured: N/A)
    For Sun UltraSPARC T1 Processors: Powers off the host.

  Domaining enabled, no other domain configured
    Host powered off and stays off until powered on at the SC.

  Domaining enabled, other domain configured
    Soft resets and reboots.

Some format(1M) Command Options Do Not Work With Virtual Disks

The Solaris OS format(1M) command does not work in a guest domain with virtual disks:

For getting or setting the volume table of contents (VTOC) of a virtual disk, use the prtvtoc(1M) command and fmthard(1M) command instead of the format(1M) command. You also can use the format(1M) command from the service domain on the real disks.
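
For example, a sketch of saving and then rewriting a virtual disk's VTOC from the guest domain; the device name c0d0s2 is a placeholder for the virtual disk's backup slice:

    ldg1# prtvtoc /dev/rdsk/c0d0s2 > /tmp/c0d0s2.vtoc
    ldg1# fmthard -s /tmp/c0d0s2.vtoc /dev/rdsk/c0d0s2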


Using LDoms With ALOM CMT

This section describes information to be aware of when using Advanced Lights Out Manager (ALOM) chip multithreading (CMT) with the Logical Domains Manager. For more information about using the ALOM CMT software, refer to the Advanced Lights Out Management (ALOM) CMT v1.3 Guide.



caution icon

Caution - The ALOM CMT documentation refers to only one domain, so you must be aware that the Logical Domains Manager is introducing multiple domains. If a logical domain is restarted, I/O services for guest domains might be unavailable until the control domain has restarted. This is because the control domain functions as a service domain in the Logical Domains Manager 1.0.2 software. Guest domains appear to freeze during the reboot process. Once the control domain has fully restarted, the guest domains resume normal operations. It is only necessary to shut down guest domains when power is going to be removed from the entire server.



An additional option is available to the existing ALOM CMT command.


bootmode [normal|reset_nvram|bootscript="string"|config="config-name"]

The config="config-name" option enables you to set the configuration to be used at the next power on to another configuration, including the factory-default shipping configuration.

You can invoke the command whether the host is powered on or off. It takes effect on the next host reset or power on.

procedure icon  To Reset the Logical Domain Configuration to the Default or Another Configuration
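
The following is a sketch of one way to use this option from the ALOM CMT prompt; factory-default is the shipping configuration, and any configuration name previously saved with ldm add-config can be used instead.

    sc> bootmode config="factory-default"
    sc> poweroff
    sc> poweron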


Enabling and Using BSM Auditing

The Logical Domains Manager uses the Solaris OS Basic Security Module (BSM) auditing capability. BSM auditing provides the means to examine the history of actions and events on your control domain to determine what happened. The history is kept in a log of what was done, when it was done, by whom, and what was affected.

If you want to use this auditing capability, this section describes how to enable, verify, disable, print output, and rotate audit logs. You can find further information about BSM auditing in the Solaris 10 System Administration Guide: Security Services.

You can enable BSM auditing in one of two ways. When you want to disable auditing, be sure to use the same method that you used to enable it. The two methods are:

  •   Use the enable-bsm.fin finish script in the Solaris Security Toolkit.

  •   Use the Solaris OS bsmconv(1M) command.

Here are the procedures for both methods.

procedure icon  To Use the enable-bsm.fin Finish Script

  1. Copy the ldm_control-secure.driver to my-ldm.driver, where my-ldm.driver is the name for your copy of the ldm_control-secure.driver.

  2. Copy the ldm_control-config.driver to my-ldm-config.driver, where my-ldm-config.driver is the name for your copy of the ldm_control-config.driver.

  3. Copy the ldm_control-hardening.driver to my-ldm-hardening.driver, where my-ldm-hardening.driver is the name for your copy of the ldm_control-hardening.driver.

  4. Edit my-ldm.driver to refer to the new configuration and hardening drivers, my-ldm-config.driver and my-ldm-hardening.driver, respectively.

  5. Edit my-ldm-hardening.driver, and remove the pound sign (#) from in front of the following line in the driver.


    enable-bsm.fin
    

  6. Execute my-ldm.driver.


    # /opt/SUNWjass/bin/jass-execute -d my-ldm.driver
    

  7. Reboot the Solaris OS for auditing to take effect.

procedure icon  To Use the Solaris OS bsmconv(1M) Command

  1. Add vs in the flags: line of the /etc/security/audit_control file.

  2. Run the bsmconv(1M) command.


    # /etc/security/bsmconv
    

    For more information about this command, refer to the Solaris 10 Reference Manual Collection or the man page.

  3. Reboot the Solaris Operating System for auditing to take effect.

procedure icon  To Verify that BSM Auditing is Enabled

  1. Type the following command.


    # auditconfig -getcond
    

  2. Check that audit condition = auditing appears in the output.

procedure icon  To Disable Auditing

You can disable auditing in one of two ways, depending on how you enabled it. See Enabling and Using BSM Auditing.

  1. Do one of the following.

    • Undo the Solaris Security Toolkit hardening run which enabled BSM auditing.


      # /opt/SUNWjass/bin/jass-execute -u
      

    • Use the Solaris OS bsmunconv(1M) command.


      # /etc/security/bsmunconv
      

  2. Reboot the Solaris OS for the disabling of auditing to take effect.

procedure icon  To Print Audit Output
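
One way to print the audit records, sketched here on the assumption that the vs audit class was enabled as described above, is to select records with auditreduce(1M) and format them with praudit(1M); the date in the second command is a placeholder that limits output to records created after that time.

    # auditreduce -c vs | praudit
    # auditreduce -c vs -a 20071001000000 | praudit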

procedure icon  To Rotate Audit Logs
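
A sketch of rotating the audit logs with the audit(1M) command, which directs the audit daemon to close the current audit file and open a new one:

    # audit -n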


Configuring Virtual Switch and Service Domain for NAT and Routing

The virtual switch (vswitch) is a layer 2 switch that can also be used as a network device in the service domain. The virtual switch can be configured to act only as a switch between the virtual network (vnet) devices in the various logical domains, with no connectivity to a network outside the box through a physical device. In this mode, plumbing the vswitch as a network device and enabling IP routing in the service domain allows the virtual networks to communicate outside the box using the service domain as a router. This mode of operation is essential for providing external connectivity to the domains when the physical network adapter is not GLDv3-compliant.

The advantages of this configuration are:

procedure icon  To Set Up the Virtual Switch to Provide External Connectivity to Domains

  1. Create a virtual switch with no associated physical device.

    If assigning an address, ensure that the virtual switch has a unique MAC address.


    primary# ldm add-vsw [mac-addr=xx:xx:xx:xx:xx:xx] primary-vsw0 primary
    

  2. Plumb the virtual switch as a network device in addition to the physical network device being used by the domain.

    See To Configure the Virtual Switch as the Primary Interface for more information about plumbing the virtual switch.

  3. Configure the virtual switch device for DHCP, if needed.

    See To Configure the Virtual Switch as the Primary Interface for more information about configuring the virtual switch device for DHCP.

  4. Create the /etc/dhcp.vsw file, if needed.

  5. Configure IP routing in the service domain, and set up required routing tables in all the domains.

    For information about how to do this, refer to the section on “Packet Forwarding and Routing on IPv4 Networks” in Chapter 5 “Configuring TCP/IP Network Services and IPv4 Administration” in the System Administration Guide: IP Services in the Solaris Express System Administrator Collection.
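
For Step 5, a minimal sketch of enabling IPv4 forwarding and routing in the service domain with routeadm(1M); setting up the routing tables in the other domains is site-specific and not shown:

    primary# routeadm -e ipv4-forwarding
    primary# routeadm -e ipv4-routing
    primary# routeadm -u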


Using ZFS With Virtual Disks

The following topics regarding using the Zettabyte File System (ZFS) with virtual disks on logical domains are described in this section:

Creating a Virtual Disk on Top of a ZFS Volume

The following procedure describes how to create a ZFS volume in a service domain and make that volume available to other domains as a virtual disk. In this example, the service domain is the same as the control domain and is named primary. The guest domain is named ldg1 as an example. The prompts in each step show in which domain to run the command.

procedure icon  To Create a Virtual Disk on Top of a ZFS Volume

  1. Create a ZFS storage pool (zpool).


    primary# zpool create -f tank1 c2t42d1
    

  2. Create a ZFS volume.


    primary# zfs create -V 100m tank1/myvol
    

  3. Verify that the zpool (tank1 in this example) and ZFS volume (tank1/myvol in this example) have been created.


    primary# zfs list
            NAME                   USED  AVAIL  REFER  MOUNTPOINT
            tank1                  100M  43.0G  24.5K  /tank1
            tank1/myvol           22.5K  43.1G  22.5K  -
    

  4. Configure a service exporting tank1/myvol as a virtual disk.


    primary# ldm add-vdsdev /dev/zvol/rdsk/tank1/myvol zvol@primary-vds0
    

  5. Add the exported disk to another domain (ldg1 in this example).


    primary# ldm add-vdisk vzdisk zvol@primary-vds0 ldg1
    

  6. On the other domain (ldg1 in this example), start the domain and ensure that the new virtual disk is visible (you might have to run the devfsadm command).

    In this example, the new disk appears as /dev/rdsk/c2d2s0.


    ldg1# newfs /dev/rdsk/c2d2s0
    newfs: construct a new file system /dev/rdsk/c2d2s0: (y/n)? y
    Warning: 4096 sector(s) in last cylinder unallocated
    Warning: 4096 sector(s) in last cylinder unallocated
    /dev/rdsk/c2d2s0: 204800 sectors in 34 cylinders of 48 tracks, 128 sectors
    100.0MB in 3 cyl groups (14 c/g, 42.00MB/g, 20160 i/g) super-block backups
    (for fsck -F ufs -o b=#) at: 32, 86176, 172320,
     
    ldg1# mount /dev/dsk/c2d2s0 /mnt
     
    ldg1# df -h /mnt
    Filesystem             size   used   avail capacity  Mounted on
    /dev/dsk/c2d2s0         93M   1.0M     82M       2%  /mnt
    



    Note - A ZFS volume is exported to a logical domain as a virtual disk slice. Therefore, you cannot use the format command on a zvol-backed virtual disk, nor can you install the Solaris OS on it.



Using ZFS Over a Virtual Disk

The following procedure shows how to use ZFS directly from a domain on top of a virtual disk. You can create ZFS pools, file systems, and volumes over the top of virtual disks with the Solaris 10 OS zpool(1M) and zfs(1M) commands. Although the storage backend is different (virtual disks instead of physical disks), there is no change to the usage of ZFS.

Additionally, if you already have a ZFS file system, you can export it from a service domain for use in another domain.

In this example, the service domain is the same as the control domain and is named primary. The guest domain is named ldg1 as an example. The prompts in each step show in which domain to run the command.

procedure icon  To Use ZFS Over a Virtual Disk

  1. Create a ZFS pool (tank in this example), and then verify that it has been created.


    primary# zpool create -f tank c2t42d0
    primary# zpool list
    NAME                SIZE   USED  AVAIL   CAP   HEALTH  ALTROOT
    tank               43.8G   108K  43.7G    0%   ONLINE  -      
    

  2. Create a ZFS file system (tank/test in this example), and then verify that it has been created.

    In this example, the file system is created on top of disk c2t42d0 by running the following command on the service domain.


    primary# zfs create tank/test
    primary# zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    tank                   106K  43.1G  25.5K  /tank
    tank/test             24.5K  43.1G  24.5K  /tank/test
    

  3. Export the ZFS pool (tank in this example).


    primary# zpool export tank
    

  4. Configure a service exporting the physical disk c2t42d0s2 as a virtual disk.


    primary# ldm add-vdsdev /dev/rdsk/c2t42d0s2 volz@primary-vds0
    

  5. Add the exported disk to another domain (ldg1 in this example).


    primary# ldm add-vdisk vdiskz volz@primary-vds0 ldg1
    

  6. On the other domain (ldg1 in this example), start the domain and make sure the new virtual disk is visible (you might have to run the devfsadm command), and then import the ZFS pool.


    ldg1# zpool import tank
    ldg1# zpool list
    NAME            SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
    tank           43.8G    214K    43.7G    0%   ONLINE   -      
     
    ldg1# zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    tank                   106K  43.1G  25.5K  /tank
    tank/test             24.5K  43.1G  24.5K  /tank/test
     
    ldg1# df -hl -F zfs
    Filesystem             size   used  avail capacity  Mounted on
    tank                    43G    25K    43G     1%    /tank
    tank/test               43G    24K    43G     1%    /tank/test
    

    The ZFS pool (tank in this example) and its file systems are now imported and usable from domain ldg1.

Using ZFS for Boot Disks

You can use large files in a ZFS file system as the virtual disks (including boot disks) in logical domains.



Note - A ZFS file system requires more memory in the service domain. Take this into account when configuring the service domain.



ZFS enables:

procedure icon  To Use ZFS for Boot Disks

You can use the following procedure to create ZFS disks for logical domains, and also snapshot and clone them for other domains.

  1. On the primary domain, reserve an entire disk or slice for use as the storage for the ZFS pool. Step 2 uses slice 5 of a disk.

  2. Create a ZFS pool; for example, ldomspool.


    # zpool create ldomspool /dev/dsk/c0t0d0s5
    

  3. Create a ZFS file system for the first domain (ldg1 in this example).


    # zfs create ldomspool/ldg1
    

  4. Create a file to be the disk for this domain.


    # mkfile 1G /ldomspool/ldg1/bootdisk
    

  5. Specify the file as the device to use when creating the domain.


    primary# ldm add-vdsdev /ldomspool/ldg1/bootdisk vol1@primary-vds0
    primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
    

  6. Boot domain ldg1 and net install to vdisk1. This file functions as a full disk and can have partitions, such as separate partitions for root, usr, home, dump, and swap.
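    For example, a minimal sketch that binds and starts the domain from the control domain and then connects to its console for the network installation (the console port 5000 is an assumption; use the port assigned to your domain).


    primary# ldm bind-domain ldg1
    primary# ldm start-domain ldg1
    primary# telnet localhost 5000
    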

  7. Once the installation is complete, snapshot the file system.


    # zfs snapshot ldomspool/ldg1@initial
    



    Note - Taking the snapshot before the domain reboots means that the domain state is not saved as part of the snapshot or in any clones created from the snapshot.



  8. Create additional clones from the snapshot, and use them as the boot disks for other domains (ldg2 and ldg3 in this example); an example of exporting a cloned boot disk follows the clone commands.


    # zfs clone ldomspool/ldg1@initial ldomspool/ldg2
    # zfs clone ldomspool/ldg1@initial ldomspool/ldg3
    
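    For example, the cloned boot disk file can be exported to domain ldg2 in the same way as in Step 5 (the volume name vol2 shown here is an illustrative choice).


    primary# ldm add-vdsdev /ldomspool/ldg2/bootdisk vol2@primary-vds0
    primary# ldm add-vdisk vdisk1 vol2@primary-vds0 ldg2
    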

  9. Verify that everything was created successfully.


    # zfs list
       NAME                       USED  AVAIL  REFER  MOUNTPOINT     
       ldomspool                 1.07G  2.84G  28.5K  /ldomspool     
       ldomspool/ldg1            1.03G  2.84G  1.00G  /ldomspool/ldg1
       ldomspool/ldg1@initial    23.0M      -  1.00G  -              
       ldomspool/ldg2            23.2M  2.84G  1.00G  /ldomspool/ldg2
       ldomspool/ldg3            21.0M  2.84G  1.00G  /ldomspool/ldg3
    



    Note - Ensure that the ZFS pool has enough space for the clones that are being created. ZFS uses copy-on-write and allocates space from the pool only when the blocks in a clone are modified. Even after booting the domain, the clones use only a small percentage of the space needed for the disk, because most of the OS binaries are the same as those in the initial snapshot.




Using Volume Managers in a Logical Domains Environment

The following topics are described in this section:

Using Virtual Disks on Top of Volume Managers

Any Zettabyte File System (ZFS), Solaris™ Volume Manager (SVM), or Veritas Volume Manager (VxVM) volume can be exported from a service domain to a guest domain as a virtual disk. The exported volume appears in the guest domain as a virtual disk with a single slice (s0).



Note - The remainder of this section uses an SVM volume as an example. However, the discussion also applies to ZFS and VxVM volumes.



For example, if a service domain exports the SVM volume /dev/md/dsk/d0 to domain1 and domain1 sees that virtual disk as /dev/dsk/c0d2*, then domain1 only has an s0 device; that is, /dev/dsk/c0d2s0.

The virtual disk in the guest domain (for example, /dev/dsk/c0d2s0) is directly mapped to the associated volume (for example, /dev/md/dsk/d0), and data stored on the virtual disk from the guest domain is stored directly on the associated volume with no extra metadata. Data stored on the virtual disk by the guest domain can therefore also be accessed directly from the service domain through the associated volume.

Examples:

Using Virtual Disks on Top of SVM

When a RAID or mirror SVM volume is used as a virtual disk by another domain and there is a failure on one of the components of the SVM volume, recovery of the SVM volume by using the metareplace command or a hot spare does not start. The metastat command sees the volume as resynchronizing, but the resynchronization does not progress.

For example, /dev/md/dsk/d0 is a RAID SVM volume exported as a virtual disk to another domain, and d0 is configured with some hot-spare devices. If a component of d0 fails, SVM replaces the failing component with a hot spare and attempts to resynchronize the SVM volume. However, the resynchronization does not start: the volume is reported as resynchronizing, but the resynchronization does not progress.


# metastat d0
d0: RAID
    State: Resyncing
    Hot spare pool: hsp000
    Interlace: 32 blocks
    Size: 20097600 blocks (9.6 GB)
Original device:
    Size: 20100992 blocks (9.6 GB)
Device                                     Start Block  Dbase   State Reloc
c2t2d0s1                                           330  No       Okay  Yes
c4t12d0s1                                          330  No       Okay  Yes
/dev/dsk/c10t600C0FF0000000000015153295A4B100d0s1  330  No  Resyncing  Yes

In such a situation, the domain using the SVM volume as a virtual disk has to be stopped and unbound to complete the resynchronization; see the following example.
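For example, assuming the guest domain using the volume is named domain1, stop and unbind the domain from the control domain.


primary# ldm stop-domain domain1
primary# ldm unbind-domain domain1

Then the SVM volume can be resynchronized using the metasync command.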


# metasync d0

Using Virtual Disks When VxVM Is Installed

When the Veritas Volume Manager (VxVM) is installed on your system, you have to ensure that Veritas Dynamic Multipathing (DMP) is not enabled on the physical disks or partitions you want to export as virtual disks. Otherwise, you receive an error in /var/adm/messages while binding a domain that uses such a disk.


vd_setup_vd():  ldi_open_by_name(/dev/dsk/c4t12d0s2) = errno 16
vds_add_vd():  Failed to add vdisk ID 0

You can check if Veritas DMP is enabled by checking multipathing information in the output of the command vxdisk list; for example:


# vxdisk list Disk_3
Device:    Disk_3
devicetag: Disk_3
type:      auto
info:      format=none
flags:     online ready private autoconfig invalid
pubpaths:  block=/dev/vx/dmp/Disk_3s2 char=/dev/vx/rdmp/Disk_3s2
guid:      -
udid:      SEAGATE%5FST336753LSUN36G%5FDISKS%5F3032333948303144304E0000
site:      -
Multipathing information:
numpaths:  1
c4t12d0s2  state=enabled

If Veritas DMP is enabled on a disk or a slice that you want to export as a virtual disk, then you must disable DMP using the vxdmpadm command. For example:


# vxdmpadm -f disable path=/dev/dsk/c4t12d0s2

Using Volume Managers on Top of Virtual Disks

This section describes the following situations in the Logical Domains environment:

Using ZFS on Top of Virtual Disks

Any virtual disk can be used with ZFS. A ZFS storage pool (zpool) can be imported in any domain that sees all the storage devices that are part of this zpool, regardless of whether the domain sees all these devices as virtual devices or real devices.

Using SVM on Top of Virtual Disks

Any virtual disk can be used in the SVM local disk set. For example, a virtual disk can be used for storing the SVM meta database (metadb) of the local disk set or for creating SVM volumes in the local disk set.
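For example, a minimal sketch, assuming the virtual disk appears in the guest domain as c2d2 and that slice 0 is dedicated to the state database replica.


# metadb -a -f c2d2s0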

Currently, you can use virtual disks only with the local disk set, not with any shared disk set (metaset). Virtual disks cannot be added to an SVM shared disk set. Trying to add a virtual disk to an SVM shared disk set fails with an error similar to the following.


# metaset -s test -a c2d2
metaset: domain1: test: failed to reserve any drives

Using VxVM on Top of Virtual Disks

VxVM does not currently work with virtual disks. The VxVM software can be installed in a domain that has virtual disks, but VxVM is unable to see any of the available virtual disks.


Configuring IPMP in a Logical Domains Environment

Internet Protocol Network Multipathing (IPMP) provides fault-tolerance and load balancing across multiple network interface cards. By using IPMP, you can configure one or more interfaces into an IP multipathing group. After configuring IPMP, the system automatically monitors the interfaces in the IPMP group for failure. If an interface in the group fails or is removed for maintenance, IPMP automatically migrates, or fails over, the failed interface’s IP addresses. In a Logical Domains environment, either the physical or virtual network interfaces can be configured for failover using IPMP.

Configuring Virtual Network Devices into an IPMP Group in a Logical Domain

A logical domain can be configured for fault tolerance by configuring its virtual network devices into an IPMP group. When setting up an IPMP group with virtual network devices in an active-standby configuration, set up the group to use probe-based detection. Link-based detection and failover currently are not supported for virtual network devices in Logical Domains 1.0.2 software.
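For example, a minimal sketch of an active-standby, probe-based configuration in the guest domain, assuming the virtual network interfaces appear as vnet0 and vnet1 and using illustrative addresses (192.168.1.10 is the data address; 192.168.1.11 and 192.168.1.12 are non-failover test addresses).


# ifconfig vnet0 plumb 192.168.1.10 netmask 255.255.255.0 broadcast + group ipmp0 up
# ifconfig vnet0 addif 192.168.1.11 netmask 255.255.255.0 broadcast + deprecated -failover up
# ifconfig vnet1 plumb 192.168.1.12 netmask 255.255.255.0 broadcast + deprecated -failover group ipmp0 standby up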

The following diagram shows two virtual networks (vnet1 and vnet2) connected to separate virtual switch instances (vsw0 and vsw1) in the service domain, which, in turn, use two different physical interfaces (e1000g0 and e1000g1). In the event of a physical interface failure, the IP layer in LDom_A detects the failure and loss of connectivity on the corresponding vnet through probe-based detection, and automatically fails over to the secondary vnet device.

FIGURE 5-1   Two Virtual Networks Connected to Separate Virtual Switch Instances




Further reliability can be achieved in the logical domain by connecting each virtual network device (vnet0 and vnet1) to virtual switch instances in different service domains (as shown in the following diagram). Two service domains (Service_1 and Service_2) with virtual switch instances (vsw1 and vsw2) can be set up using a split-PCI configuration. In this case, in addition to network hardware failure, LDom_A can detect virtual network failure and trigger a failover following a service domain crash or shutdown.

FIGURE 5-2   Each Virtual Network Device Connected to Different Service Domains




Refer to the Solaris 10 System Administration Guide: IP Services for more information about how to configure and use IPMP groups.

Configuring and Using IPMP in the Service Domain

Network failure detection and recovery can also be set up in a Logical Domains environment by configuring the physical interfaces in the service domain into an IPMP group. To do this, configure the virtual switch in the service domain as a network device, and configure the service domain itself to act as an IP router. (Refer to the Solaris 10 System Administration Guide: IP Services for information about setting up IP routing.)

Once configured, the virtual switch sends all packets originating from the virtual networks and destined for an external machine to its IP layer, instead of sending them directly through the physical device. In the event of a physical interface failure, the IP layer detects the failure and automatically reroutes packets through the secondary interface.

Because the physical interfaces are configured directly into an IPMP group, the group can be set up for either link-based or probe-based detection. The following diagram shows two network interfaces (e1000g0 and e1000g1) configured as part of an IPMP group. The virtual switch instance (vsw0) has been plumbed as a network device to send packets to its IP layer.

FIGURE 5-3   Two Network Interfaces Configured as Part of IPMP Group
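For example, a minimal sketch of a link-based configuration in the service domain, assuming the physical interfaces are e1000g0 and e1000g1 and using an illustrative data address (no test addresses are needed for link-based detection).


primary# ifconfig e1000g0 plumb 192.168.1.1 netmask 255.255.255.0 broadcast + group ipmp0 up
primary# ifconfig e1000g1 plumb group ipmp0 standby up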