Logical Domains 1.2 Release Notes

General Issues

This section describes general known issues about this release of LDoms software that are broader than a specific bug number. Workarounds are provided where available.

Service Processor and System Controller Are Interchangeable Terms

For discussions in Logical Domains documentation, the terms service processor (SP) and system controller (SC) are interchangeable.

Cards Not Supported

The following cards are not supported for this LDoms 1.2 software release:


Caution –

If these unsupported configurations are used with LDoms 1.2, stop and unbind all logical domains before the control domain is rebooted. Failure to do so can result in a system crash causing the loss of all the logical domains that are active in the system.


In Certain Conditions, a Guest Domain's SVM Configuration or Metadevices Can Be Lost

If a service domain is running a version of Solaris 10 OS prior to Solaris 10 5/09 and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Solaris 10 5/09, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.

This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, this can cause the Solaris Volume Manager (SVM) to be unable to find its configuration or to access its metadevices.

Workaround: After upgrading a service domain to Solaris 10 5/09, if a guest domain is unable to find its SVM configuration or its metadevices, execute the following procedure.

Procedure: Find a Guest Domain's SVM Configuration or Metadevices

  1. Boot the guest domain.

  2. Disable the devid feature of SVM by adding the following lines to the /kernel/drv/md.conf file:


    md_devid_destroy=1;
    md_keep_repl_state=1;
  3. Reboot the guest domain.

    After the domain has booted, the SVM configuration and metadevices should be available.

  4. Check the SVM configuration and ensure that it is correct.

  5. Re-enable the SVM devid feature by removing from the /kernel/drv/md.conf file the two lines that you added in Step 2.

  6. Reboot the guest domain.

    During the reboot, you will see messages similar to the following:


    NOTICE: mddb: unable to get devid for 'vdc', 0x10

    These messages are normal and do not report any problems.
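
For reference, the following is a minimal shell sketch of Steps 2 and 3, assuming a guest domain named ldg1 and that you append the lines directly to the file (keep a backup copy first):


    ldg1# cp /kernel/drv/md.conf /kernel/drv/md.conf.orig
    ldg1# echo "md_devid_destroy=1;" >> /kernel/drv/md.conf
    ldg1# echo "md_keep_repl_state=1;" >> /kernel/drv/md.conf
    ldg1# reboot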

Logical Domain Channels (LDCs) and Logical Domains

There is a limit to the number of LDCs available in any logical domain. For UltraSPARC T1 based platforms, that limit is 256. For all other platforms, the limit is 512. This only becomes an issue on the control domain because the control domain has at least part, if not all, of the I/O subsystem allocated to it. This might also be an issue because of the potentially large number of LDCs that are created for both virtual I/O data communications and the Logical Domains Manager control of the other logical domains.


Note –

The examples in this section show what happens on UltraSPARC T1 based platforms. However, the behavior is the same if you exceed the limit on other supported platforms.


If you try to add a service or bind a domain so that the number of LDCs exceeds the limit on the control domain, the operation fails with an error message similar to the following:


13 additional LDCs are required on guest primary to meet this request,
but only 9 LDCs are available

The following guidelines can help prevent creating a configuration that could overflow the LDC capabilities of the control domain:

  1. The control domain allocates 12 LDCs for various communication purposes with the hypervisor, Fault Management Architecture (FMA), and the system controller (SC), independent of the number of other logical domains configured.

  2. The control domain allocates 1 LDC to every logical domain, including itself, for control traffic.

  3. Each virtual I/O service on the control domain consumes 1 LDC for every connected client of that service.

For example, consider a control domain and 8 additional logical domains. Each logical domain needs at least a virtual network, a virtual disk, and a virtual console.

Applying the above guidelines yields the following results (numbers in parentheses correspond to the preceding guideline number from which the value was derived):

12(1) + 9(2) + (8 x 3)(3) = 45 LDCs in total.

Now consider the case where there are 32 domains instead of 8, and each domain includes 3 virtual disks, 3 virtual networks, and a virtual console. Now the equation becomes:

12 + 33 + (32 x 7) = 269 LDCs in total.

Depending on the number of LDCs supported on your platform, the Logical Domains Manager will either accept or reject such a configuration.
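
As a quick check, you can reproduce these totals with ordinary shell arithmetic. The sketch below restates the three guidelines for the 32-domain example; ndoms and per_domain are illustrative names, not ldm parameters:


    $ ndoms=32
    $ per_domain=7
    $ echo $(( 12 + (ndoms + 1) + ndoms * per_domain ))
    269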

Memory Size Requirements

Logical Domains software does not impose a memory size limitation when creating a domain. The memory size requirement is a characteristic of the guest operating system. Some Logical Domains functionality might not work if the amount of memory present is less than the recommended size. For recommended and minimum size memory requirements, refer to the installation guide for the operating system you are using. Refer to System Requirements and Recommendations in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

The OpenBoot™ PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you create a domain smaller than that size, the Logical Domains Manager automatically boosts the size of the domain to 12 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.

Booting a Large Number of Domains

You can boot the following number of domains, depending on your platform:

If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread over all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Do not have more than 32 vnet instances per vsw service because having more than that tied to a single vsw could cause hard hangs in the service domain.
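
For example, the following hypothetical ldm commands create the 4 vsw services on a Sun SPARC Enterprise T5240 server and attach the first guest domain's vnet instance; the nxge device names and the ldg1 domain name are illustrative:


    primary# ldm add-vsw net-dev=nxge0 primary-vsw0 primary
    primary# ldm add-vsw net-dev=nxge1 primary-vsw1 primary
    primary# ldm add-vsw net-dev=nxge2 primary-vsw2 primary
    primary# ldm add-vsw net-dev=nxge3 primary-vsw3 primary
    primary# ldm add-vnet vnet0 primary-vsw0 ldg1

Attach domains 1 through 32 to primary-vsw0, domains 33 through 64 to primary-vsw1, and so on, so that no vsw serves more than 32 vnet instances.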

To run the maximum configurations, a machine needs the following amount of memory, depending on your platform, so that the guest domains contain an adequate amount of memory:

Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This is due to the peer-to-peer links between all the vnets connected to the vsw. The service domain benefits from having extra memory. Four Gbytes is the recommended minimum when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains.

Cleanly Shutting Down and Power Cycling a Logical Domains System

If you have made any configuration changes since you last saved a configuration to the SC, make sure that you save the latest configuration that you want to keep before you attempt to power off or power cycle a Logical Domains system.
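
For example, the following commands save the current configuration to the SC under the hypothetical name config-new and verify that it was stored:


    primary# ldm add-config config-new
    primary# ldm list-config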

Procedure: Power Off a System With Multiple Active Domains

  1. Shut down and unbind all the non-I/O domains.

  2. Shut down and unbind any active I/O domains.

  3. Halt the primary domain.

    Because no other domains are bound, the firmware automatically powers off the system.

Procedure: Power Cycle the System

  1. Shut down and unbind all the non-I/O domains.

  2. Shut down and unbind any active I/O domains.

  3. Reboot the primary domain.

    Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the Logical Domains configuration last saved or explicitly set.
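
The following is a minimal command sketch of both procedures, assuming a single guest domain named ldg1; repeat the stop and unbind steps for every non-I/O and I/O domain:


    primary# ldm stop-domain ldg1
    primary# ldm unbind-domain ldg1
    primary# shutdown -i0 -g0 -y

To power cycle the system rather than power it off, replace the final command with a reboot of the primary domain, for example, shutdown -i6 -g0 -y.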

Memory Size Requested Might Be Different From Memory Allocated

Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. This can be seen in the following example output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:


Memory:
          Constraints: 1965 M
          raddr          paddr           size
          0x1000000      0x291000000     1968M
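
In this example, the 1965-Mbyte request is not a multiple of 4 Mbytes, so it is rounded up to the next 4-Mbyte multiple, which can be verified with shell arithmetic:


    $ echo $(( (1965 + 3) / 4 * 4 ))
    1968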

Logical Domain Variable Persistence

With domaining enabled, variable updates persist across a reboot, but not across a powercycle, unless the variable updates are either initiated from OpenBoot firmware on the control domain, or followed by saving the configuration to the SC.

In this context, it is important to note that a reboot of the control domain could initiate a powercycle of the system: if the control domain reboots while no other domains are bound, the firmware automatically power cycles the system.

LDom variables for a domain can be specified using any of the following methods: at the OpenBoot prompt, by using the Solaris OS eeprom(1M) command, or by using the Logical Domains Manager CLI (ldm).

The goal is that variable updates made by using any of these methods always persist across reboots of the domain and are always reflected in any subsequent logical domain configurations that are saved to the SC.
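
For example, each of the following commands sets the boot-device variable for a hypothetical guest domain named ldg1; saving the configuration afterward ensures that the update also survives a powercycle:


    primary# ldm set-variable boot-device=vdisk0 ldg1
    ldg1# eeprom boot-device=vdisk0
    primary# ldm add-config config-new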

In LDoms 1.2 software, there are a few cases where variable updates do not persist as expected:

The following Bug IDs have been filed to resolve these issues: 6520041, 6540368, 6540937, and 6590259.

Sun SNMP Management Agent Does Not Support Multiple Domains

Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.

The sysfwdownload Utility Takes Significantly Longer to Run While LDoms Is Enabled on Certain Systems

On systems based on UltraSPARC T1 processors, the sysfwdownload utility takes significantly longer to run from within a Logical Domains environment while the LDoms software is enabled.

Workaround: Boot to the factory-default configuration with the LDoms software disabled before using the utility.

Containers, Processor Sets, and Pools Are Not Compatible With CPU Power Management

Using CPU dynamic reconfiguration (DR) to power down virtual CPUs does not work with processor sets, resource pools, or the zone's dedicated CPU feature.

When using CPU power management in elastic mode, the Solaris OS guest sees only those CPUs that are allocated to the domain and currently powered on. That means that output from the psrinfo(1M) command changes dynamically, depending on the number of CPUs currently power-managed. This causes an issue with processor sets and pools, which require actual CPU IDs to be static to allow allocation to their sets. This can also impact the zone's dedicated CPU feature.

Workaround: Set the performance mode for the power management policy.
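
On systems managed by Integrated Lights Out Manager (ILOM), the policy can typically be changed from the service processor. The following sketch assumes the ILOM 3.0 /SP/powermgmt target; verify the exact target and property names in your ILOM documentation:


    -> set /SP/powermgmt policy=performance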

CPU Power Management Does Not Occur When Any Domain Is in a Transition State

A domain is in transition when booting, shutting down, at the ok prompt, or in the kernel debugger. Use the ldm list command to determine whether a guest domain is in the transition state. The command output shows a t flag for any domain that is in the transition state. To enable CPU Power Management for the other domains, boot the guest domain that is in the transition state, or use the ldm stop command to stop that guest domain.
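
For example, check the FLAGS column of the ldm list output for a t flag, and then stop the domain that is in transition (ldg1 is a hypothetical domain name):


    primary# ldm list
    primary# ldm stop-domain ldg1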

Fault Management

There are several issues associated with FMA and power-managing CPUs. If a CPU faults when running in elastic mode, switch to performance mode until the faulted CPU recovers. If all faulted CPUs recover, then elastic mode can be used again.

For more information about faulted resources, see the OpenSolaris Fault Management web page.

Delayed Reconfiguration

When a primary domain is in a delayed reconfiguration state, CPUs are power managed only after the primary domain reboots. This means that CPU power management will not bring additional CPUs online when the domain is experiencing high-load usage until the primary domain reboots, clearing the delayed reconfiguration state.

Domain Migration in Elastic Mode Is Not Supported

Domain migrations are not supported for a source or target machine in elastic mode. If a migration is underway while in performance mode and the power management policy is set to elastic mode, the policy switch is deferred until the migration completes. The migration command returns an error if either the source or target machine is in elastic mode and a domain migration is attempted.

Cryptographic Units

The power management feature requires CPU dynamic reconfiguration (DR) to function. Therefore, do not use the power management feature in Integrated Lights Out Manager (ILOM) if your domains are to have cryptographic units bound. Currently, Solaris OS support for the dynamic reconfiguration of cryptographic units does not permit CPU DR without a guest reboot.