
Known Issues

This section contains general issues and specific bugs concerning the Oracle VM Server for SPARC 2.2 software.

General Issues

This section describes general known issues about this release of the Oracle VM Server for SPARC software that are broader than a specific bug number. Workarounds are provided where available.

Upgrading From Oracle Solaris 10 OS Older Than Oracle Solaris 10 5/08 OS

If the control domain is upgraded from an Oracle Solaris 10 OS version older than Oracle Solaris 10 5/08 OS (or without patch 127127-11), and if volume manager volumes were exported as virtual disks, the virtual disk back ends must be re-exported with options=slice after the Logical Domains Manager has been upgraded. See Exporting Volumes and Backward Compatibility in Oracle VM Server for SPARC 2.2 Administration Guide.
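
For example, a volume can be re-exported as follows. This is a minimal sketch that assumes a volume manager device of /dev/md/dsk/d0 previously exported as vol1 from the primary-vds0 virtual disk service; substitute your own back-end path and volume names, and remove any virtual disk that references the volume before removing the back end.

primary# ldm rm-vdsdev vol1@primary-vds0
primary# ldm add-vdsdev options=slice /dev/md/dsk/d0 vol1@primary-vds0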

I/O MMU Bypass Mode Is No Longer Needed

Starting with the Oracle VM Server for SPARC 2.0 release, I/O memory management unit (MMU) bypass mode is no longer needed. As a result, the bypass=on property is no longer available for use by the ldm add-io command.

Service Processor and System Controller Are Interchangeable Terms

For discussions in Oracle VM Server for SPARC documentation, the terms service processor (SP) and system controller (SC) are interchangeable.

In Certain Conditions, a Guest Domain's Oracle Solaris Volume Manager Configuration or Metadevices Can Be Lost

If a service domain is running a version of Oracle Solaris 10 OS prior to Oracle Solaris 10 8/11 and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Oracle Solaris 10 8/11, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.

This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, this can cause the Oracle Solaris Volume Manager to be unable to find its configuration or to access its metadevices.

Workaround: After upgrading a service domain to Oracle Solaris 10 8/11, if a guest domain is unable to find its Oracle Solaris Volume Manager configuration or its metadevices, execute the following procedure.

Find a Guest Domain's Oracle Solaris Volume Manager Configuration or Metadevices

  1. Boot the guest domain.
  2. Disable the devid feature of Oracle Solaris Volume Manager by adding the following lines to the /kernel/drv/md.conf file:
    md_devid_destroy=1;
    md_keep_repl_state=1;
  3. Reboot the guest domain.

    After the domain has booted, the Oracle Solaris Volume Manager configuration and metadevices should be available.

  4. Check the Oracle Solaris Volume Manager configuration and ensure that it is correct.
  5. Re-enable the Oracle Solaris Volume Manager devid feature by removing from the /kernel/drv/md.conf file the two lines that you added in Step 2.
  6. Reboot the guest domain.

    During the reboot, you will see messages similar to this:

    NOTICE: mddb: unable to get devid for 'vdc', 0x10

    These messages are normal and do not report any problems.

Logical Domain Channels and Logical Domains

There is a limit to the number of logical domain channels (LDCs) that are available in any logical domain. For UltraSPARC T2, SPARC T3-1, SPARC T3-1B, SPARC T4-1, and SPARC T4-1B servers, the limit is 512. For UltraSPARC T2 Plus servers and the other SPARC T3 and SPARC T4 servers, the limit is 768. This limit becomes an issue only on the control domain, because the control domain has at least part, if not all, of the I/O subsystem allocated to it, and because of the potentially large number of LDCs that are created both for virtual I/O data communications and for the Logical Domains Manager control of the other logical domains.

If you try to add a service, or bind a domain, so that the number of LDC channels exceeds the limit on the control domain, the operation fails with an error message similar to the following:

13 additional LDCs are required on guest primary to meet this request,
but only 9 LDCs are available

If you have a large number of virtual network devices that are connected to the same virtual switch, you can reduce the number of LDC channels assigned by using the ldm add-vsw or ldm set-vsw command to set inter-vnet-link=off. When this property is set to off, LDC channels are not used for inter-vnet communications. Instead, an LDC channel is assigned only for communication between virtual network devices and virtual switch devices. See the ldm(1M) man page.
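
For example, the following command disables inter-vnet LDC channels on an existing virtual switch; the switch name primary-vsw0 is only an example.

primary# ldm set-vsw inter-vnet-link=off primary-vsw0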


Note - Although disabling the assignment of inter-vnet channels reduces the number of LDCs, it might negatively affect guest-to-guest network performance.


The following guidelines can help prevent creating a configuration that could overflow the LDC capabilities of the control domain:

  1. The control domain allocates approximately 15 LDCs for various communication purposes with the hypervisor, Fault Management Architecture (FMA), and the system controller (SC), independent of the number of other logical domains configured. The exact number of LDC channels that is allocated by the control domain depends on the platform and on the version of the software that is used.

  2. The control domain allocates 1 LDC to every logical domain, including itself, for control traffic.

  3. Each virtual I/O service on the control domain consumes 1 LDC for every connected client of that service.

For example, consider a control domain and 8 additional logical domains. At a minimum, each logical domain needs a virtual network, a virtual disk, and a virtual console, each of which consumes one LDC on the control domain.

Applying the above guidelines yields the following results (numbers in parentheses correspond to the preceding guideline number from which the value was derived):

15(1) + 9(2) + 8 x 3(3) = 48 LDCs in total

Now consider the case where there are 45 domains instead of 8, and each domain includes 5 virtual disks, 5 virtual networks, and a virtual console. Now the equation becomes:

15 + 46 + 45 x 11 = 556 LDCs in total

Depending upon the number of supported LDCs of your platform, the Logical Domains Manager will either accept or reject the configurations.

Memory Size Requirements

The Oracle VM Server for SPARC software does not impose a memory size limitation when you create a domain. The memory size requirement is a characteristic of the guest operating system. Some Oracle VM Server for SPARC functionality might not work if the amount of memory present is less than the recommended size. For recommended and minimum memory requirements for the Oracle Solaris 10 OS, see System Requirements and Recommendations in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade. For recommended and minimum memory requirements for the Oracle Solaris 11 OS, see Oracle Solaris 11 Release Notes.

The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you have a domain less than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.

The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the address and size of the memory involved in a given operation. See Memory Alignment in Oracle VM Server for SPARC 2.2 Administration Guide.
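
For example, the following memory DR requests use sizes that are multiples of 256 Mbytes, assuming a guest domain named ldom1 (the name is an example); a request such as 1025M would not satisfy the alignment requirement.

primary# ldm add-mem 1G ldom1
primary# ldm rm-mem 512M ldom1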

Booting a Large Number of Domains

You can boot the following number of domains depending on your platform:

If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread over all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Do not have more than 32 vnet instances per vsw service because having more than that tied to a single vsw could cause hard hangs in the service domain.
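
For example, the following sketch spreads four virtual switches across four network adapters in the control domain; the adapter and switch names (nxge0 through nxge3, primary-vsw0 through primary-vsw3) are examples only.

primary# ldm add-vsw net-dev=nxge0 primary-vsw0 primary
primary# ldm add-vsw net-dev=nxge1 primary-vsw1 primary
primary# ldm add-vsw net-dev=nxge2 primary-vsw2 primary
primary# ldm add-vsw net-dev=nxge3 primary-vsw3 primary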

To run the maximum configurations, a machine needs an adequate amount of memory to support the guest domains. The amount of memory is dependent on your platform and your OS. See the documentation for your platform, Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade, and Installing Oracle Solaris 11 Systems.

Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This increase is due to the peer-to-peer links between all the vnet devices that are connected to the vsw. The service domain benefits from having extra memory; four Gbytes is the recommended minimum when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains. You can reduce the number of links by disabling inter-vnet channels. See Inter-Vnet LDC Channels in Oracle VM Server for SPARC 2.2 Administration Guide.

Cleanly Shutting Down and Power Cycling a Logical Domains System

If you have made any configuration changes since last saving a configuration to the SC, before you attempt to power off or power cycle a Logical Domains system, make sure that you save the latest configuration that you want to keep.

Power Off a System With Multiple Active Domains

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Halt the primary domain.

    Because no other domains are bound, the firmware automatically powers off the system.
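
The following commands are a minimal sketch of the preceding procedure for a single non-I/O guest domain named ldom1 (the name is an example); repeat the stop and unbind steps for each remaining domain before halting the primary domain.

primary# ldm stop-domain ldom1
primary# ldm unbind-domain ldom1
primary# shutdown -y -g0 -i0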

Power Cycle the System

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Reboot the primary domain.

    Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the Logical Domains configuration last saved or explicitly set.

Memory Size Requested Might Be Different From Memory Allocated

Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. This can be seen in the following example output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:

Memory:
          Constraints: 1965 M
          raddr          paddr          size
          0x1000000      0x291000000     1968M

Logical Domains Variable Persistence

Variable updates persist across a reboot, but not across a powercycle, unless the variable updates are either initiated from OpenBoot firmware on the control domain or followed by saving the configuration to the SC.

In this context, it is important to note that a reboot of the control domain could initiate a powercycle of the system:

Logical Domains variables for a domain can be specified using any of the following methods:

The intent is that variable updates made by using any of these methods always persist across reboots of the domain, and that these updates are always reflected in any subsequent logical domain configurations that are saved to the SC.

In Oracle VM Server for SPARC 2.2 software, there are a few cases where variable updates do not persist as expected:

If you are concerned about Logical Domains variable changes, do one of the following:

If you modify the time or date on a logical domain, for example using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host. To ensure that time changes persist, save the configuration with the time change to the SP and boot from that configuration.
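
For example, the following command saves the current configuration, including any time or variable changes, to the SP; the configuration name new-config is an example.

primary# ldm add-config new-config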

The following Bug IDs have been filed to resolve these issues: 6520041, 6540368, 6540937, and 6590259.

Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains

Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.

Containers, Processor Sets, and Pools Are Not Compatible With CPU Power Management

Using CPU dynamic reconfiguration (DR) to power down virtual CPUs does not work with processor sets, resource pools, or the zone's dedicated CPU feature.

When using the CPU power management elastic policy, the Oracle Solaris OS guest sees only the CPUs that are allocated to the domains that are powered on. That means that output from the psrinfo(1M) command dynamically changes depending on the number of CPUs currently power-managed. This causes an issue with processor sets and pools, which require actual CPU IDs to be static to allow allocation to their sets. This can also impact the zone's dedicated CPU feature.

Workaround: Set the power management policy to the performance policy.
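
For example, on a system with an ILOM service processor, the performance policy can be set as shown in the following sketch; the property corresponds to the /SP/powermgmt policy property listed later in these notes.

-> set /SP/powermgmt policy=Performance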

Fault Management

There are several issues associated with FMA and power-managing CPUs. If a CPU faults when running with the elastic policy set, switch to the performance policy until the faulted CPU recovers. After all faulted CPUs recover, the elastic policy can be used again.

Delayed Reconfiguration

When a primary domain is in a delayed reconfiguration state, CPUs are power managed only after the primary domain reboots. This means that CPU power management will not bring additional CPUs online while the domain is experiencing high-load usage until the primary domain reboots, clearing the delayed reconfiguration state.

Cryptographic Units

The Oracle Solaris 10 10/09 OS introduces the capability to dynamically add and remove cryptographic units from a domain, which is called cryptographic unit dynamic reconfiguration (DR). The Logical Domains Manager automatically detects whether a domain allows cryptographic unit DR, and enables the functionality only for those domains. In addition, CPU DR is no longer disabled in domains that have cryptographic units bound and are running an appropriate version of the Oracle Solaris OS.

No core disable operations are performed on domains that have cryptographic units bound when the SP is set to the elastic policy. To enable core disable operations to be performed when the system has the elastic policy set, remove the cryptographic units that are bound to the domain.
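
For example, the following sketch removes all cryptographic units from a guest domain named ldom1 (the name is an example) so that core disable operations can proceed under the elastic policy.

primary# ldm set-crypto 0 ldom1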

ldmp2v convert Command: VxVM Warning Messages During Boot

Veritas Volume Manager (VxVM) 5.x running on the Oracle Solaris 10 OS is the only version that has been tested and is supported with the Oracle VM Server for SPARC P2V tool. Older versions of VxVM, such as 3.x and 4.x running on the Solaris 8 and Solaris 9 operating systems, might also work. In those cases, the first boot after running the ldmp2v convert command might show warning messages from the VxVM drivers. You can ignore these messages. You can remove the old VRTS* packages after the guest domain has booted.

Boot device: disk0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: normaal
Configuring devices.
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxio: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxio'
WARNING: vxio: unable to resolve dependency, module 'drv/vxdmp' not found
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
NOTICE: VxVM not started

Extended Mapin Space Is Only Available in the Oracle Solaris 10 8/11 OS and Oracle Solaris 11 OS

Extended mapin space is only available in the Oracle Solaris 10 8/11 OS and Oracle Solaris 11 OS. By default, this feature is disabled.

You can use the ldm add-domain or ldm set-domain command to enable the mode by setting extended-mapin-space=on on a domain that is running the Oracle Solaris 10 8/11 OS or Oracle Solaris 11 OS. See the ldm(1M) man page.
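
For example, the following command enables extended mapin space on an existing domain named ldom1 (the domain name is an example).

primary# ldm set-domain extended-mapin-space=on ldom1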

Graphical Configuration Assistant Tool Has Been Removed

Starting with the Oracle VM Server for SPARC 2.1 release, only the terminal-based Configuration Assistant tool, ldmconfig, is available. The graphical user interface tool is no longer available.

Oracle Hard Partitioning Requirements for Software Licenses

For information about Oracle's hard partitioning requirements for software licenses, see Partitioning: Server/Hardware Partitioning.

Upgrade Option Not Presented When Using ldmp2v prepare -R

The Oracle Solaris Installer does not present the Upgrade option when the partition tag of the slice that holds the root (/) file system is not set to root. This situation occurs if the tag is not explicitly set when labeling the guest's boot disk. You can use the format command to set the partition tag as follows:

AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-10GB cyl 282 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c4t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@2,0
       2. c4t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@3,0
Specify disk (enter its number)[0]: 0
selecting c0d0
[disk formatted, no defect list found]
format> p


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit

partition> 0
Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)          0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y

partition>

Block of Dynamically Added Memory Can Be Dynamically Removed Only as a Whole

A block of dynamically added memory can be dynamically removed only as a whole. That is, a subset of that memory block cannot be dynamically removed.

This situation could occur if a domain with a small memory size is dynamically grown to a much larger size, as the following example shows:

# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n---- 5000 2    1G     0.4% 23h

# ldm add-mem 16G ldom1

# ldm rm-mem 8G ldom1
Memory removal failed because all of the memory is in use.

# ldm rm-mem 16G ldom1

# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n---- 5000 2    1G     0.4% 23h

Workaround: Dynamically add memory in smaller amounts to reduce the probability that this condition will occur.
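
For example, instead of adding 16 Gbytes in a single operation as shown above, the following sketch adds the memory in 4-Gbyte blocks so that each block can later be removed on its own.

primary# ldm add-mem 4G ldom1
primary# ldm add-mem 4G ldom1
primary# ldm add-mem 4G ldom1
primary# ldm add-mem 4G ldom1
primary# ldm rm-mem 4G ldom1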

Recovery: Reboot the domain.

ldmp2v Command: ufsdump Archiving Method Is No Longer Used

Restoring ufsdump archives on a virtual disk that is backed by a file on a UFS file system might cause the system to hang. In such a case, the ldmp2v prepare command will exit. You might encounter this problem when you manually restore ufsdump archives in preparation for the ldmp2v prepare -R /altroot command when the virtual disk is a file on a UFS file system. For compatibility with previously created ufsdump archives, you can still use the ldmp2v prepare command to restore ufsdump archives on virtual disks that are not backed by a file on a UFS file system. However, the use of ufsdump archives is not recommended.

Only One CPU Configuration Operation Is Permitted to Be Performed During a Delayed Reconfiguration

Do not attempt to perform more than one CPU configuration operation on the primary domain while it is in a delayed reconfiguration. If you attempt more CPU configuration requests, they will be rejected.

Workaround: Perform one of the following actions:

Domain Migration Restrictions

The following sections describe restrictions for domain migration. The Logical Domains Manager software and the system firmware versions must be compatible to permit migrations. Also, you must meet certain CPU requirements to ensure a successful domain migration.

Version Restrictions for Migration

Both the source and target machines must run at least version 2.1 of the Logical Domains Manager.

The following examples show the messages that you see when you run older versions of the Logical Domains Manager, the system firmware, or both:

CPU Restrictions for Migration

If the domain to be migrated is running an Oracle Solaris OS version older than the Oracle Solaris 10 8/11 OS, you might see the following message during the migration:

Domain domain-name is not running an operating system that is
compatible with the latest migration functionality.

The following CPU requirements and restrictions apply only when you run an OS that precedes the Oracle Solaris 10 8/11 OS:

These restrictions also apply when you attempt to migrate a domain that is running in OpenBoot or in the kernel debugger. See Migrating a Domain From the OpenBoot PROM or a Domain That Is Running in the Kernel Debugger in Oracle VM Server for SPARC 2.2 Administration Guide.

Oracle VM Server for SPARC MIB Issues

This section summarizes the issues that you might encounter when using Oracle VM Server for SPARC Management Information Base (MIB) software.


Note - The Oracle VM Server for SPARC MIB software is only available on Oracle Solaris 10 systems.


The snmptable Command Does Not Work With the Version 2 or Version 3 Option

Bug ID 6521530: You receive empty SNMP tables if you query the Oracle VM Server for SPARC MIB 2.1 software using the snmptable command with the -v2c or -v3 option. The snmptable command with the -v1 option works as expected.

Workaround: Use the -CB option to use only GETNEXT, not GETBULK, requests to retrieve data. See How to Retrieve Oracle VM Server for SPARC MIB Objects in Oracle VM Server for SPARC 2.2 Administration Guide.
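
For example, a query that avoids GETBULK requests might look like the following sketch. The community string, host name, and table name are examples only; substitute the actual Oracle VM Server for SPARC MIB table that you want to retrieve.

# snmptable -v2c -c public -CB localhost SUN-LDOM-MIB::ldomTable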

Bugs Affecting the Oracle VM Server for SPARC 2.2 Software

This section summarizes the bugs that you might encounter when using this version of the software. The bug descriptions are in numerical order by bug ID. If a workaround and a recovery procedure are available, they are specified.

PCIe Fabrics Are Not Accessible by Guest Domains When 11 or More Domains Have PCIe Devices

Bug ID 7166620: If the control domain is rebooted when 11 or more guest domains have PCIe endpoint devices assigned, those PCIe devices become inaccessible in the guest domains.

Recovery: Stop and restart the affected guest domains.

Workaround: Configure a domain dependency relationship between the control domain and the guest domains that have PCIe endpoint devices assigned to them. The following dependency relationship ensures that domains that have PCIe endpoint devices are automatically stopped when the control domain reboots for any reason:

primary# ldm set-domain failure-policy=stop primary
primary# ldm set-domain master=primary ldom

ldmd Terminates Abnormally on Operations After Canceling a Delayed Reconfiguration

Bug IDs 7165095 and 7165101: On a system that has Direct I/O or SR-IOV domains, if you cancel a delayed reconfiguration and then perform any subsequent reconfiguration operation, the ldmd daemon terminates abnormally and produces a core file. The ldmd SMF service might also enter maintenance mode.

Workaround: Avoid the use of the ldm cancel-reconf command. If you must cancel or have already canceled the delayed reconfiguration, restart the ldmd SMF service before you perform any other ldm operations.

# svcadm restart ldmd

Recovery: If the ldmd SMF service enters maintenance mode, you must power cycle the system before you can restore the ldmd service.

The following examples show how to power cycle the system from the control domain and from the service processor (SP):

Unbound Domain That Has Disabled CPUs Reports Incorrect Number of CPU Resources

Bug ID 7160502: Disabled CPUs can make the Logical Domains Manager report an incorrect number of CPU resources. The following example shows that unbinding a domain erroneously changes the CPU resource count for the domain:

# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  UART    9     4G       0.2%  1h 5m
ldg1             bound      ------  5000    116   2G
# ldm unbind ldg1
# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  UART    9     4G       1.1%  1h 5m
ldg1             inactive   ------          120   2G

At this point, the number of CPU resources is incorrect. The count for the ldg1 domain should be 116 and not 120 as shown after the unbind operation.


Note - This is just one example; other situations might also produce an incorrect CPU count because of disabled CPUs. In such cases, use the approach that is presented in the workaround.


Workaround: If possible, avoid using cores that have disabled CPUs. Otherwise, when unbinding a domain that has disabled cores, take care to reset the number of CPUs to the correct amount so that the domain can later be rebound.

To rebind the domain, you must reset the number of CPU resources. For example:

# ldm set-vcpu 116 ldg1
# ldm bind ldg1

Re-creating a Domain That Has PCIe Virtual Functions From an XML File Fails

Bug ID 7159359: You might encounter a problem when attempting to re-create a configuration from an XML file that incorrectly represents virtual function constraints.

This problem occurs when you use the ldm list-constraints -x command to save the configuration of a domain that has PCIe virtual functions.

If you later re-create the domain by using the ldm add-domain -i command, the original virtual functions do not exist, and a domain bind attempt fails with the following error message:

No free matching PCIe device...

Even if you create the missing virtual functions, another domain bind attempt fails with the same error message because the virtual functions are miscategorized as PCIe devices by the ldm add-domain command.

Workaround: Use the ldm list-io command to save the information about the virtual functions, and then use the ldm rm-dom command to destroy each affected domain. Then, use the ldm create-vf command to create all the required virtual functions. Now, you can use the ldm command to rebuild the domains. When you use the ldm add-io command to add each virtual function, it is correctly categorized as a virtual function device, so the domain can be bound.
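
The following commands are a minimal sketch of this workaround for a single domain named ldg1 with one virtual function; the device names follow the examples used elsewhere in this document, and the sketch omits re-creating the rest of the domain configuration.

primary# ldm list-io -l -p > vf-info.txt
primary# ldm rm-dom ldg1
primary# ldm create-vf /SYS/MB/NET0/IOVNET.PF1
primary# ldm add-io /SYS/MB/NET0/IOVNET.PF1.VF0 ldg1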

For information about rebuilding a domain configuration that uses virtual functions, see ldm init-system Command Cannot Correctly Re-create a Domain That Has Virtual Function Devices.

Incorrect Error Message Issued When Changing the Control Domain From Using Whole Cores to Using Partial Cores

Bug ID 7159114: When you change the control domain from using physically constrained cores to using unconstrained CPU resources, you might see the following extraneous message:

Whole-core partitioning has been removed from domain primary, because
dynamic reconfiguration has failed and the domain is now configured
with a partial CPU core.

Workaround: You can ignore this message.

ldm init-system Command Cannot Correctly Re-create a Domain That Has Virtual Function Devices

Bug ID 7158496: When you use the ldm list-constraints -x command to save constraints to an XML file, the virtual function details are not saved. As a result, when the configuration is reset to factory-default, and the ldm init-system command is run to re-create the saved configuration, the virtual functions are not created, and any domain bind attempts fail.

Workaround: If any existing configuration has virtual functions, save all information about those virtual functions. You can later use this information to manually re-create the virtual functions before running the ldm init-system command.

The following procedure shows how to save all the information about the virtual functions to use later:

  1. Save the domain configuration in a file, vfs.txt, for use in re-creating the virtual functions.

    primary# ldm list-io -l -p | grep "type=VF" >vfs.txt

    A typical virtual function entry in vfs.txt appears as follows:

    |dev=pci@400/pci@1/pci@0/pci@4/network@0,83|alias=/SYS/MB/NET0/IOVNET.PF1.VF1|
       status=RDY|domain=ldg1|type=VF|class=NETWORK
    |proptype=class|mac-addr=00:14:4f:f9:74:d0
    |proptype=class|vlan-ids=3,5,7
    |proptype=class|mtu=1500
    |proptype=device|unicast-slots=6

    The first line is intentionally split into two lines for readability. It will be a single line in the vfs.txt file.

  2. Reset the domain to the factory-default configuration.

  3. Reboot the control domain.

  4. Create the virtual functions based on the information in the vfs.txt file.

    For each such entry, use the ldm create-vf command to re-create the virtual function with its original name and properties. Use the following command for the example virtual function:

    primary# ldm create-vf mac-addr=00:14:4f:f9:74:d0 vid=3,5,7 mtu=1500 \
    unicast-slots=6 /SYS/MB/NET0/IOVNET.PF1

    For details about the class and device properties, see the ldm(1M) man page.


    Note - The virtual function name is generated from the name of its parent physical function. As a result, execute the ldm create-vf commands in increasing numeric order based on the virtual function part of the name. For example, physical function /SYS/MB/NET0/IOVNET.PF1 has the following child virtual functions:

    /SYS/MB/NET0/IOVNET.PF1.VF0 mac-addr=00:14:4f:f9:74:d0
    /SYS/MB/NET0/IOVNET.PF1.VF1 mac-addr=00:14:4f:f9:74:d1

    The following commands create the virtual functions:

    primary# ldm create-vf mac-addr=00:14:4f:f9:74:d0 /SYS/MB/NET0/IOVNET.PF1
    Created new VF: /SYS/MB/NET0/IOVNET.PF1.VF0
    primary# ldm create-vf mac-addr=00:14:4f:f9:74:d1 /SYS/MB/NET0/IOVNET.PF1
    Created new VF: /SYS/MB/NET0/IOVNET.PF1.VF1

    The first ldm create-vf command causes the system to enter delayed reconfiguration mode.


  5. Verify that the new configuration includes the virtual functions that you manually created.

    primary# ldm list-io -l -p | grep "type=VF" >vfs.after.txt

    Compare the contents of the vfs.after.txt file with the vfs.txt file.

  6. Reboot the control domain.

  7. Reconfigure a domain from an XML file.

    primary# ldm init-system -i file.xml

Logical Domains Manager Might Crash and Restart When You Attempt to Modify Many Domains Simultaneously

Bug ID 7158454: Logical Domains Manager might crash and restart when you attempt an operation that affects the configuration of many domains. You might see this issue when you attempt to change anything related to the virtual networking configuration, if many virtual network devices in the same virtual switch exist across many domains. Typically, this issue is seen with around 90 or more domains that have virtual network devices connected to the same virtual switch, and the inter-vnet-link property is enabled (the default behavior). Confirm the symptom by finding the following message in the ldmd log file and a core file in the /var/opt/SUNWldm directory:

Frag alloc for 'domain-name'/MD memory of size 0x80000 failed

Workaround: Avoid creating many virtual network devices connected to the same virtual switch. If you intend to do so, set the inter-vnet-link property to off on the virtual switch. Be aware that this option might negatively affect network performance between guest domains.

ldm init-system Reports a disk server not found Error

Bug ID 7155386: When an XML file contains both control domain and guest domain configurations, the ldm init-system command first configures the guest domains and then configures the control domain. In a factory default configuration that does not have a virtual disk server configured, attempting to add a virtual disk server device to the guest domains might fail with the following error:

Disk Server xxx not found

This failure occurs if the specified virtual disk server should be provided by the control domain.

Setting Unicast Slots to a Number That Exceeds the Maximum Causes the Value to Be Reset to 0

Bug ID 7155349: Setting unicast slots to a number that exceeds the maximum limit fails with an appropriate error message. However, the number of unicast slots is then incorrectly and silently reset to 0.

Workaround: Specify a value for the number of unicast slots that is within the range of supported values.

Attempt to Exceed the Maximum Number of Unicast Slots of ixgbe Physical Functions and Virtual Functions Does Not Fail

Bug ID 7155282: When you attempt to set more unicast slots for ixgbe physical functions and virtual functions than the maximum limit permits, the command succeeds. Attempting to exceed this maximum limit should fail, but it does not.

Use the following command to identify the maximum number of unicast slots that is supported by the device:

# ldm list-io -d pf-name

Then, ensure that the total number of unicast slots given to each virtual function in that physical function does not exceed that maximum value.
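
For example, the following sketch checks the maximum number of unicast slots for a physical function and then creates a virtual function with a unicast-slots value within that limit; the physical function name follows the examples used elsewhere in this document.

primary# ldm list-io -d /SYS/MB/NET0/IOVNET.PF1
primary# ldm create-vf unicast-slots=6 /SYS/MB/NET0/IOVNET.PF1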

Control Domain Requires the Lowest Core in the System

Bug ID 7153060: The control domain requires the lowest core in the system. So, if core ID 0 is the lowest core, it cannot be shared with any other domain if you want to apply the whole-core constraint to the control domain.

For example, if the lowest core in the system is core ID 0, the control domain should look similar to the following output:

# ldm ls -o cpu primary
NAME
primary

VCPU
    VID    PID    CID    UTIL STRAND
    0      0      0      0.4%   100%
    1      1      0      0.2%   100%
    2      2      0      0.1%   100%
    3      3      0      0.2%   100%
    4      4      0      0.3%   100%
    5      5      0      0.2%   100%
    6      6      0      0.1%   100%
    7      7      0      0.1%   100%

ldmd Daemon Does Not Come Online

Bug ID 7151847: The Service Management Facility (SMF) service for the ldmd daemon does not come online if the Oracle VM Server for SPARC 2.2 software is installed on a control domain that runs Oracle Solaris 10 10/09 or an earlier Oracle Solaris OS version. This situation occurs because an explicit SMF dependency on the svc:/ldoms/agents SMF service has been added.

Workaround: Install patch ID 142909-17, which adds support for the svc:/ldoms/agents SMF service, ldmad, upon which ldmd depends.

After Canceling a Migration, ldm Commands That Are Run on the Target System Are Unresponsive

Bug ID 7150793: If you cancel a live migration, the memory contents of the domain instance that is created on the target must be “scrubbed” by the hypervisor. This scrubbing process is performed for security reasons and must be complete before the memory can be returned to the pool of free memory. While this scrubbing is in progress, ldm commands become unresponsive. As a result, the Logical Domains Manager appears to be hung.

Recovery: You must wait for this “scrubbing” request to complete before you attempt to run other ldm commands. This process might take a long time. For example, a guest domain that has 500 Gbytes of memory might complete this process in up to 7 minutes on a SPARC T4 server or up to 25 minutes on a SPARC T3 server.

Some Emulex Cards Do Not Work When Assigned to I/O Domain

Bug ID 7150209: On a system that runs the Oracle Solaris OS on the control domain and an I/O domain, some Emulex cards that are assigned to the I/O domain do not function properly because the cards do not receive interrupts. However, when assigned to the control domain, the same cards work properly.

This problem occurs with the following Emulex cards:

Workaround: None.

Guest Domain Panics When Running the cputrack Command During a Migration to a SPARC T4 System

Bug ID 7149951: If the cputrack command is run on a guest domain while that domain is migrated to a SPARC T4 system, the guest domain might panic on the target machine after it has been migrated.

Workaround: Do not run the cputrack command during the migration of a guest domain to a SPARC T4 system.

Oracle Solaris 11: DRM Stealing Reports Oracle Solaris DR Failure and Retries

Bug ID 7149365: A domain that has a higher-priority policy can steal virtual CPU resources from a domain with a lower-priority policy. While this “stealing” action is in progress, you might see the following warning messages in the ldmd log every 10 seconds:

warning: Unable to unconfigure CPUs out of guest domain-name

Workaround: You can ignore these misleading messages.

Limit the Maximum Number of Virtual Functions That Can Be Assigned to a Domain

Bug ID 7149323: An I/O domain has a limit on the number of interrupt resources that are available per root complex.

On SPARC T3 and SPARC T4 systems, the limit is approximately 63 MSI/X vectors. Each igb virtual function uses three interrupts, and each ixgbe virtual function uses two interrupts.

If you assign a large number of virtual functions to a domain, the domain runs out of system resources to support these devices. You might see messages similar to the following:

WARNING: ixgbevf32: interrupt pool too full.
WARNING: ddi_intr_alloc: cannot fit into interrupt pool

A Domain That Uses Cross-CPU Migration Reports Random Uptimes After the Migration Completes

Bug ID 7148394: After a domain is migrated between two machines that have different CPU frequencies, the uptime that is reported by the ldm list command might be incorrect. These incorrect results occur because uptime is calculated relative to the STICK frequency of the machine on which the domain runs. If the STICK frequency differs between the source and target machines, the uptime appears to be scaled incorrectly.

The uptime reported and shown by the guest domain itself is correct. Also, any accounting that is performed by the Oracle Solaris OS in the guest domain is correct.

ldm init-system -r -i XML-file Does Not Reboot the primary Domain

Bug ID 7146725: When you use the ldm init-system command to install a domain from an XML configuration, the primary domain fails to reboot even though the -r option is specified.

Workaround: Manually reboot the primary domain.

Oracle Solaris 10: ixgbe Driver Might Cause a Panic When Booted With an Intel Dual Port Ethernet Controller X540 Card

Bug ID 7146423: When booted with an Intel dual port Ethernet Controller X540 card, the Oracle Solaris 10 ixgbe driver might cause a system panic. This panic occurs because the driver has a high-priority timer that blocks other drivers from attaching.

Workaround: Reboot the system.

Version 8.2.0 of the System Firmware Contains a New Version of the scvar Database

Bug ID 7144314: Version 8.2.0 of the system firmware contains a new version of the scvar database, which reverts to default values after the installation completes.

Workaround: Make note of the running Oracle VM Server for SPARC configuration or any changed system diagnostic properties prior to installing the system firmware. Use the ILOM show command. For example:

-> show /HOST/domain/configs

After you install the firmware and prior to powering on the system, use the ILOM set command. For example:

-> set /HOST/bootmode config=config-name

At this time, the Oracle VM Server for SPARC configurations are preserved. However, you must select whether to boot a particular configuration or the factory-default configuration.

The following property values revert to default values after you install the firmware:

/HOST
   Properties:
   autorunonerror
   ioreconfigure

/HOST/bootmode
   Properties:
   config

/HOST/diag
   Properties:
   error_reset_level
   error_reset_verbosity
   hw_change_level
   hw_change_verbosity
   level
   mode
   power_on_level
   power_on_verbosity
   trigger
   verbosity

/HOST/domain/control
   Properties:
   auto-boot
   boot_guests

/HOST/tpm
   Properties:
   enable
   activate
   forceclear

/SYS
   Properties:
   keyswitch_state

/SP/powermgmt
   Properties:
   policy

panic: BAD TRAP: occurred in module "pcie" due to an illegal access to a user address

Bug ID 7142913: After you bind and start 15 guest domains, the primary domain panics and the following error message is issued:

panic: BAD TRAP: occurred in module "pcie" due to an illegal access to a user address

The domains are configured as follows:

Control Domain Reconfigured From an XML File Fails to Remove I/O Devices Properly

Bug ID 7134203: Existing I/O devices are not properly removed from the control domain when the control domain is reconfigured from an XML file by using the ldm init-system command. This situation might cause a binding failure on the guest domain if the control domain still has the PCIe leaf node devices bound to the control domain.

An Invalid vdsdev Backend Is Seen as a Valid Path

Bug ID 7131596: If you specify an incorrect vdsdev backend to the ldm add-vdsdev command, the resulting error message identifies the backend as a valid path:

# ldm add-vdsdev /wrong/path/file disk1@primary-vds0
Path /wrong/path/file is valid but not accessible on service domain primary

Workaround: Verify and, if necessary, correct the specified path.

After Disabling the Whole-Core Constraint, the Constraint Reappears After a primary Domain Reboot

Bug ID 7130693: After the whole-core constraint is disabled, the constraint reappears after a primary domain reboot.

This problem only occurs in the following circumstances:

Workaround: Disable the whole-core constraint by specifying a different number of virtual CPUs.

Destroying All Virtual Functions and Returning the Slots to the Root Domain Does Not Restore the Root Complex Resources

Bug ID 7129252: The resources on the root complex are not restored after you destroy all the virtual functions and return the slots to the root domain.

Workaround: Perform the following steps:

  1. Remove the PCIe bus from the root domain.

    primary# ldm rm-io pci_0 primary
    Initiating a delayed reconfiguration operation on the primary domain.
    All configuration changes for other domains are disabled until the primary
    domain reboots, at which time the new configuration for the primary domain
    will also take effect.
  2. Reassign the PCIe bus to the root domain.

    primary# ldm add-io pci_0 primary
    ------------------------------------------------------------------------------
    Notice: The primary domain is in the process of a delayed reconfiguration.
    Any changes made to the primary domain will only take effect after it reboots.
    ------------------------------------------------------------------------------
  3. Reboot the root domain.

    primary# reboot

ldm start Erroneously Returns 0 Instead of 1 on Failure to Start a Guest Domain

Bug ID 7125579: A guest domain might fail to start because of an unexpected hypervisor error. Even if the domain fails to start, the command exits with 0 instead of 1 and issues the following error message:

LDom domain start failed, retry the operation

Workaround: Do not rely solely on the exit code to determine whether a domain started successfully. Instead, perform one of the following checks:

ldm remove-io of PCIe Cards That Have PCIe-to-PCI Bridges Should Be Disallowed

Bug ID 7121963: Only use the PCIe cards that support the Direct I/O (DIO) feature, which are listed in this support document.

Workaround: Use the ldm add-io command to re-add the card to the primary domain.
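
For example, assuming that the card was removed from the slot named /SYS/MB/PCIE5, the following command re-adds it to the primary domain.

primary# ldm add-io /SYS/MB/PCIE5 primary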

ldm stop Command Might Fail If Issued Immediately After an ldm start Command

Bug ID 7118936: If you issue an ldm stop command immediately after an ldm start command, the ldm stop command might fail with the following error:

LDom domain stop notification failed

Workaround: Reissue the ldm stop command.

Using ldm set-io to Change pvid Value Twice in Succession Might Cause a Configuration Failure

Bug ID 7109458: Using the ldm set-io command to change the pvid property value for a virtual function more than one time might cause the pvid value to not be correctly set on the virtual function hardware.

Workaround: Wait a few seconds before you run the ldm set-io command again.

System Panics When Rebooting a primary Domain That Has a Very Large Number of Virtual Functions Assigned

Bug ID 7104911: A system panics when you reboot a primary domain that has a very large number of virtual functions assigned to it.

Workaround: Perform one of the following workarounds:

Vague SR-IOV Error Message: Create vf failed

Bug ID 7101229: When you attempt to create one more virtual function than the maximum number of configurable virtual functions for a physical function device, the Create vf failed message is issued. This error message is unclear as to the reason for the failure.

Oracle Solaris 11 OS: Using Direct I/O to Remove Multiple PCIe Slots From the primary Domain on a Multi-Socket SPARC T-Series System Might Panic at Boot Time

Bug ID 7100859: Your system might panic at boot time if you use direct I/O (ldm remove-io) to remove multiple PCIe slots from a multi-socket SPARC T-Series system. This occurs when the paths to the PCIe slots are similar to each other (except for the root complex path). The panic might occur after you remove the PCIe slots and then reboot the primary domain. For more information about the direct I/O (DIO) feature, see Assigning PCIe Endpoint Devices in Oracle VM Server for SPARC 2.2 Administration Guide.

For example, if you remove the /SYS/MB/PCIE5 (pci@500/pci@2/pci@0/pci@0) and /SYS/MB/PCIE4 (pci@400/pci@2/pci@0/pci@0) slots, which have similar path names, the next boot of the Oracle Solaris 11 OS might panic.

The following ldm list-io command is run after the /SYS/MB/PCIE4 and /SYS/MB/PCIE5 PCIe slots are removed.

# ldm list-io
IO              PSEUDONYM       DOMAIN
--              ---------       ------
pci@400         pci_0           primary
niu@480         niu_0           primary
pci@500         pci_1           primary
niu@580         niu_1           primary

PCIE                       PSEUDONYM       STATUS  DOMAIN
----                       ---------       ------  ------
pci@400/pci@2/pci@0/pci@8  /SYS/MB/PCIE0   OCC     primary
pci@400/pci@2/pci@0/pci@4  /SYS/MB/PCIE2   OCC     primary
pci@400/pci@2/pci@0/pci@0  /SYS/MB/PCIE4   OCC
pci@400/pci@1/pci@0/pci@8  /SYS/MB/PCIE6   OCC     primary
pci@400/pci@1/pci@0/pci@c  /SYS/MB/PCIE8   OCC     primary
pci@400/pci@2/pci@0/pci@e  /SYS/MB/SASHBA  OCC     primary
pci@400/pci@1/pci@0/pci@4  /SYS/MB/NET0    OCC     primary
pci@500/pci@2/pci@0/pci@a  /SYS/MB/PCIE1   OCC     primary
pci@500/pci@2/pci@0/pci@6  /SYS/MB/PCIE3   OCC     primary
pci@500/pci@2/pci@0/pci@0  /SYS/MB/PCIE5   OCC
pci@500/pci@1/pci@0/pci@6  /SYS/MB/PCIE7   OCC     primary
pci@500/pci@1/pci@0/pci@0  /SYS/MB/PCIE9   OCC     primary
pci@500/pci@1/pci@0/pci@5  /SYS/MB/NET2    OCC     primary
#

Workaround: Do not remove all slots that have similar path names. Instead, remove only one such PCIe slot.

You also might be able to insert the PCIe cards into slots that do not have similar paths and then use them with the DIO feature.

Partial Core primary Fails to Permit Whole-Core DR Transitions

Bug ID 7100841: When the primary domain shares the lowest physical core (usually 0) with another domain, attempts to set the whole-core constraint for the primary domain fail.

Workaround: Perform the following steps:

  1. Determine the lowest bound core that is shared by the domains.

    # ldm list -o cpu
  2. Unbind all the CPU threads of the lowest core from all domains other than the primary domain.

    As a result, CPU threads of the lowest core are not shared and are free for binding to the primary domain.

  3. Set the whole-core constraint by doing one of the following:

    • Bind the CPU threads to the primary domain, and set the whole-core constraint by using the ldm set-vcpu -c command.

    • Use the ldm set-core command to bind the CPU threads and set the whole-core constraint in a single step.
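
For example, assuming that the primary domain should own two whole cores on a system that has 8 threads per core (the counts are illustrative), either of the following sequences satisfies step 3:

primary# ldm set-core 2 primary

Or, in two steps:

primary# ldm set-vcpu 16 primary
primary# ldm set-vcpu -c 2 primary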

After a primary Domain Reboot, igb and ixgbe Virtual Functions That Are Assigned to the primary Domain Become Faulty

Bug ID 7098941: The igb and ixgbe virtual function devices become faulty after the primary domain is rebooted. These virtual functions are assigned to the primary domain. The system configuration only has a primary domain. No guest domains or I/O domains are configured.

The fmadm faulty command shows that each virtual function device is faulty. The fmadm repair command enables you to recover from the faults, but the faulty state returns each time you reboot the primary domain.

Workaround: Use the fmadm repair command to recover from the faults each time you reboot the primary domain.
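
For example, a typical recovery after a reboot of the primary domain, where uuid is the UUID that fmadm faulty reports for the faulted virtual function:

primary# fmadm faulty
primary# fmadm repair uuid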

ldmconfig Is Only Supported on Oracle Solaris 10 Systems

Bug ID 7093344: You can only use the ldmconfig command on Oracle Solaris 10 systems.

ldm list-io Command Shows the UNK or INV State After Boot

Bug ID 7084728: The ldm list-io command might show the UNK or INV state for PCIe slots and SR-IOV virtual functions if the command runs immediately after the primary domain is booted. This problem is caused by the delay in the Logical Domains agent reply from the Oracle Solaris OS.

This problem has only been reported on a few systems.

Workaround: The status of the PCIe slots and the virtual functions is automatically updated after the information is received from the Logical Domains agent.

Cannot Detach Network Interface Card Driver

Bug ID 7083321: The nwam daemon holds a reference count on the network interface card (NIC) device node, so the NIC driver cannot be detached.

Workaround: Do not use the Automatic network configuration profile. Instead, use the DefaultFixed network configuration profile.
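
For example, on an Oracle Solaris 11 domain you can check the active profile and switch to the fixed profile with the netadm command (a sketch; confirm the profile names on your system):

# netadm list
# netadm enable -p ncp DefaultFixed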

Oracle VM Server for SPARC MIB Is Only Supported on Oracle Solaris 10 Systems

Bug ID 7082776: You can only use the Oracle VM Server for SPARC MIB on Oracle Solaris 10 systems.

Migrating a Very Large Memory Domain on SPARC T4-4s Results in a Panicked Domain on the Target System

Bug ID 7071426: Avoid migrating domains that have over 500 Gbytes of memory. Use the ldm list -o mem command to see the memory configuration of your domain. Some memory configurations that have multiple memory blocks that total over 500 Gbytes might panic with a stack that resembles the following:

panic[cpu21]/thread=2a100a5dca0:
BAD TRAP: type=30 rp=2a100a5c930 addr=6f696e740a232000 mmu_fsr=10009

sched:data access exception: MMU sfsr=10009: Data or instruction address out of range context 0x1

pid=0, pc=0x1076e2c, sp=0x2a100a5c1d1, tstate=0x4480001607, context=0x0
g1-g7: 80000001, 0, 80a5dca0, 0, 0, 0, 2a100a5dca0

000002a100a5c650 unix:die+9c (30, 2a100a5c930, 6f696e740a232000, 10009, 2a100a5c710, 10000)
000002a100a5c730 unix:trap+75c (2a100a5c930, 0, 0, 10009, 30027b44000, 2a100a5dca0)
000002a100a5c880 unix:ktl0+64 (7022d6dba40, 0, 1, 2, 2, 18a8800)
000002a100a5c9d0 unix:page_trylock+38 (6f696e740a232020, 1, 6f69639927eda164, 7022d6dba40, 13, 1913800)
000002a100a5ca80 unix:page_trylock_cons+c (6f696e740a232020, 1, 1, 5, 7000e697c00, 6f696e740a232020)
000002a100a5cb30 unix:page_get_mnode_freelist+19c (701ee696d00, 12, 1, 0, 19, 3)
000002a100a5cc80 unix:page_get_cachelist+318 (12, 1849fe0, ffffffffffffffff, 3,
0, 1)
000002a100a5cd70 unix:page_create_va+284 (192aec0, 300ddbc6000, 0, 0, 2a100a5cf00, 300ddbc6000)
000002a100a5ce50 unix:segkmem_page_create+84 (18a8400, 2000, 1, 198e0d0, 1000, 11)
000002a100a5cf60 unix:segkmem_xalloc+b0 (30000002d98, 0, 2000, 300ddbc6000, 0, 107e290)
000002a100a5d020 unix:segkmem_alloc_vn+c0 (30000002d98, 2000, 107e000, 198e0d0,
30000000000, 18a8800)
000002a100a5d0e0 genunix:vmem_xalloc+5c8 (30000004000, 2000, 0, 0, 80000, 0)
000002a100a5d260 genunix:vmem_alloc+1d4 (30000004000, 2000, 1, 2000, 30000004020, 1)
000002a100a5d320 genunix:kmem_slab_create+44 (30000056008, 1, 300ddbc4000, 18a6840, 30000056200, 30000004000)
000002a100a5d3f0 genunix:kmem_slab_alloc+30 (30000056008, 1, ffffffffffffffff, 0, 300000560e0, 30000056148)
000002a100a5d4a0 genunix:kmem_cache_alloc+2dc (30000056008, 1, 0, b9, fffffffffffffffe, 2006)
000002a100a5d550 genunix:kmem_cpucache_magazine_alloc+64 (3000245a740, 3000245a008, 7, 6028f283750, 3000245a1d8,
193a880)
000002a100a5d600 genunix:kmem_cache_free+180 (3000245a008, 6028f2901c0, 7, 7, 7, 3000245a740)
000002a100a5d6b0 ldc:vio_destroy_mblks+c0 (6028efe8988, 800, 0, 200, 19de0c0, 0)
000002a100a5d760 ldc:vio_destroy_multipools+30 (6028f1542b0, 2a100a5d8c8, 40, 0, 10, 30000282240)
000002a100a5d810 vnet:vgen_unmap_rx_dring+18 (6028f154040, 0, 6028f1a3cc0, a00,
200, 6028f1abc00)
000002a100a5d8d0 vnet:vgen_process_reset+254 (1, 6028f154048, 6028f154068, 6028f154060, 6028f154050, 6028f154058)
000002a100a5d9b0 genunix:taskq_thread+3b8 (6028ed73908, 6028ed738a0, 18a6840, 6028ed738d2, e4f746ec17d8,
6028ed738d4)

Workaround: Avoid performing migrations of domains that have over 500 Gbytes of memory.

Removing a Large Number of CPUs From a Guest Domain

Bug ID 7062298: You might see the following error message when you attempt to remove a large number of CPUs from a guest domain:

Request to remove cpu(s) sent, but no valid response received
VCPU(s) will remain allocated to the domain, but might
not be available to the guest OS
Resource modification failed

Workaround: Stop the guest domain before you remove more than 100 CPUs from the domain.

A Large-Memory Domain in Elastic Mode Might Take a Long Time to Stop

Bug ID 7058261: When you use the ldm stop command to stop a large-memory domain while the system has the power management elastic policy set, the stop operation might take a long time. If the domain is sufficiently idle, most of the CPU threads that are assigned to the domain are disabled. Because those CPUs are disabled, the processing that is required to stop the domain falls to the remaining active threads.

For example, a guest domain that has 252 Gbytes of memory and only 2 CPUs enabled takes approximately 7 minutes to stop.

Workaround: Disable power management (PM) by switching from the elastic policy to the performance policy before you stop the domain.
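
For example, the policy can be switched from the service processor before you stop the domain (a sketch; the ILOM property path and value shown here are typical but should be confirmed for your platform):

-> set /SP/powermgmt policy=Performance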

Cannot Use Oracle Solaris Hot Plug Operations to Hot Remove a PCIe Endpoint Device

Bug ID 7054326: You cannot use Oracle Solaris hotplug operations to hot remove a PCIe endpoint device after that device is removed from the primary domain by using the ldm rm-io command. For information about replacing or removing a PCIe endpoint device, see Making PCIe Hardware Changes in Oracle VM Server for SPARC 2.2 Administration Guide.

Virtual Disk Validation Fails for a Physical Disk With No Slice 2

Bug ID 7042353: If a physical disk is configured with a slice 2 that has a size of 0, you might encounter the following issues:

Another workaround permits you to permanently disable the disk validation that is performed by the ldm add-vdsdev and ldm bind commands. As a result, you do not need to specify the -q option. Permanently disable the disk validation by updating the device_validation property of the ldmd service:

# svccfg -s ldmd setprop ldmd/device_validation=value
# svcadm refresh ldmd
# svcadm restart ldmd

Specify a value of 0 to disable validation for network and disk devices. Specify a value of 1 to disable validation for disk devices but still enable validation for network devices.

The possible values for the device_validation property are:

0    Disable validation for all devices
1    Enable validation for network devices
2    Enable validation for disk devices
3    Enable validation for network and disk devices
-1   Enable validation for all types of devices (the default)

nxge Panics When Migrating a Guest Domain That Has Hybrid I/O and Virtual I/O Virtual Network Devices

Bug ID 7038650: When a heavily loaded guest domain has a hybrid I/O configuration and you attempt to migrate it, you might see an nxge panic.

Workaround: Add the following line to the /etc/system file on the primary domain and on any service domain that is part of the hybrid I/O configuration for the domain:

set vsw:vsw_hio_max_cleanup_retries = 0x200

All ldm Commands Hang When Migrations Have Missing Shared NFS Resources

Bug ID 7036137: An initiated or ongoing migration, or any ldm command, hangs forever. This situation occurs when the domain to be migrated uses a shared file system from another system, and the file system is no longer shared.

Workaround: Make the shared file system accessible again.

ldmd Fails to Remove Cores From a Domain That Has Partial Cores

Bug ID 7035438: ldmd permits you to enable the whole-core constraint on a domain that has partial cores, yet fails to remove or set cores from the same domain.

Workaround: Do the following from the factory-default configuration on the control domain:

  1. Initiate a delayed reconfiguration on the control domain.

    # ldm start-reconf primary
  2. Perform any memory reconfiguration operations first.

  3. Perform the CPU reconfiguration operations.

    # ldm set-vcpu 16 primary
    # ldm set-vcpu -c 2 primary

This example uses 2 cores but the number of cores can be from 1 to the system limit.

Logical Domains Agent Service Does Not Come Online if the System Log Service Does Not Come Online

Bug ID 7034191: If the system log service, svc:/system/system-log, fails to start and does not come online, the Logical Domains agent service will not come online. When the Logical Domains agent service is not online, the virtinfo, ldm add-vsw, ldm add-vdsdev, and ldm list-io commands might not behave as expected.

Workaround: Ensure that the svc:/ldoms/agents:default service is enabled and online:

# svcs -l svc:/ldoms/agents:default

If the svc:/ldoms/agents:default service is offline, verify that the service is enabled and that all dependent services are online.
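
For example, a quick check-and-recovery sketch that lists the services on which the agent depends (and their states), and then enables the agent service if it is disabled:

# svcs -d svc:/ldoms/agents:default
# svcadm enable svc:/ldoms/agents:default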

Kernel Deadlock Causes Machine Hang During a Migration

Bug ID 7030045: The migration of an active guest domain might hang and cause the source machine to become unresponsive. When this problem occurs, the following message is written to the console and to the /var/adm/messages file:

vcc: i_vcc_ldc_fini: cannot close channel 15

vcc: [ID 815110 kern.notice] i_vcc_ldc_fini: cannot
close channel 15

Note that the channel number shown is an Oracle Solaris internal channel number that might be different for each warning message.

Workaround: Before you migrate the domain, disconnect from the guest domain's console.

Recovery: Perform a powercycle of the source machine.

DRM and ldm list Output Shows a Different Number of Virtual CPUs Than Are Actually in the Guest Domain

Bug ID 7027105: A No response message might appear in the Oracle VM Server for SPARC log when a loaded domain's DRM policy expires after the CPU count has been substantially reduced. The ldm list output shows that there are more CPU resources allocated to the domain than is shown in the psrinfo output.

Workaround: Use the ldm set-vcpu command to reset the number of CPUs on the domain to the value that is shown in the psrinfo output.
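
For example (the CPU count and domain name are illustrative), count the CPUs that the guest OS actually sees and reset the domain to that value:

ldg1# psrinfo | wc -l
primary# ldm set-vcpu 8 ldg1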

Live Migration of a Domain That Depends on an Inactive Master Domain on the Target Machine Causes ldmd to Fault With a Segmentation Fault

Bug ID 7026177: If you attempt a live migration of a domain that depends on an inactive domain on the target machine, the ldmd daemon faults with a segmentation fault, and the domain on the target machine restarts. You can still perform a migration, but it will not be a live migration.

Workaround: Perform one of the following actions before you attempt the live migration:

DRM Fails to Restore the Default Number of Virtual CPUs for a Migrated Domain When the Policy is Removed or Expired

Bug ID 7026160: You perform a domain migration while a DRM policy is in effect. Later, if the DRM policy expires or is removed from the migrated domain, DRM fails to restore the original number of virtual CPUs to the domain.

Workaround: If a domain is migrated while a DRM policy is active and the DRM policy is subsequently expired or removed, reset the number of virtual CPUs. Use the ldm set-vcpu command to set the number of virtual CPUs to the original value on the domain.

Virtual CPU Timeout Failures During DR

Bug ID 7025445: Running the ldm set-vcpu 1 command on a guest domain that has over 100 virtual CPUs and some cryptographic units fails to remove the virtual CPUs. The virtual CPUs are not removed because of a DR timeout failure. The cryptographic units are successfully removed.

Workaround: Use the ldm rm-vcpu command to remove all but one of the virtual CPUs from the guest domain. Do not remove more than 100 virtual CPUs at a time.
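
For example, to reduce a 128-vCPU guest domain to a single virtual CPU without removing more than 100 at a time (the counts and domain name are illustrative):

primary# ldm rm-vcpu 100 ldg1
primary# ldm rm-vcpu 27 ldg1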

Migration Failure Reason Not Reported When the System MAC Address Clashes With Another MAC Address

Bug ID 7023216: A domain cannot be migrated if it contains a duplicate MAC address. Typically, when a migration fails for this reason, the failure message shows the duplicate MAC address. However in rare circumstances, this failure message might not report the duplicate MAC address.

# ldm migrate ldg2 system2
Target Password:
Domain Migration of LDom ldg2 failed

Workaround: Ensure that the MAC addresses on the target machine are unique.

Simultaneous Migration Operations in “Opposite Direction” Might Cause ldm to Hang

Bug ID 7019493: If two ldm migrate commands are issued simultaneously in the “opposite direction,” the two commands might hang and never complete. For example, an opposite direction situation is one where you simultaneously start a migration on machine A to machine B and a migration on machine B to machine A.

The migration processes hang even if they are initiated as dry runs by using the -n option. When this problem occurs, all other ldm commands might hang.

Workaround: None.

Removing a Large Number of CPUs From the Control Domain

Bug ID 6994984: Use a delayed reconfiguration rather than dynamic reconfiguration to remove more than 100 CPUs from the control domain (also known as the primary domain). Use the following steps:

  1. Use the ldm start-reconf primary command to put the control domain in delayed reconfiguration mode.

  2. Remove the desired number of CPU resources.

    If you make a mistake while removing CPU resources, do not attempt another request to remove CPUs while the control domain is still in a delayed reconfiguration. If you do so, the commands will fail (see Only One CPU Configuration Operation Is Permitted to Be Performed During a Delayed Reconfiguration). Instead, undo the delayed reconfiguration operation by using the ldm cancel-reconf command, and start over.

  3. Reboot the control domain.
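
For example, a minimal sketch of the full sequence (the target of 8 virtual CPUs is illustrative):

primary# ldm start-reconf primary
primary# ldm set-vcpu 8 primary
primary# reboot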

System That Has the Elastic Policy Set and Is Running the Oracle Solaris 10 8/11 OS Might Hang

Bug IDs 6989192 and 7071760: You might experience OS hangs at login or while executing commands when the following conditions are met:

Workaround: Apply patch ID 147149-01.

pkgadd Fails to Set ACL Entries on /var/svc/manifest/platform/sun4v/ldmd.xml

Bug ID 6984681: When using the pkgadd command to install the SUNWldm.v package from a directory that is exported by means of NFS from a Sun ZFS Storage Appliance, you might see the following error message:

cp: failed to set acl entries on /var/svc/manifest/platform/sun4v/ldmd.xml

Workaround: Ignore this message.

SPARC T3-1: Detect And Handle Disks That Are Accessible Through Multiple Direct I/O Paths

Bug ID 6984008: A SPARC T3-1 system can be installed with dual-ported disks, which can be accessed by two different direct I/O devices. In this case, assigning these two direct I/O devices to different domains enables both domains to access the same disks, and the domains can affect each other based on how those disks are actually used.

Workaround: Do not assign direct I/O devices that have access to the same set of disks to different I/O domains. To determine whether you have dual-ported disks on a T3-1 system, perform the following steps:

Determine whether the system has dual-ported disks by running the following command on the SP:

-> show /SYS/SASBP

If the output includes the following fru_description value, the corresponding system has dual-ported disks:

fru_description = BD,SAS2,16DSK,LOUISE

When dual-ported disks are present in the system, ensure that both of the following direct I/O devices are always assigned to the same domain:

pci@400/pci@1/pci@0/pci@4  /SYS/MB/SASHBA0
pci@400/pci@2/pci@0/pci@4  /SYS/MB/SASHBA1

Memory DR Removal Operations With Multiple Plumbed NIU nxge Instances Can Hang Indefinitely and Never Complete

Bug ID 6983279: When multiple NIU nxge instances are plumbed on a domain, the ldm rm-mem and ldm set-mem commands, which are used to remove memory from the domain, might never complete. To determine whether the problem has occurred during a memory removal operation, monitor the progress of the operation with the ldm list -o status command. You might have encountered this problem if the progress percentage remains constant for several minutes.

Recovery: Cancel the ldm rm-mem or ldm set-mem command.

Workaround: Cancel the ldm rm-mem or ldm set-mem command, and check if a sufficient amount of memory was removed. If not, a subsequent memory removal command to remove a smaller amount of memory might complete successfully.

If the problem has occurred on the primary domain, do the following:

  1. Start a delayed reconfiguration operation on the primary domain.

    # ldm start-reconf primary
  2. Assign the desired amount of memory to the domain.

  3. Reboot the primary domain.
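
For example, the recovery for the primary domain might look like the following sketch (the memory size is illustrative):

primary# ldm start-reconf primary
primary# ldm set-mem 96G primary
primary# reboot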

If the problem occurred on another domain, stop the domain before adjusting the amount of memory that is assigned to the domain.

Using ldm stop -a Command on Domains in a Master-Slave Relationship Leaves the Slave With the stopping Flag Set

Bug ID 6979574: When a reset dependency is created, an ldm stop -a command might result in a domain with a reset dependency being restarted instead of only stopped.

Workaround: First, issue the ldm stop command to the master domain. Then, issue the ldm stop command to the slave domain. If the initial stop of the slave domain results in a failure, issue the ldm stop -f command to the slave domain.
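
For example, assuming a master domain named master-dom and a slave domain named slave-dom (hypothetical names), stop the domains in this order, and use the -f option only if the first stop of the slave domain fails:

primary# ldm stop master-dom
primary# ldm stop slave-dom
primary# ldm stop -f slave-dom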

Migration of a Domain That Has an Enabled Default DRM Policy Results in a Target Domain Being Assigned All Available CPUs

Bug ID 6968507: Following the migration of an active domain, CPU utilization in the migrated domain can increase dramatically for a short period of time. If a dynamic resource management (DRM) policy is in effect for the domain at the time of the migration, the Logical Domains Manager might begin to add CPUs. In particular, if the vcpu-max and attack properties were not specified when the policy was added, the default value of unlimited causes all the unbound CPUs in the target machine to be added to the migrated domain.

Recovery: No recovery is necessary. After the CPU utilization drops below the upper limit that is specified by the DRM policy, the Logical Domains Manager automatically removes the CPUs.

An In-Use MAC Address Can be Reassigned

Bug ID 6968100: Sometimes an in-use MAC address is not detected and it is erroneously reassigned.

Workaround: Manually ensure that an in-use MAC address cannot be reassigned.

ldmconfig Cannot Create a Domain Configuration on the SP

Bug ID 6967799: The ldmconfig script cannot properly create a stored logical domains configuration on the service processor (SP).

Workaround: Do not power cycle the system after the ldmconfig script completes and the domain reboots. Instead, perform the following manual steps:

  1. Add the configuration to the SP.

    # ldm add-spconfig new-config-name
  2. Remove the primary-with-clients configuration from the SP.

    # ldm rm-spconfig primary-with-clients
  3. Power cycle the system.

If you do not perform these steps before the system is power cycled, the existence of the primary-with-clients configuration causes the domains to be inactive. In this case, you must bind each domain manually and then start them by running the ldm start -a command. After the guests have booted, repeating this sequence enables the guest domains to be booted automatically after a power cycle.

Uncooperative Oracle Solaris Domain Migration Can Be Blocked If cpu0 Is Offline

Bug ID 6965758: The migration of an active domain can fail if it is running a release older than the Oracle Solaris 10 10/09 OS and the lowest numbered CPU in the domain is in the offline state. The operation fails when the Logical Domains Manager uses CPU DR to reduce the domain to a single CPU. In doing so, the Logical Domains Manager attempts to remove all but the lowest CPU in the domain, but as that CPU is offline, the operation fails.

Workaround: Before attempting the migration, ensure that the lowest numbered CPU in the domain is in the online state.

Memory DR Is Disabled Following a Canceled Migration

Bug ID 6956431: After an Oracle Solaris 10 9/10 domain has been suspended as part of a migration operation, memory dynamic reconfiguration (DR) is disabled. This applies not only when the migration is successful, but also when the migration has been canceled, despite the fact that the domain remains on the source machine.

Dynamic Reconfiguration of MTU Values of Virtual Network Devices Sometimes Fails

Bug ID 6936833: If you modify the maximum transmission unit (MTU) of a virtual network device on the control domain, a delayed reconfiguration operation is triggered. If you subsequently cancel the delayed reconfiguration, the MTU value for the device is not restored to the original value.

Recovery: Rerun the ldm set-vnet command to set the MTU to the original value. Resetting the MTU value puts the control domain into delayed reconfiguration mode, which you need to cancel. The resulting MTU value is now the original, correct MTU value.

# ldm set-vnet mtu=orig-value vnet1 primary
# ldm cancel-op reconf primary

Migrated Domain With MAUs Contains Only One CPU When Target OS Does Not Support DR of Cryptographic Units

Bug ID 6904849: Starting with the Logical Domains 1.3 release, a domain can be migrated even if it has one or more cryptographic units bound to it.

In the following circumstances, the migrated domain will contain only one CPU after the migration is completed:

After the migration completes, the target domain will resume successfully and be operational, but will be in a degraded state (just one CPU).

Workaround: Prior to the migration, remove the cryptographic unit or units from the source machine that runs Logical Domains 1.3.

Mitigation: To avoid this issue, perform one or both of these steps:

Confusing Migration Failure Message for Real Address Memory Bind Failures

Bug ID 6904240: In certain situations, a migration fails with the following error message, and ldmd reports that it was not possible to bind the memory needed for the source domain. This situation can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain (as shown by ldm ls-devices -a mem).

Unable to bind 29952M memory region at real address 0x8000000
Domain Migration of LDom ldg0 failed

Cause: This failure is due to the inability to meet congruence requirements between the Real Address (RA) and the Physical Address (PA) on the target machine.

Workaround: Stop the domain and perform the migration as a cold migration. You can also reduce the size of the memory on the guest domain by 128 Mbytes, which might permit the migration to proceed while the domain is running.

Dynamically Removing All the Cryptographic Units From a Domain Causes SSH to Terminate

Bug ID 6897743: If all the hardware cryptographic units are dynamically removed from a running domain, the cryptographic framework fails to seamlessly switch to the software cryptographic providers, and kills all the ssh connections.

Recovery: Re-establish the ssh connections after all the cryptographic units are removed from the domain.

Workaround: Set UseOpenSSLEngine=no in the /etc/ssh/sshd_config file on the server side, and run the svcadm restart ssh command.

As a result, ssh connections no longer use the hardware cryptographic units (and thus do not benefit from the associated performance improvements), but the connections are not disconnected when the cryptographic units are removed.

PCI Express Dual 10-Gigabit Ethernet Fiber Card Shows Four Subdevices in ldm list-io -l Output

Bug ID 6892229: When you run the ldm ls-io -l command on a system that has a PCI Express Dual 10-Gigabit Ethernet Fiber card (X1027A-Z) installed, the output might show the following:

primary# ldm ls-io -l
...
pci@500/pci@0/pci@c PCIE5 OCC primary
network@0
network@0,1
ethernet
ethernet

The output shows four subdevices even though this Ethernet card has only two ports. This anomaly occurs because this card has four PCI functions. Two of these functions are disabled internally and appear as ethernet in the ldm ls-io -l output.

Workaround: You can ignore the ethernet entries in the ldm ls-io -l output.

ldm Commands Are Slow to Respond When Several Domains Are Booting

Bug ID 6855079: An ldm command might be slow to respond when several domains are booting. If you issue an ldm command at this stage, the command might appear to hang. Note that the ldm command will return after performing the expected task. After the command returns, the system should respond normally to ldm commands.

Workaround: Avoid booting many domains simultaneously. However, if you must boot several domains at once, refrain from issuing further ldm commands until the system returns to normal. For instance, wait for about two minutes on Sun SPARC Enterprise T5140 and T5240 Servers and for about four minutes on the Sun SPARC Enterprise T5440 Server or Netra T5440 Server.

Guest Domain Might Fail to Successfully Reboot When a System Is in Power Management Elastic Mode

Bug ID 6853273: While a system has the power management elastic policy set, rebooting a guest domain might produce the following warning messages and fail to boot successfully:

WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Sending packet to LDC, status: -1
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Can't send vdisk read request!
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Timeout receiving packet from LDC ... retrying

Workaround: If you see these warnings, perform one of the workarounds in the following order:

Guest Domain Sometimes Fails to Make Proper Domain Services Connection to the Control Domain

Bug ID 6839787: Sometimes, a guest domain that runs at least the Oracle Solaris 10 10/08 OS does not make a proper Domain Services connection to a control domain that runs the Oracle Solaris 10 5/09 OS.

Domain Services connections enable features such as dynamic reconfiguration (DR), FMA, and power management (PM). Such a failure occurs when the guest domain is booted, so rebooting the domain usually clears the problem.

Workaround: Reboot the guest domain.

Oracle Solaris 11: Zones Configured With an Automatic Network Interface Might Fail to Start

Bug ID 6837615: In Oracle Solaris 11, zones that are configured with an automatic network interface (anet) might fail to start in a domain that has Logical Domains virtual network devices only.

The following are the workarounds:

Oracle Solaris 10: Virtual Network Devices Are Not Created Properly on the Control Domain

Bug ID 6836587: Sometimes ifconfig indicates that the device does not exist after you add a virtual network or virtual disk device to a domain. This situation might occur as the result of the /devices entry not being created.

Although this should not occur during normal operation, the error was seen when the instance number of a virtual network device did not match the instance number listed in the /etc/path_to_inst file.

For example:

# ifconfig vnet0 plumb
ifconfig: plumb: vnet0: no such interface

The instance number of a virtual device is shown under the DEVICE column in the ldm list output:

# ldm list -o network primary
NAME             
primary          

MAC
    00:14:4f:86:6a:64

VSW
    NAME         MAC               NET-DEV DEVICE   DEFAULT-VLAN-ID PVID VID MTU  MODE  
    primary-vsw0 00:14:4f:f9:86:f3 nxge0   switch@0 1               1        1500        

NETWORK
    NAME   SERVICE              DEVICE    MAC               MODE PVID VID MTU  
    vnet1  primary-vsw0@primary network@0 00:14:4f:f8:76:6d      1        1500

The instance number (0 for both the vnet and vsw shown previously) can be compared with the instance number in the path_to_inst file to ensure that they match.

# egrep '(vnet|vsw)' /etc/path_to_inst
"/virtual-devices@100/channel-devices@200/virtual-network-switch@0" 0 "vsw"
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"

Workaround: In the case of mismatching instance numbers, remove the virtual network or virtual switch device. Then, add them again by explicitly specifying the instance number required by setting the id property.
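
For example, using the vnet1 device from the previous output and assuming that the required instance number is 0 (matching the /etc/path_to_inst entry), remove the device and add it back with an explicit id value. If the change triggers a delayed reconfiguration on the control domain, reboot the primary domain for it to take effect.

primary# ldm rm-vnet vnet1 primary
primary# ldm add-vnet id=0 vnet1 primary-vsw0 primary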

You can also manually edit the /etc/path_to_inst file. See the path_to_inst(4) man page.


Caution - Be aware of the warning contained in the man page that states “changes should not be made to /etc/path_to_inst without careful consideration.”


Newly Added NIU/XAUI Adapters Are Not Visible to Host OS If Logical Domains Is Configured

Bug ID 6829016: When Logical Domains is configured on a system and you add another XAUI network card, the card is not visible after the machine is powercycled.

Recovery: To make the newly added XAUI visible in the control domain, perform the following steps:

  1. Set and clear a dummy variable in the control domain.

    The following commands use a dummy variable called fix-xaui:

    # ldm set-var fix-xaui=yes primary
    # ldm rm-var fix-xaui primary
  2. Save the modified configuration to the SP, replacing the current configuration.

    The following commands use a configuration name of config1:

    # ldm rm-spconfig config1
    # ldm add-spconfig config1
  3. Perform a reconfiguration reboot of the control domain.

    # reboot -- -r

    At this time, you can configure the newly available network or networks for use by Logical Domains.

I/O Domain or Guest Domain Panics When Booting From e1000g

Bug ID 6808832: You can configure a maximum of two domains with dedicated PCI-E root complexes on systems such as the Sun Fire T5240. These systems have two UltraSPARC T2+ CPUs and two I/O root complexes.

pci@500 and pci@400 are the two root complexes in the system. The primary domain will always contain at least one root complex. A second domain can be configured with an unassigned or unbound root complex.

The pci@400 fabric (or leaf) contains the on-board e1000g network card. The following circumstances could lead to a domain panic:

Avoid the following network devices if they are configured in a non-primary domain:

/pci@400/pci@0/pci@c/network@0,1
/pci@400/pci@0/pci@c/network@0

When these conditions are true, the domain will panic with a PCI-E Fatal error.

Avoid such a configuration, or if the configuration is used, do not boot from the listed devices.

Explicit Console Group and Port Bindings Are Not Migrated

Bug ID 6781589: During a migration, any explicitly assigned console group and port are ignored, and a console with default properties is created for the target domain. This console is created using the target domain name as the console group and using any available port on the first virtual console concentrator (vcc) device in the control domain. If there is a conflict with the default group name, the migration fails.

Recovery: To restore the explicit console properties following a migration, unbind the target domain and manually set the desired properties using the ldm set-vcons command.
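
For example, to restore an explicit console group and port on a migrated domain named ldg1 (the domain name, group name, and port are illustrative):

primary# ldm stop ldg1
primary# ldm unbind ldg1
primary# ldm set-vcons group=ldg1-group port=5001 ldg1
primary# ldm bind ldg1
primary# ldm start ldg1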

Constraint Database Is Not Synchronized to Saved Configuration

Bug ID 6773569: After switching from one configuration to another (using the ldm set-config command followed by a powercycle), domains defined in the previous configuration might still be present in the current configuration, in the inactive state.

This is a result of the Logical Domains Manager's constraint database not being kept in sync with the change in configuration. These inactive domains do not affect the running configuration and can be safely destroyed.

Migration Does Not Fail If a vdsdev on the Target Has a Different Back End

Bug ID 6772120: If the virtual disk on the target machine does not point to the same disk back end that is used on the source machine, the migrated domain cannot access the virtual disk using that disk back end. A hang can result when accessing the virtual disk on the domain.

Currently, the Logical Domains Manager checks only that the virtual disk volume names match on the source and target machines. In this scenario, no error message is displayed if the disk back ends do not match.

Workaround: When you configure the target domain to receive a migrated domain, ensure that the disk volume (vdsdev) matches the disk back end that is used on the source domain.

Recovery: Do one of the following if you discover that the virtual disk device on the target machine points to the incorrect disk back end:

Migration Can Fail to Bind Memory Even If the Target Has Enough Available

Bug ID 6772089: In certain situations, a migration fails and ldmd reports that it was not possible to bind the memory needed for the source domain. This can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain.

This failure occurs because migrating the specific memory ranges in use by the source domain requires that compatible memory ranges are available on the target, as well. When no such compatible memory range is found for any memory range in the source, the migration cannot proceed.

Recovery: If this condition is encountered, you might be able to migrate the domain if you modify the memory usage on the target machine. To do this, unbind any bound or active logical domain on the target.

Use the ldm list-devices -a mem command to see what memory is available and how it is used. You might also need to reduce the amount of memory that is assigned to another domain.

Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running

Bug ID 6764613: If you do not have a network configured on your machine and have a Network Information Services (NIS) client running, the Logical Domains Manager will not start on your system.

Workaround: Disable the NIS client on your non-networked machine:

# svcadm disable nis/client

Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted

Bug ID 6760933: On occasion, an active logical domain appears to be in the transition state instead of the normal state long after it is booted or following the completion of a domain migration. This glitch is harmless, and the domain is fully operational. To see what flag is set, check the flags field in the ldm list -l -p command output, or check the FLAGS field in the ldm list command, which shows -n---- for normal or -t---- for transition.

Recovery: After the next reboot, the domain shows the correct state.

Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted

Bug ID 6757486: Occasionally, after a domain has been migrated, it is not possible to connect to the console for that domain.

Workaround: Restart the vntsd SMF service to enable connections to the console:

# svcadm restart vntsd

Note - This command will disconnect all active console connections.


Sometimes, Executing the uadmin 1 0 Command From a Logical Domains System Does Not Return the System to the OK Prompt

Bug ID 6753683: Sometimes, executing the uadmin 1 0 command from the command line of a Logical Domains system does not leave the system at the ok prompt after the subsequent reset. This incorrect behavior is seen only when the Logical Domains variable auto-reboot? is set to true. If auto-reboot? is set to false, the expected behavior occurs.

Workaround: Use this command instead:

uadmin 2 0

Or, always run with auto-reboot? set to false.
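
For example, you can set the variable from the control domain by using the ldm set-var command (the domain name ldg1 is illustrative; quote the value so that the shell does not interpret the ? character):

primary# ldm set-var 'auto-reboot?=false' ldg1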

Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Domain

Bug ID 6742805: A domain shutdown or memory scrub can take over 15 minutes with a single CPU and a very large memory configuration. During a shutdown, the CPUs in a domain are used to scrub all the memory owned by the domain. The time taken to complete the scrub can be quite long if a configuration is imbalanced, for example, a single CPU domain with 512 Gbytes of memory. This prolonged scrub time extends the amount of time it takes to shut down a domain.

Workaround: Ensure that large memory configurations (>100 Gbytes) have at least one core. This results in a much faster shutdown time.

If the Oracle Solaris 10 5/08 OS Is Installed on a Service Domain, Attempting a Net Boot of the Oracle Solaris 10 8/07 OS on Any Guest Domain Serviced by It Can Hang the Installation

Bug ID 6705823: Attempting a net boot of the Oracle Solaris 10 8/07 OS on any guest domain serviced by a service domain running the Oracle Solaris 10 5/08 OS can result in a hang on the guest domain during the installation.

Workaround: Patch the miniroot of the Oracle Solaris 10 8/07 OS net install image with Patch ID 127111-05.

Simultaneous Net-Installation of Multiple Domains Fails When in a Common Console Group

Bug ID 6656033: Simultaneous net installation of multiple guest domains fails on systems that have a common console group.

Workaround: Only net-install on guest domains that each have their own console group. This failure is seen only on domains with a common console group shared among multiple net-installing domains.

The scadm Command Can Hang Following an SC or SP Reset

Bug ID 6629230: The scadm command on a control domain running at least the Solaris 10 11/06 OS can hang following an SC reset. The system is unable to properly reestablish a connection following an SC reset.

Workaround: Reboot the host to reestablish connection with the SC.

Recovery: Reboot the host to reestablish connection with the SC.

ldc_close: (0xb) unregister failed, 11 Warning Messages

Bug ID 6610702: You might see the following warning message on the system console or in the system log:

ldc_close: (0xb) unregister failed, 11

Note that the number in parentheses is the Oracle Solaris internal channel number, which might be different for each warning message.

Workaround: You can ignore these messages.

Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive

Bug ID 6603974: If you configure more than four virtual networks (vnets) in a guest domain on the same network using the Dynamic Host Configuration Protocol (DHCP), the guest domain can eventually become unresponsive while running network traffic.

Workaround: Set ip_ire_min_bucket_cnt and ip_ire_max_bucket_cnt to larger values, such as 32, if you have 8 interfaces.

Recovery: Issue an ldm stop-domain ldom command followed by an ldm start-domain ldom command on the guest domain (ldom) in question.

Logical Domains Manager Does Not Retire Resources On Guest Domain After a Panic and Reboot

Bug ID 6591844: If a CPU or memory fault occurs, the affected domain might panic and reboot. If the Fault Management Architecture (FMA) attempts to retire the faulted component while the domain is rebooting, the Logical Domains Manager is not able to communicate with the domain, and the retire fails. In this case, the fmadm faulty command lists the resource as degraded.

Recovery: Wait for the domain to complete rebooting, and then force FMA to replay the fault event by restarting the fault manager daemon (fmd) on the control domain by using this command:

primary# svcadm restart fmd

OpenBoot PROM Variables Cannot be Modified by the eeprom(1M) Command When the Logical Domains Manager is Running

Bug ID 6540368: This issue is summarized in Logical Domains Variable Persistence and affects only the control domain.

Cannot Set Security Keys With Logical Domains Running

Bug ID 6510214: In a Logical Domains environment, there is no support for setting or deleting wide-area network (WAN) boot keys from within the Oracle Solaris OS by using the ickey(1M) command. All ickey operations fail with the following error:

ickey: setkey: ioctl: I/O error

In addition, WAN boot keys that are set using OpenBoot firmware in logical domains other than the control domain are not remembered across reboots of the domain. In these domains, the keys set from the OpenBoot firmware are only valid for a single use.

Behavior of the ldm stop-domain Command Can Be Confusing

Bug ID 6506494: There are some cases where the behavior of the ldm stop-domain command is confusing.

# ldm stop-domain -f ldom

If the domain is at the kernel module debugger, kmdb(1), prompt, then the ldm stop-domain command fails with the following error message:

LDom <domain name> stop notification failed