Oracle VM Server for SPARC 2.1 Release Notes

Document Information

Preface

1.  Oracle VM Server for SPARC 2.1 Release Notes

What's New in This Release

System Requirements

Supported Platforms

Required Software and Patches

Required and Recommended Oracle Solaris OS

Required Software to Enable Oracle VM Server for SPARC 2.1 Features

Required and Recommended System Firmware Patches

Minimum Version of Software Required

Direct I/O Hardware and Software Requirements

Live Domain Migration Requirements

Location of Oracle VM Server for SPARC 2.1 Software

Location of Patches

Location of Documentation

Related Software

Optional Software

Software That Can Be Used With the Logical Domains Manager

System Controller Software That Interacts With Logical Domains Software

Assigning Physical Resources to Domains

Managing Physical Resources on the Control Domain

Restrictions for Managing Physical Resources on Domains

Upgrading to Oracle VM Server for SPARC 2.1 Software

Known Issues

General Issues

I/O MMU Bypass Mode Is No Longer Needed

Service Processor and System Controller Are Interchangeable Terms

In Certain Conditions, a Guest Domain's Solaris Volume Manager Configuration or Metadevices Can Be Lost

Logical Domain Channels and Logical Domains

Memory Size Requirements

Booting a Large Number of Domains

Cleanly Shutting Down and Power Cycling a Logical Domains System

Memory Size Requested Might Be Different From Memory Allocated

Logical Domains Variable Persistence

Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains

Containers, Processor Sets, and Pools Are Not Compatible With CPU Power Management

Fault Management

Delayed Reconfiguration

Cryptographic Units

ldmp2v convert Command: VxVM Warning Messages During Boot

Extended Mapin Space Is Only Available in the Oracle Solaris 10 8/11 OS and Oracle Solaris 11 OS

Graphical Configuration Assistant Tool Has Been Removed

Upgrade Option Not Presented When Using ldmp2v prepare -R

Block of Dynamically Added Memory Can Be Dynamically Removed Only as a Whole

ldmp2v Command: ufsdump Archiving Method Is No Longer Used

Domain Migration Restrictions

Version Restrictions for Migration

CPU Restrictions for Migration

Oracle VM Server for SPARC MIB Issues

Incorrect ldomCryptoRpReserved Property Value

The snmptable Command Does Not Work With the Version 2 or Version 3 Option

Bugs Affecting the Oracle VM Server for SPARC 2.1 Software

init-system Does Not Restore Named Core Constraints for Guest Domains From Saved XML Files

Named Cores Can Power Off All CPUs When in Bind Mode

Oracle Solaris 11 OS: Using Direct I/O to Remove Multiple PCIe Slots From the primary Domain on a Multi-Socket SPARC T-Series System Might Panic at Boot Time

Partial Core primary Fails to Permit Whole-Core DR Transitions

ldmconfig Is Only Supported on Oracle Solaris 10 Systems

Oracle VM Server for SPARC MIB Is Only Supported on Oracle Solaris 10 Systems

Migrating a Very Large Memory Domain on SPARC T4-4s Results in a Panicked Domain on the Target System

Removing a Large Number of CPUs From a Guest Domain

CPU Threading Mode Is Not Restored After a Domain Migration Is Canceled

A Large-Memory Domain in Elastic Mode Might Take a Long Time to Stop

Cannot Use Solaris Hot Plug Operations to Hot Remove a PCIe Endpoint Device

install-ldm Hangs When Run By Using an Absolute Path From Another Directory

ldm add-dev Can Create a Device Alias That is Longer Than Supported by OpenBoot

Virtual Disk Validation Fails for a Physical Disk With No Slice 2

When incoming_migration_enabled=false, Outgoing Migrations Fail

nxge Panics When Migrating a Guest Domain That Has Hybrid I/O and Virtual I/O Virtual Network Devices

Do Not Use the Sun Management Console Software to Monitor an Oracle VM Server for SPARC System

Incorrect SP Configuration Is Used as the Default

All ldm Commands Hang When Migrations Have Missing Shared NFS Resources

ldmd Fails to Remove Cores From a Domain That Has Partial Cores

Incorrect Return Status for a Failed CPU DR Operation on a Domain Booted in Single User Mode

Logical Domains Agent Service Does Not Come Online if the System Log Service Does Not Come Online

Kernel Deadlock Causes Machine Hang During a Migration

DRM and ldm list Output Shows a Different Number of Virtual CPUs Than Are Actually in the Guest Domain

DRM Fails to Restore the Default Number of Virtual CPUs for a Migrated Domain When the Policy is Removed or Expired

Virtual CPU Timeout Failures During DR

Domain Bind Fails When XML File Has an Invalid Network or Disk Back End

Migration Failure Reason Not Reported When the System MAC Address Clashes With Another MAC Address

Simultaneous Migration Operations in "Opposite Direction" Might Cause ldm to Hang

Removing a Large Number of CPUs From the Control Domain

SPARC T3: Oracle VM Server for SPARC Hangs When Performing Memory Operations

System That Has the Elastic Policy Set and Is Running the Oracle Solaris 10 8/11 OS Might Hang

pkgadd Fails to Set ACL Entries on /var/svc/manifest/platform/sun4v/ldmd.xml

SPARC T3-1: Detect And Handle Disks That Are Accessible Through Multiple Direct I/O Paths

Memory DR Removal Operations With Multiple Plumbed NIU nxge Instances Can Hang Indefinitely and Never Complete

ldmd Falsely Reports 100% CPU Utilization on a Domain

Guest Domains Cannot Boot From an Exported DVD Device

Using ldm stop -a Command on Domains in a Master-Slave Relationship Leaves the Slave With the stopping Flag Set

Cryptographic Units Cannot Be Removed From the primary Domain

Migration of a Guest Domain That Has Hybrid I/O-Enabled Virtual Network Devices Panics the Service Domain

Migration of a Domain That Has an Enabled Default DRM Policy Results in a Target Domain Being Assigned All Available CPUs

An In-Use MAC Address Can be Reassigned

ldmconfig Cannot Create a Domain Configuration on the SP

Uncooperative Oracle Solaris Domain Migration Can Be Blocked If cpu0 Is Offline

Memory DR Is Disabled Following a Canceled Migration

Dynamic Reconfiguration of MTU Values of Virtual Network Devices Sometimes Fails

Memory DR Is Not Supported With Some Physical Memory Configurations

Migrated Domain With MAUs Contains Only One CPU When Target OS Does Not Support DR of Cryptographic Units

Confusing Migration Failure Message for Real Address Memory Bind Failures

Dynamically Removing All the Cryptographic Units From a Domain Causes SSH to Terminate

Atlas PCI Express Dual 10-Gigabit Ethernet Fiber Card Shows Four Subdevices in ldm list-io -l Output

ldm Commands Are Slow to Respond When Several Domains Are Booting

Guest Domain Might Fail to Successfully Reboot When a System Is in Power Management Elastic Mode

Spurious ds_ldc_cb: LDC READ event Message Seen When Rebooting the Control Domain or a Guest Domain

Guest Domain Sometimes Fails to Make Proper Domain Services Connection to the Control Domain

Virtual Network Devices Are Not Created Properly on the Control Domain

Newly Added NIU/XAUI Adapters Are Not Visible to Host OS If Logical Domains Is Configured

I/O Domain or Guest Domain Panics When Booting From e1000g

Explicit Console Group and Port Bindings Are Not Migrated

Constraint Database Is Not Synchronized to Saved Configuration

Migration Does Not Fail If a vdsdev on the Target Has a Different Back End

Migration Can Fail to Bind Memory Even If the Target Has Enough Available

Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running

Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted

Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted

Sometimes, Executing the uadmin 1 0 Command From a Logical Domains System Does Not Return the System to the OK Prompt

Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Domain

If the Oracle Solaris 10 5/08 OS Is Installed on a Service Domain, Attempting a Net Boot of the Oracle Solaris 10 8/07 OS on Any Guest Domain Serviced by It Can Hang the Installation

ldmd Might Dump Core If Multiple set-vcpu Operations Are Performed on the Control Domain While It Is in Delayed Reconfiguration Mode

Solaris Volume Manager Volumes Built on Slice 2 Fail JumpStart When Used as the Boot Device in a Guest Domain

Simultaneous Net-Installation of Multiple Domains Fails When in a Common Console Group

The scadm Command Can Hang Following an SC or SP Reset

ldc_close: (0xb) unregister failed, 11 Warning Messages

Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive

Logical Domains Manager Does Not Retire Resources On Guest Domain After a Panic and Reboot

OpenBoot PROM Variables Cannot be Modified by the eeprom(1M) Command When the Logical Domains Manager is Running

Cannot Set Security Keys With Logical Domains Running

Behavior of the ldm stop-domain Command Can Be Confusing

Hang Can Occur With Guest OS in Simultaneous Operations

Sometimes DR Requests Fail to Remove All Requested CPUs

Documentation Errata

Incorrect Cross Reference to Required Software Information

ldm stop Command Description Is Misleading

Logical Domains Manager Package Name Incorrect in Upgrade Procedure

ILOM load Command Synopsis Uses Incorrect Character

Resolved Issues

Oracle VM Server for SPARC 2.1 RFEs and Bugs Fixed in Oracle Solaris 10 8/11 OS

RFEs and Bugs Fixed for Oracle VM Server for SPARC 2.1 Software

RFEs and Bugs Fixed for Oracle VM Server for SPARC 2.1 Software Patch

Known Issues

This section contains general issues and specific bugs concerning the Oracle VM Server for SPARC 2.1 software.

General Issues

This section describes general known issues about this release of the Oracle VM Server for SPARC software that are broader than a specific bug number. Workarounds are provided where available.

I/O MMU Bypass Mode Is No Longer Needed

Starting with the Oracle VM Server for SPARC 2.0 release, I/O memory management unit (MMU) bypass mode is no longer needed. As a result, the bypass=on property is no longer available for use by the ldm add-io command.

Service Processor and System Controller Are Interchangeable Terms

For discussions in Oracle VM Server for SPARC documentation, the terms service processor (SP) and system controller (SC) are interchangeable.

In Certain Conditions, a Guest Domain's Solaris Volume Manager Configuration or Metadevices Can Be Lost

If a service domain is running a version of Oracle Solaris 10 OS prior to Oracle Solaris 10 9/10 and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Oracle Solaris 10 9/10, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.

This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, this can cause the Solaris Volume Manager to be unable to find its configuration or to access its metadevices.

Workaround: After upgrading a service domain to Oracle Solaris 10 9/10, if a guest domain is unable to find its Solaris Volume Manager configuration or its metadevices, execute the following procedure.

Find a Guest Domain's Solaris Volume Manager Configuration or Metadevices

  1. Boot the guest domain.
  2. Disable the devid feature of Solaris Volume Manager by adding the following lines to the /kernel/drv/md.conf file:
    md_devid_destroy=1;
    md_keep_repl_state=1;
  3. Reboot the guest domain.

    After the domain has booted, the Solaris Volume Manager configuration and metadevices should be available.

  4. Check the Solaris Volume Manager configuration and ensure that it is correct.
  5. Re-enable the Solaris Volume Manager devid feature by removing from the /kernel/drv/md.conf file the two lines that you added in Step 2.
  6. Reboot the guest domain.

    During the reboot, you will see messages similar to this:

    NOTICE: mddb: unable to get devid for 'vdc', 0x10

    These messages are normal and do not report any problems.

Logical Domain Channels and Logical Domains

There is a limit to the number of logical domain channels (LDCs) that are available in any logical domain. For UltraSPARC T2 servers, SPARC T3-1 servers, SPARC T3-1B servers, SPARC T4-1 servers, and SPARC T4-1B servers, the limit is 512. For UltraSPARC T2 Plus servers, the other SPARC T3 servers and the other SPARC T4 servers, the limit is 768. This only becomes an issue on the control domain because the control domain has at least part, if not all, of the I/O subsystem allocated to it. This might also be an issue because of the potentially large number of LDCs that are created for both virtual I/O data communications and the Logical Domains Manager control of the other logical domains.

If you try to add a service, or bind a domain, so that the number of LDC channels exceeds the limit on the control domain, the operation fails with an error message similar to the following:

13 additional LDCs are required on guest primary to meet this request,
but only 9 LDCs are available

If you have a large number of virtual network devices that are connected to the same virtual switch, you can reduce the number of LDC channels assigned by using the ldm add-vsw or ldm set-vsw command to set inter-vnet-link=off. When this property is set to off, LDC channels are not used for inter-vnet communications. Instead, an LDC channel is assigned only for communication between virtual network devices and virtual switch devices. See the ldm(1M) man page.


Note - Although disabling the assignment of inter-vnet channels reduces the number of LDCs, it might negatively affect guest-to-guest network performance.
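
For example, a minimal sketch of how you might turn off inter-vnet LDC channels on an existing virtual switch; the switch name primary-vsw0 is an assumed example name, not a required value:

# ldm set-vsw inter-vnet-link=off primary-vsw0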


The following guidelines can help prevent creating a configuration that could overflow the LDC capabilities of the control domain:

  1. The control domain allocates approximately 15 LDCs for various communication purposes with the hypervisor, Fault Management Architecture (FMA), and the system controller (SC), independent of the number of other logical domains configured. The exact number of LDC channels that is allocated by the control domain depends on the platform and on the version of the software that is used.

  2. The control domain allocates 1 LDC to every logical domain, including itself, for control traffic.

  3. Each virtual I/O service on the control domain consumes 1 LDC for every connected client of that service.

For example, consider a control domain and 8 additional logical domains. Each logical domain needs the following at a minimum:

    • Virtual network

    • Virtual disk

    • Virtual console

Applying the above guidelines yields the following results (numbers in parentheses correspond to the preceding guideline number from which the value was derived):

15(1) + 9(2) + 8 x 3(3) = 48 LDCs in total

Now consider the case where there are 45 domains instead of 8, and each domain includes 5 virtual disks, 5 virtual networks, and a virtual console. Now the equation becomes:

15 + 46 + 45 x 11 = 556 LDCs in total

Depending on the number of LDCs that your platform supports, the Logical Domains Manager either accepts or rejects the configuration.

Memory Size Requirements

The Oracle VM Server for SPARC software does not impose a memory size limitation when you create a domain. The memory size requirement is a characteristic of the guest operating system. Some Oracle VM Server for SPARC functionality might not work if the amount of memory present is less than the recommended size. For recommended and minimum size memory requirements for the Oracle Solaris 10 OS, see System Requirements and Recommendations in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade.

The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you have a domain less than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.

The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the address and size of the memory involved in a given operation. See Memory Alignment in Oracle VM Server for SPARC 2.1 Administration Guide.

Booting a Large Number of Domains

You can boot the following number of domains depending on your platform:

If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread over all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Do not have more than 32 vnet instances per vsw service because having more than that tied to a single vsw could cause hard hangs in the service domain.
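
For example, the following sketch shows one way to spread the virtual switches across four network adapters; the adapter names (nxge0 through nxge3), the virtual switch names, and the domain name ldom1 are assumptions for illustration only:

# ldm add-vsw net-dev=nxge0 primary-vsw0 primary
# ldm add-vsw net-dev=nxge1 primary-vsw1 primary
# ldm add-vsw net-dev=nxge2 primary-vsw2 primary
# ldm add-vsw net-dev=nxge3 primary-vsw3 primary
# ldm add-vnet vnet0 primary-vsw0 ldom1

Each guest domain's virtual network device is then added to one of the four switches, with no more than 32 vnet instances per switch.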

To run the maximum configurations, a machine needs an adequate amount of memory to support the guest domains. The amount of memory is dependent on your platform and your OS. See the documentation for your platform, Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade, and Installing Oracle Solaris 11 Systems.

Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This increase is due to the peer-to-peer links between all the vnets connected to the vsw. The service domain benefits from having extra memory. Four Gbytes is the recommended minimum when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains. You can reduce the number of links by disabling inter-vnet channels. See Inter-Vnet LDC Channels in Oracle VM Server for SPARC 2.1 Administration Guide.

Cleanly Shutting Down and Power Cycling a Logical Domains System

If you have made any configuration changes since last saving a configuration to the SC, before you attempt to power off or power cycle a Logical Domains system, make sure that you save the latest configuration that you want to keep.
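
For example, you can save the current configuration to the SC by using the ldm add-spconfig command, where config-name is any name that you choose:

# ldm add-spconfig config-name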

Power Off a System With Multiple Active Domains

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Halt the primary domain.

    Because no other domains are bound, the firmware automatically powers off the system.

Power Cycle the System

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Reboot the primary domain.

    Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the Logical Domains configuration last saved or explicitly set.

Memory Size Requested Might Be Different From Memory Allocated

Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. This can be seen in the following example output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:

Memory:
          Constraints: 1965 M
          raddr          paddr           size
          0x1000000      0x291000000     1968M

Logical Domains Variable Persistence

Variable updates persist across a reboot, but not across a powercycle, unless the variable updates are either initiated from OpenBoot firmware on the control domain or followed by saving the configuration to the SC.

In this context, it is important to note that a reboot of the control domain could initiate a powercycle of the system:

Logical Domains variables for a domain can be specified using any of the following methods:

The goal is that variable updates made by using any of these methods always persist across reboots of the domain, and that the updates are always reflected in any subsequent logical domain configurations that are saved to the SC.

In Oracle VM Server for SPARC 2.1 software, there are a few cases where variable updates do not persist as expected:

If you are concerned about Logical Domains variable changes, do one of the following:

If you modify the time or date on a logical domain, for example using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host. To ensure that time changes persist, save the configuration with the time change to the SP and boot from that configuration.

The following Bug IDs have been filed to resolve these issues: 6520041, 6540368, 6540937, and 6590259.

Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains

Oracle's Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.

Containers, Processor Sets, and Pools Are Not Compatible With CPU Power Management

Using CPU dynamic reconfiguration (DR) to power down virtual CPUs does not work with processor sets, resource pools, or the zone's dedicated CPU feature.

When using CPU power management in elastic mode, the Oracle Solaris OS guest sees only the CPUs that are allocated to the domains that are powered on. That means that output from the psrinfo(1M) command dynamically changes depending on the number of CPUs currently power-managed. This causes an issue with processor sets and pools, which require actual CPU IDs to be static to allow allocation to their sets. This can also impact the zone's dedicated CPU feature.

Workaround: Set the performance mode for the power management policy.
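
For example, one way to select the performance policy, assuming a service processor that runs the ILOM 3.0 CLI (the exact target path can differ between platforms and ILOM versions), is:

-> set /SP/powermgmt policy=Performance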

Fault Management

There are several issues associated with FMA and power-managing CPUs. If a CPU faults when running in elastic mode, switch to performance mode until the faulted CPU recovers. If all faulted CPUs recover, then elastic mode can be used again.

Delayed Reconfiguration

When a primary domain is in a delayed reconfiguration state, CPUs are power managed only after the primary domain reboots. This means that CPU power management will not bring additional CPUs online while the domain is experiencing high-load usage until the primary domain reboots, clearing the delayed reconfiguration state.

Cryptographic Units

The Oracle Solaris 10 10/09 OS introduces the capability to dynamically add and remove cryptographic units from a domain, which is called cryptographic unit dynamic reconfiguration (DR). The Logical Domains Manager automatically detects whether a domain allows cryptographic unit DR, and enables the functionality only for those domains. In addition, CPU DR is no longer disabled in domains that have cryptographic units bound and are running an appropriate version of the Oracle Solaris OS.

No core disable operations are performed on domains that have cryptographic units bound when the SP is set to elastic mode. To enable core disable operations to be performed when the system is in elastic mode, remove the cryptographic units that are bound to the domain.
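
For example, assuming the domain is named ldom1 (an illustrative name only), you could remove all of its cryptographic units as follows:

# ldm set-crypto 0 ldom1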

ldmp2v convert Command: VxVM Warning Messages During Boot

Veritas Volume Manager (VxVM) 5.x on the Oracle Solaris 10 OS is the only version that is supported (tested) for the Oracle VM Server for SPARC P2V tool. Older versions of VxVM, such as 3.x and 4.x running on the Solaris 8 and Solaris 9 operating systems, might also work. In those cases, the first boot after running the ldmp2v convert command might show warning messages from the VxVM drivers. You can ignore these messages. You can remove the old VRTS* packages after the guest domain has booted.

Boot device: disk0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: normaal
Configuring devices.
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxio: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxio'
WARNING: vxio: unable to resolve dependency, module 'drv/vxdmp' not found
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
NOTICE: VxVM not started

Extended Mapin Space Is Only Available in the Oracle Solaris 10 8/11 OS and Oracle Solaris 11 OS

Extended mapin space is only available in the Oracle Solaris 10 8/11 OS and Oracle Solaris 11 OS. By default, this feature is disabled.

You can use the ldm add-domain or ldm set-domain command to enable the mode by setting extended-mapin-space=on on a domain that is running the Oracle Solaris 10 8/11 OS or Oracle Solaris 11 OS. See the ldm(1M) man page.
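
For example, assuming an existing domain named ldom1 that runs one of these OS versions, you could enable the feature as follows:

# ldm set-domain extended-mapin-space=on ldom1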

Graphical Configuration Assistant Tool Has Been Removed

Starting with the Oracle VM Server for SPARC 2.1 release, only the terminal-based Configuration Assistant tool, ldmconfig, is available. The graphical user interface tool is no longer available.

Upgrade Option Not Presented When Using ldmp2v prepare -R

The Solaris Installer does not present the Upgrade option when the partition tag of the slice that holds the root (/) file system is not set to root. This situation occurs if the tag is not explicitly set when labeling the guest's boot disk. You can use the format command to set the partition tag as follows:

AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-10GB cyl 282 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c4t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@2,0
       2. c4t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@3,0
Specify disk (enter its number)[0]: 0
selecting c0d0
[disk formatted, no defect list found]
format> p


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit

partition> 0
Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)          0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y

partition>

Block of Dynamically Added Memory Can Be Dynamically Removed Only as a Whole

A block of dynamically added memory can be dynamically removed only as a whole. That is, a subset of that memory block cannot be dynamically removed.

This situation could occur if a domain with a small memory size is dynamically grown to a much larger size, as the following example shows:

# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n---- 5000 2    1G     0.4% 23h

# ldm add-mem 16G ldom1

# ldm rm-mem 8G ldom1
Memory removal failed because all of the memory is in use.

# ldm rm-mem 16G ldom1

# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n---- 5000 2    1G     0.4% 23h

Workaround: Dynamically add memory in smaller amounts to reduce the probability that this condition will occur.

Recovery: Reboot the domain.

ldmp2v Command: ufsdump Archiving Method Is No Longer Used

Restoring ufsdump archives on a virtual disk that is backed by a file on a UFS file system might cause the system to hang. In such a case, the ldmp2v prepare command will exit. You might encounter this problem when you manually restore ufsdump archives in preparation for the ldmp2v prepare -R /altroot command when the virtual disk is a file on a UFS file system. For compatibility with previously created ufsdump archives, you can still use the ldmp2v prepare command to restore ufsdump archives on virtual disks that are not backed by a file on a UFS file system. However, the use of ufsdump archives is not recommended.

Domain Migration Restrictions

The following sections describe restrictions for domain migration. The Logical Domains Manager software and the system firmware versions must be compatible to permit migrations. Also, you must meet certain CPU requirements to ensure a successful domain migration.

Version Restrictions for Migration

Both the source and target machines must run at least Version 2.1 of the Logical Domains Manager.

The following examples show the messages that you see when you run older versions of the Logical Domains Manager, the system firmware, or both:

CPU Restrictions for Migration

If the domain to be migrated is running an Oracle Solaris OS version older than the Oracle Solaris 10 9/10 OS, you might see the following message during the migration:

Domain domain-name is not running an operating system that is
compatible with the latest migration functionality.

The following CPU requirements and restrictions apply:

These restrictions also apply when you attempt to migrate a domain that is running in OpenBoot or in the kernel debugger. See Migrating a Domain That is Running in OpenBoot or in the Kernel Debugger in Oracle VM Server for SPARC 2.1 Administration Guide.

Oracle VM Server for SPARC MIB Issues

This section summarizes the issues that you might encounter when using Oracle VM Server for SPARC Management Information Base (MIB) software.


Note - The Oracle VM Server for SPARC MIB software is only available on Oracle Solaris 10 systems.


Incorrect ldomCryptoRpReserved Property Value

Bug ID 7042966: The value of the ldomCryptoRpReserved property in the cryptographic unit resource pool (ldomCryptoResourcePool) erroneously includes the number of cryptographic unit devices that have been assigned to inactive domains.

The snmptable Command Does Not Work With the Version 2 or Version 3 Option

Bug ID 6521530: You receive empty SNMP tables if you query the Oracle VM Server for SPARC MIB 2.1 software using the snmptable command with the -v2c or -v3 option. The snmptable command with the -v1 option works as expected.

Workaround: Use the -CB option to use only GETNEXT, not GETBULK, requests to retrieve data. See Retrieve Oracle VM Server for SPARC MIB Objects in Oracle VM Server for SPARC 2.1 Administration Guide.
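
For example, a query might look like the following sketch; the community string public, the host name localhost, and the table name SUN-LDOM-MIB::ldomTable are assumptions for illustration only:

# snmptable -v2c -c public -CB localhost SUN-LDOM-MIB::ldomTable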

Bugs Affecting the Oracle VM Server for SPARC 2.1 Software

This section summarizes the bugs that you might encounter when using this version of the software. The bug descriptions are in numerical order by bug ID. If a workaround and a recovery procedure are available, they are specified.

init-system Does Not Restore Named Core Constraints for Guest Domains From Saved XML Files

Bug ID 7117766: The ldm init-system command fails to restore the named CPU core constraints for guest domains from a saved XML file.

Workaround: Perform the following steps:

  1. Create an XML file for the primary domain.

    # ldm ls-constraints -x primary > primary.xml
  2. Create an XML file for the guest domain or domains.

    # ldm ls-constraints -x ldom[,ldom][,...] > guest.xml
  3. Power cycle the system and boot a factory default configuration.

  4. Apply the XML configuration to the primary domain.

    # ldm init-system -r -i primary.xml
  5. Reboot.

  6. Apply the XML configuration to the guest domain or domains.

    # ldm init-system -f -i guest.xml

Named Cores Can Power Off All CPUs When in Bind Mode

Bug ID 7111119: You cannot use the ldm add-core, ldm set-core, and ldm remove-core commands when the domain has the elastic policy enabled.

Workaround: Ensure that the domain has the performance policy enabled.

Oracle Solaris 11 OS: Using Direct I/O to Remove Multiple PCIe Slots From the primary Domain on a Multi-Socket SPARC T-Series System Might Panic at Boot Time

Bug ID 7100859: Your system might panic at boot time if you use direct I/O (ldm remove-io) to remove multiple PCIe slots from a multi-socket SPARC T-Series system. This occurs when the paths to the PCIe slots are similar to each other (except for the root complex path). The panic might occur after you remove the PCIe slots and then reboot the primary domain. For more information about the direct I/O (DIO) feature, see Assigning PCIe Endpoint Devices in Oracle VM Server for SPARC 2.1 Administration Guide.

For example, if you remove the /SYS/MB/PCIE5 (pci@500/pci@2/pci@0/pci@0) and /SYS/MB/PCIE4 (pci@400/pci@2/pci@0/pci@0) slots, which have similar path names, the next boot of the Oracle Solaris 11 OS might panic.

The following ldm list-io command is run after the /SYS/MB/PCIE4 and /SYS/MB/PCIE5 PCIe slots are removed.

# ldm list-io
IO              PSEUDONYM       DOMAIN
--              ---------       ------
pci@400         pci_0           primary
niu@480         niu_0           primary
pci@500         pci_1           primary
niu@580         niu_1           primary

PCIE                       PSEUDONYM       STATUS  DOMAIN
----                       ---------       ------  ------
pci@400/pci@2/pci@0/pci@8  /SYS/MB/PCIE0   OCC     primary
pci@400/pci@2/pci@0/pci@4  /SYS/MB/PCIE2   OCC     primary
pci@400/pci@2/pci@0/pci@0  /SYS/MB/PCIE4   OCC
pci@400/pci@1/pci@0/pci@8  /SYS/MB/PCIE6   OCC     primary
pci@400/pci@1/pci@0/pci@c  /SYS/MB/PCIE8   OCC     primary
pci@400/pci@2/pci@0/pci@e  /SYS/MB/SASHBA  OCC     primary
pci@400/pci@1/pci@0/pci@4  /SYS/MB/NET0    OCC     primary
pci@500/pci@2/pci@0/pci@a  /SYS/MB/PCIE1   OCC     primary
pci@500/pci@2/pci@0/pci@6  /SYS/MB/PCIE3   OCC     primary
pci@500/pci@2/pci@0/pci@0  /SYS/MB/PCIE5   OCC
pci@500/pci@1/pci@0/pci@6  /SYS/MB/PCIE7   OCC     primary
pci@500/pci@1/pci@0/pci@0  /SYS/MB/PCIE9   OCC     primary
pci@500/pci@1/pci@0/pci@5  /SYS/MB/NET2    OCC     primary
#

Workaround: Do not remove all slots that have similar path names. Instead, remove only one such PCIe slot.

You also might be able to insert the PCIe cards into slots that do not have similar paths and then use them with the DIO feature.
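
For example, the following sketch removes only the /SYS/MB/PCIE5 slot from the primary domain; it assumes that the removal is performed under a delayed reconfiguration that is followed by a reboot of the primary domain:

# ldm start-reconf primary
# ldm remove-io pci@500/pci@2/pci@0/pci@0 primary
# reboot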

Partial Core primary Fails to Permit Whole-Core DR Transitions

Bug ID 7100841: When the primary domain shares the lowest physical core (usually 0) with another domain, attempts to set the whole-core constraint for the primary domain fail.

Workaround: Perform the following steps:

  1. Determine the lowest bound core that is shared by the domains.

    # ldm list -o cpu
  2. Unbind all the CPU threads of the lowest core from all domains other than the primary domain.

    As a result, CPU threads of the lowest core are not shared and are free for binding to the primary domain.

  3. Set the whole-core constraint by doing one of the following:

    • Bind the CPU threads to the primary domain, and set the whole-core constraint by using the ldm set-vcpu -c command.

    • Use the ldm set-core command to bind the CPU threads and set the whole-core constraint in a single step.
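
For example, assuming that you want two whole cores bound to the primary domain, the single-step approach in Step 3 might look like this:

# ldm set-core 2 primary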

ldmconfig Is Only Supported on Oracle Solaris 10 Systems

Bug ID 7093344: You can only use the ldmconfig command on Oracle Solaris 10 systems.

Oracle VM Server for SPARC MIB Is Only Supported on Oracle Solaris 10 Systems

Bug ID 7082776: You can only use the Oracle VM Server for SPARC MIB on Oracle Solaris 10 systems.

Migrating a Very Large Memory Domain on SPARC T4-4s Results in a Panicked Domain on the Target System

Bug ID 7071426: A panic might occur during a migration when the domain being migrated has multiple memory blocks that total over 500 Gbytes. Use the ldm list -o mem command to determine the amount of memory on the domain.

The panic stack resembles the following:

panic[cpu21]/thread=2a100a5dca0:
BAD TRAP: type=30 rp=2a100a5c930 addr=6f696e740a232000 mmu_fsr=10009

sched:data access exception: MMU sfsr=10009: Data or instruction address out of range context 0x1

pid=0, pc=0x1076e2c, sp=0x2a100a5c1d1, tstate=0x4480001607, context=0x0
g1-g7: 80000001, 0, 80a5dca0, 0, 0, 0, 2a100a5dca0

000002a100a5c650 unix:die+9c (30, 2a100a5c930, 6f696e740a232000, 10009, 2a100a5c710, 10000)
000002a100a5c730 unix:trap+75c (2a100a5c930, 0, 0, 10009, 30027b44000, 2a100a5dca0)
000002a100a5c880 unix:ktl0+64 (7022d6dba40, 0, 1, 2, 2, 18a8800)
000002a100a5c9d0 unix:page_trylock+38 (6f696e740a232020, 1, 6f69639927eda164, 7022d6dba40, 13, 1913800)
000002a100a5ca80 unix:page_trylock_cons+c (6f696e740a232020, 1, 1, 5, 7000e697c00, 6f696e740a232020)
000002a100a5cb30 unix:page_get_mnode_freelist+19c (701ee696d00, 12, 1, 0, 19, 3)
000002a100a5cc80 unix:page_get_cachelist+318 (12, 1849fe0, ffffffffffffffff, 3,
0, 1)
000002a100a5cd70 unix:page_create_va+284 (192aec0, 300ddbc6000, 0, 0, 2a100a5cf00, 300ddbc6000)
000002a100a5ce50 unix:segkmem_page_create+84 (18a8400, 2000, 1, 198e0d0, 1000, 11)
000002a100a5cf60 unix:segkmem_xalloc+b0 (30000002d98, 0, 2000, 300ddbc6000, 0, 107e290)
000002a100a5d020 unix:segkmem_alloc_vn+c0 (30000002d98, 2000, 107e000, 198e0d0,
30000000000, 18a8800)
000002a100a5d0e0 genunix:vmem_xalloc+5c8 (30000004000, 2000, 0, 0, 80000, 0)
000002a100a5d260 genunix:vmem_alloc+1d4 (30000004000, 2000, 1, 2000, 30000004020, 1)
000002a100a5d320 genunix:kmem_slab_create+44 (30000056008, 1, 300ddbc4000, 18a6840, 30000056200, 30000004000)
000002a100a5d3f0 genunix:kmem_slab_alloc+30 (30000056008, 1, ffffffffffffffff, 0, 300000560e0, 30000056148)
000002a100a5d4a0 genunix:kmem_cache_alloc+2dc (30000056008, 1, 0, b9, fffffffffffffffe, 2006)
000002a100a5d550 genunix:kmem_cpucache_magazine_alloc+64 (3000245a740, 3000245a008, 7, 6028f283750, 3000245a1d8,
193a880)
000002a100a5d600 genunix:kmem_cache_free+180 (3000245a008, 6028f2901c0, 7, 7, 7, 3000245a740)
000002a100a5d6b0 ldc:vio_destroy_mblks+c0 (6028efe8988, 800, 0, 200, 19de0c0, 0)
000002a100a5d760 ldc:vio_destroy_multipools+30 (6028f1542b0, 2a100a5d8c8, 40, 0, 10, 30000282240)
000002a100a5d810 vnet:vgen_unmap_rx_dring+18 (6028f154040, 0, 6028f1a3cc0, a00,
200, 6028f1abc00)
000002a100a5d8d0 vnet:vgen_process_reset+254 (1, 6028f154048, 6028f154068, 6028f154060, 6028f154050, 6028f154058)
000002a100a5d9b0 genunix:taskq_thread+3b8 (6028ed73908, 6028ed738a0, 18a6840, 6028ed738d2, e4f746ec17d8,
6028ed738d4)

Workaround: Avoid performing migrations of domains that have over 500 Gbytes of memory.

Removing a Large Number of CPUs From a Guest Domain

Bug ID 7062298: You might see the following error message when you attempt to remove a large number of CPUs from a guest domain:

Request to remove cpu(s) sent, but no valid response received
VCPU(s) will remain allocated to the domain, but might
not be available to the guest OS
Resource modification failed

Workaround: Stop the guest domain before you remove more than 100 CPUs from the domain.
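
For example, a sketch that removes 120 virtual CPUs from a stopped guest domain named ldom1 (both the count and the name are illustrative):

# ldm stop ldom1
# ldm remove-vcpu 120 ldom1
# ldm start ldom1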

CPU Threading Mode Is Not Restored After a Domain Migration Is Canceled

Bug ID 7061265: If you cancel the migration of a domain that has the threading property set to max-ipc, the threading property value is incorrectly restored to max-throughput on the domain to be migrated.

Workaround: Manually reset the threading property to max-ipc on the domain that will be migrated from the source machine.
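
For example, assuming that the domain to be migrated is named ldom1 and that the threading property is managed by the ldm set-domain command, the reset might look like this:

# ldm set-domain threading=max-ipc ldom1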

A Large-Memory Domain in Elastic Mode Might Take a Long Time to Stop

Bug ID 7058261: When you use the ldm stop command to stop a large-memory domain while the system is in elastic power management mode, the stop operation might take a long time. If the domain is sufficiently idle, most of the CPU threads that are assigned to the domain are disabled. Because those CPUs are disabled, the processing that is required to stop the domain is left to the remaining active threads.

For example, a guest domain that has 252 Gbytes of memory and only 2 CPUs enabled takes approximately 7 minutes to stop.

Workaround: Disable power management (PM) by switching from elastic to performance mode before you stop the domain.

Cannot Use Solaris Hot Plug Operations to Hot Remove a PCIe Endpoint Device

Bug ID 7054326: You cannot use Solaris hotplug operations to hot remove a PCIe endpoint device after that device is removed from the primary domain by using the ldm rm-io command. For information about replacing or removing a PCIe endpoint device, see Making PCIe Hardware Changes in Oracle VM Server for SPARC 2.1 Administration Guide.

install-ldm Hangs When Run By Using an Absolute Path From Another Directory

Bug ID 7050588: If you specify the absolute path to the install-ldm command from another directory, the command hangs.

Workaround: Change to the directory in which the install-ldm command is installed before you run the command.

# cd dirname/OVM_Server_SPARC-2_1/Install
# ./install-ldm

ldm add-dev Can Create a Device Alias That is Longer Than Supported by OpenBoot

Bug ID 7044329: If a guest domain has a virtual device with a name that is longer than 31 characters, OpenBoot issues an error message when the domain is started. The device alias that matches the virtual device name is not created.

The error message looks similar to the following:

Error: device alias name 'mynet1234567890123456789012345678901234567890'
length is greater than 31 chars, device alias not created

Virtual Disk Validation Fails for a Physical Disk With No Slice 2

Bug ID 7042353: If a physical disk is configured with a slice 2 that has a size of 0, you might encounter the following issues:

Another workaround permits you to permanently disable the disk validation that is performed by the ldm add-vdsdev and ldm bind commands. As a result, you do not need to specify the -q option. Permanently disable the disk validation by updating the device_validation property of the ldmd service:

# svccfg -s ldmd setprop ldmd/device_validation=value
# svcadm refresh ldmd
# svcadm restart ldmd

Specify a value of 0 to disable validation for network and disk devices. Specify a value of 1 to disable validation for disk devices but still enable validation for network devices.

The possible values for the device_validation property are:

0     Disable validation for all devices
1     Enable validation for network devices
2     Enable validation for disk devices
3     Enable validation for network and disk devices
-1    Enable validation for all types of devices, which is the default

When incoming_migration_enabled=false, Outgoing Migrations Fail

Bug ID 7039793: When incoming_migration_enabled=false and outgoing_migration_enabled=true, outgoing migrations fail with the following message:

The source machine is running an older version of the System Firmware
that is not compatible with the version running on the target machine.

When outgoing_migration_enabled=false, outgoing migrations are expected to fail.

Workaround: Do the following:

  1. Set incoming_migration_enabled=true.

    # svccfg -s ldmd setprop ldmd/incoming_migration_enabled=true
  2. Refresh ldmd.

    # svcadm refresh ldmd
  3. Restart ldmd.

    # svcadm restart ldmd

nxge Panics When Migrating a Guest Domain That Has Hybrid I/O and Virtual I/O Virtual Network Devices

Bug ID 7038650: When a heavily loaded guest domain has a hybrid I/O configuration and you attempt to migrate it, you might see an nxge panic.

Workaround: Add the following line to the /etc/system file on the primary domain and on any service domain that is part of the hybrid I/O configuration for the domain:

set vsw:vsw_hio_max_cleanup_retries = 0x200

Do Not Use the Sun Management Console Software to Monitor an Oracle VM Server for SPARC System

Bug ID 7037495: Using a Sun Management Console to query the CPU status of an Oracle VM Server for SPARC system has the potential to cause data corruption. The corruption is limited to the data structures that the Hypervisor uses to track running domains, and results in the Logical Domains Manager being unable to start. For this reason, do not use the Sun Management Console software to monitor Oracle VM Server for SPARC systems.

Workaround: Power cycle the system to use a configuration that is known to be valid.

Incorrect SP Configuration Is Used as the Default

Bug ID 7037295: If the Logical Domains Manager is restarted or the primary domain is rebooted after running the ldm add-spconfig -r spconfig command, the Logical Domains Manager uses the default configuration rather than the specified configuration, spconfig. This means that any subsequent configuration modifications are made to the default configuration rather than to the specified configuration, spconfig.

Workaround: Set the Logical Domains Manager current configuration by either performing a power cycle or by running the ldm add-spconfig spconfig command.

All ldm Commands Hang When Migrations Have Missing Shared NFS Resources

Bug ID 7036137: An initiated or ongoing migration, or any ldm command, hangs forever. This situation occurs when the domain to be migrated uses a shared file system from another system, and the file system is no longer shared.

Workaround: Make the shared file system accessible again.

ldmd Fails to Remove Cores From a Domain That Has Partial Cores

Bug ID 7035438: ldmd permits you to enable the whole-core constraint on a domain that has partial cores, yet fails to remove or set cores from the same domain.

Workaround: Do the following from the factory-default configuration on the control domain:

  1. Initiate a delayed reconfiguration on the control domain.

    # ldm start-reconf primary
  2. Perform any memory reconfiguration operations first.

  3. Perform the CPU reconfiguration operations.

    # ldm set-vcpu 16 primary
    # ldm set-vcpu -c 2 primary

This example uses 2 cores, but the number of cores can range from 1 to the system limit.

Incorrect Return Status for a Failed CPU DR Operation on a Domain Booted in Single User Mode

Bug ID 7034498: When in single-user mode, attempting to add a virtual CPU to a domain returns a status value of 0. The status value for this failure should be 1.

Logical Domains Agent Service Does Not Come Online if the System Log Service Does Not Come Online

Bug ID 7034191: If the system log service, svc:/system/system-log, fails to start and does not come online, the Logical Domains agent service will not come online. When the Logical Domains agent service is not online, the virtinfo, ldm add-vsw, ldm add-vdsdev, and ldm list-io commands might not behave as expected.

Workaround: Ensure that the svc:/ldoms/agents:default service is enabled and online:

# svcs -l svc:/ldoms/agents:default

If the svc:/ldoms/agents:default service is offline, verify that the service is enabled and that all dependent services are online.
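
For example, the following commands enable the service and list the services on which it depends so that you can check that they are online:

# svcadm enable svc:/ldoms/agents:default
# svcs -d svc:/ldoms/agents:default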

Kernel Deadlock Causes Machine Hang During a Migration

Bug ID 7030045: The migration of an active guest domain might hang and cause the source machine to become unresponsive. When this problem occurs, the following message is written to the console and to the /var/adm/messages file:

vcc: i_vcc_ldc_fini: cannot close channel 15

vcc: [ID 815110 kern.notice] i_vcc_ldc_fini: cannot
close channel 15

Note that the channel number shown is an Oracle Solaris internal channel number that might be different for each warning message.

Workaround: Before you migrate the domain, disconnect from the guest domain's console.

Recovery: Perform a powercycle of the source machine.

DRM and ldm list Output Shows a Different Number of Virtual CPUs Than Are Actually in the Guest Domain

Bug ID 7027105: A No response message might appear in the Oracle VM Server for SPARC log when a loaded domain's DRM policy expires after the CPU count has been substantially reduced. The ldm list output shows more CPU resources allocated to the domain than are shown in the psrinfo output.

Workaround: Use the ldm set-vcpu command to reset the number of CPUs on the domain to the value that is shown in the psrinfo output.

DRM Fails to Restore the Default Number of Virtual CPUs for a Migrated Domain When the Policy is Removed or Expired

Bug ID 7026160: You perform a domain migration while a DRM policy is in effect. Later, if the DRM policy expires or is removed from the migrated domain, DRM fails to restore the original number of virtual CPUs to the domain.

Workaround: If a domain is migrated while a DRM policy is active and the DRM policy is subsequently expired or removed, reset the number of virtual CPUs. Use the ldm set-vcpu command to set the number of virtual CPUs to the original value on the domain.

Virtual CPU Timeout Failures During DR

Bug ID 7025445: Running the ldm set-vcpu 1 command on a guest domain that has over 100 virtual CPUs and some cryptographic units fails to remove the virtual CPUs. The virtual CPUs are not removed because of a DR timeout failure. The cryptographic units are successfully removed.

Workaround: Use the ldm rm-vcpu command to remove all but one of the virtual CPUs from the guest domain. Do not remove more than 100 virtual CPUs at a time.
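
For example, to reduce a guest domain named ldom1 from 128 virtual CPUs to 1 (the name and counts are illustrative), remove the virtual CPUs in batches of no more than 100:

# ldm rm-vcpu 100 ldom1
# ldm rm-vcpu 27 ldom1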

Domain Bind Fails When XML File Has an Invalid Network or Disk Back End

Bug ID 7024499: If you use an XML file to bind a domain with the ldm bind -i xml-file command, the bind might fail. The failure is due to an invalid network device or disk back-end path even if you use the -f or -q option. The bind fails when both of the following circumstances are true:

Although both the -f and -q options can be specified with the bind -i xml-file command, these options are ignored.

Workaround: Do the following:

  1. Temporarily disable ldmad on the service domains that have an invalid device or back end.

    # svcadm disable ldoms/agents
  2. Re-enable ldmad on each service domain where you disabled ldmad after the bind.

    # svcadm enable ldoms/agents

Migration Failure Reason Not Reported When the System MAC Address Clashes With Another MAC Address

Bug ID 7023216: A domain cannot be migrated if it contains a duplicate MAC address. Typically, when a migration fails for this reason, the failure message shows the duplicate MAC address. However in rare circumstances, this failure message might not report the duplicate MAC address.

# ldm migrate ldg2 system2
Target Password:
Domain Migration of LDom ldg2 failed

Workaround: Ensure that the MAC addresses on the target machine are unique.

Simultaneous Migration Operations in “Opposite Direction” Might Cause ldm to Hang

Bug ID 7019493: If two ldm migrate commands are issued simultaneously in the “opposite direction,” the two commands might hang and never complete. For example, an opposite direction situation is one where you simultaneously start a migration on machine A to machine B and a migration on machine B to machine A.

The migration processes hang even if they are initiated as dry runs by using the -n option. When this problem occurs, all other ldm commands might hang.

Workaround: None.

Removing a Large Number of CPUs From the Control Domain

Bug ID 6994984: Use a delayed reconfiguration rather than dynamic reconfiguration to remove more than 100 CPUs from the primary domain. Use the following steps:

  1. Use the ldm start-reconf primary command to put the control domain in delayed reconfiguration mode.

  2. Partition the host system's resources that are owned by the control domain, as necessary.

  3. Use the ldm cancel-reconf command to undo the operations in Step 2, if necessary, and start over.

  4. Reboot the control domain to make the reconfiguration changes take effect.
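
For example, a sketch of the full sequence that reduces the control domain to 8 virtual CPUs (the target count is illustrative):

# ldm start-reconf primary
# ldm set-vcpu 8 primary
# reboot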

SPARC T3: Oracle VM Server for SPARC Hangs When Performing Memory Operations

Bug ID 6994300: The Logical Domains Manager might hang on a SPARC T3 system when performing memory operations and possibly migration operations. Such operations will fail to complete.

This hang might occur on any T3 platform that uses any network interface unit (NIU) adapter, but the hang has been confirmed on systems that have XAUI extenders.

Workaround: Apply patch ID 144500-19.

System That Has the Elastic Policy Set and Is Running the Oracle Solaris 10 8/11 OS Might Hang

Bug IDs 6989192 and 7071760: You might experience OS hangs at login or while executing commands when the following conditions are met:

Workaround: Apply patch ID 147149-01.

pkgadd Fails to Set ACL Entries on /var/svc/manifest/platform/sun4v/ldmd.xml

Bug ID 6984681: When using the pkgadd command to install the SUNWldm.v package from a directory that is exported by means of NFS from a Sun ZFS Storage Appliance, you might see the following error message:

cp: failed to set acl entries on /var/svc/manifest/platform/sun4v/ldmd.xml

Workaround: Ignore this message.

SPARC T3-1: Detect And Handle Disks That Are Accessible Through Multiple Direct I/O Paths

Bug ID 6984008: A SPARC T3-1 system can be installed with dual-ported disks, which can be accessed by two different direct I/O devices. In this case, assigning these two direct I/O devices to different domains can cause the disks to be used by both domains, and each domain can affect the other depending on the actual usage of those disks.

Workaround: Do not assign direct I/O devices that have access to the same set of disks to different I/O domains. The steps to determine whether you have dual-ported disks on a SPARC T3-1 system are as follows:

Determine whether the system has dual-ported disks by running the following command on the SP:

-> show /SYS/SASBP

If the output includes the following fru_description value, the corresponding system has dual-ported disks:

fru_description = BD,SAS2,16DSK,LOUISE

When dual disks are found to be present in the system, ensure that both of the following direct I/O devices are always assigned to the same domain:

pci@400/pci@1/pci@0/pci@4  /SYS/MB/SASHBA0
pci@400/pci@2/pci@0/pci@4  /SYS/MB/SASHBA1

Memory DR Removal Operations With Multiple Plumbed NIU nxge Instances Can Hang Indefinitely and Never Complete

Bug ID 6983279: When multiple NIU nxge instances are plumbed on a domain, the ldm rm-mem and ldm set-mem commands, which are used to remove memory from the domain, might never complete. To determine whether the problem has occurred during a memory removal operation, monitor the progress of the operation with the ldm list -o status command. You might have encountered this problem if the progress percentage remains constant for several minutes.

Recovery: Cancel the ldm rm-mem or ldm set-mem command.

Workaround: Cancel the ldm rm-mem or ldm set-mem command, and check if a sufficient amount of memory was removed. If not, a subsequent memory removal command to remove a smaller amount of memory might complete successfully.

If the problem has occurred on the primary domain, do the following:

  1. Start a delayed reconfiguration operation on the primary domain.

    # ldm start-reconf primary
  2. Assign the desired amount of memory to the domain.

  3. Reboot the primary domain.

If the problem occurred on another domain, stop the domain before adjusting the amount of memory that is assigned to the domain.

ldmd Falsely Reports 100% CPU Utilization on a Domain

Bug ID 6982280: In rare instances when in elastic mode, ldmd might falsely report that a few CPUs performing I/O on a guest domain are at 100% utilization. This ldmd report contradicts the actual processor status that is reported by running psrinfo on the guest domain.

Workaround: Set the CPU count on the guest domain to be 2. Then, reset the CPU count to the original value.
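
For example, assuming that the guest domain is named ldom1 and originally had 16 virtual CPUs (both values are illustrative):

# ldm set-vcpu 2 ldom1
# ldm set-vcpu 16 ldom1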

Guest Domains Cannot Boot From an Exported DVD Device

Bug ID 6981081: When a bootable physical CD or DVD is exported as a virtual disk, the virtual CD or DVD might not be bootable from the guest domain that uses it, and the boot might fail with an error similar to the following:

{0} ok boot /virtual-devices@100/channel-devices@200/disk@1:f
Boot device: /virtual-devices@100/channel-devices@200/disk@1:f  File and args:
Bad magic number in disk label
ERROR: /virtual-devices@100/channel-devices@200/disk@1: Can't open disk label package
ERROR: boot-read fail
Can't open boot device

Whether this problem occurs depends on the type of physical CD or DVD drive that is installed on the system.

Using ldm stop -a Command on Domains in a Master-Slave Relationship Leaves the Slave With the stopping Flag Set

Bug ID 6979574: When a reset dependency is created, an ldm stop -a command might result in a domain with a reset dependency being restarted instead of only stopped.

Workaround: First, issue the ldm stop command to the master domain. Then, issue the ldm stop command to the slave domain. If the initial stop of the slave domain results in a failure, issue the ldm stop -f command to the slave domain.
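
For example, with a master domain named master1 and a slave domain named slave1 (hypothetical names):

# ldm stop master1
# ldm stop slave1

If stopping slave1 fails, force the stop:

# ldm stop -f slave1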

Cryptographic Units Cannot Be Removed From the primary Domain

Bug ID 6978843: Sometimes, when you attempt to dynamically remove cryptographic units, the following message is issued:

# ldm set-crypto 0 primary
Aug 20 13:02:27 guest1 ncp: WARNING: ncp0: ncp_mau_unconfig:
unable to find MAU for cpu 112
Aug 20 13:02:27 guest1 ncp: WARNING: ncp0: ncp_mau_unconfig:
unable to find MAU for cpu 104

Workaround: Determine whether any CPUs are faulted, and if they are, mark them as being online.

# psrinfo
# psradm -n 0-127

Use delayed reconfiguration to remove the cryptographic units.

# ldm start-reconf primary
# ldm set-crypto 0 primary
# reboot

Migration of a Guest Domain That Has Hybrid I/O-Enabled Virtual Network Devices Panics the Service Domain

Bug ID 6972633: The service domain panics when performing a warm migration of a guest domain. The source machine in the migration is a SPARC T3-1 that has the NIU hybrid I/O capability.

The problem can occur when all of the following conditions are met:

A guest domain that has hybrid I/O enabled for a virtual network interface shows hybrid in the MODE column as follows:

# ldm list -o network ldg1
...
NAME    SERVICE             ID  DEVICE     MAC                MODE    PVID  MTU
vnet2    niu-vsw@primary     1  network@1  00:14:4f:fa:9e:89  hybrid  1    1500

However, the hybrid I/O resource is only assigned if the following command shows any output on the guest domain:

# kstat -p nxge

Workaround: Perform the following steps:

  1. Obtain the current configuration of the virtual network device.

    This step ensures that replumbing the interface is error-free.

    # ifconfig vnet1
  2. Unplumb the virtual network interface on the guest domain prior to the migration.

    # ifconfig vnet1 unplumb
  3. Perform the migration.

  4. Plumb the interface.

    # ifconfig vnet1 plumb

Migration of a Domain That Has an Enabled Default DRM Policy Results in a Target Domain Being Assigned All Available CPUs

Bug ID 6968507: Following the migration of an active domain, CPU utilization in the migrated domain can increase dramatically for a short period of time. If a dynamic resource management (DRM) policy is in effect for the domain at the time of the migration, the Logical Domains Manager might begin to add CPUs. In particular, if the vcpu-max and attack properties were not specified when the policy was added, the default value of unlimited causes all the unbound CPUs in the target machine to be added to the migrated domain.

Recovery: No recovery is necessary. After the CPU utilization drops below the upper limit that is specified by the DRM policy, the Logical Domains Manager automatically removes the CPUs.

An In-Use MAC Address Can be Reassigned

Bug ID 6968100: Sometimes an in-use MAC address is not detected and it is erroneously reassigned.

Workaround: Manually ensure that an in-use MAC address cannot be reassigned.
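
For example, you might assign an explicit MAC address when creating a virtual network device rather than relying on automatic allocation (a minimal sketch; the MAC address, device, service, and domain names are hypothetical):

# ldm add-vnet mac-addr=00:14:4f:fb:00:01 vnet1 primary-vsw0 ldg1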

ldmconfig Cannot Create a Domain Configuration on the SP

Bug ID 6967799: The ldmconfig script cannot properly create a stored logical domains configuration on the service processor (SP).

Workaround: Do not power cycle the system after the ldmconfig script completes and the domain reboots. Instead, perform the following manual steps:

  1. Add the configuration to the SP.

    # ldm add-spconfig new-config-name
  2. Remove the primary-with-clients configuration from the SP.

    # ldm rm-spconfig primary-with-clients
  3. Power cycle the system.

If you do not perform these steps before the system is power cycled, the existence of the primary-with-clients configuration causes the domains to be inactive. In this case, you must bind each domain manually and then start them by running the ldm start -a command. After the guests have booted, repeating this sequence enables the guest domains to be booted automatically after a power cycle.

Uncooperative Oracle Solaris Domain Migration Can Be Blocked If cpu0 Is Offline

Bug ID 6965758: The migration of an active domain can fail if it is running a release older than the Oracle Solaris 10 10/09 OS and the lowest numbered CPU in the domain is in the offline state. The operation fails when the Logical Domains Manager uses CPU DR to reduce the domain to a single CPU. In doing so, the Logical Domains Manager attempts to remove all but the lowest CPU in the domain, but as that CPU is offline, the operation fails.

Workaround: Before attempting the migration, ensure that the lowest numbered CPU in the domain is in the online state.
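
For example, check the CPU states from within the domain and bring the lowest numbered CPU online if necessary (a minimal sketch):

# psrinfo
# psradm -n 0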

Memory DR Is Disabled Following a Canceled Migration

Bug ID 6956431: After an Oracle Solaris 10 9/10 domain has been suspended as part of a migration operation, memory dynamic reconfiguration (DR) is disabled. This applies not only when the migration is successful, but also when the migration has been canceled, despite the fact that the domain remains on the source machine.

Dynamic Reconfiguration of MTU Values of Virtual Network Devices Sometimes Fails

Bug ID 6936833: If you modify the maximum transmission unit (MTU) of a virtual network device on the control domain, a delayed reconfiguration operation is triggered. If you subsequently cancel the delayed reconfiguration, the MTU value for the device is not restored to the original value.

Recovery: Rerun the ldm set-vnet command to set the MTU to the original value. Resetting the MTU value puts the control domain into delayed reconfiguration mode, which you need to cancel. The resulting MTU value is now the original, correct MTU value.

# ldm set-vnet mtu=orig-value vnet1 primary
# ldm cancel-op reconf primary

Memory DR Is Not Supported With Some Physical Memory Configurations

Bug ID 6912155: In certain supported configurations in which not all of the DIMM slots in a machine are populated, the resulting physical memory address map is not contiguous and can have address “holes” between successive memory blocks. Memory DR is not supported for such a configuration.

Workaround: To reconfigure memory when memory DR is not supported, do the following:

For memory layout information, see your platform's hardware documentation.

Migrated Domain With MAUs Contains Only One CPU When Target OS Does Not Support DR of Cryptographic Units

Bug ID 6904849: Starting with the Logical Domains 1.3 release, a domain can be migrated even if it has one or more cryptographic units bound to it.

In the following circumstances, the target domain will contain only one CPU after the migration is completed:

After the migration completes, the target domain will resume successfully and be operational, but will be in a degraded state (just one CPU).

Workaround: Prior to the migration, remove the cryptographic unit or units from the source machine that runs Logical Domains 1.3.

Mitigation: To avoid this issue, perform one or both of these steps:

Confusing Migration Failure Message for Real Address Memory Bind Failures

Bug ID 6904240: In certain situations, a migration fails with the following error message, and ldmd reports that it was not possible to bind the memory needed for the source domain. This situation can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain (as shown by ldm ls-devices -a mem).

Unable to bind 29952M memory region at real address 0x8000000
Domain Migration of LDom ldg0 failed

Cause: This failure is due to the inability to meet congruence requirements between the Real Address (RA) and the Physical Address (PA) on the target machine.

Workaround: Stop the domain and perform the migration as a cold migration. You can also reduce the size of the memory on the guest domain by 128 Mbytes, which might permit the migration to proceed while the domain is running.
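
For example, to perform a cold migration (the domain and target host names are hypothetical):

# ldm stop ldg0
# ldm migrate-domain ldg0 target-host

Alternatively, to reduce the domain's memory by 128 Mbytes, for instance from the 29952 Mbytes shown in the preceding message:

# ldm set-mem 29824M ldg0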

Dynamically Removing All the Cryptographic Units From a Domain Causes SSH to Terminate

Bug ID 6897743: If all the hardware cryptographic units are dynamically removed from a running domain, the cryptographic framework fails to seamlessly switch to the software cryptographic providers, and kills all the ssh connections.

Recovery: Re-establish the ssh connections after all the cryptographic units are removed from the domain.

Workaround: Set UseOpenSSLEngine=no in the /etc/ssh/sshd_config file on the server side, and run the svcadm restart ssh command.

With this setting, ssh connections no longer use the hardware cryptographic units (and thus no longer benefit from the associated performance improvements), but ssh connections are not disconnected when the cryptographic units are removed.
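
For example, the server-side change might look like the following (a minimal sketch):

# grep UseOpenSSLEngine /etc/ssh/sshd_config
UseOpenSSLEngine no
# svcadm restart ssh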

Atlas PCI Express Dual 10-Gigabit Ethernet Fiber Card Shows Four Subdevices in ldm list-io -l Output

Bug ID 6892229: When you run the ldm ls-io -l command on a system that has an Atlas PCI Express Dual 10-Gigabit Ethernet Fiber card (X1027A-Z) installed, the output might show the following:

primary# ldm ls-io -l
...
pci@500/pci@0/pci@c PCIE5 OCC primary
network@0
network@0,1
ethernet
ethernet

The output shows four subdevices even though this Ethernet card has only two ports. This anomaly occurs because this card has four PCI functions. Two of these functions are disabled internally and appear as ethernet in the ldm ls-io -l output.

Workaround: You can ignore the ethernet entries in the ldm ls-io -l output.

ldm Commands Are Slow to Respond When Several Domains Are Booting

Bug ID 6855079: An ldm command might be slow to respond when several domains are booting. If you issue an ldm command at this stage, the command might appear to hang. Note that the ldm command will return after performing the expected task. After the command returns, the system should respond normally to ldm commands.

Workaround: Avoid booting many domains simultaneously. However, if you must boot several domains at once, refrain from issuing further ldm commands until the system returns to normal. For instance, wait for about two minutes on Sun SPARC Enterprise T5140 and T5240 Servers and for about four minutes on the Sun SPARC Enterprise T5440 Server or Netra T5440 Server.

Guest Domain Might Fail to Successfully Reboot When a System Is in Power Management Elastic Mode

Bug ID 6853273: While a system is in power management elastic mode, rebooting a guest domain might produce the following warning messages and fail to boot successfully:

WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Sending packet to LDC, status: -1
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Can't send vdisk read request!
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Timeout receiving packet from LDC ... retrying

Workaround: If you see these warnings, perform one of the workarounds in the following order:

Spurious ds_ldc_cb: LDC READ event Message Seen When Rebooting the Control Domain or a Guest Domain

Bug ID 6846889: When rebooting the control domain or a guest domain, the following warning message might be logged on the control domain and on the guest domain that is rebooting:

WARNING: ds@0: ds_ldc_cb: LDC READ event while port not up

Workaround: You can ignore this message.

Guest Domain Sometimes Fails to Make Proper Domain Services Connection to the Control Domain

Bug ID 6839787: Sometimes, a guest domain that runs at least the Oracle Solaris 10 10/08 OS does not make a proper Domain Services connection to a control domain that runs the Oracle Solaris 10 5/09 OS.

Domain Services connections enable features such as dynamic reconfiguration (DR), FMA, and power management (PM). Such a failure occurs when the guest domain is booted, so rebooting the domain usually clears the problem.

Workaround: Reboot the guest domain.

Virtual Network Devices Are Not Created Properly on the Control Domain

Bug ID 6836587: Sometimes ifconfig indicates that the device does not exist after you add a virtual network or virtual disk device to a domain. This situation might occur as the result of the /devices entry not being created.

Although this should not occur during normal operation, the error was seen when the instance number of a virtual network device did not match the instance number listed in the /etc/path_to_inst file.

For example:

# ifconfig vnet0 plumb
ifconfig: plumb: vnet0: no such interface

The instance number of a virtual device is shown under the DEVICE column in the ldm list output:

# ldm list -o network primary
NAME             
primary          

MAC
    00:14:4f:86:6a:64

VSW
    NAME         MAC               NET-DEV DEVICE   DEFAULT-VLAN-ID PVID VID MTU  MODE  
    primary-vsw0 00:14:4f:f9:86:f3 nxge0   switch@0 1               1        1500        

NETWORK
    NAME   SERVICE              DEVICE    MAC               MODE PVID VID MTU  
    vnet1  primary-vsw0@primary network@0 00:14:4f:f8:76:6d      1        1500

The instance number (0 for both the vnet and vsw shown previously) can be compared with the instance number in the path_to_inst file to ensure that they match.

# egrep '(vnet|vsw)' /etc/path_to_inst
"/virtual-devices@100/channel-devices@200/virtual-network-switch@0" 0 "vsw"
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"

Workaround: In the case of mismatching instance numbers, remove the virtual network or virtual switch device. Then, add them again by explicitly specifying the instance number required by setting the id property.
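
For example, to re-create a virtual network device with an explicit instance number of 0 (a minimal sketch; the device, service, and domain names are hypothetical):

# ldm rm-vnet vnet1 ldg1
# ldm add-vnet id=0 vnet1 primary-vsw0 ldg1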

You can also manually edit the /etc/path_to_inst file. See the path_to_inst(4) man page.



Caution - Be aware of the warning contained in the man page that states “changes should not be made to /etc/path_to_inst without careful consideration.”


Newly Added NIU/XAUI Adapters Are Not Visible to Host OS If Logical Domains Is Configured

Bug ID 6829016: When Logical Domains is configured on a system and you add another XAUI network card, the card is not visible after the machine is power cycled.

Recovery: To make the newly added XAUI visible in the control domain, perform the following steps:

  1. Set and clear a dummy variable in the control domain.

    The following commands use a dummy variable called fix-xaui:

    # ldm set-var fix-xaui=yes primary
    # ldm rm-var fix-xaui primary
  2. Save the modified configuration to the SP, replacing the current configuration.

    The following commands use a configuration name of config1:

    # ldm rm-spconfig config1
    # ldm add-spconfig config1
  3. Perform a reconfiguration reboot of the control domain.

    # reboot -- -r

    At this time, you can configure the newly available network or networks for use by Logical Domains.

I/O Domain or Guest Domain Panics When Booting From e1000g

Bug ID 6808832: You can configure a maximum of two domains with dedicated PCI-E root complexes on systems such as the Sun Fire T5240. These systems have two UltraSPARC T2+ CPUs and two I/O root complexes.

pci@500 and pci@400 are the two root complexes in the system. The primary domain will always contain at least one root complex. A second domain can be configured with an unassigned or unbound root complex.

The pci@400 fabric (or leaf) contains the onboard e1000g network card. The following circumstances could lead to a domain panic:

Avoid the following network devices if they are configured in a non-primary domain:

/pci@400/pci@0/pci@c/network@0,1
/pci@400/pci@0/pci@c/network@0

When these conditions are true, the domain will panic with a PCI-E Fatal error.

Avoid such a configuration, or if the configuration is used, do not boot from the listed devices.

Explicit Console Group and Port Bindings Are Not Migrated

Bug ID 6781589: During a migration, any explicitly assigned console group and port are ignored, and a console with default properties is created for the target domain. This console is created using the target domain name as the console group and using any available port on the first virtual console concentrator (vcc) device in the control domain. If there is a conflict with the default group name, the migration fails.

Recovery: To restore the explicit console properties following a migration, unbind the target domain and manually set the desired properties using the ldm set-vcons command.
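
For example (the domain name, console group, and port are hypothetical):

# ldm unbind ldg1
# ldm set-vcons group=ldg1-group port=5001 ldg1
# ldm bind ldg1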

Constraint Database Is Not Synchronized to Saved Configuration

Bug ID 6773569: After switching from one configuration to another (using the ldm set-config command followed by a power cycle), domains defined in the previous configuration might still be present in the current configuration, in the inactive state.

This is a result of the Logical Domains Manager's constraint database not being kept in sync with the change in configuration. These inactive domains do not affect the running configuration and can be safely destroyed.
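
For example, you might remove such a stale inactive domain as follows (the domain name is hypothetical):

# ldm remove-domain stale-domain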

Migration Does Not Fail If a vdsdev on the Target Has a Different Back End

Bug ID 6772120: If the virtual disk on the target machine does not point to the same disk back end that is used on the source machine, the migrated domain cannot access the virtual disk using that disk back end. A hang can result when accessing the virtual disk on the domain.

Currently, the Logical Domains Manager checks only that the virtual disk volume names match on the source and target machines. In this scenario, no error message is displayed if the disk back ends do not match.

Workaround: When configuring the target domain to receive a migrated domain, ensure that the disk volume (vdsdev) matches the disk back end that is used on the source domain.

Recovery: Do one of the following if you discover that the virtual disk device on the target machine points to the incorrect disk back end:

Migration Can Fail to Bind Memory Even If the Target Has Enough Available

Bug ID 6772089: In certain situations, a migration fails and ldmd reports that it was not possible to bind the memory needed for the source domain. This can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain.

This failure occurs because migrating the specific memory ranges in use by the source domain requires that compatible memory ranges are available on the target, as well. When no such compatible memory range is found for any memory range in the source, the migration cannot proceed.

Recovery: If this condition is encountered, you might be able to migrate the domain if you modify the memory usage on the target machine. To do this, unbind any bound or active logical domain on the target.

Use the ldm list-devices -a mem command to see what memory is available and how it is used. You might also need to reduce the amount of memory that is assigned to another domain.
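
For example (domain names are hypothetical):

# ldm list-devices -a mem
# ldm stop ldg2
# ldm unbind ldg2

Then, retry the migration.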

Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running

Bug ID 6764613: If you do not have a network configured on your machine and have a Network Information Services (NIS) client running, the Logical Domains Manager will not start on your system.

Workaround: Disable the NIS client on your non-networked machine:

# svcadm disable nis/client

Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted

Bug ID 6760933: On occasion, an active logical domain appears to be in the transition state instead of the normal state long after it is booted or following the completion of a domain migration. This glitch is harmless, and the domain is fully operational. To see what flag is set, check the flags field in the ldm list -l -p command output, or check the FLAGS field in the ldm list command, which shows -n---- for normal or -t---- for transition.

Recovery: After the next reboot, the domain shows the correct state.

Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted

Bug ID 6757486: Occasionally, after a domain has been migrated, it is not possible to connect to the console for that domain.

Workaround: Restart the vntsd SMF service to enable connections to the console:

# svcadm restart vntsd

Note - This command will disconnect all active console connections.


Sometimes, Executing the uadmin 1 0 Command From a Logical Domains System Does Not Return the System to the OK Prompt

Bug ID 6753683: Sometimes, executing the uadmin 1 0 command from the command line of a Logical Domains system does not leave the system at the ok prompt after the subsequent reset. This incorrect behavior is seen only when the Logical Domains variable auto-reboot? is set to true. If auto-reboot? is set to false, the expected behavior occurs.

Workaround: Use this command instead:

uadmin 2 0

Or, always run with auto-reboot? set to false.
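
For example, you might set the variable from the control domain (a minimal sketch; ldom is the name of the domain in question):

# ldm set-var "auto-reboot?=false" ldom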

Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Domain

Bug ID 6742805: A domain shutdown or memory scrub can take over 15 minutes with a single CPU and a very large memory configuration. During a shutdown, the CPUs in a domain are used to scrub all the memory owned by the domain. The time taken to complete the scrub can be quite long if a configuration is imbalanced, for example, a single CPU domain with 512 Gbytes of memory. This prolonged scrub time extends the amount of time it takes to shut down a domain.

Workaround: Ensure that large memory configurations (>100 Gbytes) have at least one core. This results in a much faster shutdown time.
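
For example, you might assign a full core's worth of virtual CPUs, such as 8 virtual CPUs on an UltraSPARC T2-based system (the domain name and CPU count are hypothetical):

# ldm set-vcpu 8 ldg1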

If the Oracle Solaris 10 5/08 OS Is Installed on a Service Domain, Attempting a Net Boot of the Oracle Solaris 10 8/07 OS on Any Guest Domain Serviced by It Can Hang the Installation

Bug ID 6705823: Attempting a net boot of the Oracle Solaris 10 8/07 OS on any guest domain serviced by a service domain running the Oracle Solaris 10 5/08 OS can result in a hang on the guest domain during the installation.

Workaround: Patch the miniroot of the Oracle Solaris 10 8/07 OS net install image with Patch ID 127111-05.
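
For example, the miniroot of a net install image might be patched in place as follows (the image and patch paths are hypothetical):

# patchadd -C /export/install/s10u4/Solaris_10/Tools/Boot /var/tmp/127111-05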

ldmd Might Dump Core If Multiple set-vcpu Operations Are Performed on the Control Domain While It Is in Delayed Reconfiguration Mode

Bug ID 6697096: Under certain circumstances, when multiple ldm set-vcpu operations are performed on the control domain while it is in delayed reconfiguration mode, ldmd might abort and be restarted by the Service Management Facility (SMF).

While the control domain is in delayed reconfiguration mode, take care when attempting an ldm set-vcpu operation. A single ldm set-vcpu operation will succeed, but a second ldm set-vcpu operation might cause the ldmd daemon to dump core.

Workaround: Reboot the control domain before you attempt the second ldm set-vcpu operation.

Solaris Volume Manager Volumes Built on Slice 2 Fail JumpStart When Used as the Boot Device in a Guest Domain

Bug ID 6687634: If the Solaris Volume Manager volume is built on top of a disk slice that contains block 0 of the disk, then Solaris Volume Manager prevents writing to block 0 of the volume to avoid overwriting the label of the disk.

If a Solaris Volume Manager volume that is built on top of a disk slice that contains block 0 of the disk is exported as a full virtual disk, a guest domain is unable to write a disk label for that virtual disk, which prevents the Oracle Solaris OS from being installed on such a disk.

Workaround: Do not build Solaris Volume Manager volumes that are exported as virtual disks on top of a disk slice that contains block 0 of the disk.

A more generic guideline is that slices that start on the first block (block 0) of a physical disk should not be exported (either directly or indirectly) as a virtual disk. Refer to Directly or Indirectly Exporting a Disk Slice in Oracle VM Server for SPARC 2.1 Administration Guide.
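
For example, you might verify that a slice does not start at block 0 before exporting it (a minimal sketch; the device, volume, and service names are hypothetical, and the First Sector value for the exported slice must be nonzero):

# prtvtoc /dev/rdsk/c1t1d0s2
# ldm add-vdsdev /dev/dsk/c1t1d0s4 vol1@primary-vds0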

Simultaneous Net-Installation of Multiple Domains Fails When in a Common Console Group

Bug ID 6656033: Simultaneous net installation of multiple guest domains fails on systems that have a common console group.

Workaround: Only net-install on guest domains that each have their own console group. This failure is seen only on domains with a common console group shared among multiple net-installing domains.

The scadm Command Can Hang Following an SC or SP Reset

Bug ID 6629230: The scadm command on a control domain running at least the Solaris 10 11/06 OS can hang following an SC reset. The system is unable to properly reestablish a connection following an SC reset.

Workaround: Reboot the host to reestablish connection with the SC.

Recovery: Reboot the host to reestablish connection with the SC.

ldc_close: (0xb) unregister failed, 11 Warning Messages

Bug ID 6610702: You might see the following warning message on the system console or in the system log:

ldc_close: (0xb) unregister failed, 11

Note that the number in parentheses is the Oracle Solaris internal channel number, which might be different for each warning message.

Workaround: You can ignore these messages.

Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive

Bug ID 6603974: If you configure more than four virtual networks (vnets) in a guest domain on the same network using the Dynamic Host Configuration Protocol (DHCP), the guest domain can eventually become unresponsive while running network traffic.

Workaround: Set ip_ire_min_bucket_cnt and ip_ire_max_bucket_cnt to larger values, such as 32, if you have 8 interfaces.
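
For example, assuming that these parameters are set as ip module tunables in the /etc/system file (a sketch only; confirm the tunable names and mechanism for your Oracle Solaris release), you might add the following lines and then reboot:

set ip:ip_ire_min_bucket_cnt = 32
set ip:ip_ire_max_bucket_cnt = 32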

Recovery: Issue an ldm stop-domain ldom command followed by an ldm start-domain ldom command on the guest domain (ldom) in question.

Logical Domains Manager Does Not Retire Resources On Guest Domain After a Panic and Reboot

Bug ID 6591844: If a CPU or memory fault occurs, the affected domain might panic and reboot. If the Fault Management Architecture (FMA) attempts to retire the faulted component while the domain is rebooting, the Logical Domains Manager is not able to communicate with the domain, and the retire fails. In this case, the fmadm faulty command lists the resource as degraded.

Recovery: Wait for the domain to complete rebooting, and then force FMA to replay the fault event by restarting the fault manager daemon (fmd) on the control domain by using this command:

primary# svcadm restart fmd

OpenBoot PROM Variables Cannot be Modified by the eeprom(1M) Command When the Logical Domains Manager is Running

Bug ID 6540368: This issue is summarized in Logical Domains Variable Persistence and affects only the control domain.

Cannot Set Security Keys With Logical Domains Running

Bug ID 6510214: In a Logical Domains environment, there is no support for setting or deleting wide-area network (WAN) boot keys from within the Oracle Solaris OS by using the ickey(1M) command. All ickey operations fail with the following error:

ickey: setkey: ioctl: I/O error

In addition, WAN boot keys that are set using OpenBoot firmware in logical domains other than the control domain are not remembered across reboots of the domain. In these domains, the keys set from the OpenBoot firmware are only valid for a single use.

Behavior of the ldm stop-domain Command Can Be Confusing

Bug ID 6506494: There are some cases where the behavior of the ldm stop-domain command is confusing.

# ldm stop-domain -f ldom

If the domain is at the kernel module debugger, kmdb(1), prompt, then the ldm stop-domain command fails with the following error message:

LDom <domain name> stop notification failed

Hang Can Occur With Guest OS in Simultaneous Operations

Bug ID 6497796: Under rare circumstances, when a Logical Domains variable, such as boot-device, is being updated from within a guest domain by using the eeprom(1M) command at the same time that the Logical Domains Manager is being used to add or remove virtual CPUs from the same domain, the guest OS can hang.

Workaround: Ensure that these two operations are not performed simultaneously.

Recovery: Use the ldm stop-domain and ldm start-domain commands to stop and start the guest OS.

Sometimes DR Requests Fail to Remove All Requested CPUs

Bug ID 6493140: Sometimes, the Oracle Solaris OS is unable to use DR to remove all the requested CPUs. When this problem occurs, you see error messages similar to the following:

Removal of cpu 10 failed

Recovery: Issue a subsequent request to remove the number of CPUs that failed to be removed the first time. Such a retry generally succeeds.
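
For example, if one CPU failed to be removed (the domain name is hypothetical):

# ldm rm-vcpu 1 ldg1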

Documentation Errata

This section contains documentation errors that have been found too late to resolve for the Oracle VM Server for SPARC 2.1 release.

Incorrect Cross Reference to Required Software Information

The section “Software Compatibility” in Oracle VM Server for SPARC 2.1 Administration Guide incorrectly refers to information about requirements to obtain the latest features. Instead, refer to Live Domain Migration Requirements.

ldm stop Command Description Is Misleading

The description states that the ldm stop command issues a shutdown request, while it actually issues a uadmin() system call.

To shut down a domain in the most “graceful” manner, perform a shutdown or init operation in the domain that you want to stop. See the shutdown(1M) or init(1M) man page.
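
For example, you might run the following command from within the domain that you want to stop (a minimal sketch):

# shutdown -y -g0 -i5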

Logical Domains Manager Package Name Incorrect in Upgrade Procedure

The name of the Logical Domains Manager package to install is SUNWldm.v. Any pkgadd command in the Oracle VM Server for SPARC 2.1 documentation must refer to the SUNWldm.v package name.

ILOM load Command Synopsis Uses Incorrect Character

The ILOM load command synopsis in Upgrade System Firmware in Oracle VM Server for SPARC 2.1 Administration Guide incorrectly uses a backslash character (\) to indicate that the entire command must be input on a single line.

When specifying this command, do not include the backslash character, and ensure that the entire command is input on a single line.