Logical Domains 1.2 Release Notes

Bugs Affecting LDoms 1.2 Software

This section summarizes the bugs that you might encounter when using this version of the software. The bug descriptions are in numerical order by bug ID. If a workaround and a recovery procedure are available, they are specified.

Logical Domains Manager Does Not Validate Disk Paths and Network Devices

Bug ID 6447740: The Logical Domains Manager does not validate disk paths and network devices.

Disk Paths

If a disk device listed in a guest domain's configuration is either non-existent or otherwise unusable, the disk cannot be used by the virtual disk server (vds). However, the Logical Domains Manager does not emit any warning or error when the domain is bound or started.

When the guest tries to boot, messages similar to the following are printed on the guest's console:


WARNING: /virtual-devices@100/channel-devices@200/disk@0: Timeout
connecting to virtual disk server... retrying

In addition, if a network interface specified using the net-dev= property does not exist or is otherwise unusable, the virtual switch is unable to communicate outside the physical machine, but the Logical Domains Manager does not emit any warning or error when the domain is bound or started.

Procedure: Recover From an Errant net-dev Property Specified for a Virtual Switch

  1. Issue the ldm set-vsw command with the corrected net-dev property value (see the example following this procedure).

  2. Reboot the domain hosting the virtual switch in question.
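
The following sketch shows what this recovery might look like, assuming the virtual switch is named primary-vsw0, is hosted on the control domain, and should use the physical network device nxge0 (all names are hypothetical):


primary# ldm set-vsw net-dev=nxge0 primary-vsw0
primary# shutdown -y -g0 -i6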

Procedure: Recover From an Errant Virtual Disk Service Device or Volume

  1. Stop the domain owning the virtual disk bound to the errant device or volume.

  2. Issue the ldm rm-vdsdev command to remove the errant virtual disk service device (see the example following this procedure).

  3. Issue the ldm add-vdsdev command to correct the physical path to the volume.

  4. Restart the domain owning the virtual disk.
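
As a sketch, assuming a guest domain named ldg1 whose virtual disk is backed by the volume vol1 exported from the primary-vds0 service, and a corrected physical path of /dev/dsk/c1t1d0s2 (all names are hypothetical):


primary# ldm stop-domain ldg1
primary# ldm rm-vdsdev vol1@primary-vds0
primary# ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0
primary# ldm start-domain ldg1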

Disk Devices in Use

If a disk device listed in a guest domain's configuration is being used by software other than the Logical Domains Manager (for example, if it is mounted in the service domain), the disk cannot be used by the virtual disk server (vds). However, the Logical Domains Manager does not emit any warning that the device is in use when the domain is bound or started.

When the guest domain tries to boot, a message similar to the following is printed on the guest's console:


WARNING: /virtual-devices@100/channel-devices@200/disk@0: Timeout
connecting to virtual disk server... retrying

Procedure: Recover From a Disk Device Being Used by Other Software

  1. Unbind the guest domain (see the example following this procedure).

  2. Unmount the disk device to make it available.

  3. Bind the guest domain.

  4. Boot the domain.
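
As a sketch, assuming a guest domain named ldg1 and a disk backend mounted at /export/backend in the service domain (both names are hypothetical), and noting that the domain must be stopped before it can be unbound:


primary# ldm stop-domain ldg1
primary# ldm unbind-domain ldg1
primary# umount /export/backend
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1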

Hang Can Occur With Guest OS in Simultaneous Operations

Bug ID 6497796: Under rare circumstances, when a Logical Domains variable, such as boot-device, is being updated from within a guest domain by using the eeprom(1M) command at the same time that the Logical Domains Manager is being used to add or remove virtual CPUs from the same domain, the guest OS can hang.

Workaround: Ensure that these two operations are not performed simultaneously.

Recovery: Use the ldm stop-domain and ldm start-domain commands to stop and start the guest OS.
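
For example, for a guest domain named ldg1 (hypothetical name):


primary# ldm stop-domain ldg1
primary# ldm start-domain ldg1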

Behavior of the ldm stop-domain Command Can Be Confusing

Bug ID 6506494: There are some cases where the behavior of the ldm stop-domain command is confusing.


# ldm stop-domain -f ldom

If the domain is at the kernel module debugger, kmdb(1), prompt, then the ldm stop-domain command fails with the following error message:


LDom <domain name> stop notification failed

Cannot Set Security Keys With Logical Domains Running

Bug ID 6510214: In a Logical Domains environment, there is no support for setting or deleting wide-area network (WAN) boot keys from within the Solaris OS by using the ickey(1M) command. All ickey operations fail with the following error:


ickey: setkey: ioctl: I/O error

In addition, WAN boot keys that are set using OpenBoot firmware in logical domains other than the control domain are not remembered across reboots of the domain. In these domains, the keys set from the OpenBoot firmware are only valid for a single use.

Logical Domains Manager Forgets Variable Changes After a Power Cycle

Bug ID 6590259: This issue is summarized in Logical Domain Variable Persistence.

Using the server-secure.driver With an NIS Enabled System, Whether or Not LDoms Is Enabled

Bug ID 6533696: On a system configured to use the Network Information Services (NIS) or NIS+ name service, if the Solaris Security Toolkit software is applied with the server-secure.driver, NIS or NIS+ fails to contact external servers. A symptom of this problem is that the ypwhich(1) command (which returns the name of the NIS or NIS+ server or map master) fails with a message similar to the following:


Domain atlas some.atlas.name.com not bound on nis-server-1.c

The recommended Solaris Security Toolkit driver to use with the Logical Domains Manager is ldm_control-secure.driver, and NIS and NIS+ work with this recommended driver.

If you are using NIS as your name server, you cannot use the Solaris Security Toolkit profile server-secure.driver because you might encounter Solaris OS Bug ID 6557663, IP Filter causes panic when using ipnat.conf. However, the default Solaris Security Toolkit driver, ldm_control-secure.driver, is compatible with NIS.

Procedure: Recover by Resetting Your System

  1. Log in to the system console from the system controller, and if necessary, switch to the ALOM mode by typing:


    # #.
    
  2. Power off the system by typing the following command in ALOM mode:


    sc> poweroff
    
  3. Power on the system:


    sc> poweron
    
  4. Switch to the console mode at the ok prompt:


    sc> console
    
  5. Boot the Solaris OS in single-user mode:


    ok boot -s
    
  6. Edit the file /etc/shadow.

    Change the root entry of the shadow file to the following:


    root::6445::::::
  7. Log in to the system and do one of the following:

    • Add the file /etc/ipf/ipnat.conf.

    • Undo the Solaris Security Toolkit, and apply another driver.


    # /opt/SUNWjass/bin/jass-execute -ui
    # /opt/SUNWjass/bin/jass-execute -a ldm_control-secure.driver
    

Network Performance Is Worse in a Logical Domain Guest Than in a Non-LDoms Configuration

Bug ID 6486234: The virtual networking infrastructure adds overhead to communications from a logical domain. All packets are sent through a virtual network device, which, in turn, passes the packets to the virtual switch. The virtual switch then sends the packets out through the physical device. The lower performance results from the inherent overhead of this virtual I/O stack.

Workaround: Do one of the following depending on your server:

Logical Domain Time-of-Day Changes Do Not Persist Across a Power Cycle of the Host

Bug ID 6590259: If the time or date on a logical domain is modified, for example using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host.

Workaround: For time changes to persist, save the configuration with the time change to the SC and boot from that configuration.
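
For example, after adjusting the time in the domain, you might save the running configuration to the SC under a new name so that it is used at the next power cycle (the configuration name is hypothetical):


primary# ldm add-config new-config-tod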

OpenBoot PROM Variables Cannot be Modified by the eeprom(1M) Command When the Logical Domains Manager is Running

Bug ID 6540368: This issue is summarized in Logical Domain Variable Persistence and affects only the control domain.

Emulex-based Fibre Channel Host Adapter Not Supported in Split-PCI Configuration on Sun Fire T1000 Servers

Bug ID 6544004: The following message appears at the ok prompt if an attempt is made to boot a guest domain that contains an Emulex-based Fibre Channel host adapter (Sun Part Number 375-3397):


ok> FATAL:system is not bootable, boot command is disabled

Workaround: Do not use this adapter in a split-PCI configuration on Sun Fire T1000 servers.

Starting and Stopping SunVTS Multiple Times Can Cause Host Console to Become Unusable

Bug ID 6549382: If SunVTS is started and stopped multiple times, it is possible that using the console SC command to switch from the SC console to the host console can result in either of the following messages being repeatedly emitted on the console:


Enter #. to return to ALOM.
Warning: Console connection forced into read-only mode

Recovery: Reset the SC using the resetsc command.

Virtual Disk Timeouts Do Not Work If Guest or Control Domain Is Halted

Bug ID 6589660: Virtual disk timeouts do not work if either the guest or control domain using the disk is halted, for example, if the domain is taken into the kernel debugger (kmdb) or into the OpenBoot PROM by sending a break.

Workaround: None.

Logical Domains Manager Does Not Retire Resources On Guest Domain After a Panic and Reboot

Bug ID 6591844: If a CPU or memory fault occurs, the affected domain might panic and reboot. If the Fault Management Architecture (FMA) attempts to retire the faulted component while the domain is rebooting, the Logical Domains Manager is not able to communicate with the domain, and the retire fails. In this case, the fmadm faulty command lists the resource as degraded.

Recovery: Wait for the domain to complete rebooting, and then force FMA to replay the fault event by restarting the fault manager daemon (fmd) on the control domain by using this command:


primary# svcadm restart fmd

Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive

Bug ID 6603974: If you configure more than four virtual networks (vnets) in a guest domain on the same network using the Dynamic Host Configuration Protocol (DHCP), the guest domain can eventually become unresponsive while running network traffic.

Workaround: Set ip_ire_min_bucket_cnt and ip_ire_max_bucket_cnt to larger values, such as 32, if you have 8 interfaces.
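
A minimal sketch of this workaround, assuming that these parameters can be tuned through the ip module in the /etc/system file of the guest domain and that a reboot is required for the change to take effect:


set ip:ip_ire_min_bucket_cnt=32
set ip:ip_ire_max_bucket_cnt=32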

Recovery: Issue an ldm stop-domain ldom command followed by an ldm start-domain ldom command on the guest domain (ldom) in question.

Fault Manager Daemon Dumps Core On a Hardened, Single Strand Control Domain

Bug ID 6604253: If you run the Solaris 10 11/06 OS and you harden drivers on the primary domain that is configured with only one strand, rebooting the primary domain or restarting the fault manager daemon (fmd) can result in an fmd core dump. The fmd dumps core while it cleans up its resources, and this does not affect the FMA diagnosis.

Workaround: Add a few more strands into the primary domain. For example,


# ldm add-vcpu 3 primary

The scadm Command Can Hang Following an SC or SP Reset

Bug ID 6629230: The scadm command on a control domain running at least the Solaris 10 11/06 OS can hang following an SC reset. The system is unable to properly reestablish a connection following an SC reset.

Workaround: Reboot the host to reestablish connection with the SC.

Recovery: Reboot the host to reestablish connection with the SC.

Simultaneous Net-Installation of Multiple Domains Fails When in a Common Console Group

Bug ID 6656033: Simultaneous net installation of multiple guest domains fails on Sun SPARC Enterprise T5140 and Sun SPARC Enterprise T5240 systems that have a common console group.

Workaround: Only net-install on guest domains that each have their own console group. This failure is seen only on domains with a common console group shared among multiple net-installing domains.

Sometimes, the prtdiag(1M) Command Does Not List All CPUs

Bug ID 6694939: In certain cases, the prtdiag(1M) command does not list all the CPUs.

Workaround: For an accurate count of CPUs, use the psrinfo(1M) command.

SVM Volumes Built on Slice 2 Fail JumpStart When Used as the Boot Device in a Guest Domain

Bug ID 6687634: If a Solaris Volume Manager (SVM) volume is built on top of a disk slice that contains block 0 of the disk, then SVM prevents writing to block 0 of the volume to avoid overwriting the label of the disk.

If an SVM volume built on top of a disk slice that contains block 0 of the disk is exported as a full virtual disk, then a guest domain is unable to write a disk label for that virtual disk, and this prevents the Solaris OS from being installed on such a disk.

Workaround: SVM volumes exported as a virtual disk should not be built on top of a disk slice that contains block 0 of the disk.

A more generic guideline is that slices that start on the first block (block 0) of a physical disk should not be exported (either directly or indirectly) as a virtual disk. Refer to Directly or Indirectly Exporting a Disk Slice in Logical Domains 1.2 Administration Guide.

If the Solaris 10 5/08 OS Is Installed on a Service Domain, Attempting a Net Boot of the Solaris 10 8/07 OS on Any Guest Domain Serviced by It Can Hang the Installation

Bug ID 6705823: Attempting a net boot of the Solaris 10 8/07 OS on any guest domain serviced by a service domain running the Solaris 10 5/08 OS can result in a hang on the guest domain during the installation.

Workaround: Patch the miniroot of the Solaris 10 8/07 OS net install image with Patch ID 127111-05.
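
As a sketch, assuming the net install image resides under /export/install/s10u4 and the patch has been unpacked in /var/tmp (both paths are hypothetical), the miniroot might be patched as follows:


# patchadd -C /export/install/s10u4/Solaris_10/Tools/Boot /var/tmp/127111-05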

Cryptographic DR Changes Incompatible With Pre-LDoms Firmware

Bug ID 6713547: Cryptographic dynamic reconfiguration (DR) changes are incompatible with firmware versions that predate the LDoms software releases. This problem prevents UltraSPARC T1 based systems running old firmware from using cryptographic hardware.

Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Logical Domain

Bug ID 6742805: A domain shutdown or memory scrub can take over 15 minutes with a single CPU and a very large memory configuration. During a shutdown, the CPUs in a domain are used to scrub all the memory owned by the domain. The time taken to complete the scrub can be quite long if a configuration is imbalanced, for example, a single CPU domain with 512 Gbytes of memory. This prolonged scrub time extends the amount of time it takes to shut down a domain.

Workaround: Ensure that large memory configurations (>100 Gbytes) have at least one core. This results in a much faster shutdown time.
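
For example, on a server with eight threads per core, you might assign at least one core's worth of virtual CPUs to a large-memory domain named ldg1 (hypothetical name) before shutting it down:


primary# ldm set-vcpu 8 ldg1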

With Elara Copper Card, the Service Domain Hangs on Reboot

Bug ID 6753219: After adding virtual switches to the primary domain and rebooting, the primary domain hangs when installed with an Elara Copper card.

Workaround: Add this line to the /etc/system file on the service domain and reboot:


set vsw:vsw_setup_switching_boot_delay=300000000

Sometimes, Executing the uadmin 1 0 Command From an LDoms System Does Not Return the System to the OK Prompt

Bug ID 6753683: Sometimes, executing the uadmin 1 0 command from the command line of an LDoms system does not leave the system at the OK prompt after the subsequent reset. This incorrect behavior is seen only when the LDoms variable auto-reboot? is set to true. If auto-reboot? is set to false, the expected behavior occurs.

Workaround: Use this command instead:


uadmin 2 0

Or, always run with auto-reboot? set to false.

Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted

Bug ID 6760933: On occasion, an active logical domain appears to be in the transition state instead of the normal state long after it is booted or following the completion of a domain migration. This glitch is harmless, and the domain is fully operational. To see what flag is set, check the flags field in the ldm list -l -p command output, or check the FLAGS field in the ldm list command, which shows -n---- for normal or -t---- for transition.

Recovery: The logical domain should display the correct state upon the next reboot.

Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running

Bug ID 6764613: If you do not have a network configured on your machine and have a Network Information Services (NIS) client running, the Logical Domains Manager will not start on your system.

Workaround: Disable the NIS client on your non-networked machine:


# svcadm disable nis/client

Newly Configured Virtual Network Fails to Establish a Connection With the Virtual Switch

Bug ID 6765355: Under rare conditions, when a new virtual network (vnet) is added to a logical domain, it fails to establish a connection with the virtual switch. This results in loss of network connectivity to and from the logical domain. If you encounter this error, you can see that the /dev/vnetN symbolic link for the virtual network instance is missing. If present, and not in error, the link points to a corresponding /devices entry as follows:


/dev/vnetN -> ../devices/virtual-devices@100/channel-devices@200/network@N:vnetN

Workaround: Do one of the following:

Do Not Migrate a Guest Domain That Is at the kmdb Prompt

Bug ID 6766202: If a guest domain with only one CPU is at the kernel module debugger, kmdb(1), prompt, and if that domain is migrated to another system, then the guest domain panics when it is resumed on the target system.

Workaround: Before migrating a guest domain, exit the kmdb shell, and resume the execution of the OS by typing ::cont. Then migrate the guest domain. After the migration is completed, re-enter kmdb with the command mdb -K.

Cannot Export a ZFS Volume as a Single-Slice Virtual Disk From Service Domain Running Up to the Solaris 10 5/08 OS to Guest Domain Running Solaris 10 10/08 OS

Bug ID 6769808: If a service domain running up to the Solaris 10 5/08 OS is exporting a ZFS volume as a single-slice disk to a guest domain running the Solaris 10 10/08 OS, then this guest domain is unable to use that virtual disk. Any access to the virtual disk fails with an I/O error.

Workaround: Upgrade the service domain to Solaris 10 5/09.

Migration Can Fail to Bind Memory Even If the Target Has Enough Available

Bug ID 6772089: In certain situations, a migration fails and ldmd reports that it was not possible to bind the memory needed for the source domain. This can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain.

This failure occurs because migrating the specific memory ranges in use by the source domain requires that compatible memory ranges are available on the target, as well. When no such compatible memory range is found for any memory range in the source, the migration cannot proceed.

Recovery: If this condition is encountered, you might be able to migrate the domain if you modify the memory usage on the target machine. To do this, unbind any bound or active logical domain on the target.

Use the ldm list-devices -a mem command to see what memory is available and how it is used. You might also need to reduce the amount of memory that is assigned to another domain.

Migration Does Not Fail If a vdsdev on the Target Has a Different Backend

Bug ID 6772120: If the virtual disk on the target machine does not point to the same disk backend that is used on the source machine, the migrated domain cannot access the virtual disk using that disk backend. A hang can result when accessing the virtual disk on the domain.

Currently, the Logical Domains Manager checks only that the virtual disk volume names match on the source and target machines. In this scenario, no error message is displayed if the disk backends do not match.

Workaround: When you configure the target domain to receive a migrated domain, ensure that the disk volume (vdsdev) matches the disk backend used on the source domain.

Recovery: Do one of the following if you discover that the virtual disk device on the target machine points to the incorrect disk backend:

Constraint Database Is Not Synchronized to Saved Configuration

Bug ID 6773569: After switching from one configuration to another (using the ldm set-config command followed by a powercycle), domains defined in the previous configuration might still be present in the current configuration, in the inactive state.

This is a result of the Logical Domains Manager's constraint database not being kept in sync with the change in configuration. These inactive domains do not affect the running configuration and can be safely destroyed.
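
For example, assuming a leftover inactive domain named ldg-old (hypothetical name):


primary# ldm destroy ldg-old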

Explicit Console Group and Port Bindings Are Not Migrated

Bug ID 6781589: During a migration, any explicitly assigned console group and port are ignored, and a console with default properties is created for the target domain. This console is created using the target domain name as the console group and using any available port on the first virtual console concentrator (vcc) device in the control domain. If there is a conflict with the default group name, the migration fails.

Recovery: To restore the explicit console properties following a migration, unbind the target domain, and manually set the desired properties using the ldm set-vcons command.
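
For example, assuming a migrated domain named ldg1 that originally used the console group ldg-group and port 5001 (all values are hypothetical):


primary# ldm unbind-domain ldg1
primary# ldm set-vcons group=ldg-group port=5001 ldg1
primary# ldm bind-domain ldg1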

Pseudonyms for PCI Buses on Sun SPARC Enterprise T5440 Systems Are Not Correct

Bug ID 6784945: On a Sun SPARC Enterprise T5440 system, the pseudonyms (shortcut names) for the PCI buses are not correct.

Workaround: To configure PCI buses on a Sun SPARC Enterprise T5440 system, you must use the pci@xxxx form of the bus name, as listed under the DEVICE column of any of the following list commands:

Cancelling Domain Migration With Virtual Networks Using Multiple Virtual Switches Might Cause Domain Reboot

Bug ID 6787057: On a guest domain with two or more virtual network devices (vnets) using multiple virtual switches (vsws), if an in-progress migration is cancelled, the domain being migrated might reboot instead of resuming operation on the source machine with the OS running. This issue does not occur if all the vnets are connected to a single vsw.

Workaround: If you are migrating a domain with two or more virtual networks using multiple virtual switches, do not cancel the domain migration (either by using Ctrl-C or the ldm cancel-operation command) after the operation starts. If a domain is inadvertently migrated, it can be migrated back to the source machine after the original migration is completed.

VIO DR Operations Ignore the Force (-f) Option

Bug ID 6703127: Virtual input/output (VIO) dynamic reconfiguration (DR) operations ignore the -f (force) option in CLI commands.

libpiclsnmp:snmp_init() Blocks Indefinitely in open() on primary Domain

Bug ID 6736962: Power management sometimes fails to retrieve the policy from the service processor during LDoms startup after the control domain boots. If CPU power management cannot retrieve the power management policy from the service processor, LDoms starts up as expected, but the error Unable to get the initial PM Policy - timeout is logged to the LDoms log, and the system remains in performance mode.

Workaround: Add forceload: drv/ds_snmp to /etc/system, then reboot the control domain.
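
That is, add the following line to the /etc/system file on the control domain:


forceload: drv/ds_snmp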

Deadlock Occurs Rarely With CPU DR Operations

Bug ID 6703958: Under rare circumstances, running CPU dynamic reconfiguration (DR) operations in parallel with network interface-related operations, such as plumb or unplumb, can result in a deadlock.

Workaround: Minimize the risk by avoiding network interface-related operations. If this deadlock occurs while booting a domain, set the domain to 2 CPUs and then reboot the domain.
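
For example, to set a guest domain named ldg1 (hypothetical name) to two virtual CPUs before rebooting it:


primary# ldm set-vcpu 2 ldg1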

FMA Status Failures

Bug ID 6759853: The following error message might be written intermittently to the LDoms log when a domain is at the ok prompt:


fma_cpu_svc_get_p_status: Can't find fma_cpu_get_p_status routine error

Workaround: Boot the domain.

ldmconfig Might Cause the Root File System of the Control Domain to Become Full and Halt the System

Bug ID 6848114: ldmconfig can run on a system that does not have file systems of sufficient capacity to contain the virtual disks for the created domains. In this situation, an error message is issued. However, ldmconfig permits you to continue to use the disks that are in /ldoms/disks to deploy the configuration. This situation could cause the root file system of the control domain to become full and halt the system.

Workaround: Do the following:

  1. Exit the Configuration Assistant by typing q or by typing Ctrl-C.

  2. Add more file systems of adequate capacity.

  3. Rerun the ldmconfig command.

Guest Domain Sometimes Makes Improper Domain Services Connection to the Control Domain

Bug ID 6839787: Sometimes, a guest domain that runs at least the Solaris 10 10/08 OS does not make a proper Domain Services connection to a control domain that runs the Solaris 10 5/09 OS.

Domain Services connections enable features such as dynamic reconfiguration (DR), FMA, and power management (PM). Such a failure occurs when the guest domain is booted, so rebooting the domain usually clears the problem.

Workaround: Reboot the guest domain.

Spurious Domain Services Invalid Handle Warning Messages Are Logged to the Console

Bug ID 6815015: You can ignore these messages.

ldm: Autosave Feature Should Identify and Allow the Downloading of Damaged Configurations

Bug ID 6840800: A corrupted or damaged autosave configuration that would otherwise be usable cannot be downloaded.

Workaround: Use another, undamaged autosave configuration or SP configuration.

Canceling a Pending Delayed Reconfiguration Operation Does Not Discard Changes Made to the Configuration

Bug ID 6839685: When you cancel a pending delayed reconfiguration operation to discard any changes that you made to a configuration, the changes are persisted in the current autosave configuration.

Workaround: Before starting a delayed reconfiguration operation on a configuration, save the existing autosave data for the current configuration, config-name:


# cd /
# tar -cvf autosave.config-name.tar var/opt/SUNWldm/autosave-config-name

After cancelling the delayed reconfiguration operation, restore the autosave data for the configuration:


# cd /
# rm -rf var/opt/SUNWldm/autosave-config-name
# tar -xvf autosave.config-name.tar

Configuration Autorecovery: ldm add-config -r oldcfg newcfg Should Leave oldcfg in Previous State

Bug ID 6846468: Currently, the oldcfg autosave configuration is deleted, and newcfg is set to be the next poweron configuration. If oldcfg was marked as current or next poweron, subsequent configuration modifications will create or update the autosave configuration for oldcfg. The expected behavior is that the autosave configuration for oldcfg is left intact, and an autosave configuration for newcfg is created. If oldcfg is the current or next poweron configuration, it will remain so after using this command.

Unable to bind memory; limit of 31 segments reached

Bug ID 6841421: Under certain memory configurations, creating a guest domain might fail with this error message:


Unable to bind memory; limit of 31 segments reached

Multiple memory segments are a normal occurrence whenever the amount of memory differs across the various CMP processors. However, the current version of the Logical Domains Manager can support only up to 31 memory segments for each guest domain.

Workaround: This problem might occur in the following situations:

ldmd Dumps Core If a rm-io Operation Is Followed by Multiple set-vcpu Operations

Bug ID 6697096: Under certain circumstances, when an ldm rm-io operation is followed by multiple ldm set-vcpu operations, ldmd might abort and be restarted by SMF.

Workaround: After executing an ldm rm-io operation on a domain, take care when attempting an ldm set-vcpu operation. A single ldm set-vcpu operation will succeed, but a second ldm set-vcpu operation might cause the ldmd daemon to dump core under certain circumstances. Instead, reboot the domain before attempting the second set-vcpu operation.

Domain Can Lose CPUs During a Migration If Another Domain Is Rebooting

Bug ID 6775847: For a period of time during a domain migration, a system can hang or end up with just one VCPU if another domain on the target system is rebooted.

ldm start and ldm stop operations are prevented from running at this time. However, the issuing of a reboot or init command in the Solaris OS instance that runs on a guest domain cannot be prevented.

Workaround: Avoid rebooting domains on the target system while a migration is in progress.

Recovery: If the symptoms of this problem are detected, stop and restart the migrated domain on the target system.

Migration Does Not Clean Up a Target If the Virtual Network MAC Address Clashes With an Existing Domain

Bug ID 6779482: If a migrating domain has a virtual network device with a MAC address that matches a MAC address on the target, the migration appropriately fails. However, the migration leaves a residual inactive domain of the same name and configuration on the target.

Workaround: On the target, use ldm destroy to manually remove the inactive domain. Then, fix the MAC address so that it is unique, and retry the migration.
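
As a sketch, assuming the residual inactive domain is named ldg1 and the conflicting MAC address on the source domain has already been corrected (the domain name and target host are hypothetical):


target# ldm destroy ldg1
source# ldm migrate-domain ldg1 root@target-host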

Migration Dry-Run Check Should Detect Inadequate Memory

Bug ID 6783450: The Domain Migration dry-run check (-n) does not ensure that the target system has enough free memory to bind the specified domain. If all other criteria are met, the command returns without an error. However, the command correctly returns an error when the migration is actually attempted.

Workaround: Run ldm list-devices mem on the target machine to verify that there is enough memory available for the domain to be migrated.

Virtual Network Devices Are Not Created Properly on the Control Domain

Bug ID 6836587: Sometimes ifconfig indicates that the device does not exist after you add a virtual network or virtual disk device to a domain. This situation might occur as the result of the /devices entry not being created.

Although this should not occur during normal operation, the error was seen when the instance number of a virtual network device did not match the instance number listed in the /etc/path_to_inst file.

For example:


# ifconfig vnet0 plumb
ifconfig: plumb: vnet0: no such interface

The instance number of a virtual device is shown under the DEVICE column in the ldm list output:


# ldm list -o network primary
NAME             
primary          

MAC
    00:14:4f:86:6a:64

VSW
    NAME         MAC               NET-DEV DEVICE   DEFAULT-VLAN-ID PVID VID MTU  MODE  
    primary-vsw0 00:14:4f:f9:86:f3 nxge0   switch@0 1               1        1500        

NETWORK
    NAME   SERVICE              DEVICE    MAC               MODE PVID VID MTU  
    vnet1  primary-vsw0@primary network@0 00:14:4f:f8:76:6d      1        1500

The instance number (0 for both the vnet and vsw shown previously) can be compared with the instance number in the path_to_inst file to ensure that they match.


# egrep '(vnet|vsw)' /etc/path_to_inst
"/virtual-devices@100/channel-devices@200/virtual-network-switch@0" 0 "vsw"
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"

Workaround: In the case of mismatching instance numbers, remove the virtual network or virtual switch device. Then, add them again by explicitly specifying the instance number required by setting the id property.
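
For example, assuming a guest domain named ldg1 with a virtual network device vnet1 attached to primary-vsw0 that should have instance number 0 (all names are hypothetical), the device might be re-added as follows:


primary# ldm rm-vnet vnet1 ldg1
primary# ldm add-vnet id=0 vnet1 primary-vsw0 ldg1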

You can also manually edit the /etc/path_to_inst file. See the path_to_inst(4) man page.


Caution –

Be aware of the warning contained in the man page that states “changes should not be made to /etc/path_to_inst without careful consideration.”


Configuration Autorecovery: Improve Warning Messages for Broken Autosave Configurations

Bug ID 6845614: For most instances of a corrupted autosave configuration, the following misleading warning message is logged in the Logical Domains Manager log file:


warning: Autosave config 'config-name' missing HV MD

This message can appear when a guest domain or the control domain has a corrupted MD or no valid MD.

Logical Domains Domain Services Module Needs to Support More Than 64 Ports

Bug ID 6833994: This problem prevents the creation of more than 60 guest domains. This restriction is expected to be lifted with the release of the next Solaris 10 OS.

Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted

Bug ID 6757486: Occasionally, after a domain has been migrated, it is not possible to connect to the console for that domain.

Workaround: Restart the vntsd SMF service to enable connections to the console:


# svcadm restart vntsd

Note –

This command will disconnect all active console connections.


I/O Domain or Guest Domain Panics When Booting From e1000g

Bug ID 6808832: You can configure a maximum of two domains with dedicated PCI-E root complexes on systems such as the Sun Fire T5240. These systems have two UltraSPARC T2+ CPUs and two I/O root complexes.

pci@500 and pci@400 are the two root complexes in the system. The primary domain will always contain at least one root complex. A second domain can be configured with an unassigned or unbound root complex.

The pci@400 fabric (or leaf) contains the onboard e1000g network card. The following circumstances could lead to a domain panic:

Avoid the following network devices if they are configured in a non-primary domain:


/pci@400/pci@0/pci@c/network@0,1
/pci@400/pci@0/pci@c/network@0

When these conditions are true, the domain will panic with a PCI-E Fatal error.

Avoid such a configuration, or if the configuration is used, do not boot from the listed devices.

ldm stop Reports Timeout Too Soon For Large Domains

Bug ID 6839284: For logical domains that have at least 120 Gbytes of memory, the ldm stop or ldm stop -f command might indicate that the operation timed out. Even though the stop operation has timed out, the process continues to shut down the logical domain in the background.

Workaround: You can ignore the timeout indications because the logical domain will continue with the shutdown process.

set-vdisk and set-vnet Operations Place Guest Domains in Delayed Reconfiguration Mode

Bug ID 6852685: Starting with the Logical Domains 1.2 release, delayed reconfiguration operations are only supported on the control domain. However, the Logical Domains Manager does not properly enforce this restriction for the set-vdisk and set-vnet operations. If you issue either of these operations on a guest domain, that domain will enter delayed reconfiguration mode.

Workaround: If a guest domain enters delayed reconfiguration mode as the result of a set-vdisk or set-vnet operation, do the following (see the example after these steps):

  1. Use the ldm cancel-operation reconf command to cancel the pending delayed reconfiguration.

  2. Stop the guest domain.

  3. Re-issue the ldm set-vdisk or ldm set-vnet command.

  4. Start the guest domain.
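
As a sketch of these steps for a guest domain named ldg1, using a virtual disk timeout change as the example operation (the names and values are hypothetical):


primary# ldm cancel-operation reconf ldg1
primary# ldm stop-domain ldg1
primary# ldm set-vdisk timeout=180 vdisk1 ldg1
primary# ldm start-domain ldg1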


Note –

If the domain has already been stopped or was rebooted while in delayed reconfiguration mode, the pending configuration will be committed. For information about any issues or restrictions regarding the use of delayed reconfiguration operations, see the Logical Domains (LDoms) 1.1 Release Notes.


Guest Domain Might Fail to Successfully Reboot When a System Is in Power Management Elastic Mode

Bug ID 6853273: While a system is in power management elastic mode, rebooting a guest domain might produce the following warning messages and fail to boot successfully:


WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Sending packet to LDC, status: -1
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Can't send vdisk read request!
WARNING: /virtual-devices@100/channel-devices@200/disk@0:
Timeout receiving packet from LDC ... retrying

Workaround: If you see these warnings, perform one of the workarounds in the following order:

CPU failed to start Panics Seen on Reboots in Elastic Mode

Bug ID 6852379: Under rare circumstances, when Power Management mode is set to elastic, a domain that is booting might panic very early in its boot sequence with messages similar to one of the following:


Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

panic[cpu0]/thread=180e000: cpu1 failed to start (2)

Or:


Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

panic[cpu0]/thread=180e000: XC SPL ENTER already entered (0x0)

Impact: Because the panic occurs very early in the boot sequence, it has no impact on applications or file systems, which have not yet been started. The domain should automatically reboot. Note that the domain might not reboot because of CR 6853590; see Reboot Stops at OpenBoot Prompt When Services Cannot Be Initialized.

Workaround: If the domain fails to boot, do one of the following:

ldmd Shows Domain State as transition When the Domain Is Running After a CPU DR Operation

Bug ID 6816969: A domain can sometimes be marked as being in transition mode even though it is booted. Transition mode is the mode in which a Solaris domain is booting or shutting down. CPU Power Management does not occur on a system when any domain is in transition mode. If a domain remains in transition mode, CPU Power Management will not occur when load is added to any domain.

Workaround: Switch from elastic mode to performance mode. If you want to return to elastic mode, reboot the domain that is stuck in transition mode.

Possible Issues After a Failed add-vdisk Command

Bug ID 6854189: If adding a virtual disk to a running guest domain fails, the guest domain might show messages like the following after the operation completes:


vdc: NOTICE: [5] Error initialising ports

Workaround: When the guest domain is in this state, a virtual disk that is subsequently added to the running guest domain might not be immediately visible to the system. In this case, run the devfsadm command to force the system to configure the available devices and make the newly added virtual disk visible.

Reboot Stops at OpenBoot Prompt When Services Cannot Be Initialized

Bug ID 6853590: Occasionally, a logical domain reboot operation stops at the OpenBoot prompt after one or more of the following messages are shown on the console:


NOTICE: Unable to complete Domain Service protocol version handshake
WARNING: Unable to connect to Domain Service providers
WARNING: Unable to get LDOM Variable Updates
WARNING: Unable to update LDOM Variable

Workaround: Boot the domain manually from the OpenBoot prompt.

ldm Commands Are Slow to Respond When Several Domains Are Booting

Bug ID 6855079: An ldm command might be slow to respond when several domains are booting. If you issue an ldm command at this stage, the command might appear to hang. Note that the ldm command will return after performing the expected task. After the command returns, the system should respond normally to ldm commands.

Workaround: Avoid booting many domains simultaneously. However, if you must boot several domains at once, refrain from issuing further ldm commands until the system returns to normal. For instance, wait for about two minutes on Sun SPARC Enterprise T5140 and T5240 Servers and for about four minutes on the Sun SPARC Enterprise T5440 Server or Netra T5440 Server.

Spurious dl_ldc_cb: LDC READ event Message Seen When Rebooting the Control Domain or a Guest Domain

Bug ID 6846889: When rebooting the control domain or a guest domain, the following warning message might be logged on the control domain and on the guest domain that is rebooting:


WARNING: ds@0: ds_ldc_cb: LDC READ event while port not up

Workaround: You can ignore this message.

ldm list -l Causes ldmd to Dump Core After Upgrading From Logical Domains 1.1 to Logical Domains 1.2

Bug ID 6855534: When upgrading the control domain OS image from a previous release of Logical Domains, ensure that you preserve the constraints database file on the control domain. See Saving and Restoring the Logical Domains Constraints Database File in Logical Domains 1.2 Administration Guide.

If you were unable to preserve the constraints database, do not populate the control domain with a constraints database that does not match the running configuration. Such a mismatch could result in the Logical Domains Manager aborting when the ldm list -l command is issued, as follows:


primary# ldm list -l ldg0
Invalid response
primary#

Workaround: To recover, remove any existing constraints database files on the upgraded control domain. Then, use the svcadm restart ldmd command to restart the Logical Domains Manager and to resume normal operations.
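
As a sketch, assuming the constraints database is in its default location of /var/opt/SUNWldm/ldom-db.xml:


primary# rm /var/opt/SUNWldm/ldom-db.xml
primary# svcadm restart ldmd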

UltraSPARC T2 and UltraSPARC T2 Plus Based Systems: Domain Might Panic When Adding New CPUs

Bug ID 6837313: Under rare circumstances on UltraSPARC T2 and UltraSPARC T2 Plus based systems, adding new CPUs to a domain might cause that domain to panic. This panic is more likely to occur when CPUs are added after PCI buses have been added or removed.

Impact: The domain panics with a stack trace that might contain references to the n2rng driver.

Workaround: This problem is triggered when the n2rng driver initializes structures for storing statistics. The problem can be prevented by disabling the generation of statistics for the n2rng driver, as follows: