CHAPTER 1
These release notes contain changes for this release, a list of supported platforms, a matrix of required software and patches, and other pertinent information, including bugs that affect Logical Domains 1.1 software.
The major changes for this release of Logical Domains 1.1 software are as follows:
Extensible Markup Language (XML) version 3 (v3) public interface for monitoring and controlling logical domains provides:
Access using the Extensible Messaging and Presence Protocol (XMPP)
Logical Domains Manager interface and Logical Domains Manager events
(Refer to Chapter 10 of the Logical Domains (LDoms) 1.1 Administration Guide for details about using this feature.)
Guest Domain Migration provides the capability to migrate a guest logical domain from one server to another compatible server. If active, the domain on the source server is suspended, and its configuration and run-time state are transferred to another server, where the domain is recreated and resumed. Inactive and bound domains can also be migrated, and this happens almost instantaneously as only the domain’s configuration needs to be transferred and recreated. Refer to Chapter 8, “Migrating Logical Domains” in the Logical Domains (LDoms) 1.1 Administration Guide for more information.
Virtual Input/Output (I/O) Dynamic Reconfiguration (DR) provides the ability to add and remove virtual I/O services and devices without rebooting.
Network Interface Unit (NIU) Hybrid I/O provides support for a virtualized I/O path, a hybrid I/O path, and better performance and scalability. Refer to “Using NIU Hybrid I/O” in the Logical Domains (LDoms) 1.1 Administration Guide for more information.
VLAN Support provides the capability to configure and use virtual local area networks (VLANs) in logical domains. Refer to “Using VLAN Tagging With Logical Domains Software” in the Logical Domains (LDoms) 1.1 Administration Guide for how to use this feature.
Note - LDoms 1.1 software and Solaris 10 10/08 OS are required to use this feature. Tagged VLANs are not supported in any of the previous releases for LDoms networking components.
Virtual Disk Failover adds support for disk multipathing which enables the virtual disk device in a guest domain to be serviced by multiple virtual disk servers. Refer to “Configuring Virtual Disk Multipathing” in the Logical Domains (LDoms) 1.1 Administration Guide for information about how to set up this feature.
Single-Slice Disk Enhancement allows installing the Solaris OS on a single-slice disk. Single-slice disks are now visible with the format(1M) command.
New and Changed CLI ldm Subcommands — refer to the ldm(1M) man page or the Logical Domains (LDoms) Manager 1.1 Man Page Guide for more details.
Use the new list subcommand output (-o format) options to limit the output format to one or more of the following subsets: console, cpu, crypto, disk, domain, memory, network, physio, serial, and status.
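For example, several output subsets can be combined in a comma-separated list. The following is an illustrative sketch; the domain name ldg1 is a placeholder:

```shell
primary# ldm list -o network,disk ldg1
```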
Use new arguments to the add-vnet and set-vnet subcommands to specify details for hybrid I/O (mode=hybrid).
Use new arguments to the add-vsw, set-vsw, add-vnet, and set-vnet subcommands to specify VLAN tagging (port VLAN ID and VLAN ID).
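The following sketch illustrates both sets of new arguments; the domain, interface, and switch names (ldg1, vnet0, vnet1, primary-vsw0) are placeholders:

```shell
# Enable hybrid I/O on an existing virtual network device
primary# ldm set-vnet mode=hybrid vnet0 ldg1

# Create a virtual network device with a port VLAN ID and two VLAN IDs
primary# ldm add-vnet pvid=10 vid=20,30 vnet1 primary-vsw0 ldg1
```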
Use the new mpgroup= argument to the add-vdsdev and set-vdsdev subcommands to define a multipath group name for several virtual disk server devices (vdsdev). In case a virtual disk cannot communicate with a virtual disk server device, a failover is initiated to one of the other virtual disk server devices in the multipath group.
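As a sketch, a multipath group is formed by exporting the same volume name from two different virtual disk services with a common mpgroup value; the device paths, volume name, and service names below are placeholders:

```shell
# Export two backends for the same virtual disk, in one multipath group
primary# ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c2t1d0s2 vol1@primary-vds0
primary# ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c3t1d0s2 vol1@alternate-vds0
```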
Use the ldm cancel-operations reconf command instead of the deprecated ldm remove-reconf or cancel-reconf commands to cancel a delayed reconfiguration operation. You can still use the remove-reconf and cancel-reconf subcommands as aliases, but the rm-reconf subcommand does not work.
Use the ldm cancel-operations migration command to cancel a domain migration operation.
Use the ldm migrate-domain command to migrate a logical domain from one machine to another.
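A minimal sketch of a migration invocation; the domain name ldg1 and host name target-host are placeholders:

```shell
# Migrate domain ldg1 to the machine called target-host
primary# ldm migrate-domain ldg1 root@target-host
```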
Output of ldm ls-config has changed so that the annotations more accurately reflect when a configuration saved to the service processor (SP) matches the currently running configuration. See Output of the ldm ls-config Command Changed for more information.
CPU Power Management (PM) implements power saving by disabling each Sun UltraSPARC® T2– and T2 Plus–based server core that has all of its CPU strands idle. Refer to “Using CPU Power Management With LDoms 1.1 Software” in the Logical Domains (LDoms) 1.1 Administration Guide. To use CPU power management, your system must have firmware that supports power management.
This section contains system requirements for running LDoms software.
Logical Domains (LDoms) Manager 1.1 software is supported on the following platforms:
This section lists the required software and patches for use with Logical Domains 1.1 software.
To use all the features of LDoms 1.1 software, the operating system on all domains should be at least equivalent to the Solaris 10 10/08 OS. This can be either a fresh or upgrade installation of the following:
Following are the required Solaris 10 10/08 patches for use with Logical Domains 1.1 software. An X indicates that a patch must be installed on that specific type of domain; however, the patches can be applied to all domains.
Following is a matrix of required software to enable all the Logical Domains 1.1 features.
Supported Servers | System Firmware | Solaris OS
---|---|---
Sun UltraSPARC T2 Plus–based servers | 7.2.x | One of the configurations in Required and Recommended Solaris OS
Sun UltraSPARC T2–based servers | 7.2.x | One of the configurations in Required and Recommended Solaris OS
Sun UltraSPARC T1–based servers | 6.7.x | One of the configurations in Required and Recommended Solaris OS
It is possible to run the Logical Domains 1.1 software along with previous revisions of the other software components. For example, you could have differing versions of the Solaris OS on the various domains in a machine. It is recommended to have all domains running Solaris 10 10/08 OS plus the patches listed in TABLE 1-2. However, an alternate upgrade strategy could be to upgrade the control and service domains to Solaris 10 10/08 OS plus the patches listed in TABLE 1-2 and to continue running the guest domains at the existing patch level.
Following is a matrix of the minimum versions of software required. The LDoms 1.1 package, SUNWldm, can be applied to a system running at least the following versions of software. The minimum software versions are platform specific and depend on the requirements of the CPU in the machine. The minimum Solaris OS version for a given CPU type applies to all domain types (control, service, I/O, and guest).
Following are the required system firmware patches at a minimum for use with Logical Domains 1.1 software on supported servers.
Note - The -01 versions of the system firmware Patch IDs do not support power management.
You can find the LDoms 1.1 software to download from the following web site:
The LDoms_Manager-1_1.zip file that you download contains the following:
The ldm(1M) man page is included in the SUNWldm.v package and gets installed when the package is installed.
Installation script for Logical Domains Manager 1.1 software and the Solaris Security Toolkit (install-ldm)
Install package for Libvirt for Logical Domains (SUNWldvirtinst.v)
The directory structure of the zip file is similar to the following:
LDoms_Manager-1_1/
    Install/
        install-ldm
    Legal/
        LDoms_1.0.1_libvirt_entitlement(20071220).txt
        Ldoms_1.1_Entitlement.txt
        Ldoms_1.1_SLA_Entitlement.txt
        Ldoms_MIB_1.0.1_Entitlement.txt
        Ldoms_MIB_1.0.1_SLA_Entitlement.txt
        LGPLDisclaimer.txt
        THIRDPARTYREADME(20071220).txt
    Product/
        SUNWjass
        SUNWldm.v
        SUNWldvirtinst.v
        Libvirt-source
        SUNWldlibvirt.v
        SUNWldmib.v
    README
You can find the required Solaris OS and system firmware patches at the SunSolve site:
The Logical Domains (LDoms) 1.1 Administration Guide and Logical Domains (LDoms) 1.1 Release Notes can be obtained from:
http://docs.sun.com/app/docs/prod/ldoms
The Sun Logical Domains (LDoms) Wiki contains Best Practices, Guidelines, and Recommendations for deploying LDoms software:
http://wikis.sun.com/display/SolarisLogicalDomains/Home
The Beginners Guide to LDoms: Understanding and Deploying Logical Domains can be used to get a general overview of Logical Domains software. However, the details of the guide specifically apply to the LDoms 1.0 software release and are now out of date for LDoms 1.1 software. The guide can be found at the Sun BluePrints site.
This section describes software that is related to LDoms software.
Solaris Security Toolkit 4.2 software – This software can help you secure the Solaris OS in the control domain and other domains. Refer to the Solaris Security Toolkit 4.2 Administration Guide and Solaris Security Toolkit 4.2 Reference Manual for more information.
Logical Domains (LDoms) Management Information Base (MIB) software – This software can help you enable third party applications to perform remote monitoring and a few control operations. Refer to the Logical Domains (LDoms) MIB 1.0.1 Administration Guide and Release Notes for more information.
Libvirt for LDoms software – This software provides virtual library (libvirt) interfaces for Logical Domains (LDoms) software so that virtualization customers can have consistent interfaces. The libvirt library (version 0.3.2) included in this software interacts with the Logical Domains Manager software running on Solaris 10 Operating System (OS) to support Logical Domains virtualization technology. Refer to the Libvirt for LDoms 1.0.1 Administration Guide and Release Notes for more information.
Note - LDoms MIB software and Libvirt for LDoms software work with LDoms 1.0.1 software at a minimum.
This section details the software that is compatible with and can be used with the Logical Domains software. Be sure to check in the software documentation or your platform documentation to find the version number of the software that is available for your version of LDoms software and your platform.
SunVTS functionality is available in the control domain and guest domains on certain LDoms software releases and certain platforms. SunVTS is Sun’s Validation Test Suite, which provides a comprehensive diagnostic tool that tests and validates Sun hardware by verifying the connectivity and proper functioning of most hardware controllers and devices on Sun servers. For more information about SunVTS, refer to the SunVTS User’s Guide for your version of SunVTS.
Sun Management Center 4.0 Add-On Software can be used only on the control domain with the Logical Domains Manager software enabled. Sun Management Center is an open, extensible system monitoring and management solution that uses Java and a variant of the Simple Network Management Protocol (SNMP) to provide integrated and comprehensive enterprise-wide management of Sun products and their subsystem, component, and peripheral devices. Support for hardware monitoring within the Sun Management Center environment is achieved through the use of appropriate hardware server module add-on software, which presents hardware configuration and fault reporting information to the Sun Management Center management server and console. Refer to the Sun Management Center 4.0 Add-On Software Release Notes: For Sun Fire, SunBlade, Netra, and SunUltra Systems for more information about using Sun Management Center 4.0 on the supported servers.
Sun Explorer Data Collector can be used with the Logical Domains Manager software enabled on the control domain. Sun Explorer is a diagnostic data collection tool. The tool comprises shell scripts and a few binary executables. Refer to the Sun Explorer User’s Guide for more information about using the Sun Explorer Data Collector.
Solaris Cluster software can be used only on an I/O domain in Logical Domains software releases up through LDoms 1.0.2. In LDoms 1.0.3 and 1.1 software, Solaris Cluster software can be used in a guest domain with some restrictions. Refer to Solaris Cluster documentation for more information about any restrictions and about the Solaris Cluster software in general.
The following system controller (SC) software interacts with the Logical Domains 1.1 software:
Sun Integrated Lights Out Manager (ILOM) 3.0 firmware is the system management firmware you can use to monitor, manage, and configure Sun UltraSPARC T2-based server platforms. ILOM is preinstalled on these platforms and can be used on the control domain on LDoms-supported servers with the Logical Domains Manager 1.1 software enabled. Refer to the Sun Integrated Lights Out Manager 3.0 User’s Guide for features and tasks that are common to Sun rackmounted servers or blade servers that support ILOM. Other user documents present ILOM features and tasks that are specific to the server platform you are using. You can find the ILOM platform-specific information within the documentation set that accompanies your system.
Advanced Lights Out Manager (ALOM) Chip Multithreading (CMT) Version 1.3 software can be used on the control domain on UltraSPARC® T1-based servers with the Logical Domains Manager 1.0.1 software enabled. Refer to “Using LDoms With ALOM CMT” in the Logical Domains (LDoms) 1.1 Administration Guide. The ALOM system controller enables you to remotely manage and administer your supported CMT servers. ALOM enables you to monitor and control your server either over a network or by using a dedicated serial port for connection to a terminal or terminal server. ALOM provides a command-line interface that you can use to remotely administer geographically distributed or physically inaccessible machines. For more information about using ALOM CMT Version 1.3 software, refer to the Advanced Lights Out Management (ALOM) CMT v1.3 Guide.
Netra Data Plane Software Suite is a complete board software package solution. The software provides an optimized rapid development and runtime environment on top of multistrand partitioning firmware for Sun CMT platforms. The Logical Domains Manager contains some ldm subcommands (add-vdpcs, rm-vdpcs, add-vdpcc, and rm-vdpcc) for use with this software. Refer to the Netra Data Plane Software Suite documentation for more information about this software.
This section contains general issues and specific bugs concerning the Logical Domains 1.1 software.
This section describes general known issues about this release of LDoms software that are broader than a specific bug number. Workarounds are provided where available.
For discussions in Logical Domains documentation, the terms service processor (SP) and system controller (SC) are interchangeable.
The following cards are not supported for this LDoms 1.1 software release:
The OpenBoot firmware now supports the power-off command. The command powers off the system if only the control domain is active. The OpenBoot power-off command behaves on the control domain exactly the way the Solaris OS halt command behaves. Refer to Table 9-1 in Chapter 9 of the Logical Domains (LDoms) 1.1 Administration Guide for a specific description of the behavior of the halt command.
The output of the ldm ls-config command now more accurately reflects when a saved configuration matches the currently running configuration.
Previously, the configuration that was last booted (that is, on the previous power on) was always listed as [current]. Now, the last booted configuration is listed as [current] only until you initiate a reconfiguration. After the reconfiguration, the annotation changes to [next poweron].
Previously, the result of an ldm add-config or set-config command was that the specified configuration was labeled as [next]. Now, such a configuration is listed as [current], because it does match the currently running configuration.
If a service domain is running a version of Solaris 10 OS prior to Solaris 10 10/08 OS, and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Solaris 10 10/08 OS, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.
This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, this can cause the Solaris Volume Manager (SVM) to be unable to find its configuration or to access its metadevices.
Workaround: After upgrading a service domain to Solaris 10 10/08, if a guest domain is unable to find its SVM configuration or its metadevices, execute the following procedure.
Disable the devid feature of SVM by adding the following lines to the /kernel/drv/md.conf file:
md_devid_destroy=1;
md_keep_repl_state=1;
After the domain has booted, the SVM configuration and metadevices should be available.
Re-enable the SVM devid feature by removing the two lines added in Step 2 from the /kernel/drv/md.conf file.
During the reboot, you will see messages similar to this:
NOTICE: mddb: unable to get devid for 'vdc', 0x10
There is a limit to the number of LDCs available in any logical domain. For Sun UltraSPARC T1-based platforms, that limit is 256; for all other platforms, the limit is 512. Practically speaking, this only becomes an issue on the control domain, because the control domain has at least part, if not all, of the I/O subsystem allocated to it, and because of the potentially large number of LDCs created for both virtual I/O data communications and the Logical Domains Manager control of the other logical domains.
Note - The examples in this section are what happens on Sun UltraSPARC T1-based platforms. However, the behavior is the same if you go over the limit on other supported platforms.
If you try to add a service, or bind a domain, so that the number of LDC channels exceeds the limit on the control domain, the operation fails with an error message similar to the following:
13 additional LDCs are required on guest primary to meet this request, but only 9 LDCs are available
The following guidelines can help prevent creating a configuration that could overflow the LDC capabilities of the control domain:
The control domain allocates 12 LDCs for various communication purposes with the hypervisor, Fault Management Architecture (FMA), and the system controller (SC), independent of the number of other logical domains configured.
The control domain allocates one LDC to every logical domain, including itself, for control traffic.
Each virtual I/O service on the control domain consumes one LDC for every connected client of that service.
For example, consider a control domain and 8 additional logical domains. Each logical domain needs at a minimum:
Applying the above guidelines yields the following results (numbers in parentheses correspond to the preceding guideline number from which the value was derived):
12(1) + 9(2) + 8 x 3(3) = 45 LDCs in total.
Now consider the case where there are 32 domains instead of 8, and each domain includes 3 virtual disks, 3 virtual networks, and a virtual console. Now the equation becomes:
12 + 33 + 32 x 7 = 269 LDCs in total.
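The arithmetic above can be sketched as a small shell calculation; the domain count and per-guest device counts below are from the 32-domain example, and should be adjusted to your own configuration:

```shell
# LDC budget sketch for the 32-domain example
domains=32                         # guest domains
per_guest=7                        # 3 vdisks + 3 vnets + 1 console per guest
base=12                            # guideline 1: hypervisor/FMA/SC channels
control=$((domains + 1))           # guideline 2: one control LDC per domain, including the control domain
services=$((domains * per_guest))  # guideline 3: one LDC per connected client of each service
total=$((base + control + services))
echo "${total} LDCs in total"      # 269 on this example
```

The result must stay within the platform limit: 256 LDCs on Sun UltraSPARC T1-based platforms, 512 on all others.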
Depending upon the number of LDCs supported on your platform, the Logical Domains Manager either accepts or rejects the configuration.
Logical Domains software does not impose a memory size limitation when creating a domain. The memory size requirement is a characteristic of the guest operating system. Some Logical Domains functionality might not work if the amount of memory present is less than the recommended size. For recommended and minimum size memory requirements, refer to the installation guide for the operating system you are using. Refer to “System Requirements and Recommendations” in the Solaris 10 Installation Guide: Planning for Installation and Upgrade.
The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 megabytes. If you have a domain less than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 megabytes. Refer to the release notes for your system firmware for information about memory size requirements.
You can create the following number of domains, depending on your platform:
If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains.

In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain.

The virtual switch (vsw) services should be spread over all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Do not have more than 32 vnet instances per vsw service, because having more than that tied to a single vsw could cause hard hangs in the service domain.

To run the maximum configurations, a machine needs the following amount of memory, depending on your platform, so that the guest domains contain an adequate amount of memory:
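Spreading vsw services over the physical adapters might look like the following sketch; the adapter names (e1000g0 through e1000g3) and switch names are placeholders that depend on your hardware:

```shell
# Create one virtual switch per physical adapter
primary# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary
primary# ldm add-vsw net-dev=e1000g1 primary-vsw1 primary
primary# ldm add-vsw net-dev=e1000g2 primary-vsw2 primary
primary# ldm add-vsw net-dev=e1000g3 primary-vsw3 primary
```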
Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This is due to the peer-to-peer links between all the vnets connected to the vsw.

The service domain benefits from having extra memory. Four gigabytes is the recommended minimum when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains.
If you have made any configuration changes since last saving a configuration to the SC, before you attempt to power off or power cycle a Logical Domains system, make sure you save the latest configuration that you want to keep.
Under certain circumstances, the Logical Domains (LDoms) Manager rounds up the requested memory allocation to either the next largest 8-kilobyte or 4-megabyte multiple. This can be seen in the following example output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:
Memory:
    Constraints: 1965 M
    raddr       paddr5        size
    0x1000000   0x291000000   1968M
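The rounding in this example can be reproduced with a small shell calculation; this is a sketch of the 4-megabyte case only, using the 1965-megabyte request from the output above:

```shell
# Round a requested memory size (in megabytes) up to the next 4 MB multiple
requested=1965
allocated=$(( ((requested + 3) / 4) * 4 ))
echo "${allocated}M"   # 1968M, matching the allocated size shown above
```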
Currently, Fault Management Architecture (FMA) diagnosis of I/O devices in a Logical Domains environment might not work correctly. The problems are:
Input/output (I/O) device faults diagnosed in a non-control domain are not logged on the control domain. These faults are only visible in the logical domain that owns the I/O device.
I/O device faults diagnosed in a non-control domain are not forwarded to the system controller. As a result, these faults are not logged on the SC and there are no fault actions on the SC, such as lighting of light-emitting diodes (LEDs) or updating the dynamic field-replaceable unit identifiers (DFRUIDs).
Errors associated with a root complex that is not owned by the control domain are not diagnosed properly. These errors can cause faults to be generated against the diagnosis engine (DE) itself.
With domaining enabled, variable updates persist across a reboot, but not across a power cycle, unless the variable updates are either initiated from OpenBoot firmware on the control domain, or followed by saving the configuration to the SC.
In this context, it is important to note that a reboot of the control domain could initiate a power cycle of the system:
When the control domain reboots, if there are no bound guest domains, and no delayed reconfiguration in progress, the SC power cycles the system.
When the control domain reboots, if there are guest domains bound or active (or the control domain is in the middle of a delayed reconfiguration), the SC does not power cycle the system.
LDom variables for a domain can be specified using any of the following methods:
Modifying, in a limited fashion, from the system controller (SC) using the bootmode command; that is, only certain variables, and only when in the factory-default configuration.
The goal is that variable updates made using any of these methods always persist across reboots of the domain and are always reflected in any subsequent logical domain configurations saved to the SC.
In Logical Domains 1.1 software, there are a few cases where variable updates do not persist as expected:
With domaining enabled (the default in all cases except the UltraSPARC T1000 and T2000 systems running in factory-default configuration), all methods of updating a variable (OpenBoot firmware, eeprom command, ldm subcommand) persist across reboots of that domain, but not across a power cycle of the system, unless a subsequent logical domain configuration is saved to the SC. In addition, in the control domain, updates made using OpenBoot firmware persist across a power cycle of the system; that is, even without subsequently saving a new logical domain configuration to the SC.
When domaining is not enabled, variable updates specified through the eeprom(1M) command persist across a reboot of the primary domain into the same factory-default configuration, but do not persist into a configuration saved to the SC. Conversely, in this scenario, variable updates specified using the Logical Domains Manager do not persist across reboots, but are reflected in a configuration saved to the SC.
So, when domaining is not enabled, if you want a variable update to persist across a reboot into the same factory-default configuration, use the eeprom command. If you want it saved as part of a new logical domains configuration saved to the SC, use the appropriate Logical Domains Manager command.
In all cases, when reverting to the factory-default configuration from a configuration generated by the Logical Domains Manager, all LDoms variables start with their default values.
The following Bug IDs have been filed to resolve these issues: 6520041, 6540368, 6540937, and 6590259.
Sun Simple Management Network Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.
Do not use the CPU power management feature in Integrated Lights-Out Management (ILOM) if your domains are to have cryptographic units bound.
The sysfwdownload utility takes significantly longer to run from within a Logical Domains environment on systems based on UltraSPARC T1 processors. This happens if you use the sysfwdownload utility while the LDoms software is enabled.
Workaround: Boot to the factory-default configuration with the LDoms software disabled before using the utility.
Using CPU dynamic reconfiguration (DR) to power down virtual CPUs does not work with processor sets, resource pools, or the zone's dedicated CPU feature. CPU DR does work for systems or zones using CPU shares or CPU caps.
When using CPU power management in elastic mode, the Solaris OS guest sees only the CPUs that are allocated to the domains that are powered on. That means output from the psrinfo(1M) command changes dynamically depending on the number of CPUs currently power-managed. This causes an issue with processor sets and pools, which require actual CPU IDs to be static to allow allocation to their sets. It can also impact the zone's dedicated CPU feature.
Workaround: Set the performance mode for the power management policy.
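One way to apply this workaround is through the ILOM command-line interface; the following is a sketch assuming ILOM 3.0 and its /SP/powermgmt property path:

```shell
-> set /SP/powermgmt policy=Performance
```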
This section summarizes the bugs that you might encounter when using this version of the software. The bug descriptions are in numerical order by bug ID. If a workaround and a recovery procedure are available, they are specified.
Bug ID 6431107: When the Fault Management Architecture (FMA) places a CPU offline, it records that information, so that when the machine is rebooted the CPU remains offline. The offline designation persists in a non-Logical Domains environment.

However, in a Logical Domains environment, this persistence is not always maintained for CPUs in guest domains. The Logical Domains Manager does not currently record data on fault events sent to it. This means that a CPU in a guest domain that has been marked as faulty, or one that was not allocated to a logical domain at the time the fault event is replayed, can subsequently be allocated to another logical domain with the result that it is put back online.
Bug ID 6447740: The Logical Domains Manager does not validate disk paths and network devices.
If a disk device listed in a guest domain’s configuration is either non-existent or otherwise unusable, the disk cannot be used by the virtual disk server (vds), but the Logical Domains Manager does not emit any warning or error when the domain is bound or started.
When the guest tries to boot, messages similar to the following are printed on the guest’s console:
WARNING: /virtual-devices@100/channel-devices@200/disk@0: Timeout connecting to virtual disk server... retrying
In addition, if a network interface specified using the net-dev= parameter does not exist or is otherwise unusable, the virtual switch is unable to communicate outside the physical machine, but the Logical Domains Manager does not emit any warning or error when the domain is bound or started.
If a disk device listed in a guest domain’s configuration is being used by software other than the Logical Domains Manager (for example, if it is mounted in the service domain), the disk cannot be used by the virtual disk server (vds), but the Logical Domains Manager does not emit a warning that it is in use when the domain is bound or started.
When the guest domain tries to boot, a message similar to the following is printed on the guest’s console:
WARNING: /virtual-devices@100/channel-devices@200/disk@0: Timeout connecting to virtual disk server... retrying
Bug ID 6497796: Under rare circumstances, when an ldom variable, such as boot-device, is being updated from within a guest domain by using the eeprom(1M) command at the same time that the Logical Domains Manager is being used to add or remove virtual CPUs from the same domain, the guest OS can hang.
Workaround: Ensure that these two operations are not performed simultaneously.
Recovery: Use the ldm stop-domain and ldm start-domain commands to stop and start the guest OS.
Bug ID 6506494: There are some cases where the behavior of the ldm stop-domain command is confusing.
If the Solaris OS is halted on the domain, for example by using the halt(1M) command, and the domain is at the "r)eboot, o)k prompt, h)alt?" prompt, the ldm stop-domain command fails with the following error message:
LDom <domain name> stop notification failed
Workaround: Force a stop by using the ldm stop-domain command with the -f option:

# ldm stop-domain -f ldom

If the domain is at the kernel module debugger, kmdb(1M), prompt, the ldm stop-domain command fails with the following error message:

LDom <domain name> stop notification failed

Recovery: If you restart the domain from the kmdb prompt, the stop notification is handled, and the domain does stop.
Bug ID 6510214: In a Logical Domains environment, there is no support for setting or deleting wide-area network (WAN) boot keys from within the Solaris OS using the ickey(1M) command. All ickey operations fail with the following error:
ickey: setkey: ioctl: I/O error
In addition, WAN boot keys that are set using OpenBoot firmware in logical domains other than the control domain are not remembered across reboots of the domain. In these domains, the keys set from the OpenBoot firmware are only valid for a single use.
Bug ID 6590259: This issue is summarized in Logical Domain Variable Persistence.
Bug ID 6531058: When a memory page of a guest domain is diagnosed as faulty, the Logical Domains Manager retires the page in the logical domain. If the logical domain is stopped and restarted again, the page is no longer in a retired state.
The fmadm faulty -a command shows whether the page from either the control or guest domain is faulty, but the page is not actually retired. This means the faulty page can continue to generate memory errors.
Workaround: Use the following command in the control domain to restart the fault manager daemon, fmd(1M):
primary# svcadm restart fmd |
Bug ID 6533696: On a system configured to use the Network Information Services (NIS) or NIS+ name service, if the Solaris Security Toolkit software is applied with the server-secure.driver, NIS or NIS+ fails to contact external servers. A symptom of this problem is that the ypwhich(1) command, which returns the name of the NIS or NIS+ server or map master, fails with a message similar to the following:
Domain atlas some.atlas.name.com not bound on nis-server-1.c |
The recommended Solaris Security Toolkit driver to use with the Logical Domains Manager is ldm_control-secure.driver, and NIS and NIS+ work with this recommended driver.
If you are using NIS as your name service, you cannot use the Solaris Security Toolkit profile server-secure.driver, because you may encounter Solaris OS Bug ID 6557663, IP Filter causes panic when using ipnat.conf. However, the default Solaris Security Toolkit driver, ldm_control-secure.driver, is compatible with NIS.
Log in to the system console from the system controller, and if necessary, switch to the ALOM mode by typing:
# #. |
Power off the system by typing the following command in ALOM mode:
sc> poweroff |
Power on the system by typing the following command in ALOM mode:
sc> poweron |
Switch to the console mode:
sc> console |
Boot in single-user mode at the ok prompt:
ok boot -s |
Edit the file /etc/shadow, and change the first line of the shadow file that has the root entry to:
root::6445:::::: |
Log in to the system and do one of the following:
Undo the Solaris Security Toolkit hardening:
# /opt/SUNWjass/bin/jass-execute -ui |
Apply the recommended driver:
# /opt/SUNWjass/bin/jass-execute -a ldm_control-secure.driver |
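The shadow-file edit in the recovery steps above can also be scripted rather than done by hand. The following is a minimal sketch, assuming the standard colon-delimited shadow(4) layout; the sample password hash and the /tmp paths are hypothetical, and you should always edit a copy before moving it into place:

```shell
# Hypothetical root entry; the hash and the lastchg field (6445) are made up.
printf 'root:SOMEHASH:6445::::::\n' > /tmp/shadow.sample

# Clear the password field (the second colon-separated field) for the
# root entry only, matching the recovery step above.
sed 's/^root:[^:]*:/root::/' /tmp/shadow.sample > /tmp/shadow.fixed
cat /tmp/shadow.fixed
```

Verify the result reads root::6445:::::: before replacing the real /etc/shadow.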
Bug ID 6486234: The virtual networking infrastructure adds additional overhead to communications from a logical domain. All packets are sent through a virtual network device, which, in turn, passes the packets to the virtual switch. The virtual switch then sends the packets out through the physical device. The lower performance is seen due to the inherent overheads of the stack.
Workaround: Do one of the following depending on your server:
On Sun UltraSPARC T1-based servers, such as the Sun Fire T1000 and T2000, and Sun UltraSPARC T2+ based servers such as the Sun SPARC Enterprise T5140 and T5240, assign a physical network card to the logical domain using a split-PCI configuration. For more information, refer to “Configuring Split PCI Express Bus to Use Multiple Logical Domains” in the Logical Domains (LDoms) 1.1 Administration Guide.
On Sun UltraSPARC T2-based servers, such as the Sun SPARC Enterprise T5120 and T5220 servers, assign a Network Interface Unit (NIU) to the logical domain.
Bug ID 6590259: If the time or date on a logical domain is modified, for example using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host.
Workaround: For time changes to persist, save the configuration with the time change to the SC and boot from that configuration.
Bug ID 6538932: This fix was requested to help prevent ds_pri hangs caused by other bugs. Currently, no outstanding bugs are known to cause ds_pri hangs.
Workaround: If a logical domain should hang, stop and restart the affected domain.
Bug ID 6540368: This issue is summarized in Logical Domain Variable Persistence and affects only the control domain.
Bug ID 6542295: During operations in a split-PCI configuration, if a bus is unassigned to a domain or is assigned to a domain but not running the Solaris OS, any error in that bus or any other bus might not get logged. Consider the following example:
In a split-PCI configuration, Bus A is not assigned to any domain, and Bus B is assigned to the primary domain. In this case, any error that occurs on Bus B might not be logged. (The situation occurs only during a short time period.) The problem resolves when the unassigned Bus A is assigned to a domain and is running the Solaris OS, but by then some error messages might be lost.
Workaround: When using a split-PCI configuration, quickly verify that all buses are assigned to domains and are running the Solaris OS.
Bug ID 6544004: The following message appears at the ok prompt if an attempt is made to boot a guest domain that contains an Emulex-based Fibre Channel host adapter (Sun Part Number 375-3397):
ok> FATAL:system is not bootable, boot command is disabled |
Workaround: Do not use this adapter in a split-PCI configuration on Sun Fire T1000 servers.
Bug ID 6549382: If SunVTS™ is started and stopped multiple times, it is possible that switching from the SC console to the host console, using the console SC command, can result in either of the following messages being repeatedly emitted on the console:
Enter #. to return to ALOM. |
Warning: Console connection forced into read-only mode |
Bug ID 6589660: Virtual disk timeouts do not work if either the guest or control domain using the disk is halted; for example, if the domain is taken into the kernel debugger (kmdb) or taken into the OpenBoot PROM with the send break.
Bug ID 6591844: If a CPU or memory fault occurs, the affected domain might panic and reboot. If the Fault Management Architecture (FMA) attempts to retire the faulted component while the domain is rebooting, the Logical Domains Manager is not able to communicate with the domain, and the retire fails. In this case, the fmadm faulty command lists the resource as degraded.
Recovery: Wait for the domain to complete rebooting, and then force FMA to replay the fault event by restarting the fault manager daemon (fmd) on the control domain by using this command:
primary# svcadm restart fmd |
Bug ID 6603974: If you configure more than four virtual networks (vnets) in a guest domain on the same network using the Dynamic Host Configuration Protocol (DHCP), the guest domain can eventually become unresponsive while running network traffic.
Workaround: Avoid such configurations.
Recovery: Issue an ldm stop-domain ldom command followed by an ldm start-domain ldom command on the guest domain (ldom) in question.
Bug ID 6604253: If you run the Solaris 10 11/06 OS and you harden drivers on the primary domain that is configured with only one strand, rebooting the primary domain or restarting the fault manager daemon (fmd) can result in an fmd core dump. The fmd dumps core while it cleans up its resources, and this does not affect the FMA diagnosis.
Workaround: Add a few more strands into the primary domain. For example,
# ldm add-vcpu 3 primary |
Bug ID 6624950: WAN booting a logical domain using a miniroot created from a Solaris 10 8/07 OS installation DVD hangs during a boot of the miniroot.
Bug ID 6629230: The scadm command on a control domain running the Solaris 10 11/06 OS or later can hang following an SC reset. The system is unable to properly reestablish a connection following an SC reset.
Recovery: Reboot the host to reestablish connection with the SC.
Bug ID 6631043: This bug has not been seen on the Solaris OS. It has been seen on the virtual blade system controller (VBSC), which is running parallel code. It could cause the logical domain to hang.
Bug ID 6646690: If virtual devices are added to an active domain, and virtual devices are removed from that domain before that domain reboots, then the added devices do not function once the domain reboots.
Workaround: On an active domain, do not add and remove any virtual devices without an intervening reboot of the domain.
Recovery: Remove and then add the non-functional virtual devices, making sure that all the remove requests precede all the add requests, and then reboot the domain.
Bug ID 6656033: Simultaneous net installation of multiple guest domains fails on Sun SPARC Enterprise T5140 and Sun SPARC Enterprise T5240 systems that have a common console group.
Workaround: Only net-install on guest domains that each have their own console group. This failure is seen only on domains with a common console group shared among multiple net-installing domains.
Bug ID 6678891: Occasionally, a service domain panics during reboot if the virtual switch is configured to use an aggregated network device for external connectivity.
Workaround: Configure the virtual switch to use a regular physical network device instead of an aggregated network device.
Recovery: Reconfigure the virtual switch to a physical network device using the ldm set-vsw command, and then restart the domain.
Bug ID 6694939: In certain cases, the prtdiag(1M) command does not list all the CPUs.
Workaround: For an accurate count of CPUs, use the psrinfo(1M) command.
Bug ID 6687634: If the Sun Volume Manager (SVM) volume is built on top of a disk slice containing block 0 of the disk, then SVM prevents writing to block 0 of the volume to avoid overwriting the label of the disk.
If an SVM volume built on top of a disk slice containing block 0 of the disk is exported as a full virtual disk, then a guest domain is unable to write a disk label for that virtual disk, and this prevents the Solaris OS from being installed on such a disk.
Workaround: SVM volumes exported as a virtual disk should not be built on top of a disk slice containing block 0 of the disk.
A more generic guideline is that slices which start on the first block (block 0) of a physical disk should not be exported (either directly or indirectly) as a virtual disk. Refer to “Directly or Indirectly Exporting a Disk Slice” in the Logical Domains (LDoms) 1.1 Administration Guide.
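One way to follow that guideline is to inspect the partition table before exporting a slice, for example with prtvtoc(1M) on the service domain. The sketch below runs the check over hypothetical sample output; the column layout assumed here (partition, tag, flags, first sector, sector count, last sector) and the sample values are illustrations, not output captured from a real system:

```shell
# Hypothetical prtvtoc(1M) partition lines for one disk.
sample='
       0      2    00          0  62901360  62901359
       1      3    01   62901360   8389656  71291015
'

# Flag any slice whose first sector is 0: such a slice contains the disk
# label and should not be exported (directly or indirectly) as a virtual disk.
echo "$sample" | awk 'NF >= 6 && $4 == 0 { print "slice " $1 " starts at block 0" }'
```

On a live system you would pipe the real prtvtoc output through the same awk filter instead of the sample text.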
Bug ID 6697096: If you follow an ldm rm-io command with an ldm set-vcpu command, the Logical Domains Manager can, in certain circumstances, dump core and exit.
Workaround: For this specific example, reboot the domain after the rm-io subcommand and before the set-vcpu subcommand. In general, do not do a network interface operation while you are changing a configuration using CPU DR.
Recovery: The Service Management Facility (SMF) automatically restarts the Logical Domains Manager daemon (ldmd).
Bug ID 6703958: Under rare circumstances, running CPU dynamic reconfiguration (DR) operations in parallel with network interface–related operations, such as plumb or unplumb, can result in a deadlock.
Workaround: Avoiding network interface–related operations can minimize this risk.
Bug ID 6705823: Attempting a net boot of Solaris 10 8/07 OS on any guest domain serviced by a service domain running Solaris 10 5/08 OS can result in a hang on the guest domain during the installation.
Workaround: Patch the miniroot of the Solaris 10 8/07 OS net install image with Patch ID 127111-05 to fix this issue.
Bug ID 6713547: Cryptographic dynamic reconfiguration (DR) changes are incompatible with firmware from releases prior to the LDoms 1.1 software. This problem prevents UltraSPARC T1-based systems running old firmware from using cryptographic hardware.
Bug ID 6723511: The ZFS pool label does not indicate that a pool is closed cleanly. The following panic can occur if a disk image has been cloned from a guest domain with a different host ID.
misc/forthdebug (173689 bytes) loaded
WARNING: pool 'zfsroot' could not be loaded as it was last accessed by another system (host: hostid: 0x84a156d0). See: http://www.sun.com/msg/ZFS-8000-EY
NOTICE: spa_import_rootpool: error 9
Cannot mount root on /pci@400/pci@0/pci@1/scsi@0/disk@0,0:a fstype zfs
panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root
000000000180b940 genunix:vfs_mountroot+34c (800, 200, 0, 18c6400, 18f8000, 1921800)
  %l0-3: 0000000001132400 0000000001132448 00000000018ce9b8 00000000012ae400
  %l4-7: 00000000012ae400 0000000001923c00 0000000000000600 0000000000000200
000000000180ba00 genunix:main+120 (182f400, 191e400, 1870040, 1920c00, 180e000, 191a400)
  %l0-3: 0000000001343800 000000000180bad0 0000000000004000 0000000001343800
  %l4-7: 0000000000000000 000000000182f400 000000000182f628 0000000000000000 |
Workarounds: Use one of the following procedures:
Boot into failsafe mode, which imports that pool with the correct host ID. Any subsequent reboots will work correctly.
Boot the guest logical domain using a DVD, execute a zpool import -f command to change the ownership on the ZFS root pool (rpool) to the correct host ID, then reboot, and use the rpool.
Boot from the net install image (the miniroot), use the zpool import -f command to import the pool, and then immediately export the pool. Then reboot.
Bug ID 6736962: CPU power management (PM) sometimes fails to retrieve the PM policy from the service processor (SP) when the Logical Domains Manager starts after the control domain boots.
If CPU power management could not retrieve the PM policy from the SP, then it allows the Logical Domains Manager to start as expected, but it does log the following error to the LDoms log and remains in performance mode:
Unable to get the initial PM Policy - timeout |
Bug ID 6742805: A domain shutdown or memory scrub can take over 15 minutes with a single CPU and a very large memory configuration. During a shutdown, the CPUs in a domain are used to scrub all the memory owned by the domain. The scrub can take quite a long time if a configuration is imbalanced; for example, a single-CPU domain with 512 GB of memory. This prolonged scrub time extends the amount of time it takes to shut down a domain.
Workaround: Ensure that large memory configurations (>100GB) have at least one core. This results in a much faster shutdown time.
Bug ID 6743338: Under rare circumstances, dynamically removing a virtual network interface from a domain can cause this domain to panic.
Workaround: Do not dynamically remove a virtual network interface just after it has been dynamically added to a domain, or just after the domain has booted.
Bug ID 6746533: When the port VLAN ID (pvid) is set and hybrid I/O is enabled for a virtual network, the packets received and transmitted by the hybrid I/O resource to the outside network might not be tagged. Similarly, the received packets from the hybrid I/O resource might not be untagged before being sent up the stack.
Workaround: Do not enable hybrid I/O for a virtual network that has the pvid set.
Bug ID 6747730: A Solaris 10 10/08 OS installation hangs with a ZFS boot on Sun SPARC Enterprise T5220 servers with 1 GB of memory.
Workaround: Perform the installation on a single disk, then establish the ZFS root mirror after rebooting.
Bug ID 6749619: Do not switch to performance mode in CPU power management unless all domains are up and running the Solaris OS. Otherwise, all the CPUs in guest domains might not be powered up and dynamically reconfigured.
Workaround: Before you switch to performance mode, check both domain and CPU power status by entering an ldm list command.
Recovery: If you are in a state where a guest domain has resources that did not power up while the system was in performance mode, toggle the policy to elastic mode and back to performance mode.
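The ldm list check suggested in the workaround above can be scripted against the parseable listing. This is a sketch only: the sample lines and the name=/state= field layout are assumptions modeled on ldm list -p output, not captured from a live system:

```shell
# Hypothetical "ldm list -p" output for a two-domain system.
sample='DOMAIN|name=primary|state=active|flags=-n-cv-|ncpu=8
DOMAIN|name=ldg1|state=bound|flags=------|ncpu=4'

# Report any domain that is not active; every domain should be up and
# running the Solaris OS before switching the PM policy to performance.
echo "$sample" | awk -F'|' '
$1 == "DOMAIN" {
    name = ""; state = ""
    for (i = 2; i <= NF; i++) {
        if ($i ~ /^name=/)  name  = substr($i, 6)
        if ($i ~ /^state=/) state = substr($i, 7)
    }
    if (state != "active") print name " is " state
}'
```

On a live control domain you would feed the filter from ldm list -p instead of the sample text; an empty result means all domains are active.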
Bug ID 6753219: After adding virtual switches to the primary domain and rebooting, the primary domain hangs when installed with an Elara Copper card.
Workaround: Add this line to the /etc/system file on the service domain and reboot:
set vsw:vsw_setup_switching_boot_delay=300000000 |
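A small guard keeps that /etc/system edit idempotent, so repeated runs do not append duplicate tuning lines. The sketch below operates on a scratch file; /tmp/system.sample stands in for the real /etc/system:

```shell
f=/tmp/system.sample
line='set vsw:vsw_setup_switching_boot_delay=300000000'
: > "$f"    # start from an empty scratch file for the demonstration

# Append the tuning line only if it is not already present; running the
# guard twice shows that the line is not duplicated.
grep -q '^set vsw:vsw_setup_switching_boot_delay=' "$f" || echo "$line" >> "$f"
grep -q '^set vsw:vsw_setup_switching_boot_delay=' "$f" || echo "$line" >> "$f"

cat "$f"    # the line appears exactly once
```

After applying the same edit to the real /etc/system on the service domain, reboot for it to take effect.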
Bug ID 6753683: Sometimes, executing the uadmin 1 0 command from the command line of an LDoms system does not leave the system at the ok prompt after the subsequent reset. This incorrect behavior is seen only when the LDoms variable auto-reboot? is set to true. If auto-reboot? is set to false, the expected behavior occurs.
Workaround: Use this command instead:
uadmin 2 0 |
Bug ID 6756939: If a guest domain has two virtual networks enabled with hybrid I/O, and another guest domain configured as the service domain is stopped, an unrecoverable hardware error occurs.
Bug ID 6760933: On occasion, an active logical domain appears to be in the transition state instead of the normal state long after it is booted or following the completion of a domain migration. This glitch is harmless, and the domain is fully operational. To see what flag is set, check the flags field in the ldm list -l -p command output or the FLAGS field in the ldm list command, which shows -n---- for normal or -t---- for transition.
Recovery: The logical domain should display the correct state upon the next reboot.
Bug ID 6764613: If you do not have a network configured on your machine and have a Network Information Services (NIS) client running, the Logical Domains Manager will not start on your system.
Workaround: Disable the NIS client on your non-networked machine:
# svcadm disable nis/client |
Bug ID 6765355: Under rare conditions, when a new virtual network (vnet) is added to a logical domain, it fails to establish a connection with the virtual switch. This results in loss of network connectivity to and from the logical domain. If you encounter this error, you can see that the /dev/vnetN symbolic link for the virtual network instance is missing. If present, and not in error, the link points to a corresponding /devices entry as follows:
/dev/vnetN -> ../devices/virtual-devices@100/channel-devices@200/network@N:vnetN |
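A quick way to confirm the condition is to test whether the /dev/vnetN link resolves. The sketch below reproduces the expected layout in a scratch directory so the check can be exercised safely; the instance number 0 and the scratch paths are assumptions for illustration:

```shell
root=$(mktemp -d)

# Recreate the /devices entry and the /dev/vnet0 symbolic link shown above.
mkdir -p "$root/devices/virtual-devices@100/channel-devices@200" "$root/dev"
touch "$root/devices/virtual-devices@100/channel-devices@200/network@0:vnet0"
ln -s ../devices/virtual-devices@100/channel-devices@200/network@0:vnet0 \
    "$root/dev/vnet0"

# "-e" follows the link: a missing or dangling /dev/vnetN entry is the
# symptom of a vnet that failed to connect to the virtual switch.
if [ -e "$root/dev/vnet0" ]; then
    echo "vnet0 link OK"
else
    echo "vnet0 link missing or dangling"
fi
```

On an affected guest domain, the same [ -e /dev/vnetN ] test against the real path reports the missing link.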
Bug ID 6766202: If a guest domain with only one CPU is at the kernel module debugger, kmdb(1M), prompt, and that domain is migrated to another system, the guest domain panics when it is resumed on the target system.
Workaround: Before migrating a guest domain, exit the kmdb shell, and resume the execution of the OS by typing ::cont. Then migrate the guest domain. After the migration is completed, re-enter kmdb with the command mdb -K.
Bug ID 6769808: If a service domain running Solaris 10 5/08 OS, or earlier, is exporting a ZFS volume as a single-slice disk to a guest domain running Solaris 10 10/08 OS, then this guest domain is unable to use that virtual disk. Any access to the virtual disk fails with an I/O error.
Workaround: Upgrade the service domain to Solaris 10 10/08 OS.
Bug ID 6772089: In certain situations, a migration fails and the Logical Domains Manager reports that it was not possible to bind the memory needed for the source domain. This can occur even if the total amount of available memory on the target machine is greater than the amount of memory being used by the source domain.
This failure occurs because migrating the specific memory ranges in use by the source domain requires that compatible memory ranges are available on the target as well. When no such compatible memory range is found for any memory range in the source, the migration cannot proceed.
Recovery: If this condition is encountered, you might be able to migrate the domain if you modify the memory usage on the target machine. To do this, unbind any bound or active logical domain on the target.
Bug ID 6772120: If the virtual disk on the target machine does not point to the same disk backend that is used on the source machine, the migrated domain cannot access the virtual disk using that disk backend. A hang can result when accessing the virtual disk on the domain.
Currently, the Logical Domains Manager checks only that the virtual disk volume names match on the source and target machines. In this scenario, no error message is displayed if the disk backends do not match.
Workaround: Ensure that when you are configuring the target domain to receive a migrated domain that the disk volume (vdsdev) matches the disk backend used on the source domain.
Recovery: Do one of the following if you discover that the virtual disk device on the target machine points to the incorrect disk backend:
Bug ID 6772485: The Logical Domains Manager daemon (ldmd) might fail to start after performing a node reconfiguration on a SPARC Enterprise T5440 server.
Workaround: If CMP0 fails, move another Chip-Multithreaded Processor (CMP) module to CMP Slot 0, and reconfigure the system to operate in a degraded state. CMP Slot 0 must always be occupied by a working CMP module.
Bug ID 6773569: After switching from one configuration to another (using the ldm set-config command followed by a power cycle), domains defined in the previous configuration might still be present in the current configuration, in the inactive state.
This is a result of the Logical Domains Manager’s constraint database not being kept in sync with the change in configuration. These inactive domains do not affect the running configuration and can be safely destroyed.
Bug ID 6773867: If the incoming_migration_enabled SMF property of the Logical Domains Manager daemon (ldmd) on a machine is set to false (by default it is true), and a user attempts to migrate a domain to the machine, the following cryptic message is printed by the Logical Domains Manager on the machine initiating the migration.
# ldm migrate-domain ldg1 target-machine
Target Password:
SSL ACCEPT FAILED
ssl_err = 5 error:00000000:lib(0):func(0):reason(0)
Failed to connect to LDom manager on target-machine
Domain Migration of LDom ldg1 failed |
Recovery: Set the incoming_migration_enabled property of the svc:/ldoms/ldmd:default SMF service back to true using the svccfg(1M) command.
Bug IDs 6773930 and 6779134: You might experience this problem if the system’s CPU power management (PM) policy is set to elastic, there is more than one faulty CPU, and one of the faulty CPUs is cpu0. You can see this by using the psrinfo or fmadm faulty commands.
Workaround: Switch the power management policy on the service processor (SP) to performance, and then back to elastic.
-> set SP/powermgmt policy=performance
-> set SP/powermgmt policy=elastic |
Bug ID 6775847: There is a period of time during which a domain being migrated onto a system can end up with just one virtual CPU, or can hang, if another domain on the target system is rebooted during the migration. The start-domain and stop-domain operations of the ldm(1M) command are currently prevented during a migration, but issuing a reboot or init command in the Solaris OS instance running on a guest domain cannot be prevented.
Workaround: Avoid rebooting domains while a migration is in progress onto a machine.
Recovery: Stop and restart the migrated domain on the target system if you detect the symptoms of this issue.
Bug ID 6777756: A panic occurs when more than two Hybrid I/O–capable virtual networks are activated in a guest domain.
Recovery: Remove all entries in /etc/path_to_inst of the guest domain that are similar to the following, and reboot:
"/niu@80/network@bad1" 2 "nxge" "/niu@80/network@bad1" 3 "nxge" "/niu@80/network@bad1" 4 "nxge" "/niu@80/network@bad1" 5 "nxge" |
Only entries that have function number 0 or 1, such as the following, are known not to create this issue:
"/niu@80/network@bad1" 0 "nxge"
"/niu@80/network@bad5" 1 "nxge" |
Bug ID 6779482: If a domain being migrated has a virtual network (vnet) with a MAC address that matches a MAC address on the target, the migration fails appropriately, but leaves a residual inactive domain of the same name and configuration on the target.
Workaround: On the target, use the ldm destroy command to remove the inactive domain manually, fix the MAC address so that it is unique, and try the migration again.
Bug ID 6781589: During a migration, any explicitly assigned console group and port are ignored, and a console with default properties is created for the target domain. This console is created using the target domain name as the console group and using any available port on the first virtual console concentrator (vcc) device in the control domain. If there is a conflict with the default group name, the migration fails.
Recovery: To restore the explicit console properties following a migration, unbind the target domain, and manually set the desired properties using the ldm set-vcons subcommand.
Bug ID 6783450: The domain migration dry run option (-n) does not ensure there is enough memory free on the target system to bind the domain specified. If all other criteria are met, the command will return without an error but will correctly return an error when the migration is actually attempted.
Workaround: Run the ldm list-devices mem command on the target machine to verify that there is enough memory available for the domain to be migrated.
Bug ID 6784943: After the control domain has been rebooted, the first migration attempt might fail with the following error message:
Failed to send 'migrate' command to ldmd(1m) |
This occurs if the Logical Domains Manager is started at boot time before networking is initialized. This includes scenarios where the control domain is booted without a plumbed network device and where a network device is set up after the boot has completed.
Workaround: Restart the Logical Domains Manager once networking is active in the control domain.
# svcadm restart ldmd |
Recovery: Once this problem occurs, the Logical Domains Manager is restarted automatically, clearing the error condition. Future attempts to perform migration should be successful.
Bug ID 6784945: On a Sun SPARC Enterprise T5440 system, the pseudonyms (shortcut names) for the PCI busses are not correct.
Workaround: To configure PCI busses on a Sun SPARC Enterprise T5440 system, you must use the pci@xxxx form of the bus name, as listed under the DEVICE column of any of the following list commands:
Bug ID 6787057: On a guest domain with two or more virtual network devices (vnets) using multiple virtual switches (vsws), if an in-progress migration is cancelled, the domain being migrated might reboot instead of resuming operation on the source machine with the OS running. This issue does not occur if all the vnets are connected to a single vsw.
Workaround: If you are migrating a domain with two or more virtual networks using multiple virtual switches, do not cancel the domain migration (either using ^C or the ldm cancel-operation command) once the operation starts. If a domain is inadvertently migrated, it can be migrated back to the source machine once the original migration is completed.
Bug ID 6788178: After a domain has successfully migrated from a source system to a target system, it is possible that a newly created domain on the source system could be allocated the same MAC address as the domain that was successfully migrated. If the source and target systems are on the same subnet, this can result in the new domain being unable to communicate on the network. In this case, the Solaris OS might generate errors stating that another machine on the network has the same IP address.
Workaround: If this issue occurs, operation can be restored by changing the MAC address of virtual network interfaces having problems.
Change the MAC Address of an Affected Virtual Network Interface |
This section contains documentation errors that have been found too late to resolve for the LDoms 1.1 release.
Bug ID 6703127: Virtual input/output (VIO) dynamic reconfiguration (DR) operations ignore the force (-f) option in CLI commands.
Bug ID 6774570: The ldm man page and the Logical Domains Manager (LDoms) 1.1 Man Page Guide erroneously state that you can still use the ldm rm-reconf command as an alias for the new ldm cancel-operation reconf command.
Workaround: Use the ldm cancel-operation reconf command to cancel a delayed reconfiguration operation.
The revised portion, which is not in the Solaris 10 10/08 Reference Manual Collection, now reads:
vntsd/listen_addr
Set the IP address to which vntsd listens, using the following syntax:
vntsd/listen_addr:"xxx.xxx.xxx.xxx" |
where xxx.xxx.xxx.xxx is a valid IP address. The default value of this property is to listen on IP address 127.0.0.1. Users can connect to a guest console over a network if the value is set to the IP address of the control domain.
Note - Enabling network access to a console has security implications; any user can connect to a console, and for this reason it is disabled by default.
In the ldm(1M) man page in the section on using the add-vsw subcommand, the definitions of default-vlan-id, pvid, and vid should say:
default-vlan-id=vlan-id specifies the default virtual local area network (VLAN) to which a virtual switch and its associated virtual network devices belong to implicitly, in untagged mode. It serves as the default port VLAN id (pvid) of the virtual switch and virtual network devices. Without this option, the default value of this property is 1. Normally, you would not need to use this option. It is provided only as a way to change the default value of 1.
pvid=port-vlan-id specifies the VLAN to which the virtual switch needs to be a member, in untagged mode. This applies to the set-vsw subcommand also.
vid=vlan-id specifies one or more VLANs to which a virtual switch needs to be a member, in tagged mode. This applies to the set-vsw subcommand also.
In the ldm(1M) man page in the sections on using the add-vnet and set-vnet subcommands, the definitions of pvid and vid should say:
The list subcommand has new output (-o format) options to limit the output format. The status output option was omitted from output options available in the ldm(1M) man page. This option is used to check the status of a migrating domain.
This section contains bugs that have been fixed since the LDoms 1.0.3 software release.
The following LDoms requests for enhancements (RFEs) and bugs were fixed for the Solaris 10 10/08 OS release:
The following LDoms 1.1 RFEs and bugs were fixed for the LDoms 1.1 software release:
Copyright © 2008, Sun Microsystems, Inc. All rights reserved.