Oracle® VM Server for SPARC 3.1.1.2, 3.1.1, and 3.1 Release Notes
This section describes general known issues about this release of the Oracle VM Server for SPARC software that are broader than a specific bug number. Workarounds are provided where available.
Do not create a circular dependency between two domains in which each domain provides services to the other. Such a configuration creates a single point of failure condition where an outage in one domain causes the other domain to become unavailable. Circular dependency configurations also prevent you from unbinding the domains after they have been bound initially.
The Logical Domains Manager does not prevent the creation of circular domain dependencies.
If the domains cannot be unbound due to a circular dependency, remove the devices that cause the dependency and then attempt to unbind the domains.
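As a sketch (the domain and device names below are hypothetical), if a virtual disk served by one domain to the other is what creates the dependency, remove it and retry the unbind:

```shell
# ldm rm-vdisk vdisk1 ldomA    # remove the virtual disk that creates the dependency
# ldm unbind ldomA
# ldm unbind ldomB
```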
A guest domain that has been assigned more than 1024 CPUs cannot run the Oracle Solaris 10 OS. In addition, you cannot use CPU DR to shrink the number of CPUs below 1024 to run the Oracle Solaris 10 OS.
To work around this problem, unbind the guest domain, remove CPUs until you have no more than 1024 CPUs, and then rebind the guest domain. You can then run the Oracle Solaris 10 OS on this guest domain.
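As a sketch, assuming a guest domain named ldom1 that was assigned more than 1024 CPUs, the workaround might look like this:

```shell
# ldm stop ldom1
# ldm unbind ldom1
# ldm set-vcpu 1024 ldom1    # reduce the CPU count to 1024 or fewer
# ldm bind ldom1
# ldm start ldom1
```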
If the control domain is upgraded from an Oracle Solaris 10 OS version older than Oracle Solaris 10 5/08 OS (or without patch 127127-11), and if volume manager volumes were exported as virtual disks, the virtual disk back ends must be re-exported with options=slice after the Logical Domains Manager has been upgraded. See Exporting Volumes and Backward Compatibility in Oracle VM Server for SPARC 3.1 Administration Guide .
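As a hedged sketch (the volume name, service name, and back-end path are assumptions, not values from this document), re-exporting a volume manager back end with options=slice might look like this:

```shell
# ldm rm-vdsdev vol1@primary-vds0
# ldm add-vdsdev options=slice /dev/md/dsk/d0 vol1@primary-vds0
```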
For discussions in Oracle VM Server for SPARC documentation, the terms service processor (SP) and system controller (SC) are interchangeable.
If a service domain is running a version of Oracle Solaris 10 OS prior to Oracle Solaris 10 1/13 OS and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Oracle Solaris 10 1/13 OS, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.
This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, Solaris Volume Manager might be unable to find its configuration or to access its metadevices.
Workaround: After upgrading a service domain to Oracle Solaris 10 1/13 OS, if a guest domain is unable to find its Solaris Volume Manager configuration or its metadevices, perform the following procedure.
After the domain has booted, the Solaris Volume Manager configuration and metadevices should be available.
During the reboot, you will see messages similar to this:
NOTICE: mddb: unable to get devid for 'vdc', 0x10
These messages are normal and do not report any problems.
The Oracle VM Server for SPARC software does not impose a memory size limitation when you create a domain. The memory size requirement is a characteristic of the guest operating system. Some Oracle VM Server for SPARC functionality might not work if the amount of memory present is smaller than the recommended size. For recommended and minimum memory requirements for the Oracle Solaris 10 OS, see System Requirements and Recommendations in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade . For recommended and minimum memory requirements for the Oracle Solaris 11 OS, see Oracle Solaris 11 Release Notes and Oracle Solaris 11.1 Release Notes .
The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you have a domain smaller than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 Mbytes. The minimum size restriction for a Fujitsu M10 system is 256 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.
The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the address and size of the memory involved in a given operation. See Memory Alignment in Oracle VM Server for SPARC 3.1 Administration Guide .
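For example, memory DR requests whose sizes are multiples of 256 Mbytes satisfy the size alignment requirement (the domain name here is hypothetical):

```shell
# ldm add-mem 512M ldom1    # 512 Mbytes is a multiple of 256 Mbytes
# ldm rm-mem 256M ldom1
```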
You can boot the following number of domains depending on your platform:
Up to 256 on Fujitsu M10 systems per physical partition
Up to 128 on SPARC M6 systems per physical domain
Up to 128 on SPARC M5 systems per physical domain
Up to 128 on SPARC T5 systems
Up to 128 on SPARC T4 servers
Up to 128 on SPARC T3 servers
Up to 128 on UltraSPARC T2 Plus servers
Up to 64 on UltraSPARC T2 servers
If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread across all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Assigning more than 32 vnet instances per vsw service could cause hard hangs in the service domain.
To run the maximum configurations, a machine needs an adequate amount of memory to support the guest domains. The amount of memory is dependent on your platform and your OS. See the documentation for your platform, Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade , Installing Oracle Solaris 11 Systems , and Installing Oracle Solaris 11.1 Systems .
Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This increase is due to the peer-to-peer links between all the vnet instances connected to the vsw. The service domain benefits from having extra memory. The recommended minimum is four Gbytes when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains. You can reduce the number of links by disabling inter-vnet links. See Inter-Vnet LDC Channels in Oracle VM Server for SPARC 3.1 Administration Guide .
If you have made any configuration changes since last saving a configuration to the SC, before you attempt to power off or power cycle an Oracle VM Server for SPARC system, make sure that you save the latest configuration that you want to keep.
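A sketch of saving the current configuration to the SC before a power cycle (the configuration name is an assumption):

```shell
# ldm add-spconfig myconfig    # save the current configuration under a chosen name
# ldm list-spconfig            # verify which configuration is marked [current]
```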
Because no other domains are bound, the firmware automatically powers off the system.
Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the domain configuration last saved or explicitly set.
Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. The following example shows sample output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:
Memory:
    Constraints: 1965 M
    raddr          paddr5          size
    0x1000000      0x291000000     1968M
Variable updates persist across a reboot but not across a power cycle unless the variable updates are either initiated from OpenBoot firmware on the control domain or followed by saving the configuration to the SC.
Note the following conditions:
When the control domain reboots, if there are no bound guest domains and no delayed reconfiguration in progress, the SC performs a power cycle of the system.
When the control domain reboots, if guest domains are bound or active (or the control domain is in the middle of a delayed reconfiguration), the SC does not perform a power cycle of the system.
Logical Domains variables for a domain can be specified using any of the following methods:
At the OpenBoot prompt.
Using the Oracle Solaris OS eeprom(1M) command.
Using the Logical Domains Manager CLI (ldm).
In a limited fashion, from the system controller (SC) using the bootmode command. This method can be used for only certain variables, and only when in the factory-default configuration.
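As a sketch, setting the same variable by the first three methods might look like the following (the domain name is hypothetical, and the SC bootmode syntax varies by service processor, so it is omitted here):

```shell
ok setenv auto-boot? false                    \ at the OpenBoot prompt
# eeprom auto-boot?=false                     # Oracle Solaris eeprom(1M) command
# ldm set-var 'auto-boot?=false' ldom1        # Logical Domains Manager CLI
```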
Variable updates that are made by using any of these methods should always persist across reboots of the domain. The variable updates also always apply to any subsequent domain configurations that were saved to the SC.
In Oracle VM Server for SPARC 3.1 software, variable updates do not persist as expected in a few cases:
All methods of updating a variable persist across reboots of that domain. However, they do not persist across a power cycle of the system unless a subsequent logical domain configuration is saved to the SC.
However, in the control domain, updates made using either OpenBoot firmware commands or the eeprom command do persist across a power cycle of the system even without subsequently saving a new logical domain configuration to the SC. The eeprom command supports this behavior on SPARC T5, SPARC M5, and SPARC M6 systems, and on SPARC T3 and SPARC T4 systems that run at least version 8.2.1 of the system firmware.
In all cases, when reverting to the factory-default configuration from a configuration generated by the Logical Domains Manager, all Logical Domains variables start with their default values.
If you are concerned about Logical Domains variable changes, do one of the following:
Bring the system to the ok prompt and update the variables.
Update the variables while the Logical Domains Manager is disabled:
# svcadm disable ldmd
update variables
# svcadm enable ldmd
When running Live Upgrade, perform the following steps:
# svcadm disable -t ldmd
# luactivate be3
# init 6
If you modify the time or date on a logical domain, for example, using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host. To ensure that time changes persist, save the configuration with the time change to the SP and boot from that configuration.
The following Bug IDs have been filed to resolve these issues: 15375997, 15387338, 15387606, and 15415199.
Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.
When the primary domain is in a delayed reconfiguration state, resources that are managed by Oracle VM Server for SPARC are power-managed only after the primary domain reboots. Resources that are managed directly by the OS, such as CPUs that are managed by the Solaris Power Aware Dispatcher, are not affected by this state.
Discrete cryptographic units are present only on UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 systems.
Cryptographic unit dynamic reconfiguration (DR) enables you to add and remove cryptographic units from a domain. The Logical Domains Manager automatically detects whether a domain allows cryptographic unit DR, and enables the functionality only for those domains. In addition, CPU DR is no longer disabled in domains that have cryptographic units bound and are running an appropriate version of the Oracle Solaris OS.
Veritas Volume Manager (VxVM) 5.x running on the Oracle Solaris 10 OS is the only supported (tested) version for the Oracle VM Server for SPARC P2V tool. Older versions of VxVM, such as 3.x and 4.x running on the Solaris 8 and Solaris 9 operating systems, might also work. In those cases, the first boot after running the ldmp2v convert command might show warning messages from the VxVM drivers. You can ignore these messages. You can remove the old VRTS* packages after the guest domain has booted.
Boot device: disk0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: normaal
Configuring devices.
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxio: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxio'
WARNING: vxio: unable to resolve dependency, module 'drv/vxdmp' not found
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
NOTICE: VxVM not started
For information about Oracle's hard partitioning requirements for software licenses, see Partitioning: Server/Hardware Partitioning.
The Oracle Solaris Installer does not present the Upgrade option when the partition tag of the slice that holds the root (/) file system is not set to root. This situation occurs if the tag is not explicitly set when labeling the guest's boot disk. You can use the format command to set the partition tag as follows:
AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-10GB cyl 282 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c4t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@2,0
       2. c4t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@3,0
Specify disk (enter its number): 0
selecting c0d0
[disk formatted, no defect list found]
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)            0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y
partition>
A block of dynamically added memory can be dynamically removed only as a whole. That is, a subset of that memory block cannot be dynamically removed.
This situation could occur if a domain with a small memory size is dynamically grown to a much larger size, as shown in the following example.
# ldm list ldom1
NAME   STATE  FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
ldom1  active -n----  5000  2     1G      0.4%  23h

# ldm add-mem 16G ldom1

# ldm rm-mem 8G ldom1
Memory removal failed because all of the memory is in use.

# ldm rm-mem 16G ldom1

# ldm list ldom1
NAME   STATE  FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
ldom1  active -n----  5000  2     1G      0.4%  23h
Workaround: Dynamically add memory in smaller amounts to reduce the probability that this condition will occur.
Recovery: Reboot the domain.
Restoring ufsdump archives on a virtual disk that is backed by a file on a UFS file system might cause the system to hang. In such a case, the ldmp2v prepare command will exit. You might encounter this problem when you manually restore ufsdump archives in preparation for the ldmp2v prepare -R /altroot command when the virtual disk is a file on a UFS file system. For compatibility with previously created ufsdump archives, you can still use the ldmp2v prepare command to restore ufsdump archives on virtual disks that are not backed by a file on a UFS file system. However, the use of ufsdump archives is not recommended.
Do not attempt to perform more than one CPU configuration operation on the primary domain while it is in a delayed reconfiguration. If you attempt more CPU configuration requests, they will be rejected.
Workaround: Perform one of the following actions:
Cancel the delayed reconfiguration, start another one, and request the configuration changes that were lost from the previous delayed reconfiguration.
Reboot the control domain with the incorrect CPU count and then make the allocation corrections after the domain reboots.
The Oracle VM Server for SPARC 3.0 software inadvertently exposed a capability to assign multiple virtual switches to a single network adapter. This capability is intended only to be used in a specific way by the Oracle VM Manager software.
The Oracle VM Server for SPARC 3.1 software restored the original behavior, which prevents you from assigning multiple virtual switches to a single network adapter. However, if you configured your Oracle VM Server for SPARC 3.0 system to assign multiple virtual switches to a single network adapter, the ldmd daemon does not start when you upgrade to Oracle VM Server for SPARC 3.1.
Workaround: Perform the following steps:
Temporarily re-enable this capability on your Oracle VM Server for SPARC 3.1 system to enable the ldmd daemon to start.
# svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=true
# svcadm refresh ldmd
# svcadm disable ldmd
# svcadm enable ldmd
Update your configuration to assign only one virtual switch to a network device.
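The following sketch (the service and device names are assumptions) lists the existing virtual switches and then moves one of them to a different network adapter, so that each adapter backs only one vsw:

```shell
# ldm list-services                         # note which vsw instances share a net-dev
# ldm set-vsw net-dev=net1 secondary-vsw0   # rebind one vsw to another adapter
```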
Disable this capability on your Oracle VM Server for SPARC 3.1 system.
# svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=false
# svcadm refresh ldmd
# svcadm disable ldmd
# svcadm enable ldmd
It is important that you set the ovm_manager property to false because this property might introduce other side effects in future Oracle VM Server for SPARC releases.
Historically, the Oracle Solaris OS has been installed on a boot disk configured with an SMI VTOC disk label. Starting with the Oracle Solaris 11.1 OS, the OS is installed on a boot disk that is configured with an extensible firmware interface (EFI) GUID partition table (GPT) disk label by default. If the firmware does not support EFI, the disk is configured with an SMI VTOC disk label instead. This situation applies only to SPARC T4 servers that run at least system firmware version 8.4.0 and to SPARC T5, SPARC M5, or SPARC M6 servers that run at least system firmware version 9.1.0.
The following servers cannot boot from a disk that has an EFI GPT disk label:
UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 servers no matter which system firmware version is used
SPARC T4 servers that run system firmware versions prior to 8.4.0
SPARC T5, SPARC M5, and SPARC M6 servers that run system firmware versions prior to 9.1.0
So, an Oracle Solaris 11.1 boot disk that is created on an up-to-date SPARC T4, SPARC T5, SPARC M5, or SPARC M6 system cannot be used on older servers or on servers that run older firmware.
This limitation restricts the ability to use either cold or live migration to move a domain from a recent server to an older server. This limitation also prevents you from using an EFI GPT boot disk image on an older server.
To determine whether an Oracle Solaris 11.1 boot disk is compatible with your server and its firmware, ensure that the Oracle Solaris 11.1 OS is installed on a disk that is configured with an SMI VTOC disk label.
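One way to inspect the label type (the device name is an assumption) is to use the format utility's verify command, which prints the contents of the current disk label:

```shell
# format c1d0
format> verify    # the printed label shows whether the disk uses an SMI VTOC or EFI label
```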
To maintain backward compatibility with systems that run older firmware, use one of the following procedures. Otherwise, the boot disk uses the EFI GPT disk label by default. These procedures show how to ensure that the Oracle Solaris 11.1 OS is installed on a boot disk with an SMI VTOC disk label on a SPARC T4 server with at least system firmware version 8.4.0 and on a SPARC T5, SPARC M5, or SPARC M6 server with at least system firmware version 9.1.0.
Solution 1: Remove the gpt property so that the firmware does not report that it supports EFI.
From the OpenBoot PROM prompt, disable automatic booting and reset the system to be installed.
ok setenv auto-boot? false
ok reset-all
After the system resets, it returns to the ok prompt.
Change to the /packages/disk-label directory and remove the gpt property.
ok cd /packages/disk-label
ok " gpt" delete-property
Begin the Oracle Solaris 11.1 OS installation.
For example, perform a network installation:
ok boot net - install
Solution 2: Use the format -e command to write an SMI VTOC label on the disk to be installed with the Oracle Solaris 11.1 OS.
Write an SMI VTOC label on the disk.
For example, select the label option and specify the SMI label:
# format -e c1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type: 0
Configure the disk with a slice 0 and slice 2 that cover the entire disk.
The disk should have no other partitions. For example:
format> partition
partition> print
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 - 14086      136.71GB    (14087/0/0) 286698624
  1 unassigned    wu       0               0          (0/0/0)             0
  2     backup    wu       0 - 14086      136.71GB    (14087/0/0) 286698624
  3 unassigned    wm       0               0          (0/0/0)             0
  4 unassigned    wm       0               0          (0/0/0)             0
  5 unassigned    wm       0               0          (0/0/0)             0
  6 unassigned    wm       0               0          (0/0/0)             0
  7 unassigned    wm       0               0          (0/0/0)             0
Re-write the SMI VTOC disk label.
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type: 0
Ready to label disk, continue? y
Configure your Oracle Solaris Automatic Installer (AI) to install the Oracle Solaris OS on slice 0 of the boot disk.
Change the <disk> excerpt in the AI manifest as follows:
<target>
  <disk whole_disk="true">
    <disk_keyword key="boot_disk"/>
    <slice name="0" in_zpool="rpool"/>
  </disk>
  [...]
</target>
Perform the installation of the Oracle Solaris 11.1 OS.