Oracle® VM Server for SPARC 3.1.1.2, 3.1.1.1, 3.1.1, and 3.1 Release Notes

Document Information

Using This Documentation

Chapter 1 Oracle VM Server for SPARC 3.1.1.2, 3.1.1.1, 3.1.1, and 3.1 Release Notes

Oracle VM Server for SPARC 3.1.1.2 Maintenance Update

Oracle VM Server for SPARC 3.1.1.1 Maintenance Update

What's New in This Release

What's New in the Oracle VM Server for SPARC 3.1.1.1 Maintenance Update

What's New in the Oracle VM Server for SPARC 3.1.1 Release

What's New in the Oracle VM Server for SPARC 3.1 Release

System Requirements

Supported Platforms

Required Software and Patches

Required Oracle Solaris OS Versions

Required Oracle Solaris OS Versions for the Oracle VM Server for SPARC 3.1.1.1 Maintenance Update

Required Oracle Solaris OS Versions for Oracle VM Server for SPARC 3.1.1

Required Oracle Solaris OS Versions for Oracle VM Server for SPARC 3.1

Required Software to Enable the Latest Oracle VM Server for SPARC Features

Required System Firmware Patches

Minimum Version of Software Required

Direct I/O Hardware and Software Requirements

PCIe SR-IOV Hardware and Software Requirements

Non-primary Root Domain Hardware and Software Requirements

Recovery Mode Hardware and Software Requirements

Location of the Oracle VM Server for SPARC Software

Location of Patches

Location of Documentation

Related Software

Software That Can Be Used With the Oracle VM Server for SPARC Software

System Controller Software That Interacts With Oracle VM Server for SPARC

Optional Software

Upgrading to the Current Oracle VM Server for SPARC Software

Upgrading to the Oracle VM Server for SPARC 3.1.1.1 Software

Upgrading to the Oracle VM Server for SPARC 3.1.1 Software

Upgrading to the Oracle VM Server for SPARC 3.1 Software

Deprecated Oracle VM Server for SPARC Features

Known Issues

General Issues

Cannot Unbind Domains When They Provide Services to Each Other

Guest Domain Cannot Run the Oracle Solaris 10 OS When More Than 1024 CPUs Are Assigned

Avoid Creating a Configuration Where Two Domains Provide Services to Each Other

Upgrading From Oracle Solaris 10 OS Older Than Oracle Solaris 10 5/08 OS

Service Processor and System Controller Are Interchangeable Terms

In Certain Conditions, a Guest Domain's Solaris Volume Manager Configuration or Metadevices Can Be Lost

How to Find a Guest Domain's Solaris Volume Manager Configuration or Metadevices

Memory Size Requirements

Booting a Large Number of Domains

Cleanly Shutting Down and Power Cycling an Oracle VM Server for SPARC System

How to Power Off a System With Multiple Active Domains

How to Power Cycle the System

Memory Size Requested Might Be Different From Memory Allocated

Logical Domains Variable Persistence

Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains

Delayed Reconfiguration

Cryptographic Units

ldmp2v convert Command: VxVM Warning Messages During Boot

Oracle Hard Partitioning Requirements for Software Licenses

Upgrade Option Not Presented When Using ldmp2v prepare -R

Sometimes a Block of Dynamically Added Memory Can be Dynamically Removed Only as a Whole

ldmp2v Command: ufsdump Archiving Method Is No Longer Used

Only One CPU Configuration Operation Is Permitted to Be Performed During a Delayed Reconfiguration

Oracle VM Server for SPARC 3.1 ldmd Daemon Does Not Start If Multiple Virtual Switches Are Assigned to a Single Network Adapter

Oracle Solaris Boot Disk Compatibility

Domain Migration Restrictions

Version Restrictions for Migration

CPU Restrictions for Migration

Version Restrictions for Cross-CPU Migration

Domains That Have Only One Virtual CPU Assigned Might Panic During a Live Migration

Oracle VM Server for SPARC MIB Issues

snmptable Command Does Not Work With the Version 2 or Version 3 Option

SR-IOV Issues

Bad Trap Panic Occurs Rarely When Rebooting an Oracle Solaris 10 Root Domain That Has SR-IOV Virtual Functions Assigned to Guest Domains

prtdiag Might Cause an Oracle Solaris 10 Root Domain to Panic After Destroying SR-IOV Virtual Functions

Control Domain Hangs When Stopping or Starting I/O Domains

Warnings Appear on Console When Creating Fibre Channel Virtual Functions

Fibre Channel Physical Function Configuration Changes Require Several Minutes to Complete

Fujitsu M10 System Has Different SR-IOV Feature Limitations

InfiniBand SR-IOV Issues

Misleading Messages Shown For InfiniBand SR-IOV Operations

Bugs Affecting the Oracle VM Server for SPARC Software

Bugs Affecting the Oracle VM Server for SPARC 3.1.1.2 Software

System Crashes When Applying the Whole-Core Constraint to a Partial Core primary Domain

format Command Hangs After Having Migrated a Guest Domain or a Guest Domain Console Does Not Take Input

Kernel Zones Block Live Migration of Guest Domains

Bugs Affecting the Oracle VM Server for SPARC 3.1.1.1 Software

Live Migration Might Fail With Unable to restore ldc resource state on target Domain Migration of LDom failed

Recovery Mode Fails With ldmd in Maintenance Mode When Virtual Switch net-dev Is Missing

Migration to a SPARC M5 or SPARC T5 System Might Panic With suspend: get stick freq failed

Logical Domains Manager Does Not Prohibit the Creation of Circular Dependencies

Bugs Affecting the Oracle VM Server for SPARC 3.1.1 Software

Very Large LDC Counts Might Result in Oracle Solaris Issues in Guest Domains

Fibre Channel Physical Function Is Faulted by FMA And Disabled

Virtual Network LDC Handshake Issues Seen When There Are a Large Number of Virtual Network Devices Present

Sun Storage 16 Gb Fibre Channel Universal HBA Firmware Does Not Support Bandwidth Controls

Adding Memory After Performing a Cross-CPU Migration Might Cause a Guest Domain Panic

Incorrect Device Path for Fibre Channel Virtual Functions in a Root Domain

ldmd Dumps Core When Attempting to Bind a Domain in Either the Binding or Unbinding State

Bugs Affecting the Oracle VM Server for SPARC 3.1 Software

Issues Might Arise When FMA Detects Faulty Memory

ldmd Service Fails to Start Because of a Delay in Creating virtual-channel@0:hvctl

Poor Affinity on the Control Domain When You Assign Memory Before You Assign CPUs in a Delayed Reconfiguration

Cannot Install the Oracle Solaris 11.1 OS Using an EFI GPT Disk Label on Single-Slice Virtual Disk

After Being Migrated, A Domain Can Panic on Boot After Being Started or Rebooted

Size of Preallocated Machine Description Buffer Is Used During Migration

Attempting to Resize a Guest Domain's Virtual CPUs After a Successful Core Remap Operation Might Fail

Oracle Solaris 10: Non-primary Root Domain Hangs at Boot on a primary Reboot When failure-policy=reset

Virtual Network Hang Prevents a Domain Migration

ldmpower Output Sometimes Does Not Include Timestamps

mac_do_softlso Drops LSO Packets

Migration Failure: Invalid Shutdown-group: 0

Autosave Configuration Is Not Updated After the Removal of a Virtual Function or a PCIe Device

ldmp2v convert Command Failure Causes Upgrade Loop

Domain Migrations From SPARC T4 Systems That Run System Firmware 8.3 to SPARC T5, SPARC M5, or SPARC M6 Systems Are Erroneously Permitted

Guest Domain Panics at lgrp_lineage_add(mutex_enter: bad mutex, lp=10351178)

Guest Domains in Transition State After Reboot of the primary Domain

Panic Occurs in Rare Circumstances When the Virtual Network Device Driver Operates in TxDring Mode

A Domain That Has Only One Virtual CPU Assigned Might Panic During a Live Migration

ldm migrate -n Should Fail When Cross-CPU Migration From SPARC T5, SPARC M5, or SPARC M6 System to UltraSPARC T2 or SPARC T3 System

Recovery Mode Should Support PCIe Slot Removal in Non-primary Root Domains

ldm list Does Not Show the evacuated Property for Physical I/O Devices

Invalid Physical Address Is Received During a Domain Migration

send_mondo_set: timeout Panic Occurs When Using the ldm stop Command on a Guest Domain After Stress

Subdevices Under a PCIe Device Revert to an Unassigned Name

WARNING: ddi_intr_alloc: cannot fit into interrupt pool Means That Interrupt Supply Is Exhausted While Attaching I/O Device Drivers

SPARC M5-32 and SPARC M6-32: panic: mpo_cpu_add: Cannot read MD

SPARC M5-32 and SPARC M6-32: Issue With Disks That Are Accessible Through Multiple Direct I/O Paths

ixgbevf Device in SR-IOV Domains Might Become Disabled When Rebooting the primary Domain

Reboot of the Oracle Solaris 10 1/13 primary Domain Might Not Automatically Plumb or Assign an IP Address to a Virtual Function Interface

Oracle Solaris 10 Only: mutex_enter: bad mutex Panic in primary Domain During a Reboot or Shutdown

SPARC M5-32 and SPARC M6-32: LSI-SAS Controller Is Incorrectly Exported With SR-IOV

SPARC T5-8: Uptime Data Shows a Value of 0 for Some ldm List Commands

Cannot Set a Jumbo MTU for sxge Virtual Functions in the primary Domain of a SPARC T5-1B System

ldmd Is Unable to Set the mac-addr and alt-mac-addrs Property Values for the sxge Device

ldm list-io -d Output for an sxge Device on SPARC T5-1B System Is Missing Two Properties

ldm Fails to Evacuate a Faulty Core From a Guest Domain

Memory DR Operations Hang When Reducing Memory Below Four Gbytes

CPU DR of Very Large Number of Virtual CPUs Can Appear to Fail

Migration of a Guest Domain With HIO Virtual Networks and cpu-arch=generic Times Out While Waiting for the Domain to Suspend

SPARC T4-4: Unable to Bind a Guest Domain

Guest Domain Panics While Changing the threading Property Value From max-throughput to max-ipc

Control Domain Hangs on Reboot With Two Active Direct I/O Domains

No Error Message When a Memory DR Add is Partially Successful

Primary or Guest Domain Panics When Unbinding or Migrating a Guest Domain That Has Hybrid I/O Network Devices

Re-creating a Domain That Has PCIe Virtual Functions From an XML File Fails

Incorrect Error Message Issued When Changing the Control Domain From Using Whole Cores to Using Partial Cores

ldm init-system Command Might Not Correctly Restore a Domain Configuration on Which Physical I/O Changes Have Been Made

Logical Domains Manager Might Crash and Restart When You Attempt to Modify Many Domains Simultaneously

ldm list -o Command No Longer Accepts Format Abbreviations

Control Domain Requires the Lowest Core in the System

After Canceling a Migration, ldm Commands That Are Run on the Target System Are Unresponsive

Some Emulex Cards Do Not Work When Assigned to I/O Domain

Guest Domain Panics When Running the cputrack Command During a Migration to a SPARC T4 System

Oracle Solaris 11: DRM Stealing Reports Oracle Solaris DR Failure and Retries

Limit the Maximum Number of Virtual Functions That Can Be Assigned to a Domain

Guest Domain That Uses Cross-CPU Migration Reports Random Uptimes After the Migration Completes

Oracle Solaris 10: ixgbe Driver Might Cause a Panic When Booted With an Intel Dual Port Ethernet Controller X540 Card

Guest Domain Console Randomly Hangs on SPARC T4 Systems

Destroying All Virtual Functions and Returning the Slots to the Root Domain Does Not Restore the Root Complex Resources

ldm remove-io of PCIe Cards That Have PCIe-to-PCI Bridges Should Be Disallowed

ldm stop Command Might Fail If Issued Immediately After an ldm start Command

init-system Does Not Restore Named Core Constraints for Guest Domains From Saved XML Files

System Panics When Rebooting a primary Domain That Has a Very Large Number of Virtual Functions Assigned

Partial Core primary Fails to Permit Whole-Core DR Transitions

ldm list-io Command Shows the UNK or INV State After Boot

Migrating a Very Large Memory Domain on SPARC T4-4 Systems Results in a Panicked Domain on the Target System

Removing a Large Number of CPUs From a Guest Domain Fails

Cannot Use Oracle Solaris Hot-Plug Operations to Hot-Remove a PCIe Endpoint Device

nxge Panics When Migrating a Guest Domain That Has Hybrid I/O and Virtual I/O Virtual Network Devices

All ldm Commands Hang When Migrations Have Missing Shared NFS Resources

Logical Domains Agent Service Does Not Come Online If the System Log Service Does Not Come Online

Kernel Deadlock Causes Machine to Hang During a Migration

DRM and ldm list Output Shows a Different Number of Virtual CPUs Than Are Actually in the Guest Domain

Live Migration of a Domain That Depends on an Inactive Master Domain on the Target Machine Causes ldmd to Fault With a Segmentation Fault

DRM Fails to Restore the Default Number of Virtual CPUs for a Migrated Domain When the Policy Is Removed or Expired

Virtual CPU Timeout Failures During DR

Migration Failure Reason Not Reported When the System MAC Address Clashes With Another MAC Address

Simultaneous Migration Operations in “Opposite Direction” Might Cause ldm to Hang

Removing a Large Number of CPUs From the Control Domain Fails

System Running the Oracle Solaris 10 8/11 OS That Has the Elastic Policy Set Might Hang

pkgadd Fails to Set ACL Entries on /var/svc/manifest/platform/sun4v/ldmd.xml

SPARC T3-1: Issue With Disks That Are Accessible Through Multiple Direct I/O Paths

Memory DR Removal Operations With Multiple Plumbed NIU nxge Instances Can Hang Indefinitely and Never Complete

Using the ldm stop -a Command on Domains in a Master-Slave Relationship Leaves the Slave With the stopping Flag Set

Migration of a Domain That Has an Enabled Default DRM Policy Results in a Target Domain Being Assigned All Available CPUs

An In-Use MAC Address Can be Reassigned

ldmconfig Cannot Create a Domain Configuration on the SP

Uncooperative Oracle Solaris Domain Migration Can Be Blocked If cpu0 Is Offline

Memory DR Is Disabled Following a Canceled Migration

Dynamic Reconfiguration of MTU Values of Virtual Network Devices Sometimes Fails

Migrated Domain With MAUs Contains Only One CPU When Target OS Does Not Support DR of Cryptographic Units

Confusing Migration Failure Message for Real Address Memory Bind Failures

Dynamically Removing All the Cryptographic Units From a Domain Causes SSH to Terminate

PCI Express Dual 10-Gigabit Ethernet Fiber Card Shows Four Subdevices in ldm list-io -l Output

Using Logical Domains mpgroup With MPXIO Storage Array Configuration for High-Disk Availability

ldm Commands Are Slow to Respond When Several Domains Are Booting

Oracle Solaris 11: Zones Configured With an Automatic Network Interface Might Fail to Start

Oracle Solaris 10: Virtual Network Devices Are Not Created Properly on the Control Domain

Newly Added NIU/XAUI Adapters Are Not Visible to the Host OS If Logical Domains Is Configured

I/O Domain or Guest Domain Panics When Booting From e1000g

Explicit Console Group and Port Bindings Are Not Migrated

Migration Does Not Fail If a vdsdev on the Target Has a Different Back End

Migration Can Fail to Bind Memory Even If the Target Has Enough Available

Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running

Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted

Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted

Sometimes, Executing the uadmin 1 0 Command From a Logical Domains System Does Not Return the System to the OK Prompt

Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Domain

scadm Command Can Hang Following an SC or SP Reset

Simultaneous Net Installation of Multiple Domains Fails When in a Common Console Group

Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive

OpenBoot PROM Variables Cannot be Modified by the eeprom Command When the Logical Domains Manager Is Running

Cannot Set Security Keys With Logical Domains Running

Behavior of the ldm stop-domain Command Can Be Confusing

Documentation Issues

ldm1M Man Page: Describe the Limitation for Using the mblock Property

ldm1M Man Page: Improve description of the ldm list -o status Command

ldm1M Man Page: Only ldm add-spconfig -r Performs a Manual Recovery

Oracle VM Server for SPARC 3.1 Administration Guide Fibre Channel SR-IOV OS Requirements Are Incorrect

Resolved Issues

Resolved Issues in the Oracle VM Server for SPARC 3.1.1.2 Release

Resolved Issues in the Oracle VM Server for SPARC 3.1.1.1 Release

Resolved Issues in the Oracle VM Server for SPARC 3.1.1 Release

Resolved Issues in the Oracle VM Server for SPARC 3.1.0.1 Release

Resolved Issues in the Oracle VM Server for SPARC 3.1 Release

General Issues

This section describes general known issues about this release of the Oracle VM Server for SPARC software that are broader than a specific bug number. Workarounds are provided where available.

Cannot Unbind Domains When They Provide Services to Each Other

 

Do not create a circular dependency between two domains in which each domain provides services to the other. Such a configuration creates a single point of failure condition where an outage in one domain causes the other domain to become unavailable. Circular dependency configurations also prevent you from unbinding the domains after they have been bound initially.

The Logical Domains Manager does not prevent the creation of circular domain dependencies.

If the domains cannot be unbound due to a circular dependency, remove the devices that cause the dependency and then attempt to unbind the domains.

Guest Domain Cannot Run the Oracle Solaris 10 OS When More Than 1024 CPUs Are Assigned

 

A guest domain that has been assigned more than 1024 CPUs cannot run the Oracle Solaris 10 OS. In addition, you cannot use CPU DR to shrink the number of CPUs below 1024 to run the Oracle Solaris 10 OS.

To work around this problem, unbind the guest domain, remove CPUs until you have no more than 1024 CPUs, and then rebind the guest domain. You can then run the Oracle Solaris 10 OS on this guest domain.
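This workaround can be sketched as follows, using the hypothetical domain name ldom1; run the commands from the control domain:

```shell
# Stop and unbind the guest domain before changing its CPU count.
ldm stop-domain ldom1
ldm unbind-domain ldom1

# Cap the domain at 1024 virtual CPUs.
ldm set-vcpu 1024 ldom1

# Rebind and restart; the domain can now run the Oracle Solaris 10 OS.
ldm bind-domain ldom1
ldm start-domain ldom1
```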

Avoid Creating a Configuration Where Two Domains Provide Services to Each Other

Avoid creating a configuration where two domains provide services to each other. In such a case, an outage in one domain will take down the other domain. In addition, such domains cannot be unbound if they are bound with such a configuration. The Logical Domains Manager currently does not block such circular dependencies.

If you cannot unbind a domain because of this sort of dependency, remove the devices that cause the circular dependency and then attempt the unbind again.

Upgrading From Oracle Solaris 10 OS Older Than Oracle Solaris 10 5/08 OS

If the control domain is upgraded from an Oracle Solaris 10 OS version older than Oracle Solaris 10 5/08 OS (or without patch 127127-11), and if volume manager volumes were exported as virtual disks, the virtual disk back ends must be re-exported with options=slice after the Logical Domains Manager has been upgraded. See Exporting Volumes and Backward Compatibility in Oracle VM Server for SPARC 3.1 Administration Guide .
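A re-export might look like the following sketch, which assumes a hypothetical Solaris Volume Manager volume /dev/md/dsk/d0 that was exported as vol1 from the primary-vds0 service:

```shell
# Remove the old export, then re-export the volume back end with options=slice.
ldm remove-vdsdev vol1@primary-vds0
ldm add-vdsdev options=slice /dev/md/dsk/d0 vol1@primary-vds0
```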

Service Processor and System Controller Are Interchangeable Terms

For discussions in Oracle VM Server for SPARC documentation, the terms service processor (SP) and system controller (SC) are interchangeable.

In Certain Conditions, a Guest Domain's Solaris Volume Manager Configuration or Metadevices Can Be Lost

If a service domain is running a version of Oracle Solaris 10 OS prior to Oracle Solaris 10 1/13 OS and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Oracle Solaris 10 1/13 OS, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.

This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, Solaris Volume Manager might be unable to find its configuration or to access its metadevices.

Workaround: After upgrading a service domain to Oracle Solaris 10 1/13 OS, if a guest domain is unable to find its Solaris Volume Manager configuration or its metadevices, perform the following procedure.

How to Find a Guest Domain's Solaris Volume Manager Configuration or Metadevices

  1. Boot the guest domain.
  2. Disable the devid feature of Solaris Volume Manager by adding the following lines to the /kernel/drv/md.conf file:
    md_devid_destroy=1;
    md_keep_repl_state=1;
  3. Reboot the guest domain.

    After the domain has booted, the Solaris Volume Manager configuration and metadevices should be available.

  4. Check the Solaris Volume Manager configuration and ensure that it is correct.
  5. Re-enable the Solaris Volume Manager devid feature by removing from the /kernel/drv/md.conf file the two lines that you added in Step 2.
  6. Reboot the guest domain.

    During the reboot, you will see messages similar to this:

    NOTICE: mddb: unable to get devid for 'vdc', 0x10

    These messages are normal and do not report any problems.
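The file edits in this procedure can be scripted as shown in the following sketch, which is run as root in the guest domain; editing /kernel/drv/md.conf by hand works equally well:

```shell
# Steps 2-3: disable the Solaris Volume Manager devid feature and reboot.
echo 'md_devid_destroy=1;' >> /kernel/drv/md.conf
echo 'md_keep_repl_state=1;' >> /kernel/drv/md.conf
reboot

# Steps 5-6 (after verifying the configuration): remove the two lines
# that were added, then reboot again.
egrep -v 'md_devid_destroy=1;|md_keep_repl_state=1;' /kernel/drv/md.conf \
    > /tmp/md.conf && cp /tmp/md.conf /kernel/drv/md.conf
reboot
```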

Memory Size Requirements

The Oracle VM Server for SPARC software does not impose a memory size limitation when you create a domain. The memory size requirement is a characteristic of the guest operating system. Some Oracle VM Server for SPARC functionality might not work if the amount of memory present is smaller than the recommended size. For recommended and minimum memory requirements for the Oracle Solaris 10 OS, see System Requirements and Recommendations in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade . For recommended and minimum memory requirements for the Oracle Solaris 11 OS, see Oracle Solaris 11 Release Notes and Oracle Solaris 11.1 Release Notes .

The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you have a domain smaller than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 Mbytes. The minimum size restriction for a Fujitsu M10 system is 256 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.

The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the address and size of the memory involved in a given operation. See Memory Alignment in Oracle VM Server for SPARC 3.1 Administration Guide .

Booting a Large Number of Domains

If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread across all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Assigning more than 32 vnet instances per vsw service could cause hard hangs in the service domain.

To run the maximum configurations, a machine needs an adequate amount of memory to support the guest domains. The amount of memory is dependent on your platform and your OS. See the documentation for your platform, Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade , Installing Oracle Solaris 11 Systems , and Installing Oracle Solaris 11.1 Systems .

Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This increase is due to the peer-to-peer links between all the vnet instances connected to the vsw. The service domain benefits from having extra memory. The recommended minimum is four Gbytes when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains. You can reduce the number of links by disabling inter-vnet links. See Inter-Vnet LDC Channels in Oracle VM Server for SPARC 3.1 Administration Guide .
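The batching advice can be sketched as a shell loop. The domain names ldom1 through ldom40 and the fixed wait are hypothetical; polling each domain's console or its `ldm list` state is more robust than a sleep:

```shell
# Start 40 hypothetical domains in batches of 10, waiting between batches.
i=1
while [ $i -le 40 ]; do
  last=$(( i + 9 ))
  while [ $i -le $last ]; do
    ldm start-domain ldom$i
    i=$(( i + 1 ))
  done
  # Crude stand-in for confirming that the batch has finished booting.
  sleep 300
done
```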

Cleanly Shutting Down and Power Cycling an Oracle VM Server for SPARC System

Before you power off or power cycle an Oracle VM Server for SPARC system, make sure that you have saved to the SC the latest configuration that you want to keep.

How to Power Off a System With Multiple Active Domains

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Halt the primary domain.

    Because no other domains are bound, the firmware automatically powers off the system.
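The procedure might look like the following sketch for a single hypothetical guest domain named ldom1; repeat the stop and unbind steps for every non-I/O domain and then for every I/O domain:

```shell
# Steps 1-2: stop and unbind each guest domain.
ldm stop-domain ldom1
ldm unbind-domain ldom1

# Step 3: halt the primary domain; because no other domains are bound,
# the firmware powers off the system.
shutdown -i5 -g0 -y
```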

How to Power Cycle the System

  1. Shut down, stop, and unbind all the non-I/O domains.
  2. Shut down, stop, and unbind any active I/O domains.
  3. Reboot the primary domain.

    Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the domain configuration last saved or explicitly set.

Memory Size Requested Might Be Different From Memory Allocated

Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. The following example shows sample output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:

Memory:
          Constraints: 1965 M
          raddr          paddr           size
          0x1000000      0x291000000     1968M
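The 4-Mbyte rounding in this example can be checked with shell integer arithmetic. This is only a sketch of the rounding rule; as noted above, the Logical Domains Manager may instead round to the next 8-Kbyte multiple:

```shell
# Round a memory request (in Mbytes) up to the next 4-Mbyte multiple.
round_up_4m() {
  mb=$1
  echo $(( ( (mb + 3) / 4 ) * 4 ))
}

round_up_4m 1965   # the 1965M constraint above becomes a 1968M allocation
```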

Logical Domains Variable Persistence

Variable updates persist across a reboot but not across a power cycle unless the variable updates are either initiated from OpenBoot firmware on the control domain or followed by saving the configuration to the SC.

Variable updates that are made by using any of these methods should always persist across reboots of the domain. The variable updates also always apply to any subsequent domain configurations that were saved to the SC.

If you modify the time or date on a logical domain, for example, using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host. To ensure that time changes persist, save the configuration with the time change to the SP and boot from that configuration.
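Saving a configuration to the SP can be done as in the following sketch; the configuration name newconfig is hypothetical:

```shell
# Save the current domain configuration (including the time change) to the SP.
ldm add-spconfig newconfig

# List saved configurations to confirm which one is used at the next power-on.
ldm list-spconfig
```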

The following Bug IDs have been filed to resolve these issues: 15375997, 15387338, 15387606, and 15415199.

Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains

Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.

Delayed Reconfiguration

When the primary domain is in a delayed reconfiguration state, resources that are managed by Oracle VM Server for SPARC are power-managed only after the primary domain reboots. Resources that are managed directly by the OS, such as CPUs that are managed by the Solaris Power Aware Dispatcher, are not affected by this state.

Cryptographic Units

Discrete cryptographic units are present only on UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 systems.

Cryptographic unit dynamic reconfiguration (DR) enables you to add and remove cryptographic units from a domain. The Logical Domains Manager automatically detects whether a domain allows cryptographic unit DR, and enables the functionality only for those domains. In addition, CPU DR is no longer disabled in domains that have cryptographic units bound and that run an appropriate version of the Oracle Solaris OS.

ldmp2v convert Command: VxVM Warning Messages During Boot

Veritas Volume Manager (VxVM) 5.x running on the Oracle Solaris 10 OS is the only version that is supported (tested) with the Oracle VM Server for SPARC P2V tool. Older versions of VxVM, such as 3.x and 4.x running on the Solaris 8 and Solaris 9 operating systems, might also work. In those cases, the first boot after running the ldmp2v convert command might show warning messages from the VxVM drivers. You can ignore these messages. You can remove the old VRTS* packages after the guest domain has booted.

Boot device: disk0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: normaal
Configuring devices.
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxio: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxio'
WARNING: vxio: unable to resolve dependency, module 'drv/vxdmp' not found
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
NOTICE: VxVM not started

Oracle Hard Partitioning Requirements for Software Licenses

For information about Oracle's hard partitioning requirements for software licenses, see Partitioning: Server/Hardware Partitioning.

Upgrade Option Not Presented When Using ldmp2v prepare -R

The Oracle Solaris Installer does not present the Upgrade option when the partition tag of the slice that holds the root (/) file system is not set to root. This situation occurs if the tag is not explicitly set when labeling the guest's boot disk. You can use the format command to set the partition tag as follows:

AVAILABLE DISK SELECTIONS:
0. c0d0 <SUN-DiskImage-10GB cyl 282 alt 2 hd 96 sec 768>
  /virtual-devices@100/channel-devices@200/disk@0
1. c4t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
  /pci@400/pci@0/pci@1/scsi@0/sd@2,0
2. c4t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
  /pci@400/pci@0/pci@1/scsi@0/sd@3,0
Specify disk (enter its number)[0]: 0
selecting c0d0
[disk formatted, no defect list found]
format> p


PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> 0
Part      Tag    Flag     Cylinders       Size            Blocks
0 unassigned    wm       0              0         (0/0/0)          0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y

partition>

Sometimes a Block of Dynamically Added Memory Can be Dynamically Removed Only as a Whole

Due to the way in which the Oracle Solaris OS handles the metadata for managing dynamically added memory, you might later be able to remove only the entire block of memory that was previously dynamically added rather than a proper subset of that memory.

This situation could occur if a domain with a small memory size is dynamically grown to a much larger size, as shown in the following example.

primary# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n--   5000 2    2G     0.4% 23h

primary# ldm add-mem 16G ldom1

primary# ldm rm-mem 8G ldom1
Memory removal failed because all of the memory is in use.

primary# ldm rm-mem 16G ldom1

primary# ldm list ldom1
NAME  STATE FLAGS   CONS VCPU MEMORY UTIL UPTIME
ldom1 active -n--   5000 2    2G     0.4% 23h

Workaround: Use the ldm add-mem command to sequentially add memory in smaller chunks rather than in chunks larger than you might want to remove in the future.
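Following this workaround, the 16-Gbyte addition in the example could instead be made as a series of smaller requests. The loop below is a dry-run sketch that only echoes the commands it would issue against the example domain ldom1; remove the echo to run them for real.

```shell
# Dry-run sketch: add 16G to ldom1 in 2G increments so that a subset
# (for example, 8G) can be dynamically removed later. The echo makes
# this a dry run; drop it to execute the ldm commands.
i=1
while [ "$i" -le 8 ]; do
  echo ldm add-mem 2G ldom1
  i=$((i + 1))
done
```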

ldmp2v Command: ufsdump Archiving Method Is No Longer Used

Restoring ufsdump archives on a virtual disk that is backed by a file on a UFS file system might cause the system to hang. In such a case, the ldmp2v prepare command will exit. You might encounter this problem when you manually restore ufsdump archives in preparation for the ldmp2v prepare -R /altroot command when the virtual disk is a file on a UFS file system. For compatibility with previously created ufsdump archives, you can still use the ldmp2v prepare command to restore ufsdump archives on virtual disks that are not backed by a file on a UFS file system. However, the use of ufsdump archives is not recommended.
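As an alternative to manually restoring ufsdump archives, the collection phase can produce a flash archive instead. The following is a minimal dry-run sketch, assuming the -a (archiving method) and -d (data directory) options of the ldmp2v collect command; /p2v/mysystem is a hypothetical data directory.

```shell
# Dry-run sketch: collect the source system with the flash archiving
# method rather than ufsdump. The -a flash option and the
# /p2v/mysystem data directory are illustrative assumptions; the echo
# makes this a dry run.
echo 'ldmp2v collect -a flash -d /p2v/mysystem'
```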

Only One CPU Configuration Operation Is Permitted to Be Performed During a Delayed Reconfiguration

Do not attempt to perform more than one CPU configuration operation on the primary domain while it is in a delayed reconfiguration. Any additional CPU configuration requests are rejected.

Workaround: Either reboot the primary domain to apply the pending delayed reconfiguration, or cancel the delayed reconfiguration by running the ldm cancel-operation reconf command, and then issue the next CPU configuration request.

Oracle VM Server for SPARC 3.1 ldmd Daemon Does Not Start If Multiple Virtual Switches Are Assigned to a Single Network Adapter

The Oracle VM Server for SPARC 3.0 software inadvertently exposed a capability to assign multiple virtual switches to a single network adapter. This capability is intended only to be used in a specific way by the Oracle VM Manager software.

The Oracle VM Server for SPARC 3.1 software restored the original behavior, which prevents you from assigning multiple virtual switches to a single network adapter. However, if you configured your Oracle VM Server for SPARC 3.0 system to assign multiple virtual switches to a single network adapter, the ldmd daemon does not start when you upgrade to Oracle VM Server for SPARC 3.1.

Workaround: Perform the following steps:

  1. Temporarily re-enable this capability on your Oracle VM Server for SPARC 3.1 system to enable the ldmd daemon to start.

    # svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=true
    # svcadm refresh ldmd
    # svcadm disable ldmd
    # svcadm enable ldmd
  2. Update your configuration to assign only one virtual switch to a network device.

  3. Disable this capability on your Oracle VM Server for SPARC 3.1 system.

    # svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=false
    # svcadm refresh ldmd
    # svcadm disable ldmd
    # svcadm enable ldmd

    It is important that you set the ovm_manager property back to false because leaving it set to true might introduce other side effects in future Oracle VM Server for SPARC releases.

Oracle Solaris Boot Disk Compatibility

Historically, the Oracle Solaris OS has been installed on a boot disk configured with an SMI VTOC disk label. Starting with the Oracle Solaris 11.1 OS, the OS is installed on a boot disk that is configured with an extensible firmware interface (EFI) GUID partition table (GPT) disk label by default. If the firmware does not support EFI, the disk is configured with an SMI VTOC disk label instead. This situation applies only to SPARC T4 servers that run at least system firmware version 8.4.0, to SPARC T5, SPARC M5, or SPARC M6 servers that run at least system firmware version 9.1.0, and to Fujitsu M10 systems that run at least XCP2230.

As a result, an Oracle Solaris 11.1 boot disk that is created on an up-to-date SPARC T4, SPARC T5, SPARC M5, SPARC M6, or Fujitsu M10 system cannot be used on older servers or on servers that run older firmware.

This limitation restricts the ability to use either cold or live migration to move a domain from a recent server to an older server. This limitation also prevents you from using an EFI GPT boot disk image on an older server.

To ensure that an Oracle Solaris 11.1 boot disk remains compatible with older servers and older firmware, install the Oracle Solaris 11.1 OS on a disk that is configured with an SMI VTOC disk label.
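One way to check which label a boot disk carries is to inspect prtvtoc output: a VTOC-labeled disk reports its geometry in cylinders, whereas an EFI GPT-labeled disk reports accessible sectors instead. The here-document below uses sample VTOC-style header lines in place of live output from a command such as `prtvtoc /dev/rdsk/c0d0s2`.

```shell
# Count "cylinders" header lines; a nonzero count suggests an SMI VTOC
# label, while EFI GPT output would describe sectors instead. The
# sample lines stand in for live prtvtoc output on a VTOC-labeled disk.
grep -c 'cylinders' <<'EOF'
*  14087 cylinders
*  14085 accessible cylinders
EOF
```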