Oracle® VM Server for SPARC 3.1.1.2, 3.1.1.1, 3.1.1, and 3.1 Release Notes
Chapter 1 Oracle VM Server for SPARC 3.1.1.2, 3.1.1.1, 3.1.1, and 3.1 Release Notes
Oracle VM Server for SPARC 3.1.1.2 Maintenance Update
Oracle VM Server for SPARC 3.1.1.1 Maintenance Update
What's New in the Oracle VM Server for SPARC 3.1.1.1 Maintenance Update
What's New in the Oracle VM Server for SPARC 3.1.1 Release
What's New in the Oracle VM Server for SPARC 3.1 Release
Required Oracle Solaris OS Versions
Required Oracle Solaris OS Versions for the Oracle VM Server for SPARC 3.1.1.1 Maintenance Update
Required Oracle Solaris OS Versions for Oracle VM Server for SPARC 3.1.1
Required Oracle Solaris OS Versions for Oracle VM Server for SPARC 3.1
Required Software to Enable the Latest Oracle VM Server for SPARC Features
Required System Firmware Patches
Minimum Version of Software Required
Direct I/O Hardware and Software Requirements
PCIe SR-IOV Hardware and Software Requirements
Non-primary Root Domain Hardware and Software Requirements
Recovery Mode Hardware and Software Requirements
Location of the Oracle VM Server for SPARC Software
Software That Can Be Used With the Oracle VM Server for SPARC Software
System Controller Software That Interacts With Oracle VM Server for SPARC
Upgrading to the Current Oracle VM Server for SPARC Software
Upgrading to the Oracle VM Server for SPARC 3.1.1.1 Software
Upgrading to the Oracle VM Server for SPARC 3.1.1 Software
Upgrading to the Oracle VM Server for SPARC 3.1 Software
Deprecated Oracle VM Server for SPARC Features
Cannot Unbind Domains When They Provide Services to Each Other
Guest Domain Cannot Run the Oracle Solaris 10 OS When More Than 1024 CPUs Are Assigned
Avoid Creating a Configuration Where Two Domains Provide Services to Each Other
Upgrading From Oracle Solaris 10 OS Older Than Oracle Solaris 10 5/08 OS
Service Processor and System Controller Are Interchangeable Terms
How to Find a Guest Domain's Solaris Volume Manager Configuration or Metadevices
Booting a Large Number of Domains
Cleanly Shutting Down and Power Cycling an Oracle VM Server for SPARC System
How to Power Off a System With Multiple Active Domains
Memory Size Requested Might Be Different From Memory Allocated
Logical Domains Variable Persistence
Oracle's Sun SNMP Management Agent Does Not Support Multiple Domains
ldmp2v convert Command: VxVM Warning Messages During Boot
Oracle Hard Partitioning Requirements for Software Licenses
Upgrade Option Not Presented When Using ldmp2v prepare -R
Sometimes a Block of Dynamically Added Memory Can be Dynamically Removed Only as a Whole
ldmp2v Command: ufsdump Archiving Method Is No Longer Used
Only One CPU Configuration Operation Is Permitted to Be Performed During a Delayed Reconfiguration
Version Restrictions for Migration
CPU Restrictions for Migration
Version Restrictions for Cross-CPU Migration
Domains That Have Only One Virtual CPU Assigned Might Panic During a Live Migration
Oracle VM Server for SPARC MIB Issues
snmptable Command Does Not Work With the Version 2 or Version 3 Option
Control Domain Hangs When Stopping or Starting I/O Domains
Warnings Appear on Console When Creating Fibre Channel Virtual Functions
Fibre Channel Physical Function Configuration Changes Require Several Minutes to Complete
Fujitsu M10 System Has Different SR-IOV Feature Limitations
Misleading Messages Shown For InfiniBand SR-IOV Operations
Bugs Affecting the Oracle VM Server for SPARC Software
Bugs Affecting the Oracle VM Server for SPARC 3.1.1.2 Software
System Crashes When Applying the Whole-Core Constraint to a Partial Core primary Domain
Kernel Zones Block Live Migration of Guest Domains
Bugs Affecting the Oracle VM Server for SPARC 3.1.1.1 Software
Recovery Mode Fails With ldmd in Maintenance Mode When Virtual Switch net-dev Is Missing
Migration to a SPARC M5 or SPARC T5 System Might Panic With suspend: get stick freq failed
Logical Domains Manager Does Not Prohibit the Creation of Circular Dependencies
Bugs Affecting the Oracle VM Server for SPARC 3.1.1 Software
Very Large LDC Counts Might Result in Oracle Solaris Issues in Guest Domains
Fibre Channel Physical Function Is Faulted by FMA And Disabled
Sun Storage 16 Gb Fibre Channel Universal HBA Firmware Does Not Support Bandwidth Controls
Adding Memory After Performing a Cross-CPU Migration Might Cause a Guest Domain Panic
Incorrect Device Path for Fibre Channel Virtual Functions in a Root Domain
ldmd Dumps Core When Attempting to Bind a Domain in Either the Binding or Unbinding State
Bugs Affecting the Oracle VM Server for SPARC 3.1 Software
Issues Might Arise When FMA Detects Faulty Memory
ldmd Service Fails to Start Because of a Delay in Creating virtual-channel@0:hvctl
Cannot Install the Oracle Solaris 11.1 OS Using an EFI GPT Disk Label on Single-Slice Virtual Disk
After Being Migrated, A Domain Can Panic on Boot After Being Started or Rebooted
Size of Preallocated Machine Description Buffer Is Used During Migration
Virtual Network Hang Prevents a Domain Migration
ldmpower Output Sometimes Does Not Include Timestamps
mac_do_softlso Drops LSO Packets
Migration Failure: Invalid Shutdown-group: 0
Autosave Configuration Is Not Updated After the Removal of a Virtual Function or a PCIe Device
ldmp2v convert Command Failure Causes Upgrade Loop
Guest Domain Panics at lgrp_lineage_add(mutex_enter: bad mutex, lp=10351178)
Guest Domains in Transition State After Reboot of the primary Domain
Panic Occurs in Rare Circumstances When the Virtual Network Device Driver Operates in TxDring Mode
A Domain That Has Only One Virtual CPU Assigned Might Panic During a Live Migration
Recovery Mode Should Support PCIe Slot Removal in Non-primary Root Domains
ldm list Does Not Show the evacuated Property for Physical I/O Devices
Invalid Physical Address Is Received During a Domain Migration
send_mondo_set: timeout Panic Occurs When Using the ldm stop Command on a Guest Domain After Stress
Subdevices Under a PCIe Device Revert to an Unassigned Name
SPARC M5-32 and SPARC M6-32: panic: mpo_cpu_add: Cannot read MD
SPARC M5-32 and SPARC M6-32: Issue With Disks That Are Accessible Through Multiple Direct I/O Paths
ixgbevf Device in SR-IOV Domains Might Become Disabled When Rebooting the primary Domain
Oracle Solaris 10 Only: mutex_enter: bad mutex Panic in primary Domain During a Reboot or Shutdown
SPARC M5-32 and SPARC M6-32: LSI-SAS Controller Is Incorrectly Exported With SR-IOV
SPARC T5-8: Uptime Data Shows a Value of 0 for Some ldm List Commands
Cannot Set a Jumbo MTU for sxge Virtual Functions in the primary Domain of a SPARC T5-1B System
ldmd Is Unable to Set the mac-addr and alt-mac-addrs Property Values for the sxge Device
ldm list-io -d Output for an sxge Device on SPARC T5-1B System Is Missing Two Properties
ldm Fails to Evacuate a Faulty Core From a Guest Domain
Memory DR Operations Hang When Reducing Memory Below Four Gbytes
CPU DR of Very Large Number of Virtual CPUs Can Appear to Fail
SPARC T4-4: Unable to Bind a Guest Domain
Guest Domain Panics While Changing the threading Property Value From max-throughput to max-ipc
Control Domain Hangs on Reboot With Two Active Direct I/O Domains
No Error Message When a Memory DR Add is Partially Successful
Re-creating a Domain That Has PCIe Virtual Functions From an XML File Fails
ldm list -o Command No Longer Accepts Format Abbreviations
Control Domain Requires the Lowest Core in the System
After Canceling a Migration, ldm Commands That Are Run on the Target System Are Unresponsive
Some Emulex Cards Do Not Work When Assigned to I/O Domain
Guest Domain Panics When Running the cputrack Command During a Migration to a SPARC T4 System
Oracle Solaris 11: DRM Stealing Reports Oracle Solaris DR Failure and Retries
Limit the Maximum Number of Virtual Functions That Can Be Assigned to a Domain
Guest Domain That Uses Cross-CPU Migration Reports Random Uptimes After the Migration Completes
Guest Domain Console Randomly Hangs on SPARC T4 Systems
ldm remove-io of PCIe Cards That Have PCIe-to-PCI Bridges Should Be Disallowed
ldm stop Command Might Fail If Issued Immediately After an ldm start Command
init-system Does Not Restore Named Core Constraints for Guest Domains From Saved XML Files
Partial Core primary Fails to Permit Whole-Core DR Transitions
ldm list-io Command Shows the UNK or INV State After Boot
Removing a Large Number of CPUs From a Guest Domain Fails
Cannot Use Oracle Solaris Hot-Plug Operations to Hot-Remove a PCIe Endpoint Device
All ldm Commands Hang When Migrations Have Missing Shared NFS Resources
Logical Domains Agent Service Does Not Come Online If the System Log Service Does Not Come Online
Kernel Deadlock Causes Machine to Hang During a Migration
Virtual CPU Timeout Failures During DR
Migration Failure Reason Not Reported When the System MAC Address Clashes With Another MAC Address
Simultaneous Migration Operations in “Opposite Direction” Might Cause ldm to Hang
Removing a Large Number of CPUs From the Control Domain Fails
System Running the Oracle Solaris 10 8/11 OS That Has the Elastic Policy Set Might Hang
pkgadd Fails to Set ACL Entries on /var/svc/manifest/platform/sun4v/ldmd.xml
SPARC T3-1: Issue With Disks That Are Accessible Through Multiple Direct I/O Paths
An In-Use MAC Address Can be Reassigned
ldmconfig Cannot Create a Domain Configuration on the SP
Uncooperative Oracle Solaris Domain Migration Can Be Blocked If cpu0 Is Offline
Memory DR Is Disabled Following a Canceled Migration
Dynamic Reconfiguration of MTU Values of Virtual Network Devices Sometimes Fails
Confusing Migration Failure Message for Real Address Memory Bind Failures
Dynamically Removing All the Cryptographic Units From a Domain Causes SSH to Terminate
PCI Express Dual 10-Gigabit Ethernet Fiber Card Shows Four Subdevices in ldm list-io -l Output
Using Logical Domains mpgroup With MPXIO Storage Array Configuration for High-Disk Availability
ldm Commands Are Slow to Respond When Several Domains Are Booting
Oracle Solaris 11: Zones Configured With an Automatic Network Interface Might Fail to Start
Oracle Solaris 10: Virtual Network Devices Are Not Created Properly on the Control Domain
Newly Added NIU/XAUI Adapters Are Not Visible to the Host OS If Logical Domains Is Configured
I/O Domain or Guest Domain Panics When Booting From e1000g
Explicit Console Group and Port Bindings Are Not Migrated
Migration Does Not Fail If a vdsdev on the Target Has a Different Back End
Migration Can Fail to Bind Memory Even If the Target Has Enough Available
Logical Domains Manager Does Not Start If the Machine Is Not Networked and an NIS Client Is Running
Logical Domains Manager Displays Migrated Domains in Transition States When They Are Already Booted
Cannot Connect to Migrated Domain's Console Unless vntsd Is Restarted
Logical Domains Manager Can Take Over 15 Minutes to Shut Down a Domain
scadm Command Can Hang Following an SC or SP Reset
Simultaneous Net Installation of Multiple Domains Fails When in a Common Console Group
Guest Domain With Too Many Virtual Networks on the Same Network Using DHCP Can Become Unresponsive
Cannot Set Security Keys With Logical Domains Running
Behavior of the ldm stop-domain Command Can Be Confusing
ldm1M Man Page: Describe the Limitation for Using the mblock Property
ldm1M Man Page: Improve description of the ldm list -o status Command
ldm1M Man Page: Only ldm add-spconfig -r Performs a Manual Recovery
Resolved Issues in the Oracle VM Server for SPARC 3.1.1.2 Release
Resolved Issues in the Oracle VM Server for SPARC 3.1.1.1 Release
Resolved Issues in the Oracle VM Server for SPARC 3.1.1 Release
Resolved Issues in the Oracle VM Server for SPARC 3.1.0.1 Release
Resolved Issues in the Oracle VM Server for SPARC 3.1 Release
This section describes general known issues about this release of the Oracle VM Server for SPARC software that are broader than a specific bug number. Workarounds are provided where available.
Do not create a circular dependency between two domains in which each domain provides services to the other. Such a configuration creates a single point of failure condition where an outage in one domain causes the other domain to become unavailable. Circular dependency configurations also prevent you from unbinding the domains after they have been bound initially.
The Logical Domains Manager does not prevent the creation of circular domain dependencies.
If the domains cannot be unbound due to a circular dependency, remove the devices that cause the dependency and then attempt to unbind the domains.
A guest domain that has been assigned more than 1024 CPUs cannot run the Oracle Solaris 10 OS. In addition, you cannot use CPU DR to shrink the number of CPUs below 1024 to run the Oracle Solaris 10 OS.
To work around this problem, unbind the guest domain, remove CPUs until you have no more than 1024 CPUs, and then rebind the guest domain. You can then run the Oracle Solaris 10 OS on this guest domain.
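The workaround can be sketched with the following ldm commands (the domain name ldom1 is a hypothetical example):

```shell
# Unbind the domain, reduce its virtual CPU count to 1024 or fewer,
# and then rebind it so that it can run the Oracle Solaris 10 OS.
ldm unbind-domain ldom1
ldm set-vcpu 1024 ldom1
ldm bind-domain ldom1
```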
Avoid creating a configuration where two domains provide services to each other. In such a case, an outage in one domain will take down the other domain. In addition, such domains cannot be unbound if they are bound with such a configuration. The Logical Domains Manager currently does not block such circular dependencies.
If you cannot unbind a domain because of this sort of dependency, remove the devices that cause the circular dependency and then attempt the unbind again.
If the control domain is upgraded from an Oracle Solaris 10 OS version older than Oracle Solaris 10 5/08 OS (or without patch 127127-11), and if volume manager volumes were exported as virtual disks, the virtual disk back ends must be re-exported with options=slice after the Logical Domains Manager has been upgraded. See Exporting Volumes and Backward Compatibility in Oracle VM Server for SPARC 3.1 Administration Guide .
For discussions in Oracle VM Server for SPARC documentation, the terms service processor (SP) and system controller (SC) are interchangeable.
If a service domain is running a version of Oracle Solaris 10 OS prior to Oracle Solaris 10 1/13 OS and is exporting a physical disk slice as a virtual disk to a guest domain, then this virtual disk will appear in the guest domain with an inappropriate device ID. If that service domain is then upgraded to Oracle Solaris 10 1/13 OS, the physical disk slice exported as a virtual disk will appear in the guest domain with no device ID.
This removal of the device ID of the virtual disk can cause problems to applications attempting to reference the device ID of virtual disks. In particular, Solaris Volume Manager might be unable to find its configuration or to access its metadevices.
Workaround: After upgrading a service domain to Oracle Solaris 10 1/13 OS, if a guest domain is unable to find its Solaris Volume Manager configuration or its metadevices, perform the following procedure.
Disable the devid feature of Solaris Volume Manager by adding the following lines to the /kernel/drv/md.conf file, and then reboot the guest domain:

md_devid_destroy=1;
md_keep_repl_state=1;
After the domain has booted, the Solaris Volume Manager configuration and metadevices should be available.
During the reboot, you will see messages similar to this:
NOTICE: mddb: unable to get devid for 'vdc', 0x10
These messages are normal and do not report any problems.
The Oracle VM Server for SPARC software does not impose a memory size limitation when you create a domain. The memory size requirement is a characteristic of the guest operating system. Some Oracle VM Server for SPARC functionality might not work if the amount of memory present is smaller than the recommended size. For recommended and minimum memory requirements for the Oracle Solaris 10 OS, see System Requirements and Recommendations in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade . For recommended and minimum memory requirements for the Oracle Solaris 11 OS, see Oracle Solaris 11 Release Notes and Oracle Solaris 11.1 Release Notes .
The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction is 12 Mbytes. If you have a domain smaller than that size, the Logical Domains Manager will automatically boost the size of the domain to 12 Mbytes. The minimum size restriction for a Fujitsu M10 system is 256 Mbytes. Refer to the release notes for your system firmware for information about memory size requirements.
The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the address and size of the memory involved in a given operation. See Memory Alignment in Oracle VM Server for SPARC 3.1 Administration Guide .
You can boot the following number of domains depending on your platform:
Up to 256 on Fujitsu M10 systems per physical partition
Up to 128 on SPARC M6 systems per physical domain
Up to 128 on SPARC M5 systems per physical domain
Up to 128 on SPARC T5 systems
Up to 128 on SPARC T4 servers
Up to 128 on SPARC T3 servers
Up to 128 on UltraSPARC T2 Plus servers
Up to 64 on UltraSPARC T2 servers
If unallocated virtual CPUs are available, assign them to the service domain to help process the virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more than 32 domains. In cases where maximum domain configurations have only a single CPU in the service domain, do not put unnecessary stress on the single CPU when configuring and using the domain. The virtual switch (vsw) services should be spread across all the network adapters available in the machine. For example, if booting 128 domains on a Sun SPARC Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances. Assigning more than 32 vnet instances per vsw service could cause hard hangs in the service domain.
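For example, the four vsw services described above might be created as follows (the network device names nxge0 through nxge3 are placeholders for the adapters in your machine):

```shell
# Create one virtual switch per physical network adapter so that
# virtual network traffic is spread across all available adapters.
ldm add-vsw net-dev=nxge0 primary-vsw0 primary
ldm add-vsw net-dev=nxge1 primary-vsw1 primary
ldm add-vsw net-dev=nxge2 primary-vsw2 primary
ldm add-vsw net-dev=nxge3 primary-vsw3 primary
```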
To run the maximum configurations, a machine needs an adequate amount of memory to support the guest domains. The amount of memory is dependent on your platform and your OS. See the documentation for your platform, Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade , Installing Oracle Solaris 11 Systems , and Installing Oracle Solaris 11.1 Systems .
Memory and swap space usage increases in a guest domain when the vsw services used by the domain provide services to many virtual networks in multiple domains. This increase is due to the peer-to-peer links between all the vnet instances connected to the vsw. The service domain benefits from having extra memory. The recommended minimum is four Gbytes when running more than 64 domains. Start domains in groups of 10 or fewer and wait for them to boot before starting the next batch. The same advice applies to installing operating systems on domains. You can reduce the number of links by disabling inter-vnet links. See Inter-Vnet LDC Channels in Oracle VM Server for SPARC 3.1 Administration Guide .
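Because each pair of vnet instances on the same vsw is directly connected, the number of inter-vnet LDC channels grows quadratically. The following sketch estimates the channel count for a single vsw; the fully meshed n*(n-1)/2 formula is an illustration based on the peer-to-peer description above:

```shell
# Estimate LDC channels for n vnet instances on a single virtual switch.
n=32
inter_vnet=$(( n * (n - 1) / 2 ))   # one link per pair of vnet instances
vnet_to_vsw=$n                      # one link from each vnet to the vsw
total=$(( inter_vnet + vnet_to_vsw ))
echo "$inter_vnet inter-vnet links, $total links in total"
```

Disabling inter-vnet links removes the quadratic term, which is why it helps at large domain counts.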
If you have made any configuration changes since last saving a configuration to the SC, before you attempt to power off or power cycle an Oracle VM Server for SPARC system, make sure that you save the latest configuration that you want to keep.
Because no other domains are bound, the firmware automatically powers off the system.
Because no other domains are bound, the firmware automatically power cycles the system before rebooting it. When the system restarts, it boots into the domain configuration last saved or explicitly set.
Under certain circumstances, the Logical Domains Manager rounds up the requested memory allocation to either the next largest 8-Kbyte or 4-Mbyte multiple. The following example shows sample output of the ldm list-domain -l command, where the constraint value is smaller than the actual allocated size:
Memory:
          Constraints: 1965 M
          raddr            paddr5           size
          0x1000000        0x291000000      1968M
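The rounding shown above can be reproduced with simple integer arithmetic (a sketch of what the Logical Domains Manager does internally; the 4-Mbyte alignment value comes from the text above):

```shell
# Round a requested memory size (in Mbytes) up to the next 4-Mbyte multiple.
req_mb=1965
align_mb=4
alloc_mb=$(( (req_mb + align_mb - 1) / align_mb * align_mb ))
echo "${alloc_mb}M"   # the 1965M request is allocated as 1968M
```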
Variable updates persist across a reboot but not across a power cycle unless the variable updates are either initiated from OpenBoot firmware on the control domain or followed by saving the configuration to the SC.
Note the following conditions:
When the control domain reboots, if there are no bound guest domains and no delayed reconfiguration in progress, the SC performs a power cycle of the system.
When the control domain reboots, if guest domains are bound or active (or the control domain is in the middle of a delayed reconfiguration), the SC does not perform a power cycle of the system.
Logical Domains variables for a domain can be specified using any of the following methods:
At the OpenBoot prompt.
Using the Oracle Solaris OS eeprom(1M) command.
Using the Logical Domains Manager CLI (ldm).
In a limited fashion, from the system controller (SC) using the bootmode command. This method can be used for only certain variables, and only when in the factory-default configuration.
Variable updates that are made by using any of these methods should persist across reboots of the domain and should apply to any subsequent domain configurations that are saved to the SC.
In Oracle VM Server for SPARC 3.1 software, variable updates do not persist as expected in a few cases:
All methods of updating a variable persist across reboots of that domain. However, they do not persist across a power cycle of the system unless a subsequent logical domain configuration is saved to the SC.
However, in the control domain, updates made by using either OpenBoot firmware commands or the eeprom command do persist across a power cycle of the system, even without subsequently saving a new logical domain configuration to the SC. The eeprom command supports this behavior on SPARC T5, SPARC M5, and SPARC M6 systems, and on SPARC T3 and SPARC T4 systems that run at least version 8.2.1 of the system firmware.
In all cases, when reverting to the factory-default configuration from a configuration generated by the Logical Domains Manager, all Logical Domains variables start with their default values.
If you are concerned about Logical Domains variable changes, do one of the following:
Bring the system to the ok prompt and update the variables.
Update the variables while the Logical Domains Manager is disabled:
# svcadm disable ldmd
(update the variables)
# svcadm enable ldmd
When running Live Upgrade, perform the following steps:
# svcadm disable -t ldmd
# luactivate be3
# init 6
If you modify the time or date on a logical domain, for example, using the ntpdate command, the change persists across reboots of the domain but not across a power cycle of the host. To ensure that time changes persist, save the configuration with the time change to the SP and boot from that configuration.
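For example, after correcting the time, the configuration can be saved to the SP as follows (the configuration name new-config is an arbitrary example):

```shell
# Save the current configuration, including the time change, to the SP,
# so that the change survives a power cycle of the host.
ldm add-spconfig new-config
```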
The following Bug IDs have been filed to resolve these issues: 15375997, 15387338, 15387606, and 15415199.
Sun Simple Network Management Protocol (SNMP) Management Agent does not support multiple domains. Only a single global domain is supported.
When the primary domain is in a delayed reconfiguration state, resources that are managed by Oracle VM Server for SPARC are power-managed only after the primary domain reboots. Resources that are managed directly by the OS, such as CPUs that are managed by the Solaris Power Aware Dispatcher, are not affected by this state.
Discrete cryptographic units are present only on UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 systems.
Cryptographic unit dynamic reconfiguration (DR) enables you to add and remove cryptographic units from a domain. The Logical Domains Manager automatically detects whether a domain allows cryptographic unit DR, and enables the functionality only for those domains. In addition, CPU DR is no longer disabled in domains that have cryptographic units bound and that run an appropriate version of the Oracle Solaris OS.
Veritas Volume Manager (VxVM) 5.x on the Oracle Solaris 10 OS is the only supported (tested) combination for the Oracle VM Server for SPARC P2V tool. Older versions of VxVM, such as 3.x and 4.x running on the Solaris 8 and Solaris 9 operating systems, might also work. In those cases, the first boot after running the ldmp2v convert command might show warning messages from the VxVM drivers. You can ignore these messages. You can remove the old VRTS* packages after the guest domain has booted.
Boot device: disk0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: normaal
Configuring devices.
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxdmp: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxdmp'
WARNING: vxdmp: unable to resolve dependency, module 'misc/ted' not found
/kernel/drv/sparcv9/vxio: undefined symbol 'romp'
WARNING: mod_load: cannot load module 'vxio'
WARNING: vxio: unable to resolve dependency, module 'drv/vxdmp' not found
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
WARNING: VxVM vxspec V-5-0-0 vxspec: vxio not loaded. Aborting vxspec load
WARNING: vxspec : CANNOT INITIALIZE vxio DRIVER
NOTICE: VxVM not started
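The package cleanup mentioned above can be sketched as follows (the package names shown are typical VRTS package names and might differ in your installation):

```shell
# List the installed Veritas packages, then remove the ones that remain.
pkginfo | grep VRTS
pkgrm VRTSvxvm VRTSvmman
```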
For information about Oracle's hard partitioning requirements for software licenses, see Partitioning: Server/Hardware Partitioning.
The Oracle Solaris Installer does not present the Upgrade option when the partition tag of the slice that holds the root (/) file system is not set to root. This situation occurs if the tag is not explicitly set when labeling the guest's boot disk. You can use the format command to set the partition tag as follows:
AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskImage-10GB cyl 282 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c4t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@2,0
       2. c4t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@400/pci@0/pci@1/scsi@0/sd@3,0
Specify disk (enter its number)[0]: 0
selecting c0d0
[disk formatted, no defect list found]
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders   Size            Blocks
  0 unassigned    wm       0          0         (0/0/0)          0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y
partition>
Due to the way in which the Oracle Solaris OS handles the metadata for managing dynamically added memory, you might later be able to remove only the entire block of memory that was previously dynamically added rather than a proper subset of that memory.
This situation could occur if a domain with a small memory size is dynamically grown to a much larger size, as shown in the following example.
primary# ldm list ldom1
NAME      STATE    FLAGS  CONS  VCPU  MEMORY  UTIL  UPTIME
ldom1     active   -n--   5000  2     2G      0.4%  23h
primary# ldm add-mem 16G ldom1
primary# ldm rm-mem 8G ldom1
Memory removal failed because all of the memory is in use.
primary# ldm rm-mem 16G ldom1
primary# ldm list ldom1
NAME      STATE    FLAGS  CONS  VCPU  MEMORY  UTIL  UPTIME
ldom1     active   -n--   5000  2     2G      0.4%  23h
Workaround: Use the ldm add-mem command to sequentially add memory in smaller chunks rather than in chunks larger than you might want to remove in the future.
Recovery: Perform one of the following actions:
Stop the domain, remove the memory, and then restart the domain.
Reboot the domain, which causes the Oracle Solaris OS to re-allocate its memory management metadata such that the previously added memory can now be removed dynamically in smaller chunks.
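The workaround above can be sketched as follows (the domain name ldom1 and the 4-Gbyte chunk size are examples):

```shell
# Add 16 Gbytes in 4-Gbyte chunks so that subsets can be removed later.
for i in 1 2 3 4; do
    ldm add-mem 4G ldom1
done
```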
Restoring ufsdump archives on a virtual disk that is backed by a file on a UFS file system might cause the system to hang. In such a case, the ldmp2v prepare command will exit. You might encounter this problem when you manually restore ufsdump archives in preparation for the ldmp2v prepare -R /altroot command when the virtual disk is a file on a UFS file system. For compatibility with previously created ufsdump archives, you can still use the ldmp2v prepare command to restore ufsdump archives on virtual disks that are not backed by a file on a UFS file system. However, the use of ufsdump archives is not recommended.
Do not attempt to perform more than one CPU configuration operation on the primary domain while it is in a delayed reconfiguration. If you attempt more CPU configuration requests, they will be rejected.
Workaround: Perform one of the following actions:
Cancel the delayed reconfiguration, start another one, and request the configuration changes that were lost from the previous delayed reconfiguration.
Reboot the control domain with the incorrect CPU count and then make the allocation corrections after the domain reboots.
The Oracle VM Server for SPARC 3.0 software inadvertently exposed a capability to assign multiple virtual switches to a single network adapter. This capability is intended only to be used in a specific way by the Oracle VM Manager software.
The Oracle VM Server for SPARC 3.1 software restored the original behavior, which prevents you from assigning multiple virtual switches to a single network adapter. However, if you configured your Oracle VM Server for SPARC 3.0 system to assign multiple virtual switches to a single network adapter, the ldmd daemon does not start when you upgrade to Oracle VM Server for SPARC 3.1.
Workaround: Perform the following steps:
Temporarily re-enable this capability on your Oracle VM Server for SPARC 3.1 system to enable the ldmd daemon to start.
# svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=true
# svcadm refresh ldmd
# svcadm disable ldmd
# svcadm enable ldmd
Update your configuration to assign only one virtual switch to a network device.
Disable this capability on your Oracle VM Server for SPARC 3.1 system.
# svccfg -s ldoms/ldmd setprop ldmd/ovm_manager=false
# svcadm refresh ldmd
# svcadm disable ldmd
# svcadm enable ldmd
It is important that you set the ovm_manager property to false because this property might introduce other side effects in future Oracle VM Server for SPARC releases.
Historically, the Oracle Solaris OS has been installed on a boot disk configured with an SMI VTOC disk label. Starting with the Oracle Solaris 11.1 OS, the OS is installed on a boot disk that is configured with an extensible firmware interface (EFI) GUID partition table (GPT) disk label by default. If the firmware does not support EFI, the disk is configured with an SMI VTOC disk label instead. This situation applies only to SPARC T4 servers that run at least system firmware version 8.4.0, to SPARC T5, SPARC M5, or SPARC M6 servers that run at least system firmware version 9.1.0, and to Fujitsu M10 systems that run at least XCP2230.
The following servers cannot boot from a disk that has an EFI GPT disk label:
UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 servers no matter which system firmware version is used
SPARC T4 servers that run system firmware versions prior to 8.4.0
SPARC T5, SPARC M5, and SPARC M6 servers that run system firmware versions prior to 9.1.0
Fujitsu M10 systems that run XCP versions prior to 2230
So, an Oracle Solaris 11.1 boot disk that is created on an up-to-date SPARC T4, SPARC T5, SPARC M5, SPARC M6, or Fujitsu M10 system cannot be used on older servers or on servers that run older firmware.
This limitation restricts the ability to use either cold or live migration to move a domain from a recent server to an older server. It also prevents you from using an EFI GPT boot disk image on an older server.
To determine whether an Oracle Solaris 11.1 boot disk is compatible with your server and its firmware, ensure that the Oracle Solaris 11.1 OS is installed on a disk that is configured with an SMI VTOC disk label.
To maintain backward compatibility with systems that run older firmware, use one of the following procedures. Otherwise, the boot disk uses the EFI GPT disk label by default. These procedures show how to ensure that the Oracle Solaris 11.1 OS is installed on a boot disk with an SMI VTOC disk label on a SPARC T4 server with at least system firmware version 8.4.0, on a SPARC T5, SPARC M5, or SPARC M6 server with at least system firmware version 9.1.0, and on a Fujitsu M10 system with at least XCP version 2230.
Solution 1: Remove the gpt property so that the firmware does not report that it supports EFI.
From the OpenBoot PROM prompt, disable automatic booting and reset the system to be installed.
ok setenv auto-boot? false
ok reset-all
After the system resets, it returns to the ok prompt.
Change to the /packages/disk-label directory and remove the gpt property.
ok cd /packages/disk-label
ok " gpt" delete-property
Begin the Oracle Solaris 11.1 OS installation.
For example, perform a network installation:
ok boot net - install
Solution 2: Use the format -e command to write an SMI VTOC label on the disk to be installed with the Oracle Solaris 11.1 OS.
Write an SMI VTOC label on the disk.
For example, select the label option and specify the SMI label:
# format -e c1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Configure the disk with a slice 0 and slice 2 that cover the entire disk.
The disk should have no other partitions. For example:
format> partition
partition> print
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 - 14086      136.71GB    (14087/0/0) 286698624
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086      136.71GB    (14087/0/0) 286698624
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
Re-write the SMI VTOC disk label.
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y
Configure your Oracle Solaris Automatic Installer (AI) to install the Oracle Solaris OS on slice 0 of the boot disk.
Change the <disk> excerpt in the AI manifest as follows:
<target>
  <disk whole_disk="true">
    <disk_keyword key="boot_disk"/>
    <slice name="0" in_zpool="rpool"/>
  </disk>
  [...]
</target>
Perform the installation of the Oracle Solaris 11.1 OS.