This section contains requirements and details about products managed by, or associated with, Ops Center.
Ensure that the target server meets the following requirements for provisioning Oracle VM Server for SPARC software.
The following hardware is supported for Oracle VM Server for SPARC:
SPARC T3 servers
Oracle SPARC T3-1 Server
Oracle SPARC T3-1B Server
Oracle SPARC T3-2 Server
Oracle SPARC T3-4 Server
Netra SPARC T3-1 Server
Netra SPARC T3-1BA Server
Note:
Oracle VM Server for SPARC 2.0 is supported only on the Oracle SPARC T3-1 Server.
Note:
Only Oracle VM Server for SPARC 2.1 is supported on SPARC T3 servers.
UltraSPARC T2 Plus based servers
Sun SPARC Enterprise T5140 and T5240 Servers
Sun SPARC Enterprise T5440 Server
Sun Blade T6340 Server Module
Netra T5440 Server
UltraSPARC T2 based servers
Sun SPARC Enterprise T5120 and T5220 Servers
Sun Blade T6320 Server Module
Netra CP3260 ATCA Blade Server
Netra T5220 Server
UltraSPARC T1 based servers
Note:
Starting with Logical Domains version 1.3, UltraSPARC T1 based servers are not supported. Earlier versions are supported on these servers.
Sun Fire or Sun SPARC Enterprise T2000 Server
Sun Fire or Sun SPARC Enterprise T1000 Server
Netra T2000 Server
Netra CP3060 ATCA Blade
Sun Blade T6300 Server Module
The following operating systems are supported for Oracle VM Server for SPARC:
Control domain – At least Oracle Solaris 10 10/09
Logical domain – At least Oracle Solaris 10 5/08
To use all the features of the Oracle VM Server for SPARC 2.0 and 2.1 software, the operating system on all domains must be at least Oracle Solaris 10 9/10 OS.
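As a quick check of this baseline, the first line of /etc/release identifies the installed Solaris update. The following sketch parses a sample release string; the sample line is illustrative, and on a live domain you would read it with `head -1 /etc/release`.

```shell
# Sketch: check whether a Solaris release string meets the 9/10 baseline.
# The sample line below is illustrative; on a live domain use:
#   release=$(head -1 /etc/release)
release="                   Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC"
case "$release" in
    *"Solaris 10 9/10"*) echo "meets the Solaris 10 9/10 baseline" ;;
    *) echo "older than Solaris 10 9/10 - check the patch requirements" ;;
esac
```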
The following patches need to be installed on supported operating systems:
Oracle Solaris 10 5/09 – 141778-02 and 139983-04
Oracle Solaris 10 10/08 – 139555-08
Oracle Solaris 10 5/08 – 139555-08
For Oracle VM Server for SPARC 2.0 and 2.1, the following patches need to be installed on systems running an OS version earlier than Oracle Solaris 10 9/10:
141514-02 – Control domain
142909-17 – Control domain and logical domain
The firmware requirements depend on the hardware that is used for Oracle VM Server for SPARC. The first public release of firmware to include Oracle VM Server for SPARC support is System Firmware Version 6.4.x. To enable all the features of Oracle VM Server for SPARC 2.0, at least firmware version 8.0.0 is required.
The following system firmware patches are required for use with Oracle VM Server for SPARC on supported servers.
| System Firmware Version | Patches | Supported Servers |
|---|---|---|
| 6.7.4 | 139434-03 | Sun Fire and SPARC Enterprise T2000 Servers |
| 6.7.4 | 139435-03 | Sun Fire and SPARC Enterprise T1000 Servers |
| 6.7.4 | 139436-02 | Netra T2000 Server |
| 6.7.4 | 139437-02 | Netra CP3060 ATCA Blade |
| 6.7.4 | 139438-03 | Sun Blade T6300 Server Module |
| 7.2.2 | 139439-04 | Sun SPARC Enterprise T5120 and T5220 Servers |
| 7.2.2 | 139440-03 | Sun Blade T6320 Server Module |
| 7.2.2 | 139442-06 | Netra T5220 Server |
| 7.2.2 | 139441 | Sun Netra CP3260 ATCA Blade Server |
| 7.2.2 | 139444-03 | Sun SPARC Enterprise T5140 and T5240 Servers |
| 7.2.2 | 139445-04 | Netra T5440 Server |
| 7.2.2 | 139446-03 | Sun SPARC Enterprise T5440 Server |
| 7.2.2 | 139448-02 | Sun Blade T6340 Server Module |
For Oracle VM Server for SPARC 2.1, you must have the following firmware versions:
Firmware version 7.4.0 for UltraSPARC T2 and UltraSPARC T2 Plus Servers
Firmware version 8.1.0 for SPARC T3 Servers
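One way to check the installed firmware is to extract the version string from the system's firmware banner and compare it against these requirements. The sketch below parses a sample banner line; the sample text and date are illustrative, and the exact wording of `prtdiag -v` or service processor output varies by platform.

```shell
# Sketch: pull a firmware version string out of banner text. The sample
# line is illustrative; the real wording varies by platform and tool.
banner="Sun System Firmware 7.4.0 2011/04/20 11:15"
fw=$(echo "$banner" | sed -n 's/.*Firmware \([0-9][0-9.]*\).*/\1/p')
echo "installed firmware: $fw"
```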
The following system firmware patches are required for use with Oracle VM Server for SPARC 2.1:
| Patches | Supported Servers |
|---|---|
| 145673-04 | Sun SPARC Enterprise T5120 and T5220 Servers |
| 145674-04 | Sun Blade T6320 Server Module |
| 145675-05 | Netra T5220 Server |
| 145676-04 | Sun SPARC Enterprise T5140 and T5240 Servers |
| 145677-04 | Netra T5440 Server |
| 145678-04 | Sun SPARC Enterprise T5440 Server |
| 145679-04 | Sun Blade T6340 Server Module |
| 145680-04 | Sun Netra T6340 Server Module |
| 144609-07 | Sun Netra CP3260 Blade |
| 145679-04 | Sun Netra T6340 Server Module |
| 145665-04 | SPARC T3-1 Server |
| 145666-05 | SPARC T3-1B Server |
| 145667-05 | SPARC T3-2 Server |
| 145668-04 | SPARC T3-4 Server |
| 145669-04 | Netra SPARC T3-1 Server |
| 145670-02 | Netra SPARC T3-1BA Server |
Oracle VM Server provisioning is supported only on Oracle Solaris SPARC or x86 Proxy Controllers. The Proxy Controller can be remote or co-located, but it must be the Proxy Controller that was used during provisioning.
If the Proxy Controller is remote, the Enterprise Controller can run the Linux or Oracle Solaris OS. If the Proxy Controller is co-located, the Enterprise Controller and co-located Proxy Controller must run on an Oracle Solaris OS.
To successfully create and update alternate boot environments, install the latest Oracle Solaris Live Upgrade packages and patches. The latest packages and patches ensure that you have all the latest bug fixes and features. In addition, you should verify that the disk space and format are sufficient for an alternate boot environment.
Live Upgrade packages and patches are available for each Oracle Solaris software release beginning with Oracle Solaris 9 OS. Use the packages and patches that are appropriate for your software instance. To use Live Upgrade with Oracle Solaris 8 OS, follow the special patching instructions.
Review the following sections and verify that all the packages and patches that are relevant to your system are installed and that the disk is properly formatted before creating a new boot environment.
The following packages are required to successfully upgrade a system using an ABE.
SUNWadmap
SUNWadmlib-sysid
SUNWadmr
SUNWlibC
SUNWgzip (only for Oracle Solaris 10 3/05)
SUNWj5rt (required if you upgrade and use CD media)
If you installed Oracle Solaris 10 using any of the following software groups, you should have the required packages:
Entire Oracle Solaris Software Group Plus OEM Support
Entire Oracle Solaris Software Group
Developer Oracle Solaris Software Group
End User Oracle Solaris Software Group
If you install one of these Software Groups, then you might not have all the required packages:
Core System Support Software Group
Reduced Network Support Software Group
Perform the following steps to check for packages on your system:
Open a terminal window.
Type the following command.
% pkginfo package_name
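The single-package check above extends naturally to the whole prerequisite list. A minimal sketch, assuming the Solaris /usr/bin/pkginfo utility:

```shell
# Sketch: check every Live Upgrade prerequisite package in one pass.
# Assumes Solaris pkginfo(1); on other systems every check simply fails.
required="SUNWadmap SUNWadmlib-sysid SUNWadmr SUNWlibC"
missing=""
for pkg in $required; do
    /usr/bin/pkginfo "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
done
if [ -n "$missing" ]; then
    echo "missing packages:$missing"
else
    echo "all required packages installed"
fi
```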
The following steps are in the Oracle Solaris Live Upgrade Software: Patch Requirements (Doc ID 1004881.1) document in My Oracle Support (MOS):
Become superuser or assume an equivalent role.
From the MOS web site, follow the instructions in the document to remove and add Oracle Solaris Live Upgrade packages. The three Oracle Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, are required to use the software. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Oracle Solaris Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Oracle Solaris 10 8/07 (update 4) release. If you are using Oracle Solaris Live Upgrade packages from a release earlier than Oracle Solaris 10 8/07, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Oracle Solaris Live Upgrade packages from the release to which you are upgrading. For instructions, see Installing Oracle Solaris Live Upgrade.
Before running Oracle Solaris Live Upgrade, install the required patches to ensure that you have all the latest bug fixes and new features in the release. Search for requirements document 1004881.1 (formerly 206844 or 72099) on the MOS web site to get the latest list of patches.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
From the MOS web site, obtain the list of patches.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch names with a space.
Note:
The patches need to be applied in the order that is specified in the requirements document.
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required or Oracle Solaris Live Upgrade fails.
# init 6
You now have the packages and patches necessary for a successful migration.
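The download-and-patch steps above can be sketched as a single script. The patch IDs below are hypothetical placeholders, not a real MOS list; take the actual, ordered list from MOS document 1004881.1 and preserve its order.

```shell
# Consolidated sketch of the patching steps above. The patch IDs are
# hypothetical placeholders; take the real, ordered list from MOS
# document 1004881.1 and apply them in that order.
patchdir=/var/tmp/lupatches
patches="111111-11 222222-22"   # placeholder IDs, not a real MOS list
mkdir -p "$patchdir"
cd "$patchdir" || exit 1
for p in $patches; do
    # patchadd exists only on Solaris; report instead of aborting elsewhere
    /usr/sbin/patchadd "$p" 2>/dev/null || echo "patchadd $p not applied"
done
```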
The list of required patches is channel-specific. Additional patches are required for a system that is running zones. The latest list of required patches is available in the Oracle Solaris Live Upgrade Software: Patch Requirements (Doc ID 1004881.1) document in MOS. This page contains information about Live Upgrade that is not relevant to patching in Enterprise Manager Ops Center. The relevant information is located in the section titled "Patch Lists for full Live Upgrade Feature Support".
The following patch is required in addition to the Oracle Solaris Live Upgrade patches detailed in MOS:
125952-1 – Fixes a webconsole bug (CR 6751843) in Oracle Solaris that causes shutdown and the Live Upgrade activation operation to hang for a couple of hours.
Patches are no longer available for Oracle Solaris 8 OS Live Upgrade packages. To use Live Upgrade on an Oracle Solaris 8 OS, you must use the Oracle Solaris 10 OS Live Upgrade packages and patches with your Oracle Solaris 8 software.
Use the following procedure to enable Oracle Solaris 8 OS Live Upgrade to work with Enterprise Manager Ops Center.
Remove the Oracle Solaris 8 OS Live Upgrade packages.
# pkgrm SUNWluu
# pkgrm SUNWlur
# rm /etc/default/lu
# rm -r /etc/lu
Install Oracle Solaris 8 OS software patches according to the Oracle Solaris Live Upgrade Software: Patch Requirements (Doc ID 1004881.1). The Oracle Solaris 8 OS patch list is in Section 2, "Using Live Upgrade to Manage or Patch the Boot Environments of a System."
Note:
You should always install the latest available revision of the patch utilities patch before applying other patches. Apply the highest available revision of the following patches:
108987-18 patchadd/patchrm patch
110380-06 libadm patches
110934-26 pkg utilities patch
112396-03 fgrep patches
111111-07 nawk patches
112097-07 cpio patch
111879-08 prodreg patches for Live Upgrade
109147-44 linker patches
108434-25 SUNWlibC patches
108435-25 SUNWlibCx patches
112345-04 Pax patches
111400-04 kcms_server and kcms_configure patch
112279-03 or higher ALC Procedural script patch
114251-01 or higher ALC Procedural script patch (Oracle Solaris 8 2/02)
108977-19 libsmedia patch
108974-57 sd and uata driver patch
108968-12 vol/vold/rmmount/dev_pcmem.so.1 patch
Install the platform-specific Oracle Solaris 10 5/09 OS Live Upgrade packages.
pkgadd -d <src-dir> SUNWlucfg
pkgadd -d <src-dir> SUNWlur
pkgadd -d <src-dir> SUNWluu
Install the platform-specific Oracle Solaris Live Upgrade patch:
SPARC – At least Patch 121430-53
x86 – At least Patch 121431-54
The following example uses the SPARC platform patch.
patchadd 121430-41
Set up an alternate boot environment (ABE). You can create an ABE by running an update job that contains a pre-action script, as described on the "Creating an ABE" page, or from the command line. The following example shows how to create an ABE from the command line.
lucreate -c BE1 -n BE2 -m /:c0t1d0s0:ufs
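After lucreate completes, lustatus(1M) should list both environments, with the new one marked complete. The sketch below parses a sample lustatus-style listing; the sample text is illustrative and was not captured from a real system.

```shell
# Sketch: confirm from lustatus-style output that BE2 is complete.
# Sample text is illustrative; on a live system run: lustatus
# Columns: BE name, Is Complete, Active Now, Active On Reboot, Can Delete
sample="BE1 yes yes yes no -
BE2 yes no no yes -"
echo "$sample" | awk '$1 == "BE2" && $2 == "yes" { print "BE2 is complete" }'
```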
Note:
Install the latest available revision of the utilities patch before applying other patches.
You must follow the general disk space requirements for an upgrade. When you create a boot environment, the space requirements are calculated. You can estimate the file system size required to create a boot environment by starting to create a new boot environment and then canceling the process after the space requirements are calculated.
The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.
The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly:
Identify slices large enough to hold the file systems to be copied.
Identify file systems that contain directories that you want to share between boot environments rather than copy. If you want a directory to be shared, you need to create a new boot environment with the directory put on its own slice. The directory is then a file system and can be shared with future boot environments. For more information about creating separate file systems for sharing, see "Guidelines for Selecting Slices for Shareable File Systems" in the Oracle Solaris Live Upgrade documentation.
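The slice-size check in the first guideline can be done mechanically. The numbers below are illustrative; on a live system you would take the used space from `df -k /` and the slice size from the prtvtoc sector counts.

```shell
# Sketch: verify that a candidate slice is large enough to hold a copy of
# the current root file system. The numbers are illustrative; on a live
# system take used KB from "df -k /" and the slice size from prtvtoc.
root_used_kb=6291456      # e.g. df -k / | awk 'NR==2 {print $3}'
slice_size_kb=16777216    # e.g. computed from the prtvtoc sector count
if [ "$slice_size_kb" -gt "$root_used_kb" ]; then
    echo "slice can hold a copy of /"
else
    echo "slice too small"
fi
```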
Software and Storage Libraries can reside on the shares of an NFS server. These libraries can be created by one of the following methods:
Create the library, attach the virtual host to the Enterprise Manager Ops Center software, and then associate the storage library with the virtual host's virtual pool.
Use the Enterprise Manager Ops Center software to create a library and add it to a virtual host's virtual pool.
Note:
The NFS protocol requires agreement on the Domain Name System (DNS) domain that the NFS server and NFS clients use. The server and a client must agree on the identity of the authorized users accessing the share.
Because the Enterprise Controller does not mount the NFS share, install the NFS server on a system that is close to the systems where the NFS share will be used, that is, the systems that host global zones and Oracle VM Server for SPARC.
On the NFS server, edit the /etc/default/nfs file. Locate the NFSMAPID_DOMAIN variable and change its value to the domain name.
Define a group and user that are common to both the NFS server and the NFS clients. Create the group account with GID 60 and the user with UID 60. For example, use the following commands to create a group and a user named opsctr.
Create the opsctr group:
# groupadd -g 60 opsctr
Create the opsctr user:
# useradd -u 60 -g 60 -d / -c 'Ops Center user' -s /bin/true opsctr
Create the directory that you want to share, and set its ownership and permission modes. For example:
# mkdir -p /export/lib/libX
# chown opsctr:opsctr /export/lib/libX
# chmod 777 /export/lib/libX
Edit the /etc/dfs/dfstab file on the NFS server. Add an entry to share the directory with the correct share options. For example, to share a directory named /export/lib/libX, create the following entry:
share -F nfs -o rw,root=<domain name>,anon=60 -d "Share 0" /export/lib/libX
where <domain name> is the domain name that you specified in the /etc/default/nfs file. If you want the NFS share to be accessible outside the domain, use the rw= option to specify the list of allowed domains.
share -F nfs -o rw=<domain name 1>,<domain name 2>,anon=60 -d "Share 0" /path/to/share
Replace the <domain name n> entries with the correct domain names.
Share the directory and verify that the directory is shared. For example:
# shareall
# share
-    /export/lib/libX   rw,root=<domain>,anon=60   "Share 0"
The share now allows the opsctr user account and the root user on the NFS clients to have write privileges. The special options allow write access to the directory, allow the domain to access the share as the root user, and map the anonymous user to UID 60 (opsctr) on the clients.
Note:
Add the domain name to the /etc/nsswitch.conf file so that root=<domain name> is effective. If possible, use DNS instead of Network Information Service (NIS). DNS maps host names to IP addresses. Change the hosts line to dns in the /etc/nsswitch.conf file.
After setting up a share on the NFS server, prepare each NFS client to mount the share.
On each NFS client, edit the /etc/default/nfs file. Locate the NFSMAPID_DOMAIN variable and change its value to the domain name.
Verify that the opsctr user account has been created:
# grep opsctr /etc/passwd
A successful result is similar to the following:
opsctr:x:60:60:opsctr user:/:
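The UID and GID fields of that entry can also be checked mechanically. A sketch that parses a passwd-format line (the sample entry below is illustrative):

```shell
# Sketch: confirm that a passwd entry for opsctr carries UID 60 and GID 60.
# Sample line is illustrative; on a client use: grep '^opsctr:' /etc/passwd
entry="opsctr:x:60:60:Ops Center user:/:/bin/true"
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
[ "$uid" = 60 ] && [ "$gid" = 60 ] && echo "opsctr UID/GID correct"
```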
Verify that the NFS share is visible on the client.
# showmount -e <server-name>
export list for <server-name>:
/export/virtlib/lib0 (everyone)
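A script can confirm the export the same way by scanning the showmount output. The sample output below is illustrative:

```shell
# Sketch: confirm from showmount -e output that the expected library share
# is exported. The sample output is illustrative.
exports="export list for nfsserver:
/export/virtlib/lib0 (everyone)"
echo "$exports" | grep -q '^/export/virtlib/lib0 ' && echo "share is exported"
```

Once the share is visible, the client can mount it in the usual way, for example with `mount -F nfs` on Oracle Solaris.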
A service tag is a small XML file that represents a specific asset (software or hardware) and contains data about that asset. Assets with service tags can be discovered using automatic discovery or the Discover and Manage Assets wizard.
A service tag contains two sets of data. The first set, which follows, contains data specific to the asset. The second set contains data about the platform on which the product is installed or, for hardware service tags, data about the hardware asset.
Unique identifier for that instance of the product
Name of the product
Product part number
Vendor of the product
Version of the product
Name of product parent
Unique identifier for product parent
Supplemental product identifier
User-defined product description
Day and time that the product was installed
Application which created the tag
Name of container (zone) in which the product is installed
Agentry Identifier
Agentry Version
Version of the file containing the product registration information
The platform data set includes the following:
System Host Name
System Hostid
Operating System Name
Operating System Version
Operating System Architecture
System Model Name
System Manufacturer
CPU Manufacturer
Serial Number
Amount of physical memory
Number of physical CPUs
Number of CPU cores
Number of CPU threads
CPU name
CPU clock rate
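The fields above are stored as XML elements inside the tag file. The fragment below is purely illustrative (the element names are assumptions, not the official service tag schema), and the one-liner shows how such a field could be pulled out for scripting:

```shell
# Illustrative only: a minimal service-tag-like XML record; the element
# names here are assumptions, not the official service tag schema.
tag='<service_tag>
  <product_name>Oracle Solaris 10 Operating System</product_name>
  <product_version>10</product_version>
  <product_vendor>Oracle</product_vendor>
</service_tag>'
# Extract one field with sed (a real tool would use a proper XML parser)
name=$(echo "$tag" | sed -n 's:.*<product_name>\(.*\)</product_name>.*:\1:p')
echo "product: $name"
```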