Chapter 1 Configuring Oracle VM Server After Installation
After you successfully install Oracle VM Server, you can perform configuration tasks to customize your environment. These configuration tasks include installing vendor-specific Oracle VM Storage Connect plug-ins, enabling multipathing, optionally installing and configuring diagnostics tools, and changing the memory size of the management domain. If you are using Oracle VM Server for SPARC, you can also create local ZFS volumes or configure a secondary service domain.
1.1 Installing Oracle VM Storage Connect plug-ins
Vendor-specific (non-generic) Oracle VM Storage Connect plug-ins are available directly from your storage vendor. Generic Oracle VM Storage Connect plug-ins are installed by default during the installation of Oracle VM Server, and no further action is required if you choose to use only the generic plug-ins. Vendor-specific Oracle VM Storage Connect plug-ins usually provide additional management functionality that you can take advantage of from within Oracle VM Manager.
You can find more information about Oracle VM Storage Connect plug-ins at:
https://www.oracle.com/virtualization/storage-connect-partner-program.html
Oracle VM Storage Connect plug-ins are delivered as RPMs, usually a single RPM, although your storage vendor may provide multiple RPMs. When you have the Oracle VM Storage Connect plug-in RPM from your storage vendor, install it on your Oracle VM Servers. You must install the RPM on all the Oracle VM Servers that will use the particular storage.
To install the Oracle VM Storage Connect plug-in RPM, enter the following on the Oracle VM Server command line:
# rpm -ivh filename.rpm
If you are upgrading an existing Oracle VM Storage Connect plug-in, use the RPM upgrade parameter:
# rpm -Uvh filename.rpm
If you are installing or upgrading an Oracle VM Storage Connect plug-in on an Oracle VM Server already managed by Oracle VM Manager, rediscover the Oracle VM Server to update the database repository with the latest configuration information about the Oracle VM Server.
Read your storage vendor's installation and configuration documentation for the Oracle VM Storage Connect plug-in before you install and use it. Extra configuration steps may be required that are not documented here.
1.2 Enabling Multipath I/O Support
If user action is required to enable multipathing, this section explains how to do so. The required steps depend on the storage hardware in use, so the steps below are intended as a guideline; priority should be given to the SAN hardware documentation. Note that some guidelines for configuring multipathing on SPARC hardware have already been provided in the Installing Oracle VM Server on SPARC Hardware section of the Oracle VM Installation and Upgrade Guide. Not all steps apply to every environment. Consult the SAN hardware vendor documentation for a complete list of steps, the order in which to run them, and their relevance to your specific environment.
- Design and document the multipathing configuration you intend to apply to the SAN hardware used in your Oracle VM environment.
- Ensure that the drivers for your Host Bus Adapters (HBAs) are present. If not, install the drivers.
- Configure the appropriate zoning on the Fibre Channel switches.
- Configure LUN masking on the storage arrays.
- Configure path optimization features (ALUA or similar) on your disk subsystem, if so instructed by your vendor's documentation.
- Check the fabric information on each Oracle VM Server that has access to the SAN hardware. Use multipath -ll and related commands.
- Make the necessary changes to the /etc/multipath.conf file on the Oracle VM Servers.
  Note: You must make the exact same changes to the multipath configuration file on all Oracle VM Servers in your environment.
  Important: It is critical that the configuration parameter user_friendly_names remain set to no within the /etc/multipath.conf configuration file.
  Important: Under the multipath section, the multipaths configuration subsection is not supported within the /etc/multipath.conf configuration file.
- Restart the multipath daemon, multipathd.
- Check the fabric information again to verify the configuration.
- If instructed by the vendor documentation, rebuild initrd.
- Reboot the Oracle VM Servers to verify that the SAN and multipathing configuration come up after a restart.
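As an illustration of the /etc/multipath.conf step above, a minimal sketch of the relevant defaults section is shown below. This is not a complete file: the device sections, blacklists, and any further settings must come from your SAN vendor's documentation.

```
defaults {
    # Required by Oracle VM: device names must remain WWID-based
    user_friendly_names no
}
# Do not add a "multipaths" subsection under the multipath section;
# it is not supported by Oracle VM.
```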
For detailed information and instructions, consult the SAN hardware vendor documentation.
Booting from a multipath SAN is supported.
1.3 Configuring Software RAID for Storage
You can use software RAID devices for storage repositories or virtual disks. However, you must first configure these devices on Oracle VM Server before Oracle VM Manager can discover the array for storage.
As a best practice, you should evaluate software RAID devices as storage repositories in a non-production environment before using them in a production environment.
In environments where you use software RAID devices as storage repositories for server pools, unexpected behavior can occur with certain virtual machine migration operations. For example, if you clone a virtual machine and then attempt to live migrate it to an instance of Oracle VM Server in the same server pool, the migration fails with an error that indicates the virtual machine disk does not exist. In this case, you must stop the virtual machine and then move it to the appropriate instance of Oracle VM Server.
To configure software RAID devices as storage, do the following:
- Connect to Oracle VM Server as the root user.
- Ensure the local disks or multipath LUNs you want to configure as software RAID devices are available as mapped devices:
  # ls /dev/mapper
- Run the multipath -ll command to find the WWIDs for the devices, as follows:
  # multipath -ll
  device1-WWID dm-0 LSI,MR9261-8i
  size=558G features='1 queue_if_no_path' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 2:2:1:0 sdb 8:16 active ready running
  device2-WWID dm-1 LSI,MR9261-8i
  size=558G features='1 queue_if_no_path' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 2:2:2:0 sdc 8:32 active ready running
  Note: The multipathing service, multipathd, uses the underlying devices to create a single device that routes I/O from Oracle VM Server to those underlying devices. For this reason, you should not use the udev device names, such as /dev/sdb, to create a software RAID; use only the WWIDs of the devices. If you attempt to use a udev device name, an error occurs to indicate that the device is busy.
- Create a software RAID configuration with the devices:
  # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/device1-WWID /dev/mapper/device2-WWID
- Open /etc/mdadm.conf for editing.
- Comment out the DEVICE /no/device line.
- Specify each device to include in the software RAID configuration on a separate DEVICE line, as in the following example:
  DEVICE /dev/mapper/device1-WWID
  DEVICE /dev/mapper/device2-WWID
- Save and close /etc/mdadm.conf.
- Run the following command to scan for software RAID devices and include them in mdadm.conf:
  # mdadm --detail --scan >> /etc/mdadm.conf
  Note: This command is optional. However, including the software RAID devices in mdadm.conf helps the system assemble them at boot time. If any software RAID devices already exist, this command creates duplicate entries for them in mdadm.conf. In this case, you should use a different method to include the new software RAID device, as in the following example:
  # mdadm --detail --scan
  ARRAY /dev/md0 metadata=1.2 name=hostname UUID=RAID1_UUID
  ARRAY /dev/md1 metadata=1.2 name=hostname UUID=RAID2_UUID
  ARRAY /dev/md2 metadata=1.2 name=hostname UUID=RAID3_UUID
  # cp /etc/mdadm.conf /etc/mdadm.conf.backup
  # echo "ARRAY /dev/md2 metadata=1.2 name=hostname UUID=RAID3_UUID" >> /etc/mdadm.conf
- Confirm that the configuration includes the software RAID device:
  # cat /etc/mdadm.conf
  # For OVS, don't scan any devices
  #DEVICE /no/device
  DEVICE /dev/mapper/device1-WWID
  DEVICE /dev/mapper/device2-WWID
  ARRAY /dev/md0 metadata=1.2 name=hostname UUID=RAID_UUID
- Check the status of the software RAID device:
  # mdadm --detail /dev/md0
  /dev/md0:
          Version : 1.2
    Creation Time : time_stamp
       Raid Level : raid1
       Array Size : 55394112 (52.83 GiB 56.72 GB)
    Used Dev Size : 55394112 (52.83 GiB 56.72 GB)
     Raid Devices : 2
    Total Devices : 2
      Persistence : Superblock is persistent
      Update Time : time_stamp
            State : clean
   Active Devices : 2
  Working Devices : 2
   Failed Devices : 0
    Spare Devices : 0
             Name : hostname:0
             UUID : RAID_UUID
           Events : 17
      Number   Major   Minor   RaidDevice State
         0     251        0        0      active sync   /dev/dm-0
         1     251        1        1      active sync   /dev/dm-1
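The optional scan step above can create duplicate ARRAY entries. The sketch below shows one way to append an ARRAY line only when no entry for that md device exists yet; it operates on a temporary stand-in file rather than the real /etc/mdadm.conf, and the array name and UUID are placeholders:

```shell
#!/bin/sh
# Append an mdadm ARRAY line only if the config file does not already
# contain an entry for that md device (avoids the duplicates that
# "mdadm --detail --scan >> /etc/mdadm.conf" can create).
CONF=$(mktemp)   # stand-in for /etc/mdadm.conf in this sketch

add_array_line() {
    line=$1
    dev=$(printf '%s\n' "$line" | awk '{print $2}')   # e.g. /dev/md2
    if grep -q "^ARRAY $dev " "$CONF"; then
        echo "entry for $dev already present, skipping"
    else
        printf '%s\n' "$line" >> "$CONF"
        echo "added entry for $dev"
    fi
}

add_array_line "ARRAY /dev/md2 metadata=1.2 name=myhost UUID=RAID3_UUID"
# second call finds the existing entry and skips the append
add_array_line "ARRAY /dev/md2 metadata=1.2 name=myhost UUID=RAID3_UUID"
```

On a real server you would point CONF at /etc/mdadm.conf after taking a backup, as shown in the steps above.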
You can find more information about software RAID in the Oracle Linux documentation at:
http://docs.oracle.com/cd/E37670_01/E41138/html/ch18s04.html
1.3.1 Removing Software RAID Devices
You cannot use Oracle VM Manager to remove software RAID devices. You must manually remove these devices on Oracle VM Server as follows:
- Connect to Oracle VM Server as the root user.
- Stop the software RAID device:
  # mdadm --stop /dev/md0
- Remove the software RAID superblock from the devices:
  # mdadm --zero-superblock /dev/mapper/device1-WWID /dev/mapper/device2-WWID
- Remove the software RAID device from /etc/mdadm.conf.
- Remove the software RAID device from Oracle VM Manager.
  Note: After you remove the software RAID device, Oracle VM Manager displays an event with a severity of warning. The event message is similar to the following:
  Warning time_stamp storage.device.offline. Physical disk is Offline No
  Description: OVMEVT_007005D_001 Rescan storage layer on server [hostname] did not return physical disk [md-UUID] for storage array [Generic Local Storage Array]
  You can ignore this warning.
1.4 Diagnostic Tools for Oracle VM Server
As an optional post-installation step, Oracle recommends that you also install and configure diagnostics tools on all Oracle VM Servers. These tools can be used to help debug and diagnose issues such as system crashes, hanging, unscheduled reboots, and OCFS2 cluster errors. The output from these tools can be used by Oracle Support and can significantly improve resolution and response times.
Obtaining a system memory dump, vmcore, can be very useful when attempting to diagnose and resolve the root cause of an issue. To get a useful vmcore dump, a kdump service configuration is required. See Section 1.4.2, “Manually Configuring kdump for Oracle VM Server” below for more information on this.
In addition, you can install netconsole, a utility allowing system console messages to be redirected across the network to another server. See the Oracle Support Document, How to Configure "netconsole" for Oracle VM Server 3.0, for information on how to install netconsole.
Additional information on using diagnostic tools is provided in the Oracle Linux documentation. See the chapter titled Support Diagnostic Tools in the Oracle Linux Administrator's Solutions Guide.
http://docs.oracle.com/cd/E37670_01/E37355/html/ol_diag.html
1.4.1 Working with the OSWatcher Utility on Oracle VM Server
OSWatcher (oswbb) is a collection of shell scripts that collect and archive operating system and network metrics to diagnose performance issues with Oracle VM Server. OSWatcher operates as a set of background processes to gather data with standard UNIX utilities such as vmstat, netstat and iostat.
By default, OSWatcher is installed on Oracle VM Server and is enabled to run at boot. The following list describes the OSWatcher program and its main configuration file:
- The main OSWatcher program. If required, you can configure certain parameters for statistics collection. However, you should do so only if Oracle Support advises you to change the default configuration.
- The main OSWatcher configuration file. This file defines the directory where OSWatcher log files are saved, the interval between statistics collection, and the maximum amount of time to retain archived statistics. Important: It is not possible to specify a limit to the data that the OSWatcher utility collects. For this reason, you should be careful when modifying the default configuration so that the OSWatcher utility does not use all available space on the system disk.
To start, stop, and check the status of OSWatcher, use the following command:
# service oswatcher {start|stop|status|restart|reload|condrestart}
For detailed information on the data that OSWatcher collects and how to analyze the output, as well as for instructions on sending the data to Oracle Support, see the OSWatcher User Guide in the following directory on Oracle VM Server:
/usr/share/doc/oswatcher-x.x.x/
1.4.2 Manually Configuring kdump for Oracle VM Server
Although Oracle VM Server uses the robust, fault-tolerant UEK4 kernel and should rarely encounter errors that crash the entire system, a system-wide error can still result in a kernel crash. Information about the actual state of the system at the time of a kernel crash is critical to accurately debug and resolve issues. The kdump service is used to capture the memory dump from dom0 and store it on the filesystem. The service does not dump any system memory used by guest virtual machines, so the memory dump is specific to dom0 and the Xen hypervisor itself. The memory dump file that kdump generates is referred to as the vmcore file.
This section describes how to manually configure Oracle VM Server so that the kdump service is properly enabled and running, allowing you to set up and enable this service after an installation. The Oracle VM Server installer provides an option to enable kdump at installation time, where many of these steps are performed automatically. See the Kdump Setting section of the Oracle VM Manager Command Line Interface User's Guide for more information.
Checking Pre-requisite Packages
By default, the required packages to enable the kdump service are included within the Oracle VM Server installation, but it is good practice to check that these are installed before continuing with any configuration work. You can do this by running the following command:
# rpm -qa | grep kexec-tools
If the kexec-tools package is not installed, you must install it manually.
Updating the GRUB2 Configuration
Oracle VM Server makes use of GRUB2 to handle the boot process. In this step, you must configure GRUB2 to pass the crashkernel parameter to the Xen kernel at boot. This can be done by editing the /etc/default/grub file and modifying the GRUB_CMDLINE_XEN variable by appending the appropriate crashkernel parameter.
The crashkernel parameter specifies the amount of memory reserved for loading the crash kernel that generates the dump file, and the offset at which the crash kernel region begins in memory. The minimum amount of RAM that may be specified for a crash kernel is 512 MB, offset by 64 MB. This results in a configuration similar to the following:
GRUB_CMDLINE_XEN="dom0_mem=max:6144M allowsuperpage dom0_vcpus_pin \
dom0_max_vcpus=20 crashkernel=512M@64M"
This setting is sufficient for the vast majority of systems; however, on systems that use a significant number of large drivers, the crash kernel may need to be allocated more space in memory. If you force a dump and it fails to generate a core file, you may need to increase the amount of memory allocated to the crash kernel.
While UEK4 supports the crashkernel=auto option, the Xen hypervisor does not. You must specify values for the RAM reservation and offset used for the crash kernel, or the kdump service is unable to run.
When you have finished modifying /etc/default/grub, you must rebuild the system GRUB2 configuration that is used at boot time. This is done by running:
# grub2-mkconfig -o /boot/grub2/grub.cfg
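As a quick sanity check of the GRUB edit, the crashkernel setting can be parsed back out of the GRUB_CMDLINE_XEN line. The sketch below runs against a sample line rather than the live /etc/default/grub:

```shell
# Extract the crashkernel=SIZE@OFFSET token from a GRUB_CMDLINE_XEN
# line. On a real server, read the line from /etc/default/grub.
LINE='GRUB_CMDLINE_XEN="dom0_mem=max:6144M allowsuperpage dom0_vcpus_pin dom0_max_vcpus=20 crashkernel=512M@64M"'
CRASH=$(printf '%s\n' "$LINE" | sed -n 's/.*\(crashkernel=[^" ]*\).*/\1/p')
echo "$CRASH"   # → crashkernel=512M@64M
```

If the command prints nothing, the crashkernel parameter is missing from the line and the kdump crash kernel cannot be reserved at boot.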
Optionally Preparing a Local Filesystem to Store Dump Files
Kdump is able to store vmcore files in a variety of locations, including network-accessible filesystems. By default, vmcore files are stored in /var/crash/, but this may not be appropriate depending on your disk partitioning and available space. The filesystem where the vmcore files are stored must have enough space to match the amount of memory available to Oracle VM Server for each dump.
Since the installation of Oracle VM Server only uses as much disk space as is required, a 'spare' partition is frequently available on a new installation. This partition is left available for hosting a local repository or for alternate uses, such as hosting vmcore files generated by kdump. If you opt to use it for this purpose, you must first correctly identify the partition, take note of its UUID, and format it with a usable filesystem.
The following steps serve as an illustration of how you might prepare the local spare partition.
- Identify the partition that the installer left 'spare' after the installation. This is usually listed under /dev/mapper with a name that starts with OVM_SYS_REPO_PART. If you can identify this device, you can format it with an ext4 filesystem:
  # mkfs.ext4 /dev/mapper/OVM_SYS_REPO_PART_VBd64a21cf-db4a5ad5
  If you do not have a partition mapped like this, you may need to use a utility such as lsblk, parted, fdisk, or gdisk to identify any free partitions on your available disk devices.
- Obtain the UUID for the filesystem by running the blkid command:
  # blkid /dev/mapper/OVM_SYS_REPO_PART_VBd64a21cf-db4a5ad5
  /dev/mapper/OVM_SYS_REPO_PART_VBd64a21cf-db4a5ad5: UUID="51216552-2807-4f17-ab27-b8135f69896d" TYPE="ext4"
  Take note of the UUID, as you will need it later when you configure kdump.
Modifying the kdump Configuration
System configuration directing how the kdump service runs is defined in /etc/sysconfig/kdump, while specific kdump configuration variables are defined in /etc/kdump.conf. Changes may need to be made to either of these files depending on your environment; however, the default configuration should be sufficient to run kdump initially without any problems. The following list identifies potential configuration changes that you may wish to make:
- On systems with large amounts of memory (for example, over 1 TB), it is advisable to disable the I/O Memory Management Unit within the crash kernel for performance and stability reasons. To do so, edit /etc/sysconfig/kdump and append the iommu=off kernel boot parameter to the KDUMP_COMMANDLINE_APPEND variable:
  KDUMP_COMMANDLINE_APPEND="irqpoll maxcpus=1 nr_cpus=1 reset_devices cgroup_disable=memory mce=off selinux=0 iommu=off"
- If you intend to change the partition where the vmcore files are stored, for instance to use the spare partition left by the installation, you must edit /etc/kdump.conf to provide the filesystem type and device location of the partition. If you followed the instructions above, it is preferable to do this by specifying the UUID that you obtained for the partition with the blkid command. A line similar to the following should appear in the configuration:
  ext4 UUID=51216552-2807-4f17-ab27-b8135f69896d
- You may edit the default path where vmcore files are stored, but note that this path is relative to the partition that kdump is configured to use to store vmcores. If you have configured kdump to store vmcores on a separate filesystem, then when you mount that filesystem, the vmcore files are located in the path specified by this directive:
  path /var/crash
- If you are having trouble obtaining a vmcore, or your vmcore files are particularly large when using the makedumpfile utility, you can reconfigure kdump to use the cp command to copy the vmcore in sparse mode. To do this, edit /etc/kdump.conf to comment out the line that sets core_collector to use the makedumpfile utility, and uncomment the lines that enable the cp command:
  # core_collector makedumpfile -EXd 1 --message-level 1 --non-cyclic
  core_collector cp --sparse=always
  extra_bins /bin/cp
  Your mileage with this may vary; the makedumpfile utility is generally recommended instead.
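Putting the optional changes above together, an /etc/kdump.conf directing vmcores to the spare ext4 partition might contain lines like the following sketch (the UUID is the example value obtained from blkid earlier):

```
# filesystem type and UUID of the partition that stores vmcores
ext4 UUID=51216552-2807-4f17-ab27-b8135f69896d
# path, relative to that partition, under which vmcores are written
path /var/crash
# default collector; see above for the sparse "cp" alternative
core_collector makedumpfile -EXd 1 --message-level 1 --non-cyclic
```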
Enabling the kdump Service
You can enable the kdump service to run at every boot by running the following command:
# chkconfig kdump on
You must restart the kdump service at this point so that it detects the changes made to the kdump configuration and determines whether the kdump crash kernel image is up to date. If the kernel image needs to be updated, kdump does this automatically; otherwise, it restarts without any attempt to rebuild the crash kernel image:
# service kdump restart
Stopping kdump: [ OK ]
Detected change(s) the following file(s): /etc/kdump.conf
Rebuilding /boot/initrd-4.1.12-25.el6uek.x86_64kdump.img
Starting kdump: [ OK ]
Confirming that kdump is Configured and Working Correctly
You can confirm that the kernel loaded for dom0 is correctly configured by running the following command and checking that the output shows your crashkernel parameter in use:
# xl dmesg | grep -i crashkernel
(XEN) Command line: placeholder dom0_mem=max:6144M allowsuperpage dom0_vcpus_pin dom0_max_vcpus=20 crashkernel=512M@64M
You can also check that the appropriate amount of memory is reserved for kdump by running the following:
# xl dmesg | grep -i kdump
(XEN) Kdump: 512MB (524288kB) at 0x4000000
or alternatively:
# kexec --print-ckr-size
536870912
You can check that the kdump service is running by checking the service status:
# service kdump status
Kdump is operational
If there are no errors in /var/log/messages or on the console, you can assume that kdump is running correctly.
To test that kdump is able to generate a vmcore and store it correctly, you can trigger a kernel panic by issuing the following commands:
# echo 1 > /proc/sys/kernel/sysrq
# echo c > /proc/sysrq-trigger
These commands cause the kernel on the Oracle VM Server to panic and crash. If kdump is working correctly, the crash kernel takes over and generates the vmcore file, which is copied to the configured location before the server reboots automatically. If kdump fails to load the crash kernel, the server may hang at the kernel panic and require a hard reset to reboot.
After you have triggered a kernel panic and the system has successfully rebooted, you may check that the vmcore file was properly generated:
- If you have not configured kdump to use an alternate partition, you should be able to locate the vmcore file in /var/crash/127.0.0.1-date-time/vmcore, where date and time represent the date and time when the vmcore was generated.
- If you configured kdump to use an alternate partition to store the vmcore file, you must mount it first. If you used the spare partition generated by a fresh installation of Oracle VM Server, this can be done in the following way:
  # mount /dev/mapper/OVM_SYS_REPO_PART_VBd64a21cf-db4a5ad5 /mnt
  You may then find the vmcore file in /mnt/var/crash/127.0.0.1-date-time/vmcore, where date and time represent the date and time when the vmcore was generated, for example:
  # file /mnt/var/crash/127.0.0.1-2015-12-08-16\:12\:28/vmcore
  /mnt/var/crash/127.0.0.1-2015-12-08-16:12:28/vmcore: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style
Remember to unmount the partition after you have obtained the vmcore file for analysis, so that it is free for use by kdump.
If you find that a vmcore file is not being created or that the system hangs without automatically rebooting, you may need to adjust your configuration. The most common problem is that there is insufficient memory allocated for the crash kernel to run and complete its operations. Your starting point to resolving issues with kdump is always to try increasing the reserved memory that is specified in your GRUB2 configuration.
1.5 Disabling Paravirtualized Guests on Oracle VM Server
Paravirtualization (PVM) is considered a less secure guest domain type. To keep your virtualized environment safe and secure, you should prevent paravirtualized guest VMs from starting and running within Oracle VM.
As of Release 3.4.5, the Xen hypervisor allows you to disable PVM guests through a configuration file setting. After you upgrade your servers to Oracle VM Server Release 3.4.5, PVM guests are not disabled by default, because that would cause a variety of problems in existing PVM guests. Oracle recommends that you switch to PV-HVM guests and disable PVM guests as described in this section.
As of Release 3.4.6, support for PVM guests is removed. With the removal of PVM guest support, the following new behavior restrictions exist:
- A new virtual machine of the PVM domain type cannot be created from the Oracle VM Manager Web Interface, Oracle VM Manager Command Line Interface, or Oracle VM Web Services API.
- An existing virtual machine of the PVM domain type can be converted to a supported type from the Oracle VM Manager Web Interface, Oracle VM Manager Command Line Interface, or Oracle VM Web Services API.
- During server discovery, warnings are raised for each virtual machine of the PVM domain type. The warnings appear as type "vm.unsupported.domain" on the Error Conditions subtab of the Health tab. The error event cannot be acknowledged by the user.
  Note: Existing virtual machines of the PVM domain type continue to work as before; however, the error event that is raised goes away only after the PVM domain type issue is resolved.
- After editing the domain type to a supported type, the event is then acknowledged.
If you have existing PVM guests, you should convert them to HVM with PV drivers before you disable PVM on your Oracle VM Servers. For details about changing the guest virtualization mode, please consult the Support Note with ID 2247664.1.
- Using SSH, log in to the Oracle VM Server.
- Open the file /etc/xen/xend-config.sxp and locate the entry "xend-allow-pv-guests":
  # vi /etc/xen/xend-config.sxp
  # -*- sh -*-
  #
  # Xend configuration file.
  [...]
  #
  # By default allow PV guests to be created
  #(xend-allow-pv-guests 1)
- Uncomment the line by removing the "#" and set the parameter to "0" to disable PV guests. Save the changes to the file.
  # By default allow PV guests to be created
  (xend-allow-pv-guests 0)
- Stop and start the xend service on the Oracle VM Server for the new settings to take effect:
  # service xend stop
  # service xend status
  xend daemon is stopped
  # service xend start
  # service xend status
  xend daemon (pid 9641) is running...
Any attempt to start a PVM guest on an Oracle VM Server with PVM guests disabled, or to migrate a PVM guest to it, results in the failure "Error: PV guests disabled by xend".
Note: If secure VM migration is enabled, which is the default setting, the wrong error message may be displayed. A known issue may lead to a confusing error message containing "[Errno 9] Bad file descriptor".
- Repeat these steps for each of the remaining Oracle VM Servers to protect your entire virtualized environment.
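The configuration edit in the steps above can also be done non-interactively. The sketch below applies the change with sed to a temporary copy of the file; on a real server you would target /etc/xen/xend-config.sxp itself and then restart xend as shown:

```shell
#!/bin/sh
# Uncomment "(xend-allow-pv-guests 1)" and set it to 0, disabling
# PV guests. CFG is a temporary stand-in for /etc/xen/xend-config.sxp.
CFG=$(mktemp)
printf '%s\n' \
    '# By default allow PV guests to be created' \
    '#(xend-allow-pv-guests 1)' > "$CFG"

sed -i 's/^#\{0,1\}(xend-allow-pv-guests 1)/(xend-allow-pv-guests 0)/' "$CFG"

grep 'xend-allow-pv-guests' "$CFG"   # → (xend-allow-pv-guests 0)
```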
1.6 Changing the Memory Size of the Management Domain
When you install Oracle VM Server, the installer sets a default memory size for dom0. The algorithm used is:
dom0 memory (MB) = (768 + 0.0205 * physical memory in MB), rounded to a multiple of 8
You can use this calculation to determine memory allocation for the Oracle VM Server installation. However, you should not make the memory allocation for dom0 smaller than the calculated value. You can encounter performance issues if the dom0 memory size is not set appropriately for your needs on the Oracle VM Server.
Example sizes are set out in Table 1.1.
Physical Memory | Dom0 Memory
---|---
2 GB | 816 MB
4 GB | 856 MB
8 GB | 936 MB
16 GB | 1104 MB
32 GB | 1440 MB
64 GB | 2112 MB
128 GB | 3456 MB
256 GB | 6144 MB
512 GB | 11520 MB
1024 GB | 22264 MB
2048 GB | 32768 MB

Note: 32768 MB is the maximum allowed memory for dom0.
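The sizing formula can be sketched as a small shell function. Rounding up to the next multiple of 8 MB is an assumption on my part, but it reproduces the table values above:

```shell
# Default dom0 memory (MB) for a given physical memory size (MB):
# 768 + 0.0205 * phys_mb, rounded up to a multiple of 8 (assumed).
dom0_mem() {
    awk -v phys="$1" 'BEGIN {
        raw = 768 + 0.0205 * phys
        m = raw / 8
        printf "%d\n", (m == int(m) ? m : int(m) + 1) * 8
    }'
}

dom0_mem 2048      # 2 GB    → 816
dom0_mem 16384     # 16 GB   → 1104
dom0_mem 1048576   # 1024 GB → 22264
```

Remember that results above 32768 MB must be capped at that maximum.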
To change the dom0 memory allocation, edit the GRUB configuration on the Oracle VM Server to adjust the value of the dom0_mem parameter. If you are using UEFI boot, the GRUB configuration file is located at /boot/efi/EFI/redhat/grub.cfg; otherwise, it is located at /boot/grub2/grub.cfg.
Edit the line starting with multiboot2 /xen.mb.efi and append the required boot parameters. For example, to change the memory allocation to 1440 MB, edit the file to contain:
multiboot2 /xen.mb.efi dom0_mem=max:1440M placeholder ${xen_rm_opts}
1.7 Configuring Oracle VM Server for SPARC
This section describes configuration tasks for Oracle VM Server for SPARC only.
Access the Oracle VM Server for SPARC documentation at http://www.oracle.com/technetwork/documentation/vm-sparc-194287.html. To determine the version of the Oracle VM Server for SPARC documentation to reference, run the pkg list ldomsmanager command.
1.7.1 Creating ZFS Volumes
Local ZFS volumes are supported as local physical disks on Oracle VM Server for SPARC. While Oracle VM Manager does not provide tools to create or manage ZFS volumes, it does detect ZFS volumes as local physical disks that can either be used for virtual disks by the virtual machines hosted on the Oracle VM Server where the volume resides, or for use as a local repository to store virtual machine resources. In this section, we describe the steps required to manually create ZFS volumes on a SPARC-based Oracle VM Server and how to detect these within Oracle VM Manager.
See Creating ZFS Volumes on NVMe Devices if you plan to create a ZFS volume on an NVMe device, such as an SSD.
In the control domain of the Oracle VM Server where you wish to create the ZFS volumes, use the zfs create command to create a new ZFS volume:
# zfs create -p -V XG pool/OVS/volume
The size of the volume, represented by XG, can be any size that your hardware supports. The pool that the volume belongs to can be any ZFS pool, and the volume name can be of your choosing. The only requirement is that the volume resides under OVS within the pool, so that Oracle VM Manager is capable of detecting it. The following example shows the creation of two ZFS volumes, each 20 GB in size:
# zfs create -V 20G rpool/OVS/DATASET0
# zfs create -V 20G rpool/OVS/DATASET1
Once you have created the ZFS volumes that you wish to use, you must rediscover your server within Oracle VM Manager. See the Discover Servers section of the Oracle VM Manager User's Guide for more information on how to do this. Once the server has been rediscovered, the ZFS volumes appear as physical disks attached to the server in the Physical Disks perspective within the Oracle VM Manager Web Interface. See the Physical Disks Perspective section of the Oracle VM Manager User's Guide for more information on this perspective.
As long as a ZFS volume is unused and Oracle VM Manager is able to detect it as a local physical disk attached to the server, you can create a repository on the ZFS volume by selecting to use this disk when you create the repository. See the Create New Repository section of the Oracle VM Manager User's Guide on creating repositories.
Using this feature, you can use a single SPARC server to create virtual machines without any requirement to use an NFS repository or any additional physical disks.
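Since discovery depends only on the volume residing under the pool's OVS node, that naming convention can be checked with a trivial helper before creating a volume. This is an illustrative sketch, not part of Oracle VM:

```shell
# Report whether a proposed ZFS volume name of the form
# pool/OVS/volume would be discoverable by Oracle VM Manager,
# which only detects volumes under the pool's OVS node.
is_discoverable() {
    case $1 in
        */OVS/*) echo yes ;;
        *)       echo no ;;
    esac
}

is_discoverable rpool/OVS/DATASET0    # → yes
is_discoverable rpool/data/DATASET0   # → no
```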
Creating ZFS Volumes on NVMe Devices
If you plan to create a ZFS volume on an NVM Express (NVMe) device, use the following procedure:
-
Determine the LUN of the NVMe device with the format command, as in the following example:
# format ....
5.
c1t1d0 <INTEL-SSDPE2ME016T4S-8DV1-1.46TB> /pci@306/pci@1/pci@0/pci@4/nvme@0/disk@1 /dev/chassis/SYS/DBP/NVME0/disk ...In the preceding example, the NVMe device has the following LUN:
c1t1d0
.NoteIn most cases, NVMe devices have the following path:
/SYS/DBP/NVME[0..n]
. -
2. Create a ZFS pool with the NVMe device, as follows:

# zpool create pool_name c1t1d0

Where:

- pool_name is any valid ZFS pool name.
- c1t1d0 is the LUN of the NVMe device.
3. Create a ZFS volume on the ZFS pool, as follows:

# zfs create -p -V sizeG pool_name/OVS/volume_name

Where:

- size is an integer value that specifies the size of the ZFS volume in gigabytes. Ensure that the size of the ZFS volume is not greater than the size of the NVMe disk.
- pool_name is the name of the ZFS pool on which you are creating the volume.
- volume_name is the name of the ZFS volume.

Important: The first path element after the ZFS pool name must be OVS, as in the preceding example. This path element ensures that Oracle VM Manager discovers the ZFS volume as a local physical disk.
4. Repeat the preceding step to create additional ZFS volumes, as required.

5. From Oracle VM Manager, discover, or rediscover, the instance of Oracle VM Server for SPARC that has the NVMe device attached to it.
After the discovery process completes, the Oracle VM Manager Web Interface displays each ZFS volume as a physical disk attached to Oracle VM Server for SPARC in the Physical Disks perspective. See the Physical Disks Perspective section of the Oracle VM Manager User's Guide .
1.7.2 Configuring a Secondary Service Domain
The default configuration of the Oracle VM Agent uses a single service domain, the primary domain, which provides virtual disk and virtual network services to guest virtual machines (guest domains). To increase the availability of guest domains, you can configure a secondary service domain to provide virtual disk and virtual network services through both the primary and the secondary service domains. With such a configuration, guest domains can use virtual disk and virtual network multipathing and continue to be fully functional even if one of the service domains is unavailable.
The primary domain is always the first service domain and is the domain that is discovered by Oracle VM Manager. The second service domain, named secondary, is a root domain that is configured with a PCIe root complex. The secondary domain should be configured similarly to the primary domain; it must use the same operating system version, the same number of CPUs, and the same memory allocation. Unlike the primary domain, the secondary service domain is not visible to Oracle VM Manager. The secondary domain mimics the configuration of the primary service domain and is transparently managed by the Oracle VM Agent. If the primary service domain becomes unavailable, the secondary service domain ensures that guest domains continue to have access to virtualized resources such as disks and networks. When the primary service domain becomes available again, it resumes the role of managing these resources.
From a high level, perform the following tasks to configure the Oracle VM Agent to use a secondary service domain:

1. Install the Oracle VM Agent as described in the Installing Oracle VM Agent for SPARC section of the Oracle VM Installation and Upgrade Guide.
2. Create the secondary service domain.
3. Install the secondary service domain.
4. Configure the Oracle VM Agent to use the secondary service domain.
If you have a secondary service domain already configured and you have successfully updated your system to Oracle Solaris 11.3 on the primary domain, the secondary service domain can also be upgraded using the same Oracle Solaris IPS repository as the primary domain. To upgrade the secondary service domain, you should upgrade from the Oracle Solaris command line using the following command:
# pkg update --accept
Reboot the system after the upgrade completes, as follows:
# init 6
For detailed install and upgrade instructions for Oracle Solaris 11.3, see http://docs.oracle.com/cd/E53394_01/.
1.7.2.1 Requirements
To configure the Oracle VM Agent with a secondary service domain, your SPARC server must meet the minimum requirements listed in this section, in addition to the standard installation requirements described in the Installing Oracle VM Server on SPARC Hardware section of the Oracle VM Installation and Upgrade Guide .
Hardware
Use a supported Oracle SPARC T-series, M-series, or S-series server. See Supported Platforms in the Oracle VM Server for SPARC Installation Guide. The SPARC server must have at least two PCIe buses, so that you can configure a root domain in addition to the primary domain. For more information, see I/O Domain Overview in the Oracle VM Server for SPARC Administration Guide.
Both domains must be configured with at least one PCIe bus. The PCIe buses that you assign to each domain must be unique. You cannot assign the same PCIe bus to two different domains.
By default, after a fresh installation, all PCIe buses are assigned to the primary domain. When adding a new service domain, some of these PCIe buses must be released from the primary domain and then assigned to the secondary domain.
For example, a SPARC T5-2 server with two SPARC T5 processors has 4 PCIe buses. This server can be configured with a primary domain and a secondary domain. You can assign two PCIe buses to the primary domain, and two PCIe buses to the secondary domain.
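Before planning the bus split, you can check how the PCIe buses are currently assigned by running the ldm list-io command on the control domain. The output below is hypothetical and abbreviated; bus and slot names vary by platform:

```
# ldm list-io
NAME                 TYPE   BUS     DOMAIN    STATUS
----                 ----   ---     ------    ------
pci_0                BUS    pci_0   primary
pci_1                BUS    pci_1   primary
pci_2                BUS    pci_2   primary
pci_3                BUS    pci_3   primary
```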
Network
The network ports used by the primary domain must all be connected to the PCIe buses that are assigned to the primary domain.
Similarly the network ports used by the secondary domain must all be connected to the PCIe buses that are assigned to the secondary domain.
In addition, the primary and secondary domains must have the same number of network ports. Each network port in the primary domain must have a corresponding network port in the secondary domain, and they must be connected to the same physical network.
For example, a SPARC T5-2 server with two SPARC T5 processors has 4 PCIe buses (pci_0, pci_1, pci_2, and pci_3). The server also has 4 onboard network ports. Two network ports are connected to pci_0, and the other two are connected to pci_3. You can assign 2 PCIe buses (pci_0 and pci_1) to the primary domain, and 2 PCIe buses (pci_2 and pci_3) to the secondary domain. That way, both domains have two ports configured. You must ensure that each port is connected to the same physical network as the port in the corresponding domain.
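To verify the correspondence between ports, you can list the physical network ports visible in a domain with the dladm command. The output below is hypothetical; run the command in the primary domain and, once it is installed, in the secondary domain, then compare which physical network each port connects to:

```
# dladm show-phys
LINK    MEDIA      STATE   SPEED   DUPLEX   DEVICE
net0    Ethernet   up      1000    full     igb0
net1    Ethernet   up      1000    full     igb1
```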
Storage
Physical disks or LUNs used by the primary domain must all be accessible through one or several host bus adapters (HBAs) connected to the PCIe buses that are assigned to the primary domain. The primary domain requires at least one disk for booting and hosting the operating system. The primary domain usually has access to all, or a subset of, local SAS disks present on the server through an onboard SAS HBA connected to one of the PCIe buses of the server.
Similarly, physical disks or LUNs used by the secondary domain must all be accessible through one or several HBAs connected to the PCIe buses assigned to the secondary domain. The secondary domain needs at least one disk for booting and hosting the operating system. Depending on the server used, the secondary domain might not have access to any local SAS disks present on the server, or it might have access to a subset of the local SAS disks. If the secondary domain does not have access to any of the local SAS disks then it must have an HBA card on one of its PCIe buses and access to an external storage array LUN that it can use for booting.
If the boot disk of the secondary domain is on a storage array shared between multiple servers or multiple domains, make sure that the boot disk is accessible by the secondary domain only. Otherwise the disk might be used by mistake by another server or domain, which can corrupt the boot disk of the secondary domain. Depending on the storage array and the storage area network, this can usually be achieved using zoning or LUN masking.
In addition, if a Fibre Channel (FC) storage area network (SAN) is used, then the primary and the secondary domains must have access to the same FC disks. Therefore, one or more FC HBAs must be connected to the FC SAN and to the PCIe buses that are assigned to the primary domain, and one or more FC HBAs must be connected to the FC SAN and to the PCIe buses that are assigned to the secondary domain.
The primary and the secondary domain do not need to have access to same SAS or iSCSI disks. Only the SAS or iSCSI disks accessible from the primary domain are visible to Oracle VM Manager. Oracle VM Manager does not have visibility of any SAS or iSCSI disks accessible only from the secondary domain. If a virtual machine is configured with SAS or iSCSI disks, then the corresponding virtual disks in the virtual machine have a single access path, through the primary domain. If a virtual machine is configured with FC disks, then the corresponding virtual disks in the virtual machine have two access paths: one through the primary domain; and one through the secondary domain.
For example, a SPARC T5-2 server with two SPARC T5 processors has 4 PCIe buses (pci_0, pci_1, pci_2, pci_3). The server also has 2 onboard SAS HBAs to access the 6 internal SAS disks. One SAS HBA is connected to PCIe bus pci_0 and accesses 4 internal disks. The other SAS HBA is connected to PCIe bus pci_3 and accesses the other 2 internal SAS disks. You can assign 2 PCIe buses (pci_0 and pci_1) to the primary domain, and 2 PCIe buses (pci_2 and pci_3) to the secondary domain. That way, both domains have access to internal SAS disks that can be used for booting. The primary domain has access to four SAS disks, and the secondary domain has access to two SAS disks.
If you want to connect the server to an FC SAN, then you can add an FC HBA to the primary domain (for example on PCIe bus pci_1) and an FC HBA to the secondary domain (for example, on PCIe bus pci_2). Then you should connect both FC HBAs to the same SAN.
1.7.2.2 Limitations
While using secondary service domains can improve the availability of guest virtual machines, there are some limitations to using them with Oracle VM. The following list outlines each of these limitations:
- Clustering: Clustering cannot be used with a secondary service domain. If a server is configured with a secondary service domain, then that server cannot be part of a clustered server pool.
- Network Configuration: Network bonds/aggregations and VLANs are not automatically configured on the secondary domain. If you configure bonds/aggregations or VLANs on the primary domain using Oracle VM Manager, corresponding bonds/aggregations or VLANs are not automatically configured on the secondary domain. To use any such bond/aggregation or VLAN with virtual machines, you must manually configure the corresponding bonds/aggregations or VLANs on the secondary domain.
- Storage: NFS, SAS, iSCSI, and ZFS volumes accessible only from the secondary domain cannot be used or managed using Oracle VM Manager.

  Important: Secondary service domains cannot access NFS repositories. For this reason, virtual machine I/O to virtual disks is served by the control domain only. If the control domain stops or reboots, virtual machine I/O to virtual disks is suspended until the control domain resumes operation. Use physical disks (LUNs) for virtual machines that require continuous availability during a control domain reboot.
- Virtual Machine Disk Multipathing: When assigning a disk to a virtual machine, only Fibre Channel (FC) disks are configured with disk multipathing through the primary and the secondary domains. NFS, SAS, iSCSI, or ZFS disks assigned to a virtual machine are configured with a single path through the primary domain.
- Virtual Machine Network Port: When assigning a network port to a virtual machine, two network ports are effectively configured on the virtual machine: one connected to the primary domain, and one connected to the secondary domain. The network port connected to the primary domain is configured with a MAC address that can be defined from within Oracle VM Manager. The MAC address must be selected in the range [00:21:f6:00:00:00, 00:21:f6:0f:ff:ff]. The network port connected to the secondary domain is configured with a MAC address derived from the MAC address of the network port connected to the primary domain. This derived MAC address starts with 00:21:f6:8.
For example, if the MAC address defined in Oracle VM Manager is 00:21:f6:00:12:34 then this MAC address is used on the network port connected to the primary domain. The derived MAC address is then 00:21:f6:80:12:34 and should be used on the network port connected to the secondary domain. Oracle VM Manager uses a default dynamic MAC address range of [00:21:f6:00:00:00, 00:21:f6:ff:ff:ff]. When using a secondary service domain, this range must be reduced to [00:21:f6:00:00:00, 00:21:f6:0f:ff:ff]. See the Virtual NICs section in the Oracle VM Manager Online Help for more information on changing the default range of MAC addresses within the Oracle VM Manager Web Interface.
- Live Migration: A virtual machine cannot be live migrated to a server configured with a different number of service domains. In other words, you cannot migrate a virtual machine running on a server with a secondary service domain to a server without a secondary service domain, and you cannot migrate a virtual machine running on a server without a secondary service domain to a server with one.
1.7.2.3 Creating a Secondary Service Domain
The following requirements apply to secondary service domains within an Oracle VM context:
- No domain, other than the primary domain, must exist before you start to set up a secondary domain. You can see all existing domains in the output of the ldm list command.

- No virtual switch must exist before you start to set up a secondary domain. You can see all virtual switches in the VSW section of the output of the ldm list-services command.

- The name of the secondary service domain must be secondary.

- The secondary service domain should be a root domain.

- The secondary service domain should be configured with 1 CPU core.

- The secondary service domain should be configured with 8 GB of memory.

- The secondary service domain should have a virtual disk service (VDS) named secondary-vds0.

- The secondary service domain should be completely independent of any other domain, in particular of the primary domain. For this reason, the secondary domain should have no virtual disks and no virtual network interfaces, and use only physical disks and physical network interfaces.
For more information about creating a root domain, see Creating a Root Domain by Assigning PCIe Buses in the Oracle VM Server for SPARC Administration Guide.
Use the ovs-agent-secondary command to make sure that you meet these requirements, and to simplify the process of setting up and configuring the secondary service domain. See Section 1.7.2.6, “Automatically Creating and Setting Up a Secondary Domain”.
The following instructions describe how to create a secondary service domain manually:
1. Create the service domain and set the core CPU and memory requirements using the following commands:

# ldm add-domain secondary
# ldm set-core 1 secondary
# ldm set-memory 8G secondary
2. Assign the PCIe buses that you wish the secondary service domain to use. For each bus, issue the following command, substituting pci_2 with the correct bus identifier:

# ldm add-io pci_2 secondary
3. Add the secondary virtual disk service to the secondary domain, using the following command:

# ldm add-vds secondary-vds0 secondary
4. Remove any PCIe buses that you added to the secondary service domain from the primary domain. To begin reconfiguring the primary domain, enter the following command:

# ldm start-reconf primary

For each bus that you added to the secondary domain, enter the following command to remove it from the primary domain, substituting pci_2 with the correct bus identifier:

# ldm remove-io pci_2 primary
5. When you have finished reconfiguring the primary domain, reboot it:

# reboot
1.7.2.4 Installing the Secondary Service Domain
After the secondary service domain has been created and the primary domain has finished rebooting, start the secondary service domain using the following commands in the control domain:
# ldm bind-domain secondary
# ldm start-domain secondary
Once the secondary service domain has been started, you can access its console by obtaining the console port using the following command:
# ldm list secondary
NAME       STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  NORM  UPTIME
secondary  active  -t--v-  5000  8     8G      0.0%  0.0%  0s
Note that the console port is listed in the CONS column. You can open a telnet connection to this port as follows:
# telnet 0 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.

Connecting to console "secondary" in group "secondary" ....
Press ~? for control options ..

{0} ok
Now you must install the Oracle Solaris 11 operating system into the secondary domain. This can be achieved by following the instructions provided in the Oracle Solaris 11.3 documentation available at:
http://docs.oracle.com/cd/E53394_01/html/E54756/index.html
Do not attempt to install either the Oracle VM Agent or the Logical Domains Manager into the secondary service domain. Only the Oracle Solaris 11 operating system is required.
Make sure that the secondary service domain is properly configured so that it can boot automatically. In particular, the OpenBoot PROM (OBP) variables of the domain must be correctly set. For instance, the auto-boot? parameter should be set to true, and the boot-device parameter should contain the device path of the boot disk that is configured for the secondary domain.
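These OBP variables can be set from the control domain with the ldm set-var command; for example, as follows, where boot_disk_path is a placeholder for the device path of the secondary domain's actual boot disk:

```
# ldm set-var auto-boot\?=true secondary
# ldm set-var boot-device=boot_disk_path secondary
```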
1.7.2.5 Manually Configuring the Oracle VM Agent to Support the Secondary Domain
You can use the ovs-agent-secondary command to assist you with the process of setting up the Oracle VM Agent to support the secondary domain; see Section 1.7.2.6, “Automatically Creating and Setting Up a Secondary Domain”. The instructions that follow describe how to configure the Oracle VM Agent manually.
1. Create a configuration file /etc/ovs-agent/shadow.conf on the primary domain. This configuration file is in JSON format and, at an absolute minimum, should contain the following content to enable support for the secondary domain:

{
  "enabled": true
}
Note: Ensure that the JSON file is correctly formatted as defined at http://json.org/.
2. Each network link in the primary domain should have a corresponding network link in the secondary domain connected to the same physical network. By default, a network link in the primary domain is associated with the network link of the same name in the secondary domain. If a network link in the primary domain should be associated with a network link in the secondary domain with a different name, then you need to define a network link mapping. To define a network mapping, add a "nic-mapping" entry in /etc/ovs-agent/shadow.conf. Typically, entries of this sort look similar to the following:

{
  "enabled": true,
  "nic-mapping": [
    [ "^net4$", "net2" ],
    [ "^net5$", "net3" ]
  ]
}

In the preceding example, net4 is a network interface in the primary domain and is connected to the same physical network as the network interface named net2 in the secondary domain. Equally, net5 is a network interface in the primary domain and is connected to the same physical network as the network interface named net3 in the secondary domain. Note that network interface names in the primary domain are encapsulated with the regular expression characters caret (^) and dollar ($) to ensure an exact match for the network interface name in the primary domain.
3. Each Fibre Channel (FC) disk accessible from the primary domain should also be accessible from the secondary domain. By default, an FC disk is accessed using the same device path in the primary domain and in the secondary domain. In particular, each disk is accessed using the same disk controller name. If a disk controller in the primary domain should be associated with a disk controller in the secondary domain with a different name, then you need to define a disk controller mapping.

It is recommended that Solaris I/O multipathing is enabled in the primary and the secondary domain on all multipath-capable controller ports, in particular on all FC ports. In that case, all FC disks appear under a single disk controller (usually c0), and a disk controller mapping is usually not needed.

To define a disk controller mapping, add a "disk-mapping" entry in the /etc/ovs-agent/shadow.conf file. For example:

{
  "enabled": true,
  "disk-mapping": [
    [ "c0t", "c1t" ]
  ]
}

In the preceding example, c0t is a disk controller in the primary domain that is connected to the same FC disk as the disk controller named c1t in the secondary domain.
An example of an
/etc/ovs-agent/shadow.conf
file that requires both network interface and disk controller mapping follows:{ "enabled": true, "nic-mapping": [ [ "^
net4
$", "net2
" ], [ "^net5
$", "net3
" ] ], "disk-mapping": [ [ "c0t
", "c1t
" ] ] }
-
5. Save the logical domain configuration with the secondary service domain to the service processor.

Warning: Before saving the configuration, ensure that the secondary service domain is active. If the configuration is saved while the secondary service domain is not active, then the secondary service domain does not start automatically after a power cycle of the server.

# ldm add-spconfig ovm-shadow
6. To complete the configuration, reconfigure the Oracle VM Agent by running the following command:

# ovs-agent-setup configure
The configuration values that are used for this process map onto the values that you entered when you first configured the Oracle VM Agent for your primary control domain, as described in the Configuring Oracle VM Agent for SPARC section of the Oracle VM Installation and Upgrade Guide.
When the Oracle VM Agent configuration has completed, the secondary domain is running and the Oracle VM Agent can use it if the primary domain becomes unavailable.
1.7.2.6 Automatically Creating and Setting Up a Secondary Domain
The ovs-agent-secondary command can be used to automatically create and set up a secondary domain. In particular, the command indicates whether the server is suitable for creating a secondary service domain, and which PCIe buses are available for the secondary service domain.
Note: A system reboot is not equivalent to powering off the system and restarting it. Ensure that the system does not power off until you complete each step in the procedure to create a secondary service domain.
To create a secondary service domain, run the following command on the control domain:
# ovs-agent-secondary create
The ovs-agent-secondary command is a helper script that is provided as is. This command might not work with some servers or configurations. If the command does not work, create the secondary service domain manually, as described in Section 1.7.2.3, “Creating a Secondary Service Domain”.
Listing PCIe Buses Present on the Server
The list of all PCIe buses present on the server is displayed, with information indicating whether or not they are available for creating a secondary service domain. Example output from the ovs-agent-secondary command, for this step, is displayed below:
Gathering information about the server...
The server has 2 PCIe buses.
----------------------------------------------------------------------
This is the list of PCIe buses present on the server, and whether
or not they are available for creating a secondary service domain
Bus Available Reason
--- --------- ------
pci_0 no Bus is assigned and used by the primary domain
pci_1 yes Bus is assigned to the primary domain but it is not used
Enter + or - to show or hide details about PCIe buses.
+) Show devices in use
Or select one of the following options.
0) Exit and do not create a secondary service domain
1) Continue and select PCIe buses to create a secondary service domain
Choice (0-1): 1
Use this information to determine which PCIe buses are available, and which buses you want to use for the secondary service domain. You can display more or less information about the PCIe buses by entering "+" or "-".
A PCIe bus is not available for creating a secondary service domain in the following cases:
- The PCIe bus is assigned to a domain other than the primary domain.

  If you want to use such a PCIe bus for the secondary service domain, you must first remove it from the domain it is currently assigned to.

- The PCIe bus is assigned to the primary domain and devices on that bus are used by the primary domain.

  If you want to use such a PCIe bus for the secondary service domain, you must reconfigure the primary domain so that it stops using devices from that bus.
When a PCIe bus is assigned to the primary domain, the tool may not always be able to determine whether devices from the bus are used by the primary domain. Furthermore, the tool only identifies common devices (such as network interfaces and disks) and the common usage of these devices (including link aggregation, IP configuration, or ZFS pools). If you want to create a secondary domain with a PCIe bus that is currently assigned to the primary domain, make sure that the bus is effectively not used by the primary domain at all.
Selecting PCIe Buses for the Secondary Service Domain
The next step provided by the ovs-agent-secondary command allows you to actually select the PCIe buses that are to be used for the secondary service domain. Typically, this step may appear as follows:
The following PCIe buses can be selected for creating a secondary
service domain.
Bus Selected Slot Devices Count
--- -------- ---- -------------
pci_1 no
/SYS/MB/PCIE5
/SYS/MB/PCIE6
/SYS/MB/PCIE7 ETH(2)
/SYS/MB/PCIE8 FC(2)
/SYS/MB/SASHBA1 DSK(2)
/SYS/MB/NET2 ETH(2)
Enter + or - to show or hide details about PCIe buses.
+) Show devices
-) Hide PCIe slots
Or enter the name of one or more buses that you want to add to the
selection of PCIe buses to create a secondary service domain.
Or select one of the following option.
0) Exit and do not create a secondary service domain
1) Add all PCIe buses to the selection
2) Remove all PCIe buses from the selection
Choice (0-2): pci_1
adding bus pci_1 to selection
Note that in addition to the menu options, which allow you to add all available PCIe buses to the secondary service domain, you can also manually enter a space-separated list of bus names to add particular buses individually.
As soon as at least one PCIe bus is marked as selected, the menu options change to allow you to create the secondary service domain with the selected PCIe buses:
The following PCIe buses can be selected for creating a secondary
service domain.
Bus Selected Slot Devices Count
--- -------- ---- -------------
pci_1 yes
/SYS/MB/PCIE5
/SYS/MB/PCIE6
/SYS/MB/PCIE7 ETH(2)
/SYS/MB/PCIE8 FC(2)
/SYS/MB/SASHBA1 DSK(2)
/SYS/MB/NET2 ETH(2)
Enter + or - to show or hide details about PCIe buses.
+) Show devices
-) Hide PCIe slots
Or enter the name of one or more buses that you want to add to the
selection of PCIe buses to create a secondary service domain.
Or select one of the following option.
0) Exit and do not create a secondary service domain
1) Add all PCIe buses to the selection
2) Remove all PCIe buses from the selection
3) Create a secondary services domain with the selected buses
Choice (0-3): 3
Confirming the Selection of PCIe Buses for the Secondary Service Domain
A final confirmation screen displays the buses selected for the secondary service domain, before you can proceed to create the secondary service domain. This confirmation screen looks as follows:
You have selected the following buses and devices for the secondary
domain.
Bus Current Domain Slot Devices Count
--- -------------- ---- -------------
pci_1 primary
/SYS/MB/PCIE5
/SYS/MB/PCIE6
/SYS/MB/PCIE7 ETH(2)
/SYS/MB/PCIE8 FC(2)
/SYS/MB/SASHBA1 DSK(2)
/SYS/MB/NET2 ETH(2)
Verify that the selection is correct.
0) Exit and do not create a secondary service domain
1) The selection is correct, create a secondary domain with pci_1
2) Go back to selection menu and change the selection
Choice (0-2): 1
Creating the Secondary Service Domain
After the selection of PCIe buses for the secondary service domain has been confirmed, the secondary domain is created and instructions for configuring the secondary service domain are displayed. The output from the tool looks similar to the following:
ldm add-domain secondary
ldm set-core 1 secondary
ldm set-memory 8G secondary
ldm add-vds secondary-vds0 secondary
ldm add-io pci_1 secondary
ldm start-reconf primary
ldm remove-io pci_1 primary
----------------------------------------------------------------------
The secondary service domain has been created.

Next, you need to install Solaris on that domain. Then you can
configure the Oracle VM Agent to run with the secondary domain.

Once the secondary service domain is up and running with Solaris,
run the following command to configure the Oracle VM Agent to run
with the secondary domain:

# ovs-agent-secondary configure
If a reboot is required to complete the creation of the secondary service domain then a corresponding menu is displayed, otherwise the tool terminates and the creation of secondary service domain is already finished. The following menu is displayed if a reboot is required:
To complete the configuration of the Oracle VM Agent, the system
has to be rebooted.

Do you want to reboot the system now?

1) Yes, reboot the system now
2) No, I will reboot the system later

Choice (1-2): 1
Server Reboot

!!! WARNING !!!

You are not connected to the system console. Rebooting the server
will close this connection with the server.

!!! WARNING !!!

Are you sure that you want to continue?

1) Yes, continue and reboot the system now
2) No, cancel the reboot, I will reboot the system later

Choice (1-2): 1
Rebooting the system...
Installing the Service Domain
When you have finished creating the new service domain, you need to install it. Complete the instructions in Section 1.7.2.4, “Installing the Secondary Service Domain”.
Configuring the Oracle VM Agent for the Secondary Domain
Once the secondary service domain is correctly installed, you must configure the Oracle VM Agent to use it by running the ovs-agent-secondary command on the control domain, as follows:
# ovs-agent-secondary configure
Checking the Installation of the Secondary Service Domain
The first step in the configuration process requires you to confirm that the secondary domain is installed and running. This step is displayed as follows:
The secondary service domain exists and is active. It should be up
and running Solaris 11.3.
Confirm that the secondary service domain is up and running Solaris 11.3
1) Yes, the secondary service domain is up and running Solaris 11.3.
2) No, the secondary service domain is not running Solaris 11.3
Choice (1-2): 1
Removing Virtual Switches
The configuration process notifies you if virtual switches are defined.
The secondary domain can only be configured when no virtual
switches are defined. Remove any virtual switch, and restart
the configuration.
The following virtual switches are defined: 0a010000
You must remove any virtual switches defined in the secondary service domain before you can configure it, as in the following example:
# ldm list-services
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5127
VSW
NAME LDOM MAC NET-DEV ID DEVICE
0a010000 primary 00:14:4f:fb:53:0e net0 0 switch@0
LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
1 1 1500 on
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
secondary-vds0 secondary
# ldm remove-vsw 0a010000
Restart the configuration of the secondary service domain after you remove the virtual switch.
# ovs-agent-secondary configure
The secondary service domain exists and is active. It should be up
and running Solaris 11.3.
Confirm that the secondary service domain is up and running Solaris 11.3
1) Yes, the secondary service domain is up and running Solaris 11.3.
2) No, the secondary service domain is not running Solaris 11.3
Choice (1-2): 1
Mapping Network Interfaces Between the Primary and the Secondary Domain
Each network link in the primary domain should have a corresponding network link in the secondary domain connected to the same physical network. By default, a network link in the primary domain is associated with the network link with the same name in the secondary domain. If a network link in the primary domain should be associated with a network link in the secondary domain with a different name, then you need to define a network link mapping. This is achieved in the next step of the configuration process, which is displayed as follows:
Each network link in the primary domain should have a corresponding
network link in the secondary domain connected to the same physical
network. By default, a network link in the primary domain will be
associated with the network link with the same name in the secondary
domain.
Network links in the primary domain and corresponding link in the
secondary domain:
Primary Secondary
------- ---------
net0 net0
net1 net1
net4 net4
net5 net5
net6 net6
net7 net7
If a network link in the primary domain should be associated with
a network link in the secondary domain with a different name, then
you need to define a network link mapping.
Do you need to define a network link mapping?
1) Yes, I need to map a network link in the primary domain to
a network link in the secondary domain with a different name.
2) No, each network link in the primary domain has a corresponding
network link in the secondary domain with the same name.
Choice (1-2): 1
Ideally, you should be able to select option 2 here to continue. However, it is possible that network link names may not correspond correctly. In this case, you should select option 1 and redefine the mapping as follows:
Enter the mapping for net0 [net0]:
Enter the mapping for net1 [net1]:
Enter the mapping for net4 [net4]: net2
Enter the mapping for net5 [net5]: net3
Enter the mapping for net6 [net6]:
Enter the mapping for net7 [net7]:
Network links in the primary domain and corresponding link in the
secondary domain:
Primary Secondary
------- ---------
net0 net0
net1 net1
net4 net2
net5 net3
net6 net6
net7 net7
Is the mapping correct?
1) Yes, the mapping is correct.
2) No, the mapping is not correct, redo the mapping.
Choice (1-2): 1
Note that you are prompted for the mapping for each network link in the primary domain. If you enter a blank line, the existing default mapping is used. If you need to change a mapping, you must specify the network link name in the secondary domain that is connected to the same physical network as the network link listed in the primary domain.
When you have finished redefining the mappings, select option 1 to continue to the next step in the configuration process.
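The default-with-overrides rule described above can be sketched in shell: each primary link maps to the link with the same name unless an override is defined. The `net4`/`net5` overrides below mirror the example session and are illustrative:

```shell
# Overrides mirroring the example session (net4->net2, net5->net3);
# links without an override keep their own name.
map_net4=net2
map_net5=net3

result=''
for link in net0 net1 net4 net5 net6 net7; do
  # Use the override map_<link> if set, otherwise default to the same name.
  eval "sec=\${map_$link:-$link}"
  result="$result$link->$sec "
done
echo "$result"
```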
Mapping Fibre Channel Disk Controllers Between the Primary and the Secondary Domain
Each Fibre Channel (FC) disk accessible from the primary domain should also be accessible from the secondary domain. By default, a FC disk is accessed using the same device path in the primary domain and in the secondary domain. In particular, each disk is accessed using the same disk controller name. If a disk controller in the primary domain should be associated with a disk controller in the secondary domain with a different name, then you must define a disk controller mapping.
It is recommended that Solaris I/O multipathing is enabled in the primary and in the secondary domain on all multipath-capable controller ports, in particular on all FC ports. In this case, all FC disks appear under a single disk controller (usually c0), and disk controller mapping is usually not needed.
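You can check whether your FC disks have been consolidated under a single controller by inspecting the controller prefixes of the device names. This sketch uses hypothetical WWN-style device names as sample data; on a live system you would feed it the actual disk device list:

```shell
# Sample multipathed device names (hypothetical WWNs) standing in for a
# real device listing; with MPxIO enabled they typically share controller c0.
devices='c0t60000000000000000000000000000001d0
c0t60000000000000000000000000000002d0'

# Strip each name down to its controller prefix and deduplicate;
# a single value suggests multipathing is consolidating the paths.
controllers=$(printf '%s\n' "$devices" | sed 's/^\(c[0-9]*\)t.*/\1/' | sort -u)
echo "$controllers"
```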
The following screen is displayed for this step in the configuration process:
Each Fibre Channel (FC) disk accessible from the primary domain
should also be accessible from the secondary domain. By default,
a FC disk will be accessed using the same device path in the
primary domain and in the secondary domain. In particular, each
disk will be accessed using the same disk controller name.
FC disk controllers in the primary domain and corresponding
controller in the secondary domain:
Primary Secondary
------- ---------
c0 c0
If a disk controller in the primary domain should be associated with
a disk controller in the secondary domain with a different name, then
you need to define a disk controller mapping.
Do you need to define a disk controller mapping?
1) Yes, I need to map a disk controller in the primary domain to
a disk controller in the secondary domain with a different name.
2) No, each disk controller in the primary domain has a corresponding
disk controller in the secondary domain with the same name.
Choice (1-2): 1
Ideally, you should be able to select option 2 to continue. However, it is possible that disk controller names might not correspond correctly. In this case, you should select option 1 and redefine the mapping as follows:
Enter the mapping for c0 [c0]: c1
FC disk controllers in the primary domain and corresponding
controller in the secondary domain:
Primary Secondary
------- ---------
c0 c1
Is the mapping correct?
1) Yes, the mapping is correct.
2) No, the mapping is not correct, redo the mapping.
Choice (1-2): 1
Note that you are prompted for the mapping for each FC disk controller in the primary domain. If you enter a blank line, the existing default mapping is used. If you need to change a mapping, you must specify the FC disk controller name in the secondary domain that provides access to the same FC disks as the controller listed in the primary domain.
When you have finished redefining the mappings, select option 1 to continue to the next step in the configuration process.
Saving the Oracle VM Agent Configuration for the Secondary Service Domain
The Oracle VM Agent uses a configuration file to access and configure itself for resources in the secondary service domain. In this step of the configuration process, the configuration file is created and saved to disk within the primary control domain:
Creating configuration file
Saving configuration ovm-shadow on the service processor
The secondary service domain is configured. Continuing with
the configuration of the Oracle VM Agent.
This command can not be run while the ovs-agent is online.
Do you want to disable the ovs-agent service?
1) Yes, disable the ovs-agent service
2) No, exit the ovs-agent-setup tool
Choice (1-2): 1
Reconfiguring the Oracle VM Agent
Finally, the Oracle VM Agent is automatically reconfigured to use the secondary service domain and the Oracle VM Agent is enabled:
Network Configuration
Network Configuration OK
Storage Configuration
Storage Configuration OK
OVS Agent Configuration
OVS Agent Configuration OK
Cluster Configuration
Cluster Configuration OK
LDoms Manager Configuration
LDoms Manager Configuration OK
Virtual I/O Services Configuration
Virtual I/O Services Configuration OK
LDoms Configuration
LDoms Configuration OK
Enabling Oracle VM Agent Services
The configuration values used in this process correspond to the values that you entered when you first configured the Oracle VM Agent for your primary control domain, as described in the Oracle VM Installation and Upgrade Guide.
When the process is complete, the Oracle VM Agent is enabled and your environment is configured to use both a primary and a secondary service domain.
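You can verify that the agent came back online by checking its service state. This is a sketch assuming the agent runs as an SMF service reported by `svcs`; the FMRI shown in the sample output is illustrative, not confirmed by this guide:

```shell
# Sample output standing in for a live `svcs` query of the agent service
# (the svc:/ovm/ovs-agent:default FMRI is a hypothetical example).
sample='STATE          STIME    FMRI
online         10:15:02 svc:/ovm/ovs-agent:default'

# Skip the header row and report the service state; "online" is expected.
state=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')
echo "$state"
```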