Chapter 3 Performing a Network Installation of Oracle VM Server
3.1 Installing Oracle VM Server for x86 from PXE Boot
- 3.1.1 PXE Boot Overview
- 3.1.2 Configuring the DHCP Service
- 3.1.3 Configuring the TFTP Service
- 3.1.4 Copying the Xen Hypervisor, Installer Kernel, and RAM Disk Image
- 3.1.5 Hosting the Contents of the Oracle VM Server ISO File
- 3.1.6 Copying the Kickstart Configuration File
- 3.1.7 Setting Up the Boot Loader
- 3.1.8 Starting the Installation Process
- 3.2 Using the Automated Installer (AI) for Oracle VM Server for SPARC
This chapter covers different automated installation techniques for installing Oracle VM Server.
It is important to understand that in the case of Oracle VM Server on x86 hardware, an installation is equivalent to a full operating system installation and that the ISO image file provided is a bootable image. On SPARC hardware, the hypervisor is built into the firmware and Oracle Solaris 11.3 includes its own logical domain manager. Installation of Oracle VM Server on SPARC hardware involves installing the Oracle VM Agent for SPARC that allows Oracle VM Manager to interface with the Logical Domains Manager. In this case, an installation of Oracle VM Server for SPARC involves installing additional packages and configuring the environment for Oracle VM.
For deployments of a large number of x86 Oracle VM Servers, network installation using PXE boot may be preferred to using a bootable physical media such as DVD-ROM. For deployments of a large number of SPARC Oracle VM Servers, network installation using Solaris Auto-Install (AI) may be preferred.
Installations of Oracle VM Server for x86 hardware can be largely automated by taking advantage of a kickstart configuration file that guides the Anaconda-based install wizard through each of the installation options available. Installations of Oracle VM Server for SPARC hardware can be largely automated by taking advantage of the AI manifest to automatically configure the system.
This chapter provides you with the information required for each of these installation techniques. Commands are provided for both x86 and SPARC environments where relevant.
3.1 Installing Oracle VM Server for x86 from PXE Boot
- 3.1.1 PXE Boot Overview
- 3.1.2 Configuring the DHCP Service
- 3.1.3 Configuring the TFTP Service
- 3.1.4 Copying the Xen Hypervisor, Installer Kernel, and RAM Disk Image
- 3.1.5 Hosting the Contents of the Oracle VM Server ISO File
- 3.1.6 Copying the Kickstart Configuration File
- 3.1.7 Setting Up the Boot Loader
- 3.1.8 Starting the Installation Process
In deployments where multiple systems must be installed, it is common to perform a network-based installation by configuring target systems to load a PXE boot image from a TFTP server configured on the same network. This deployment strategy typically suits environments where many Oracle VM Server instances are to be installed on x86 hardware at once.
This section describes some of the basic configuration steps required on a single Oracle Linux server that is set up to provide all of the services needed to handle a PXE boot environment. There are many different approaches to the architecture and choices of software required to service PXE boot requests. The information provided here is intended only as a guideline for setting up such an environment.
As of Release 3.4.5, the updated Xen hypervisor for Oracle VM Server is delivered as a single binary named xen.mb.efi instead of xen.gz, which can be loaded by the EFI loader, Multiboot, and Multiboot2 protocols.
3.1.1 PXE Boot Overview
PXE boot is a method of installing Oracle VM Server on multiple client machines across the network. In general, to successfully perform a PXE boot, you need to do the following:
-
Install and configure an Oracle Linux server to provide services and host files across the network.
-
Configure a DHCP service to direct client machines to the location of a boot loader.
-
Configure a TFTP service to host the boot loader, kernel, initial RAM disk (initrd) image, Xen hypervisor, and configuration files.
-
Host the contents of the Oracle VM Server ISO image file on an NFS or HTTP server.
-
Create a kickstart configuration file for the Oracle VM Server installation.
A kickstart configuration file allows you to automate the Oracle VM Server installation steps that require user input. While not necessary to perform a PXE boot, using a kickstart configuration file is recommended. For more information, see Section 2.1.4, “Performing a Kickstart Installation of Oracle VM Server”.
-
Set up the PXE client boot loaders.
-
For BIOS-based PXE clients you use the pxelinux.0 boot loader that is available from the syslinux package. For UEFI-based PXE clients in a non-Secure Boot configuration, you use the grubx64.efi boot loader.
Note: Oracle VM Release 3.4.1 and Release 3.4.2 require you to build the boot loader for UEFI-based PXE clients. For more information, see Section A.1, “Setting Up PXE Boot for Oracle VM Server Release 3.4.1 and Release 3.4.2”.
-
Create the required boot loader configuration files.
-
Host the boot loader and configuration files on the TFTP server.
3.1.2 Configuring the DHCP Service
The DHCP service handles requests from PXE clients to specify the location of the TFTP service and boot loader files.
-
If your network already has a DHCP service configured, you should edit that configuration to include an entry for PXE clients. If you configure two different DHCP services on a network, requests from clients can conflict and result in network issues and PXE boot failure.
-
The following examples and references are specific to ISC DHCP.
Configure the DHCP service as follows:
-
Install the dhcp package.
# yum install dhcp
-
Edit /etc/dhcp/dhcpd.conf and configure an entry for the PXE clients as appropriate. See Example DHCP Entry for PXE Clients.
-
Start the DHCP service and configure it to start after a reboot.
# service dhcpd start
# chkconfig dhcpd on
Note: If the server has more than one network interface, the DHCP service uses the /etc/dhcp/dhcpd.conf file to determine which interface to listen on. If you make any changes to /etc/dhcp/dhcpd.conf, restart the dhcpd service.
-
Configure the firewall to accept DHCP requests, if required.
Example DHCP Entry for PXE Clients
The following is an example entry in dhcpd.conf for PXE clients:
set vendorclass = option vendor-class-identifier;
option pxe-system-type code 93 = unsigned integer 16;
set pxetype = option pxe-system-type;

option domain-name "example.com";

subnet 192.0.2.0 netmask 255.255.255.0 {
  option domain-name-servers 192.0.2.1;
  option broadcast-address 192.0.2.2;
  option routers 192.0.2.1;
  default-lease-time 14400;
  max-lease-time 28800;
  if substring(vendorclass, 0, 9)="PXEClient" {
    if pxetype=00:07 or pxetype=00:09 {
      filename "tftpboot/grub2/grubx64.efi";
    } else {
      filename "tftpboot/pxelinux/pxelinux.0";
    }
  }
  pool {
    range 192.0.2.14 192.0.2.24;
  }
  next-server 10.0.0.6;
}

host svr1 {
  hardware ethernet 08:00:27:c6:a1:16;
  fixed-address 192.0.2.5;
  option host-name "svr1";
}

host svr2 {
  hardware ethernet 08:00:27:24:0a:56;
  fixed-address 192.0.2.6;
  option host-name "svr2";
}
-
The preceding example configures a pool of generally available IP addresses in the range 192.0.2.14 through 192.0.2.24 on the 192.0.2/24 subnet. Any PXE-booted system on the subnet uses the boot loader that the filename parameter specifies for its PXE type.
-
The boot loader grubx64.efi for UEFI-based clients is located in the grub2 subdirectory of the TFTP server directory. In a non-Secure Boot configuration, you can specify grubx64.efi as the boot loader.
-
The boot loader pxelinux.0 for BIOS-based clients is located in the pxelinux subdirectory of the TFTP server directory.
-
The next-server statement specifies the IP address of the TFTP server from which a client can download the boot loader file.
Note: You should have a next-server statement even if you use the same server to host both DHCP and TFTP services. Otherwise, some boot loaders cannot get their configuration files, which causes the client to reboot, hang, or display a prompt.
-
The static IP addresses 192.0.2.5 and 192.0.2.6 are reserved for svr1 and svr2, which are identified by their MAC addresses.
3.1.3 Configuring the TFTP Service
The TFTP service hosts boot loader files, configuration files, and binaries on the network so PXE clients can retrieve them.
Configure the TFTP service as follows:
-
Install the tftp-server package.
# yum install tftp-server
-
Open /etc/xinetd.d/tftp for editing and then:
-
Set no as the value of the disable parameter.
disable = no
-
Set /tftpboot as the TFTP root.
server_args = -s /tftpboot
-
Save and close /etc/xinetd.d/tftp.
-
Create the /tftpboot directory if it does not already exist.
# mkdir /tftpboot
-
Restart the xinetd service.
# service xinetd restart
-
Configure the firewall to allow TFTP traffic, if required.
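As an illustration only, on a server that filters traffic with iptables (an assumption; your firewall tooling may differ), TFTP listens on UDP port 69, so a rule along the following lines, added to /etc/sysconfig/iptables ahead of any final REJECT rule, would admit TFTP requests:

```
# Allow inbound TFTP (UDP port 69) from PXE clients
-A INPUT -p udp -m udp --dport 69 -j ACCEPT
```

After editing the file, restart the firewall with # service iptables restart. Because TFTP data transfers use dynamically negotiated ports, the TFTP connection-tracking kernel module may also need to be loaded.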
3.1.4 Copying the Xen Hypervisor, Installer Kernel, and RAM Disk Image
The TFTP service hosts the following files so that PXE clients can retrieve them over the network:
-
xen.mb.efi - the Xen hypervisor for Oracle VM Server
-
vmlinuz - the installer kernel
-
initrd.img - the initial RAM disk image
Copy the files to your TFTP service as follows:
-
Create an isolinux subdirectory in the TFTP server root.
# mkdir /tftpboot/isolinux
-
Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.
-
Copy the contents of images/pxeboot from the ISO image file into the isolinux subdirectory you created.
# cp /mnt/images/pxeboot/* /tftpboot/isolinux/
Substitute mnt with the path to the mount point where you mounted the ISO image file.
3.1.5 Hosting the Contents of the Oracle VM Server ISO File
You must host the contents of the Oracle VM Server ISO image file over the network so that the PXE clients can access them. You can use an NFS or HTTP server as appropriate.
You cannot host only the ISO image file itself over the network. You must make the entire contents of the ISO image file available in a single directory.
The following steps provide an example using an NFS server:
-
Install an NFS server if necessary.
# yum install nfs-utils
-
Create a directory for the contents of the Oracle VM Server ISO image file.
# mkdir -p /srv/install/ovs
-
Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.
-
Copy the contents of the Oracle VM Server ISO image file into the directory you created.
# cp -r /mnt/* /srv/install/ovs
Substitute mnt with the path to the mount point where you mounted the ISO image file.
Edit /etc/exports to configure your NFS exports.
/srv/install *(ro,async,no_root_squash,no_subtree_check,insecure)
Depending on your security requirements, you can configure this export only to cater to particular hosts.
-
Start the NFS service.
# service nfs start
If the NFS service is already running and you make any changes to the /etc/exports file, run the following command to update the exports table within the NFS kernel server:
# exportfs -va
-
Configure the NFS service to always start at boot.
# chkconfig nfs on
# chkconfig nfslock on
-
Configure the firewall to allow clients to access the NFS server, if required.
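As a sketch, again assuming iptables is in use, the portmapper listens on port 111 and the NFS service itself on port 2049, so rules such as the following admit client access. Note that on Oracle Linux 6, auxiliary NFSv3 services such as mountd and statd use dynamic ports unless you pin them in /etc/sysconfig/nfs, in which case those ports need rules as well:

```
# Allow the portmapper (TCP/UDP 111) and NFS (TCP/UDP 2049)
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
```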
3.1.6 Copying the Kickstart Configuration File
To perform a PXE boot, you should create a kickstart configuration file to automate the installation process. The kickstart configuration file provides the input that the Anaconda installation wizard requires. If you have not already created a kickstart configuration file, ks.config, see Section 2.1.4, “Performing a Kickstart Installation of Oracle VM Server”.
You must make the kickstart configuration file available to PXE clients over the network. To do this, you can copy the file to the NFS or HTTP server where you host the contents of the Oracle VM Server ISO image file, as follows:
# cp /tmp/OVS_ks.conf /srv/install/kickstart/ks.cfg
Substitute /tmp/OVS_ks.conf with the path to your kickstart configuration file for the Oracle VM Server installation.
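For illustration, a minimal kickstart file for this environment might begin as follows. This is a sketch only: the NFS location reuses the example server from this chapter, the partitioning directives are assumptions, and Section 2.1.4 describes the options that Oracle VM Server actually requires:

```
# Illustrative kickstart fragment; all values are placeholders
install
nfs --server=192.0.2.0 --dir=/srv/install/ovs
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw --iscrypted encrypted-password
clearpart --all --initlabel
autopart
timezone --utc America/New_York
reboot
```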
3.1.7 Setting Up the Boot Loader
PXE clients require a boot loader to load the Xen hypervisor and the Linux installation kernel.
For BIOS-based PXE clients you use the pxelinux.0 boot loader that is available from the syslinux package. For UEFI-based PXE clients in a non-Secure Boot configuration, you use the grubx64.efi boot loader that is available from the Oracle VM Server ISO image file.
Oracle VM Release 3.4.1 and Release 3.4.2 require you to build the boot loader for UEFI-based PXE clients. Before you proceed with any of the following steps, you must first complete the following procedure: Section A.1.1.1, “Building the GRUB2 Boot Loader”.
3.1.7.1 Setting Up the PXELinux Boot Loader for BIOS-based PXE Clients
If you are performing a PXE boot for BIOS-based PXE clients, you use the pxelinux.0 boot loader from the syslinux package.
Getting the PXELinux Boot Loader
To get the PXELinux boot loader, you must install syslinux.
The PXELinux boot loader files must match the kernel requirements of the DHCP server. You should install the syslinux package that is specific to the Oracle Linux installation on which your DHCP service runs.
Complete the following steps:
-
Install the syslinux package.
# yum install syslinux
-
If you have SELinux enabled, install the syslinux-tftpboot package to ensure files have the correct SELinux context.
# yum install syslinux-tftpboot
Hosting the PXELinux Boot Loader
After you get the PXELinux boot loader, you copy the following files to the TFTP server so the BIOS-based PXE clients can access them over the network:
-
pxelinux.0 - the PXELinux binary
-
vesamenu.c32 - the graphical menu system module
-
mboot.c32 - the text-only menu system module. You can use mboot.c32 without vesamenu.c32 if you do not require a graphical boot menu.
To host the boot loader, do the following:
-
Create a pxelinux directory in the TFTP root.
-
Copy the boot loader and menu modules to the pxelinux directory.
# cp /usr/share/syslinux/pxelinux.0 /tftpboot/pxelinux/
# cp /usr/share/syslinux/vesamenu.c32 /tftpboot/pxelinux/
# cp /usr/share/syslinux/mboot.c32 /tftpboot/pxelinux/
Configuring the PXELinux Boot Loader
For BIOS-based PXE clients, you must create two boot loader configuration files on the TFTP server, as follows:
-
Create the pxelinux.cfg directory.
# mkdir /tftpboot/pxelinux/pxelinux.cfg
-
Create a PXE menu configuration file.
# touch /tftpboot/pxelinux/pxelinux.cfg/pxe.conf
-
Create a PXE configuration file.
# touch /tftpboot/pxelinux/pxelinux.cfg/default
-
Configure pxe.conf and default as appropriate. See Example Boot Loader Configurations.
Example Boot Loader Configurations
The following is an example of pxelinux.cfg/pxe.conf:
MENU TITLE PXE Server
NOESCAPE 1
ALLOWOPTIONS 1
PROMPT 0
menu width 80
menu rows 14
MENU TABMSGROW 24
MENU MARGIN 10
menu color border 30;44 #ffffffff #00000000 std
The following is an example of pxelinux.cfg/default:
DEFAULT vesamenu.c32
TIMEOUT 800
ONTIMEOUT BootLocal
PROMPT 0
MENU INCLUDE pxelinux.cfg/pxe.conf
NOESCAPE 1
LABEL BootLocal
    localboot 0
    TEXT HELP
    Boot to local hard disk
    ENDTEXT
LABEL OVS
    MENU LABEL OVS
    KERNEL mboot.c32
    # Note that the APPEND statement must be a single line; the \ delimiter indicates
    # line breaks that you should remove
    APPEND /tftpboot/isolinux/xen.mb.efi --- /tftpboot/isolinux/vmlinuz ip=dhcp \
        dom0_mem=max:128G dom0_max_vcpus=20 \
        ksdevice=eth0 ks=nfs:192.0.2.0:/srv/install/kickstart/ks.cfg \
        method=nfs:192.0.2.0:/srv/install/ovs --- /tftpboot/isolinux/initrd.img
    TEXT HELP
    Install OVM Server
    ENDTEXT
The default behavior on timeout is to boot to the local hard disk. To change the default behavior to force an install, you can change the ONTIMEOUT parameter to point to the OVS menu item. The important thing to remember here is that when an install is completed, the server reboots and, if this option is not changed back to BootLocal, the server enters an installation loop. There are numerous approaches to handling this, and each depends on your own environment, requirements, and policies. The most common approach is to boot the servers using one configuration, wait until they are all in the install process, and then change this configuration file to ensure that they return to local boot when they reboot.
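Switching the configuration between the two behaviors lends itself to a one-line edit. The following sketch demonstrates the substitution on sample input; in practice you would run sed -i against /tftpboot/pxelinux/pxelinux.cfg/default (the expression assumes the ONTIMEOUT directive starts at the beginning of a line, as in the example default file):

```shell
# Flip the timeout target from the OVS install entry back to local boot.
# Against the real file you would run:
#   sed -i 's/^ONTIMEOUT .*/ONTIMEOUT BootLocal/' /tftpboot/pxelinux/pxelinux.cfg/default
printf 'ONTIMEOUT OVS\n' | sed 's/^ONTIMEOUT .*/ONTIMEOUT BootLocal/'
# prints: ONTIMEOUT BootLocal
```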
The KERNEL location points to the mboot.c32 module. This allows us to perform a multiboot operation so that the installer loads within a Xen environment. This is necessary for two reasons. First, it is useful to establish that the Xen hypervisor is at least able to run on the hardware prior to installation. Second, and more importantly, device naming may vary after installation if you do not run the installer from within the Xen hypervisor, leading to problems with device configuration post installation.
In the APPEND line of the preceding example:
-
Some parameters in APPEND are broken into separate lines with the \ delimiter for readability purposes. A valid configuration places the entire APPEND statement on a single line.
-
The Xen hypervisor is loaded first from isolinux/xen.mb.efi in the TFTP server root.
-
The installer kernel is located within the path isolinux/vmlinuz in the TFTP server root.
-
The IP address for the installer kernel is acquired using DHCP.
-
Limits are applied to dom0 for the installer to ensure that the installer is stable while it runs. This is achieved using the default parameters dom0_mem=max:128G and dom0_max_vcpus=20.
-
The ksdevice parameter specifies the network interface to use. You should specify a value that reflects your network configuration, such as eth0, a specific MAC address, or an appropriate keyword. Refer to the appropriate kickstart documentation for more information.
-
The initial RAM disk image is located within the path isolinux/initrd.img in the TFTP server root.
3.1.7.2 Setting Up the GRUB2 Boot Loader for UEFI-based PXE Clients
If you are performing a PXE boot for UEFI-based PXE clients, you can use the GRUB2 boot loader that is available on the Oracle VM Server ISO image file.
Hosting the GRUB2 Boot Loader
Host the GRUB2 boot loader on the TFTP server so PXE clients can access it over the network, as follows:
-
Create a grub2 directory in the TFTP root.
-
Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.
-
In a non-Secure Boot configuration, you need only copy the grubx64.efi boot loader from the /EFI/BOOT/ directory to the grub2 directory.
# cp -r path/EFI/BOOT/grubx64.efi /tftpboot/grub2/
Substitute path with the directory where you mounted the Oracle VM Server ISO image file.
-
Copy the GRUB2 modules and files to the appropriate directory.
# cp -r path/grub2/lib/grub/x86_64-efi/*.{lst,mod} /tftpboot/grub2/x86_64-efi
Substitute path with the path to the contents of the Oracle VM Server ISO image file on your file system.
Setting Up the GRUB2 Configuration
Complete the following steps to set up the GRUB2 configuration:
-
Create an EFI/redhat subdirectory in the TFTP server root.
-
Copy grub.cfg from the Oracle VM Server ISO image file to the directory.
# cp -r path/EFI/BOOT/grub.cfg /tftpboot/grub2/EFI/redhat/grub.cfg-01-cd-ef-gh-ij-kl-mn
Where -cd-ef-gh-ij-kl-mn is the MAC address of the Network Interface Card (NIC) for the PXE boot client.
Substitute path with the directory where you mounted the Oracle VM Server ISO image file.
-
Modify grub.cfg on the TFTP server as appropriate. See Example grub.cfg.
Oracle VM Server provides a GRUB2 boot loader for UEFI and BIOS-based PXE clients. However, the grub.cfg must be compatible with GRUB, not GRUB2. The Anaconda installation program for Oracle VM Server is compatible with GRUB only. You can find more information at: http://docs.oracle.com/cd/E37670_01/E41138/html/ch04s02s01.html
Example grub.cfg
The following is an example of grub.cfg:
menuentry 'Install Oracle VM Server' --class fedora --class gnu-linux --class gnu --class os {
    echo 'Loading Xen...'
    multiboot2 /tftpboot/isolinux/xen.mb.efi dom0_mem=max:128G dom0_max_vcpus=20
    echo 'Loading Linux Kernel...'
    module2 /tftpboot/isolinux/vmlinuz ip=dhcp \
        repo=nfs:192.0.2.0:/srv/install/ovs \
        ks=nfs:192.0.2.0:/srv/install/kickstart/ks.cfg \
        ksdevice=00:10:E0:29:B6:C0
    echo 'Loading initrd...'
    module2 /tftpboot/isolinux/initrd.img
}
In the preceding example:
-
Some parameters in the module2 statement are broken into separate lines with the \ delimiter for readability purposes. A valid configuration contains all parameters and values on a single line.
-
The Xen hypervisor is loaded first from isolinux/xen.mb.efi in the TFTP server root.
-
The following parameters apply limits to dom0 to ensure that the installation program is stable while it runs: dom0_mem=max:128G and dom0_max_vcpus=20.
-
The installer kernel is located within the path isolinux/vmlinuz in the TFTP server root.
-
The IP address for the installer kernel is acquired using DHCP.
-
The repo parameter specifies the IP address of the NFS server that hosts the contents of the Oracle VM Server ISO image file and the path to those contents.
-
The ks parameter specifies the IP address of the NFS server that hosts the kickstart configuration file and the path to the file.
-
The ksdevice parameter specifies the network interface to use. You should specify a value that reflects your network configuration, such as eth0, a specific MAC address, or an appropriate keyword.
-
The initial RAM disk image is located within the path isolinux/initrd.img in the TFTP server root.
3.1.8 Starting the Installation Process
To start the Oracle VM Server installation for PXE clients, do the following:
-
For BIOS-based PXE clients using the PXELinux boot loader, update the /tftpboot/pxelinux/pxelinux.cfg/default configuration file to use the OVS option as the default in the case of a timeout.
option as the default in the case of a timeout. -
Configure the network boot or PXE boot option in the BIOS or UEFI settings for each client, as appropriate.
-
Reboot each target client.
Each client makes a DHCP request during the network boot process. The DHCP service allocates an IP address to the client and provides the path to the boot loader on the TFTP server. Each target client then makes a TFTP request for the boot loader and loads the default menu. After they reach the menu option timeout, each client loads the kernel and initrd image from the TFTP service and begins the boot process. The client then connects to the NFS or HTTP server and installation begins.
3.2 Using the Automated Installer (AI) for Oracle VM Server for SPARC
Oracle Solaris and the Oracle VM Agent for SPARC can be automatically installed on SPARC servers over the network using Solaris Automated Installer (AI). This allows for the rapid deployment of Oracle VM Server for SPARC across multiple SPARC systems, reducing administrative overhead and the likelihood of configuration or installation errors. Solaris AI is described in detail in the document titled Installing Oracle Solaris 11.3 Systems at:
http://docs.oracle.com/cd/E53394_01/html/E54756/useaipart.html
In this section we assume that you already have a sufficient understanding of Solaris AI and are able to set up an Install Server to deploy Solaris to your SPARC systems. This section highlights the additional steps that you need to perform to ensure that the Oracle VM Agent for SPARC and any other required packages are also installed and configured on your SPARC systems.
To set up and configure SPARC AI for rapid deployment of Oracle VM Server for SPARC across multiple SPARC systems, the following steps must be taken:
-
Set up an IPS repository with the Oracle VM Agent for SPARC software.
-
Create an Oracle Solaris Installation Service.
-
Create an installation manifest for Oracle VM Agent for SPARC.
-
Create a configuration profile for the installation of Oracle VM Server for SPARC.
-
Install Oracle VM Server for SPARC on your SPARC hardware.
3.2.1 Installing the Distributed Lock Manager (DLM) Package
If you have not already installed the DLM package, you should download and install it before you install Oracle VM Agent. The DLM package is required to support server pool clustering.
Download the DLM package, ovs-dlm-3.4.x-bxxx.p5p, from https://edelivery.oracle.com/oraclevm. For more information about downloading software, see Section 1.2, “Getting Installation ISOs and Packages”.
You can add the DLM package to an IPS repository and install from there. See Section 3.2.2, “Setting up an IPS repository”.
To install the DLM package, do the following:
-
Stop the ovs-config service:
# svcadm disable -s ovs-config
-
Install the DLM package:
# pkg install -g ovs-dlm-3.4.x-bxxx.p5p dlm
-
Restart the ovs-config service:
# svcadm enable ovs-config
3.2.2 Setting up an IPS repository
In addition to installing Solaris, your IPS repository must be configured to install the Oracle VM Agent for SPARC software. To do this, you must set up an IPS repository that contains the Oracle VM Agent for SPARC software packages to be used during the installation.
-
If you have not already created a package repository that is accessible over HTTP, you must create one by performing the following actions on the system where you intend to host your repositories:
# pkgrepo create /path/to/my-repository
# svccfg -s application/pkg/server setprop pkg/inst_root=/path/to/my-repository
# svccfg -s application/pkg/server setprop pkg/port=8888
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server
-
Check that the package repository server is online:
# svcs pkg/server
STATE          STIME     FMRI
online         timestamp svc:/application/pkg/server:default
-
Download the latest Oracle VM Agent for SPARC software from https://edelivery.oracle.com/oraclevm, as described in Section 1.2, “Getting Installation ISOs and Packages”.
-
Extract the software, for example:
# tar xzf ovs-ldoms-3.4.x-bxxx.tar.gz
-
Copy the software to the package repository, for example:
# pkgrecv -s ovs-ldoms-3.4.x-bxxx/ovs-ldoms.p5p -d /path/to/my-repository 'ovm/*'
# pkgrecv -s ovs-dlm-3.4.x-bxxx.p5p -d /path/to/my-repository 'ovm/*'
-
Restart the package repository server and ensure that it is online:
# svcadm restart application/pkg/server
# svcs pkg/server
-
If the package repository server is in maintenance status, clear the service:
# svcadm clear pkg/server
-
Check that the contents of the repository are available, for example:
# pkgrepo list -s /path/to/my-repository
# pkgrepo list -s http://my-repo-server:8888/
3.2.3 Creating an Oracle Solaris Installation Service
To install Oracle Solaris 11.3 over the network, you must create an Oracle Solaris installation service using the installadm create-service command.
The Automatic Installation (AI) tools package provides the installadm command. You can install the AI tools package with the pkg install install/installadm command.
For instructions to create an Oracle Solaris installation service, see Installing Oracle Solaris 11.3 Systems at:
http://docs.oracle.com/cd/E53394_01/html/E54756/useaipart.html
After the procedure is completed, you can check that your installation service is correctly set up by using the installadm list command. The output from this command should look similar to the following:
# installadm list
Service Name               Alias Of  Status  Arch   Image Path
------------               --------  ------  ----   ----------
solaris11_3_12_5_0-sparc   -         on      sparc  /export/auto_install/solaris11_3_12_5_0-sparc
In the example output, the installation service is solaris11_3_12_5_0-sparc.
To download the software, refer to Oracle Solaris 11.3 Support Repository Updates (SRU) Index ID 2045311.1 from My Oracle Support, available at:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=2045311.1
Oracle VM Server for SPARC 3.3 has been integrated into Oracle Solaris 11.3.
3.2.4 Creating an Installation Manifest
You need to create a custom XML AI manifest file to install and configure the Oracle VM Agent automatically. For more information about custom XML AI manifest files, see Customizing an XML AI Manifest File at:
http://docs.oracle.com/cd/E53394_01/html/E54756/gmfbv.html#scrolltoc
-
Start by copying the default manifest of your install service:
# installadm list -n solaris11_3_12_5_0-sparc -m
Service/Manifest Name       Status   Criteria
---------------------       ------   --------
solaris11_3_12_5_0-sparc
   orig_default             Default  None
# installadm export -n solaris11_3_12_5_0-sparc -m orig_default -o manifest_ai_ovm.xml
-
Open the exported manifest_ai_ovm.xml in a text editor and customize it in the following way:
-
In the <source> section, make sure that a Solaris publisher is defined and that it points to a Solaris IPS repository for the Solaris 11.3 version that also contains the Oracle VM Server for SPARC Release 3.3 or higher packages. For example:
<publisher name="solaris"> <origin name="http://solaris-11-repository"/> </publisher>
-
In the <source> section, add the Oracle VM (ovm) publisher with a reference to the IPS repository that you have set up with Oracle VM Agent for SPARC software. For example:
<publisher name="ovm"> <origin name="http://my-repo-server:8888"/> </publisher>
-
In the <software_data> install section, add the following lines to have the Oracle VM Agent for SPARC software and the DLM software installed:
<name>pkg:/ovm/ovs-agent</name> <name>pkg:/ovm/dlm</name>
-
-
Add the manifest to the installation service. In addition, you can specify criteria associated with this manifest. This manifest is only applicable to SPARC sun4v systems, so you should at least use the sun4v criteria:
# installadm create-manifest -n solaris11_3_12_5_0-sparc -f manifest_ai_ovm.xml -m ovm -c arch="sun4v"
# installadm list -m -n solaris11_3_12_5_0-sparc
Service/Manifest Name       Status   Criteria
---------------------       ------   --------
solaris11_3_12_5_0-sparc
   ovm                               arch = sun4v
   orig_default             Default  None
3.2.5 Creating a Configuration Profile
To have the server automatically configured after the installation, you need to provide a configuration profile. For more information about creating a configuration profile, see Creating System Configuration Profiles at:
http://docs.oracle.com/cd/E53394_01/html/E54756/syscfg-2.html
To create a configuration profile, run the interactive configuration tool and save the output to a file. The following command creates a valid profile in profile_ovm.xml from responses you enter interactively:
# sysconfig create-profile -o profile_ovm.xml
In the interactive configuration tool, you must select the option to configure the network manually or it will not be possible to automatically configure Oracle VM Agent for SPARC using the installation service.
To have the Oracle VM Agent for SPARC configured automatically during the installation, add the following section inside the <service_bundle> section of the generated profile_ovm.xml file:
<service version="1" type="service" name="ovm/ovs-config">
<instance enabled="true" name="default">
<property_group type="application" name="config">
<propval type="astring" name="password" value="encrypted-password"/>
<propval type="boolean" name="autoconfig" value="true"/>
</property_group>
</instance>
</service>
Replace the encrypted-password value with the encrypted version of the password that you want to use for the Oracle VM Agent.
You can generate the encrypted version of the password on any system using the following command:
# python -c "import crypt, sys; print crypt.crypt(sys.argv[-1], \
'\$6\$%s\$' % sys.argv[-2])" $(pwgen -s 16 1) password
Substitute password with the password that you want to use for the Oracle VM Agent, for example:
# python -c "import crypt, sys; print crypt.crypt(sys.argv[-1], \
'\$6\$%s\$' % sys.argv[-2])" $(pwgen -s 16 1) s3cr3tp4ssw0rd
$6$-c$pgcCqd6Urrepi9EzdK93x5XSpyiNzup7SAcDNjVOtsqm6HFNeg385wMu1GjE.J.S.FL8J7gtl5VZnq7tOAd/N0
The output from this command is the encrypted-password value that you should substitute in the section that you added to the configuration profile.
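If pwgen is not available, a hash in the same SHA-512 crypt format can also be produced with OpenSSL (this is an alternative, not the documented method, and it assumes OpenSSL 1.1.1 or later, where the -6 option selects SHA-512 crypt with a random salt):

```shell
# Generate a SHA-512 crypt hash of the password; the salt is random,
# so the output differs on each run but always begins with $6$
openssl passwd -6 s3cr3tp4ssw0rd
```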
Finally, add the configuration profile to the installation service. In addition, you can specify criteria associated with this profile. This profile is only applicable to SPARC sun4v systems, so you should at least use the sun4v criteria:
# installadm create-profile -n solaris11_3_12_5_0-sparc --file profile_ovm.xml -c arch=sun4v
# installadm list -p
Service/Profile Name       Criteria
--------------------       --------
solaris11_3_12_5_0-sparc
   profile_ovm.xml         arch = sun4v
3.2.6 Performing an Installation
For more information about installing a server with the Solaris Auto-Install, see Installing Client Systems at:
http://docs.oracle.com/cd/E53394_01/html/E54756/client.html
On the installation server, you must associate the MAC address of each server that you wish to install with the installation service that you have set up. This is achieved by running the following command:
# installadm create-client -n solaris11_3_12_5_0-sparc -e mac-address
Substitute mac-address with the actual MAC address of the network interface on the server that is used to connect to the installation service.
On your target servers, if you have configured your DHCP to provide the information for the installation service, you can issue the following command at boot:
ok boot net:dhcp - install
If you have not configured DHCP, on your target server issue the following commands at boot:
ok setenv network-boot-arguments host-ip=client-ip,router-ip=router-ip,\
subnet-mask=subnet-mask,hostname=hostname,\
file=http://install-server-ip-address:5555/cgi-bin/wanboot-cgi
ok boot net - install
Substitute client-ip with the IP address that you intend to allocate to the server, router-ip with the IP address of your router or default gateway, subnet-mask with the subnet mask of your network, and hostname with the hostname that you wish to use for your server. Finally, ensure that the URL that you provide for the file parameter matches the URL used to access your Solaris AI server.
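With the placeholders filled in, the commands might look as follows. The addresses and hostname here are purely illustrative assumptions (they reuse the 192.0.2.0/24 documentation range from the x86 examples earlier in this chapter, with an assumed install server at 192.0.2.10):

```
ok setenv network-boot-arguments host-ip=192.0.2.50,router-ip=192.0.2.1,\
subnet-mask=255.255.255.0,hostname=ovs-sparc1,\
file=http://192.0.2.10:5555/cgi-bin/wanboot-cgi
ok boot net - install
```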