6 Managing Oracle Linux KVM Guests

Starting with Oracle Exadata X8M-2, Oracle Linux KVM is the virtualization technology for systems that use RoCE Network Fabric.

6.1 Oracle Linux KVM and Oracle Exadata

When deploying Oracle Exadata X8M-2 or later, you can decide to implement Oracle Linux KVM on the database servers.

A KVM host and one or more guests are installed on every database server. You can configure Oracle Linux KVM environments on your initial deployment using scripts created by Oracle Exadata Deployment Assistant (OEDA) or you can migrate an existing environment to Oracle Linux KVM.

Note:

Oracle Linux KVM is not supported on 8-socket servers, such as X8M-8.

6.1.1 About Oracle Linux KVM

Oracle Linux KVM enables you to deploy the Oracle Linux operating system and application software within a supported virtual environment that is managed by KVM.

Starting with Oracle Exadata System Software release 19.3.0, KVM is the virtualization technology used with Oracle Exadata systems configured with RDMA over Converged Ethernet (RoCE) interconnects. An Oracle Linux KVM environment consists of a management server (the KVM host), virtual machines, and resources. A KVM host is a managed virtual environment providing a lightweight, secure, server platform which runs multiple virtual machines (VMs), also known as guests.

The KVM host is installed on a bare metal computer. The hypervisor on each KVM host is an extremely small-footprint VM manager and scheduler. It is designed so that it is the only fully privileged entity in the system. It controls only the most basic resources of the system, including CPU and memory usage, privilege checks, and hardware interrupts.

The hypervisor securely runs multiple VMs on one host computer. Each VM runs in its own guest and has its own operating system. The KVM host has privileged access to the hardware and device drivers and is the environment from where you manage all the guests.

A guest is an unprivileged VM that uses a defined set of system resources. The guest is started and managed on the KVM host. Because a guest operates independently of other VMs, a configuration change applied to the virtual resources of a guest does not affect any other guests. A failure of the guest does not impact any other guests.

In general, each KVM host supports up to 12 guests. However, the limit is 8 guests on servers that contain 384 GB of RAM and are configured to support Exadata Secure RDMA Fabric Isolation.

Each guest is started alongside the KVM host. The guests never interact with the KVM host directly. Their requirements are handled by the hypervisor. The KVM host only provides a means to administer the hypervisor.

Oracle Exadata Deployment Assistant (OEDA) provides facilities to configure Oracle Linux KVM on Oracle Exadata. You can also use the vm_maker command-line utility to administer Oracle Linux KVM guests.

Note:

Exadata does not support direct manipulation of KVM guests by using the virsh command.
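
For example, routine administration tasks are performed from the KVM host using vm_maker rather than virsh. The following command, which is described later in this chapter, lists the guests and their states:

  # /opt/exadata_ovm/vm_maker --list-domains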

6.1.2 Oracle Linux KVM Deployment Specifications and Limits

This topic describes the deployment specifications and limits for using Oracle Linux KVM on Oracle Exadata Database Machine.

Table 6-1 Oracle Linux KVM Deployment Specifications and Limits for Exadata X10M

  • Maximum number of Oracle Linux KVM guests on each database server: 12
  • Total physical memory on each database server: Minimum 512 GB; Maximum 3072 GB
  • Total available memory on each database server for all Oracle Linux KVM guests: Minimum 440 GB; Maximum 2800 GB
  • Minimum memory limit for each Oracle Linux KVM guest: 16 GB
  • Total CPU cores (vCPUs) on each database server: 192 (384)
  • CPU core (vCPU) limits for each Oracle Linux KVM guest: Minimum 2 (4); Maximum 190 (380)
  • Over-provisioning limit for CPU cores (vCPUs) on each database server for all Oracle Linux KVM guests: 380 (760)

    Note:
    • CPU over-provisioning may cause performance conflicts.
    • Due to hypervisor memory constraints, CPU over-provisioning is not supported on servers containing 512 GB of RAM.
    • CPU oversubscription for KVM guests is not permitted when capacity-on-demand is used on Exadata X10M KVM hosts. CPU oversubscription is only allowed on Exadata X10M when all CPU cores are active on the KVM hosts.

  • Total usable disk storage for Oracle Linux KVM guests on each database server: Minimum 3.40 TB; Maximum 6.97 TB

Table 6-2 Oracle Linux KVM Deployment Specifications and Limits for Exadata X9M-2

  • Maximum number of Oracle Linux KVM guests on each database server
    X9M-2: 12
    X9M-2 Eighth Rack: 4
  • Total physical memory on each database server
    X9M-2: Minimum 512 GB; Maximum 2048 GB
    X9M-2 Eighth Rack: Minimum 384 GB; Maximum 1024 GB
  • Total available memory on each database server for all Oracle Linux KVM guests
    X9M-2: Minimum 440 GB; Maximum 1870 GB
    X9M-2 Eighth Rack: Minimum 328 GB; Maximum 920 GB
  • Minimum memory limit for each Oracle Linux KVM guest
    X9M-2: 16 GB
    X9M-2 Eighth Rack: 16 GB
  • Total CPU cores (vCPUs) on each database server
    X9M-2: 64 (128)
    X9M-2 Eighth Rack: 32 (64)
  • CPU core (vCPU) limits for each Oracle Linux KVM guest
    X9M-2: Minimum 2 (4); Maximum 62 (124)
    X9M-2 Eighth Rack: Minimum 2 (4); Maximum 31 (62)
  • Over-provisioning limit for CPU cores (vCPUs) on each database server for all Oracle Linux KVM guests (Note: CPU over-provisioning may cause performance conflicts.)
    X9M-2: 124 (248)
    X9M-2 Eighth Rack: 62 (124)
  • Total usable disk storage for Oracle Linux KVM guests on each database server
    X9M-2: Minimum 3.40 TB; Maximum 6.97 TB
    X9M-2 Eighth Rack: Minimum 3.40 TB; Maximum 6.97 TB

Table 6-3 Oracle Linux KVM Deployment Specifications and Limits for Exadata X8M-2

  • Maximum number of Oracle Linux KVM guests on each database server
    X8M-2: 12 (Note: The limit is 8 on servers that contain 384 GB of RAM and are configured to support Exadata Secure RDMA Fabric Isolation.)
    X8M-2 Eighth Rack: 4
  • Total physical memory on each database server
    X8M-2: Minimum 384 GB; Maximum 1536 GB
    X8M-2 Eighth Rack: Minimum 384 GB; Maximum 768 GB
  • Total available memory on each database server for all Oracle Linux KVM guests
    X8M-2: Minimum 328 GB; Maximum 1390 GB
    X8M-2 Eighth Rack: Minimum 328 GB; Maximum 660 GB
  • Minimum memory limit for each Oracle Linux KVM guest
    X8M-2: 16 GB
    X8M-2 Eighth Rack: 16 GB
  • Total CPU cores (vCPUs) on each database server
    X8M-2: 48 (96)
    X8M-2 Eighth Rack: 24 (48)
  • CPU core (vCPU) limits for each Oracle Linux KVM guest
    X8M-2: Minimum 2 (4); Maximum 46 (92)
    X8M-2 Eighth Rack: Minimum 2 (4); Maximum 23 (46)
  • Over-provisioning limit for CPU cores (vCPUs) on each database server for all Oracle Linux KVM guests (Note: CPU over-provisioning may cause performance conflicts.)
    X8M-2: 92 (184)
    X8M-2 Eighth Rack: 46 (92)
  • Total usable disk storage for Oracle Linux KVM guests on each database server
    X8M-2: Minimum 3.15 TB; Maximum 6.3 TB
    X8M-2 Eighth Rack: Minimum 3.15 TB; Maximum 6.3 TB

Note:

1 CPU core = 1 OCPU = 2 vCPUs = 2 hyper-threads
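
For example, a guest allocated 8 vCPUs consumes 4 CPU cores (4 OCPUs), because each core provides 2 hyper-threads.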

Table 6-4 Oracle Linux KVM Available Memory Limits

  • 3072 GB total installed memory (24 x 128 GB DIMMs), applicable to X10M: 2800 GB available for Oracle Linux KVM guests
  • 2304 GB total installed memory (24 x 96 GB DIMMs), applicable to X10M: 2090 GB available for Oracle Linux KVM guests
  • 2048 GB total installed memory (32 x 64 GB DIMMs), applicable to X9M-2: 1870 GB available for Oracle Linux KVM guests
  • 1536 GB total installed memory (24 x 64 GB DIMMs), applicable to X10M, X9M-2, and X8M-2: 1390 GB available for Oracle Linux KVM guests
  • 1024 GB total installed memory (16 x 64 GB DIMMs), applicable to X9M-2 and X9M-2 Eighth Rack: 920 GB available for Oracle Linux KVM guests
  • 768 GB total installed memory (12 x 64 GB DIMMs), applicable to X8M-2 and X8M-2 Eighth Rack: 660 GB available for Oracle Linux KVM guests
  • 512 GB total installed memory (16 x 32 GB DIMMs), applicable to X10M and X9M-2: 440 GB available for Oracle Linux KVM guests
  • 384 GB total installed memory (12 x 32 GB DIMMs), applicable to X9M-2 Eighth Rack, X8M-2, and X8M-2 Eighth Rack: 328 GB available for Oracle Linux KVM guests

6.1.3 Supported Operations in the KVM Host

Manually modifying the KVM host can result in configuration issues, which can degrade performance or cause a loss of service.

Caution:

Oracle does not support any changes that are made to the KVM host beyond what is documented. Third-party applications can be installed on the KVM host and guests, but if there are issues with the Oracle software, then Oracle Support Services may request the removal of the third-party software while troubleshooting the cause.

If you are in doubt whether an operation on the KVM host is supported, contact Oracle Support Services.

6.1.4 Oracle Linux KVM Resources

Two fundamental parts of the Oracle Linux KVM infrastructure – networking and storage – are configured outside of Oracle Linux KVM.

Networking

When specifying the configuration details for your Oracle Exadata Rack using Oracle Exadata Deployment Assistant (OEDA), you provide input on how the required network IP addresses for Oracle Linux KVM environments should be created. The generated OEDA setup files are transferred to the Oracle Exadata Rack and used to create the network addresses.

Storage

Oracle Linux KVM always requires a location to store environment resources that are essential to the creation and management of virtual machines (VMs). These resources include ISO files (virtual DVD images), VM configuration files and VM virtual disks. The location of such a group of resources is called a storage repository.

On Oracle Exadata, storage for Oracle Linux KVM guests uses an XFS file system.

On 2-socket Oracle Exadata Database Machine systems only, you can purchase a disk expansion kit to increase storage capacity. You can use the additional disk space to support more Oracle Linux KVM guests (up to a maximum of 12) by expanding /EXAVMIMAGES or to increase the size of the /u01 partition in each guest.
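
For example, you can check the current size and free space of the storage repository file system from the KVM host (output varies by system and configuration):

  # df -h /EXAVMIMAGES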

6.2 Migrating a Bare Metal Oracle RAC Cluster to an Oracle RAC Cluster in Oracle Linux KVM

You can move an existing Oracle Real Application Clusters (Oracle RAC) cluster into a virtual environment that is managed by KVM.

Note:

This topic applies only to two-socket x86 servers. It does not apply to eight-socket servers such as Oracle Exadata X8M-8.

The migration of a bare metal Oracle RAC cluster to an Oracle RAC cluster in Oracle Linux KVM can be achieved in the following ways:

  • Migrate to Oracle RAC cluster in Oracle Linux KVM using the existing bare metal Oracle RAC cluster with zero downtime.

  • Migrate to Oracle RAC cluster in Oracle Linux KVM by creating a new Oracle RAC cluster in Oracle Linux KVM with minimal downtime.

  • Migrate to Oracle RAC cluster in Oracle Linux KVM using Oracle Data Guard with minimal downtime.

  • Migrate to Oracle RAC cluster in Oracle Linux KVM using Oracle Recovery Manager (RMAN) backup and restore with complete downtime.

The conversion of a bare metal Oracle RAC cluster to an Oracle RAC cluster in Oracle Linux KVM has the following implications:

  • Each of the database servers will be converted to an Oracle Linux KVM server on which a KVM host is created along with one or more guests, depending on the number of Oracle RAC clusters being deployed. Each guest on a database server will belong to a particular Oracle RAC cluster.

  • As part of the conversion procedure, the bare metal Oracle RAC cluster is initially converted to one Oracle RAC cluster in Oracle Linux KVM, with one guest per database server.

  • At the end of the conversion, the cell disk and grid disk configuration of the storage cells are the same as they were at the beginning of the conversion.

  • The KVM host uses a small portion of the system resources on each database server. Typically, the KVM host uses 24 GB plus 6% of the server RAM, and 4 virtual CPUs. Take these resource requirements into consideration when sizing the SGA for databases running in conjunction with Oracle Linux KVM. (See the worked example following this list.)

  • Refer to My Oracle Support note 2099488.1 for the complete instructions.
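
As a worked example of the KVM host overhead noted above: on a database server with 512 GB of RAM, the KVM host typically consumes about 24 GB plus 6% of 512 GB (roughly 31 GB), or approximately 55 GB of memory, in addition to 4 virtual CPUs. This overhead is one reason why a 512 GB server provides only about 440 GB of total available memory for Oracle Linux KVM guests (see Table 6-4).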

6.3 Showing Running Domains

Use the vm_maker utility to list the running domains.

  1. Connect to the KVM host.
  2. Run the command /opt/exadata_ovm/vm_maker --list-domains to list the domains.
    # /opt/exadata_ovm/vm_maker --list-domains
    dm01db01vm01.example.com(55)      : running
    dm01db01vm02.example.com(57)      : running
    dm01db01vm03.example.com(59)      : running

    To view memory or CPU distribution for the domains, there are separate commands:

    • /opt/exadata_ovm/vm_maker --list --memory
    • /opt/exadata_ovm/vm_maker --list --vcpu

6.4 Starting a Guest

You can start a guest manually, or configure the guest to start automatically when the KVM host is started.

  1. Connect to the KVM host.
  2. To manually start a guest, use vm_maker to start the guest.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --start-domain db01_guest01.example.com
    [INFO] Running 'virsh start db01_guest01.example.com...
    Domain db01_guest01.example.com started
    
    [INFO] The domain has been started but may not have network connectivity for 
    several minutes.
  3. To configure autostart for a guest, use the vm_maker --autostart command.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --autostart db01_guest01.example.com --enable
    [INFO] Running 'virsh autostart db01_guest01.example.com'...
    Domain db01_guest01.example.com marked as autostarted

6.5 Starting a Guest using the Diagnostic ISO File

Use this procedure to boot a guest using the diagnostic ISO file (diagnostics.iso).

  1. Connect to the KVM host.
  2. Download the diagnostic ISO file (diagnostics.iso) corresponding to your current Oracle Exadata System Software release.

    If required, use the imageinfo command to determine your current Oracle Exadata System Software release.

    To find the diagnostic ISO file, search the My Oracle Support (MOS) patch repository using "exadata diagnostic iso" as the search term. You can also locate the Diagnostic ISO file in the Supplemental README that is associated with your Oracle Exadata System Software release. The Supplemental README for each Oracle Exadata System Software release is documented in My Oracle Support document 888828.1.

  3. To start a guest using the diagnostic ISO file:
    1. Configure the guest to boot using the diagnostic ISO file.

      Run the vm_maker --boot-from-iso command:

      # vm_maker --boot-from-iso ISO-file --domain guest-name

      In the command:

      • ISO-file specifies the name of the diagnostic ISO that you want to use to boot the specified guest.
      • guest-name specifies the name of the guest that you want to boot using the specified ISO file.

      For example:

      # vm_maker --boot-from-iso /root/home/diagnostics.iso --domain dm01vm01
      [INFO] Running 'virsh undefine dm01vm01.example.com'...
      [INFO] Running 'virsh define /var/log/exadatatmp/dm01vm01.example.com.xml.new.357b'...
      [INFO] The domain 'dm01vm01.example.com' is ready for booting.
      [INFO] Run the following command to boot from the diagnostic iso:
      [INFO] 
      [INFO] virsh start dm01vm01.example.com --console
      [INFO] 
      [INFO] If network is needed to be setup on the VM, run 
      [INFO] setup_management.sh from the console after the guest has booted.
      [INFO] 
      [INFO] When finished, run the following commands to restore
      [INFO] the domain to boot from its hard disk:
      [INFO] 
      [INFO] vm_maker --stop-domain dm01vm01.example.com --force
      [INFO] vm_maker --boot-from-hd --domain dm01vm01.example.com
    2. Boot the guest using the diagnostic ISO file.

      Use the virsh start command specified in the output from the previous vm_maker --boot-from-iso command.

      For example:

      # virsh start dm01vm01.example.com --console

      The guest now boots using the diagnostic ISO file and the console is displayed in the terminal session.

    3. If required, start the guest network.

      If you need network access to the guest while in diagnostic mode, you can start a network interface and SSH server by running setup_management.sh from the guest console and following the prompts to supply the network details.

      For example:

      Welcome to Exadata Shell!
      bash-4.2# setup_management.sh
      Ethernet interface (eth0,1,2,3) with optional VLAN id (ethX.YYYY) [eth0]:
      IP Address of this host: 192.0.2.132
      Netmask of this host: 255.255.255.128
      Default gateway: 192.0.2.129
      [INFO     ] 192.0.2.129 added as default gateway.
      * sshd.service - OpenSSH server daemon
        Loaded: loaded (/usr/lib/systemd/system/sshd.service; disabled; vendor preset: enabled)
        Active: inactive (dead)
        Docs: man:sshd(8)
        man:sshd_config(5)
      [INFO     ] Starting sshd service
  4. When you are finished using the guest in diagnostic mode, stop the domain and reconfigure it to boot using its primary boot device.

    Use the commands specified in the output from the previous vm_maker --boot-from-iso command.

    For example:

    # vm_maker --stop-domain dm01vm01.example.com --force
    [INFO] Running 'virsh destroy dm01vm01.example.com --graceful'...
    Domain dm01vm01.example.com destroyed
    [INFO] Checking for DomU shutdown...
    [INFO] DomU successfully shutdown.
    
    # vm_maker --boot-from-hd --domain dm01vm01.example.com
    [INFO] Running 'virsh undefine dm01vm01.example.com'...
    [INFO] Running 'virsh define /var/log/exadatatmp/dm01vm01.example.com.xml.new.eab9'...
    [INFO] The domain is ready to be restarted.

    The guest is now ready to be restarted by using the vm_maker --start-domain command.

6.6 Monitoring a Guest Console During Startup

To see Oracle Linux boot messages during guest startup, use the --console option with the vm_maker --start-domain command.

  1. Connect as the root user to the KVM host.
  2. Obtain the guest name using the /opt/exadata_ovm/vm_maker --list-domains command.
  3. Use the following command to attach to the guest console, as part of starting the guest:

    In the following command, GuestName is the name of the guest.

    # vm_maker --start-domain GuestName --console
  4. Press CTRL+] to disconnect from the console.

6.7 Managing Automatic Startup of Oracle Linux KVM Guests

By default, when you create a guest, it is configured to automatically start when the KVM host is started. You can enable and disable this feature as needed.

6.7.1 Enabling Guest Automatic Start

You can configure a guest to automatically start when the KVM host is started.

  1. Connect to the KVM host.
  2. Use vm_maker to enable autostart for the guest.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --autostart db01_guest01.example.com --enable
    [INFO] Running 'virsh autostart db01_guest01.example.com --enable'...
    Domain db01_guest01.example.com marked as autostarted

6.7.2 Disabling Guest Automatic Start

You can disable a guest from automatically starting when the KVM host is started.

  1. Connect to the KVM host.
  2. Use vm_maker to disable autostart for the guest.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --autostart db01_guest01.example.com --disable
    [INFO] Running 'virsh autostart db01_guest01.example.com --disable'...
    Domain db01_guest01.example.com unmarked as autostarted

6.8 Shutting Down a Guest From Within the Guest

The following procedure describes how to shut down a guest from within the guest:

  1. Connect as the root user to the guest.
  2. Use the following command to shut down the guest:
    # shutdown -h now
    

6.9 Shutting Down a Guest From Within the KVM Host

You can shut down a guest from within a KVM host.

  1. Connect as the root user to the KVM host.
  2. Use the following command to shut down the guest, where GuestName is the name of the guest:
    # /opt/exadata_ovm/vm_maker --stop-domain GuestName

    To shut down all guests within the KVM host, use the following command:

    # /opt/exadata_ovm/vm_maker --stop-domain --all

    The following is an example of the output:

    [INFO] Running 'virsh shutdown db01_guest01.example.com'...
    Domain db01_guest01.example.com is being shutdown
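
    If a guest does not shut down cleanly, you can force it to stop by adding the --force option, as shown in the diagnostic ISO procedure earlier in this chapter:

    # /opt/exadata_ovm/vm_maker --stop-domain GuestName --force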

6.10 Backing Up the KVM Host and Guests

In an Oracle Linux KVM deployment, you need to back up the KVM host and the guests.

Backups are required to restore and recover from a database server physical or logical data issue where you need to restore database server operating system files.

6.10.1 Backing Up the KVM Host Using Snapshot-Based Backup

This procedure describes how to take a snapshot-based backup of the KVM host.

The values shown in the following steps are examples and you may need to substitute different values to match your situation.

All steps must be performed as the root user.

  1. Prepare a destination to hold the backup.

    The destination should reside outside of the local machine, such as a writable NFS location, and be large enough to hold the backup. For non-customized partitions, the space needed for holding the backup is approximately 50 GB.

    You can use the following commands to prepare a backup destination using NFS.

    # mkdir -p /root/remote_FS
    # mount -t nfs -o rw,intr,soft,proto=tcp,nolock ip_address:/nfs_location/ /root/remote_FS

    In the mount command, ip_address is the IP address of the NFS server, and nfs_location is the NFS location holding the backups.

  2. Remove the LVDoNotRemoveOrUse logical volume.

    The logical volume /dev/VGExaDb/LVDoNotRemoveOrUse is a placeholder to make sure there is always free space available to create a snapshot.

    Use the following script to check for the existence of the LVDoNotRemoveOrUse logical volume and remove it if present.

    lvm lvdisplay --ignorelockingfailure /dev/VGExaDb/LVDoNotRemoveOrUse
    if [ $? -eq 0 ]; then 
      # LVDoNotRemoveOrUse logical volume exists. 
      lvm lvremove -f /dev/VGExaDb/LVDoNotRemoveOrUse 
      if [ $? -ne 0 ]; then 
        echo "Unable to remove logical volume: LVDoNotRemoveOrUse. Do not proceed with backup." 
      fi
    fi

    If the LVDoNotRemoveOrUse logical volume does not exist, then do not proceed with the remaining steps and determine the reason.

  3. Determine the active system volume.
    You can use the imageinfo command and examine the device hosting the active system partition.
    # imageinfo
    
    Kernel version: 4.14.35-1902.5.1.4.el7uek.x86_64 #2 SMP Wed Oct 9 19:29:16 PDT 2019 x86_64
    Image kernel version: 4.14.35-1902.5.1.4.el7uek
    Image version: 19.3.1.0.0.191018
    Image activated: 2019-11-04 19:18:32 -0800
    Image status: success
    Node type: KVMHOST
    System partition on device: /dev/mapper/VGExaDb-LVDbSys1

    In the imageinfo output, the system partition device ends with the name of the logical volume that supports the active root (/) file system. Depending on the system image that is in use, the logical volume name is LVDbSys1 or LVDbSys2. Likewise, the logical volume for the /var file system is either LVDbVar1 or LVDbVar2.

    You can also confirm the active devices by using the df -hT command and examining the output associated with the root (/) and /var file systems. For example:

    # df -hT
    Filesystem                          Type      Size  Used Avail Use% Mounted on
    devtmpfs                            devtmpfs  378G     0  378G   0% /dev
    tmpfs                               tmpfs     755G  1.0G  754G   1% /dev/shm
    tmpfs                               tmpfs     378G  4.8M  378G   1% /run
    tmpfs                               tmpfs     378G     0  378G   0% /sys/fs/cgroup
    /dev/mapper/VGExaDb-LVDbSys1        xfs        15G  7.7G  7.4G  52% /
    /dev/sda1                           xfs       510M  112M  398M  22% /boot
    /dev/sda2                           vfat      254M  8.5M  246M   4% /boot/efi
    /dev/mapper/VGExaDb-LVDbHome        xfs       4.0G   33M  4.0G   1% /home
    /dev/mapper/VGExaDb-LVDbVar1        xfs       2.0G  139M  1.9G   7% /var
    /dev/mapper/VGExaDb-LVDbVarLog      xfs        18G  403M   18G   3% /var/log
    /dev/mapper/VGExaDb-LVDbVarLogAudit xfs      1014M  143M  872M  15% /var/log/audit
    /dev/mapper/VGExaDb-LVDbTmp         xfs       3.0G  148M  2.9G   5% /tmp
    /dev/mapper/VGExaDb-LVDbOra1        xfs       100G   32G   69G  32% /u01
    tmpfs                               tmpfs      76G     0   76G   0% /run/user/0

    The remaining examples in the procedure use LVDbSys1 and LVDbVar1, which is consistent with the above imageinfo and df output. However, if the active image is using LVDbSys2, then modify the examples in the following steps to use LVDbSys2 instead of LVDbSys1, and LVDbVar2 instead of LVDbVar1.

  4. Take snapshots of the logical volumes on the server.

    Depending on the active system partition identified in the previous step, remember to use either LVDbSys1 or LVDbSys2 to identify the logical volume for the root (/) file system, and likewise use either LVDbVar1 or LVDbVar2 to identify the logical volume for the /var file system.

    # lvcreate -L1G -s -c 32K -n root_snap /dev/VGExaDb/LVDbSys1
    # lvcreate -L1G -s -c 32K -n home_snap /dev/VGExaDb/LVDbHome
    # lvcreate -L1G -s -c 32K -n var_snap /dev/VGExaDb/LVDbVar1
    # lvcreate -L1G -s -c 32K -n varlog_snap /dev/VGExaDb/LVDbVarLog
    # lvcreate -L1G -s -c 32K -n audit_snap /dev/VGExaDb/LVDbVarLogAudit
    # lvcreate -L1G -s -c 32K -n tmp_snap /dev/VGExaDb/LVDbTmp
  5. Label the snapshots.
    # xfs_admin -L DBSYS_SNAP /dev/VGExaDb/root_snap
    # xfs_admin -L HOME_SNAP /dev/VGExaDb/home_snap
    # xfs_admin -L VAR_SNAP /dev/VGExaDb/var_snap
    # xfs_admin -L VARLOG_SNAP /dev/VGExaDb/varlog_snap
    # xfs_admin -L AUDIT_SNAP /dev/VGExaDb/audit_snap
    # xfs_admin -L TMP_SNAP /dev/VGExaDb/tmp_snap
  6. Mount the snapshots.
    Mount all of the snapshots under a common directory location; for example, /root/mnt.
    # mkdir -p /root/mnt
    # mount -t xfs -o nouuid /dev/VGExaDb/root_snap /root/mnt
    # mkdir -p /root/mnt/home
    # mount -t xfs -o nouuid /dev/VGExaDb/home_snap /root/mnt/home
    # mkdir -p /root/mnt/var
    # mount -t xfs -o nouuid /dev/VGExaDb/var_snap /root/mnt/var
    # mkdir -p /root/mnt/var/log
    # mount -t xfs -o nouuid /dev/VGExaDb/varlog_snap /root/mnt/var/log
    # mkdir -p /root/mnt/var/log/audit
    # mount -t xfs -o nouuid /dev/VGExaDb/audit_snap /root/mnt/var/log/audit
    # mkdir -p /root/mnt/tmp
    # mount -t xfs -o nouuid /dev/VGExaDb/tmp_snap /root/mnt/tmp
  7. Back up the snapshots.
    Use the following commands to write a backup to your prepared NFS backup destination as a compressed archive file.
    # cd /root/mnt
    # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /root/remote_FS/myKVMbackup.tar.bz2 * /boot > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr
  8. Check the /tmp/backup_tar.stderr file for any significant errors.
    Errors about failing to archive open sockets, and other similar errors, can be ignored.
  9. Unmount and remove all of the snapshots.
    # cd /
    # umount /root/mnt/tmp
    # umount /root/mnt/var/log/audit
    # umount /root/mnt/var/log
    # umount /root/mnt/var
    # umount /root/mnt/home
    # umount /root/mnt
    # lvremove /dev/VGExaDb/tmp_snap
    # lvremove /dev/VGExaDb/audit_snap
    # lvremove /dev/VGExaDb/varlog_snap
    # lvremove /dev/VGExaDb/var_snap
    # lvremove /dev/VGExaDb/home_snap
    # lvremove /dev/VGExaDb/root_snap
  10. Unmount the NFS backup destination.
    # umount /root/remote_FS
  11. Remove the mount point directories that you created during this procedure.
    # rm -r /root/mnt
    # rmdir /root/remote_FS
  12. Recreate the /dev/VGExaDb/LVDoNotRemoveOrUse logical volume.
    # lvm lvcreate -n LVDoNotRemoveOrUse -L2G VGExaDb -y
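
As an optional sanity check, which is not part of the documented procedure, you can verify that the backup archive is readable by listing its contents from any host where the NFS backup location is mounted. For example, using the archive name from step 7 (adjust the path to match your mount point):

  # tar -tjf /root/remote_FS/myKVMbackup.tar.bz2 > /dev/null && echo "Archive is readable"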

6.10.2 Backing Up the Oracle Linux KVM Guests

You can back up Oracle Linux KVM guests by using the following procedures.

There are three ways to back up the guests:

Table 6-5 Oracle Linux KVM guest backup approaches

  • Method 1: Back Up All of the KVM Guests
    Description: From the KVM host, back up all guests in the /EXAVMIMAGES storage repository using XFS reflinks to get a consistent backup.
    Managed by: KVM host administrator
    Best for: Recovering all guests after a compute node failure, which renders the guests unbootable.
  • Method 2: Back Up an Individual Guest
    Description: From the KVM host, selectively back up a guest in the /EXAVMIMAGES storage repository using XFS reflinks to get a consistent backup.
    Managed by: KVM host administrator
    Best for: Selective recovery of a guest after a compute node failure that renders the guest unbootable but does not affect all of the other guests.
  • Method 3: Back Up a Guest Internally
    Description: Back up a guest using a snapshot-based backup procedure that is run inside the guest.
    Managed by: Guest administrator
    Best for: Recovery of a guest after a failure where the guest is still bootable and allows root login. This method also enables selective recovery of specific files.

6.10.2.1 Method 1: Back Up All of the KVM Guests

You can back up all of the guests by backing up the storage repository under /EXAVMIMAGES.

The backup destination should be separate from the KVM host server, such as a writable NFS location, and be large enough to hold the backup. The space needed for the backup is proportional to the number of guests deployed on the system. The space needed for each guest backup is approximately 200 GB.

  1. Prepare the guest images.

    Use the following script to prepare the guest image backups under /EXAVMIMAGES/Backup.

    #!/bin/bash
    
    ScriptStarttime=$(date +%s)
    printf "This script is going to remove the directory /EXAVMIMAGES/Backup if it exists.  If that is not acceptable, exit the script by typing n, manually remove /EXAVMIMAGES/Backup and come back to rerun the script.  Otherwise, press y to continue :"
    read proceed
    if [[ ${proceed} == "n" ]] || [[ ${proceed} == "N" ]]
    then
      exit 0
    elif [[ ${proceed} != "n" ]] && [[ ${proceed} != "N" ]] && [[ ${proceed} != "y" ]] && [[ ${proceed} != "Y" ]]
    then
      echo "Invalid input"
      exit 1
    fi
    rm -rf /EXAVMIMAGES/Backup
    
    ## Create the Backup Directory
    
    mkdirStartTime=$(date +%s)
    find /EXAVMIMAGES -type d|grep -v 'lost+found'| awk '{print "mkdir -p /EXAVMIMAGES/Backup"$1}'|sh
    mkdir -p /EXAVMIMAGES/Backup/XML
    mkdirEndTime=$(date +%s)
    mkdirTime=$(expr ${mkdirEndTime} - ${mkdirStartTime})
    echo "Backup Directory creation time :" ${mkdirTime}" seconds"
    
    ## Create reflinks for files not in /EXAVMIMAGES/GuestImages
    
    relinkothesStartTime=$(date +%s)
    
    find /EXAVMIMAGES/ -not -path "/EXAVMIMAGES/GuestImages/*" -not -path "/EXAVMIMAGES/Backup/*" -type f|awk '{print "cp --reflink",$0,"/EXAVMIMAGES/Backup"$0}'|sh
    
    relinkothesEndTime=$(date +%s)
    reflinkothesTime=$(expr ${relinkothesEndTime} - ${relinkothesStartTime})
    
    echo "Reflink creation time for files other than in /EXAVMIMAGES/GuestImages :" ${reflinkothesTime}" seconds"
    
    cp /etc/libvirt/qemu/*.xml /EXAVMIMAGES/Backup/XML
    
    for hostName in $(virsh list|egrep -v 'Id|^-'|awk '{print $2}'|sed '/^$/d')
    do
    
    ## Pause the guests
    
      PauseStartTime=$(date +%s)
      virsh suspend ${hostName}
      PauseEndTime=$(date +%s)
      PauseTime=$(expr ${PauseEndTime} - ${PauseStartTime})
      echo "SuspendTime for guest - ${hostName} :" ${PauseTime}" seconds"
    
    ## Create reflinks for all the files in /EXAVMIMAGES/GuestImages
    
      relinkStartTime=$(date +%s)
    
      find /EXAVMIMAGES/GuestImages/${hostName} -type f|awk '{print "cp --reflink", $0,"/EXAVMIMAGES/Backup"$0}'|sh
    
      relinkEndTime=$(date +%s)
      reflinkTime=$(expr ${relinkEndTime} - ${relinkStartTime})
      echo "Reflink creation time for guest - ${hostName} :" ${reflinkTime}" seconds"
    
    ## Unpause the guest
    
      unPauseStartTime=$(date +%s)
      virsh resume ${hostName}
      unPauseEndTime=$(date +%s)
      unPauseTime=$(expr ${unPauseEndTime} - ${unPauseStartTime})
      echo "ResumeTime for guest - ${hostName} :" ${unPauseTime}" seconds"
    
    done
    
    ScriptEndtime=$(date +%s)
    ScriptRunTime=$(expr ${ScriptEndtime} - ${ScriptStarttime})
    echo ScriptRunTime ${ScriptRunTime}" seconds"
  2. Create a backup of the guest images.

    Back up all of the reflink files under /EXAVMIMAGES/Backup to a remote location. The backup enables restoration if the KVM host is permanently damaged or lost.

    For example:

    # mkdir -p /remote_FS
    # mount -t nfs -o rw,intr,soft,proto=tcp,nolock ip_address:/nfs_location/ /remote_FS
    # cd /EXAVMIMAGES/Backup
    # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /remote_FS/exavmimages.tar.bz2 * > /tmp/exavmimages_tar.stdout 2> /tmp/exavmimages_tar.stderr

    In the mount command, ip_address is the IP address of the NFS server, and nfs_location is the NFS location holding the backup.

    After the backup completes, check for any significant errors from the tar command. In the previous example, the tar command writes errors to the file at /tmp/exavmimages_tar.stderr.

  3. Remove the /EXAVMIMAGES/Backup directory and its contents.

    For example:

    # cd /
    # rm -rf /EXAVMIMAGES/Backup
  4. Unmount the NFS backup location and remove the mount point directory.

    For example:

    # umount /remote_FS
    # rmdir /remote_FS
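
As an optional check, which is not part of the documented procedure, you can confirm that the archive was written and note its size before unmounting the NFS location in step 4. For example:

  # ls -lh /remote_FS/exavmimages.tar.bz2
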
6.10.2.2 Method 2: Back Up an Individual Guest

You can back up an individual guest by backing up its specific folder under /EXAVMIMAGES.

The backup destination should be separate from the KVM host server, such as a writable NFS location, and be large enough to hold the backup. The space needed for an individual guest backup is approximately 200 GB.

  1. Prepare the guest image.

    Use the following script to prepare the guest image backup under /EXAVMIMAGES/Backup.

    #!/bin/bash
    
    ScriptStarttime=$(date +%s)
    
    printf "This script is going to remove the directory /EXAVMIMAGES/Backup if it exists. If that is not acceptable, exit the script by typing n, manually remove /EXAVMIMAGES/Backup and come back to rerun the script.  Otherwise, press y to continue :"
    
    read proceed
    
    if [[ ${proceed} == "n" ]] || [[ ${proceed} == "N" ]]
    then
      exit 0
    elif [[ ${proceed} != "n" ]] && [[ ${proceed} != "N" ]] && [[ ${proceed} != "y" ]] && [[ ${proceed} != "Y" ]]
    then
      echo "Invalid input"
      exit 1
    fi
    
    rm -rf /EXAVMIMAGES/Backup
    
    printf "Enter the name of the KVM guest to be backed up :"
    
    read KVMGuestName
    
    ## Create the Backup Directory
    
    if [ ! -d /EXAVMIMAGES/GuestImages/${KVMGuestName} ]
    then
      echo "Guest ${KVMGuestName} does not exist"
      exit 1
    fi
    
    mkdirStartTime=$(date +%s)
    
    find /EXAVMIMAGES/GuestImages/${KVMGuestName} -type d|grep -v 'lost+found'|awk '{print "mkdir -p /EXAVMIMAGES/Backup"$1}'|sh
    
    mkdir -p /EXAVMIMAGES/Backup/XML
    
    mkdirEndTime=$(date +%s)
    mkdirTime=$(expr ${mkdirEndTime} - ${mkdirStartTime})
    echo "Backup Directory creation time :" ${mkdirTime}" seconds"
    
    cp /etc/libvirt/qemu/${KVMGuestName}.xml /EXAVMIMAGES/Backup/XML
    
    ## Pause the guest
    
    PauseStartTime=$(date +%s)
    virsh suspend ${KVMGuestName}
    PauseEndTime=$(date +%s)
    PauseTime=$(expr ${PauseEndTime} - ${PauseStartTime})
    echo "PauseTime for guest - ${KVMGuestName} :" ${PauseTime}" seconds"
    
    ## Create reflinks for all the files in /EXAVMIMAGES/GuestImages/${KVMGuestName}
    
    relinkStartTime=$(date +%s)
    
    find /EXAVMIMAGES/GuestImages/${KVMGuestName} -type f|awk '{print "cp --reflink", $0,"/EXAVMIMAGES/Backup"$0}'|sh
    
    relinkEndTime=$(date +%s)
    reflinkTime=$(expr ${relinkEndTime} - ${relinkStartTime})
    echo "Reflink creation time for guest - ${KVMGuestName} :" ${reflinkTime}" seconds"
    
    ## Unpause the guest
    
    unPauseStartTime=$(date +%s)
    virsh resume ${KVMGuestName}
    unPauseEndTime=$(date +%s)
    unPauseTime=$(expr ${unPauseEndTime} - ${unPauseStartTime})
    echo "unPauseTime for guest - ${KVMGuestName} :" ${unPauseTime}" seconds"
    
    ScriptEndtime=$(date +%s)
    ScriptRunTime=$(expr ${ScriptEndtime} - ${ScriptStarttime})
    echo ScriptRunTime ${ScriptRunTime}" seconds"
  2. Create a backup of the guest image.

    Back up the reflink files under /EXAVMIMAGES/Backup to a remote location. The backup enables restoration of the specific guest if the KVM host is permanently damaged or lost.

    For example:

    # mkdir -p /remote_FS
    # mount -t nfs -o rw,intr,soft,proto=tcp,nolock ip_address:/nfs_location/ /remote_FS
    # cd /EXAVMIMAGES/Backup
    # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /remote_FS/exavmimage.tar.bz2 * > /tmp/exavmimage_tar.stdout 2> /tmp/exavmimage_tar.stderr

    In the mount command, ip_address is the IP address of the NFS server, and nfs_location is the NFS location holding the backup.

    In the example, the backup file is named exavmimage.tar.bz2. You may choose another name that identifies the guest being backed up.

    After the backup completes, check for any significant errors from the tar command. In the previous example, the tar command writes errors to the file at /tmp/exavmimage_tar.stderr.

  3. Remove the /EXAVMIMAGES/Backup directory and its contents.

    For example:

    # cd /
    # rm -rf /EXAVMIMAGES/Backup
  4. Unmount the NFS backup location and remove the mount point directory.

    For example:

    # umount /remote_FS
    # rmdir /remote_FS

6.10.2.3 Method 3: Back Up a Guest Internally

You can take a snapshot-based backup of a guest from inside the guest.

All steps are performed from inside the guest.

Note:

This backup method is performed internally within the guest and uses logical volume snapshots. Compared with other backup methods, this method provides more limited recovery options because the backup is only useful when the guest is bootable and allows root user login.

This procedure backs up the contents of all currently active file systems in the guest. Before starting, ensure that all of the file systems that you want to back up are mounted.

The values shown in the following steps are examples and you may need to substitute different values to match your situation.

All steps must be performed as the root user.

  1. Prepare a destination to hold the backup.

    The destination should reside outside of the local machine, such as a writable NFS location, and be large enough to hold the backup. For non-customized partitions, the space needed for holding the backup is approximately 60 GB.

    You can use the following commands to prepare a backup destination using NFS.

    # mkdir -p /root/remote_FS
    # mount -t nfs -o rw,intr,soft,proto=tcp,nolock ip_address:/nfs_location/ /root/remote_FS

    In the mount command, ip_address is the IP address of the NFS server, and nfs_location is the NFS location holding the backups.

  2. Remove the LVDoNotRemoveOrUse logical volume.

    The logical volume /dev/VGExaDb/LVDoNotRemoveOrUse is a placeholder to make sure there is always free space available to create a snapshot.

    Use the following script to check for the existence of the LVDoNotRemoveOrUse logical volume and remove it if present.

    lvm lvdisplay --ignorelockingfailure /dev/VGExaDb/LVDoNotRemoveOrUse
    if [ $? -eq 0 ]; then 
      # LVDoNotRemoveOrUse logical volume exists. 
      lvm lvremove -f /dev/VGExaDb/LVDoNotRemoveOrUse 
      if [ $? -ne 0 ]; then 
        echo "Unable to remove logical volume: LVDoNotRemoveOrUse. Do not proceed with backup." 
      fi
    fi

    If the LVDoNotRemoveOrUse logical volume does not exist, then do not proceed with the remaining steps and determine the reason.

  3. Gather information about the currently active file systems and logical volumes.

    In this step, you must gather information from your guest to use later in the commands that create the logical volume snapshots and backup files.

    Run the following command:

    # df -hT | grep VGExa

    For every entry in your command output, determine the following information and create a table of values to use later:

    • The volume group (VG) name and logical volume (LV) name are contained in the file system name as follows:

      /dev/mapper/VG-name-LV-name

      For example, in /dev/mapper/VGExaDb-LVDbHome, the VG name is VGExaDb and the LV name is LVDbHome.

    • The backup label is a string that identifies the file system and its backup file. Use root for the root (/) file system. Otherwise, you can use a string that concatenates the directories in the mount point. For example, you can use varlogaudit for /var/log/audit.
    • Define a short label, which contains 12 or fewer characters. You will use the short label to label the snapshot file system.

    For example:

    # df -hT | grep VGExa
    /dev/mapper/VGExaDb-LVDbSys1                              xfs   15G  4.2G   11G  28% /
    /dev/mapper/VGExaDb-LVDbHome                              xfs  4.0G   45M  4.0G   2% /home
    /dev/mapper/VGExaDb-LVDbVar1                              xfs  2.0G   90M  2.0G   5% /var
    /dev/mapper/VGExaDb-LVDbVarLog                            xfs   18G  135M   18G   1% /var/log
    /dev/mapper/VGExaDb-LVDbVarLogAudit                       xfs 1014M   89M  926M   9% /var/log/audit
    /dev/mapper/VGExaDb-LVDbTmp                               xfs  3.0G   33M  3.0G   2% /tmp
    /dev/mapper/VGExaDb-LVDbKdump                             xfs   20G   33M   20G   1% /crashfiles
    /dev/mapper/VGExaDbDisk.u01.5.img-LVDBDisk                xfs  5.0G   33M  5.0G   1% /u01
    /dev/mapper/VGExaDbDisk.u02.10.img-LVDBDisk               xfs   10G   33M   10G   1% /u02
    /dev/mapper/VGExaDbDisk.u03.15.img-LVDBDisk               xfs   15G   33M   15G   1% /u03
    /dev/mapper/VGExaDbDisk.grid19.7.0.0.200414.img-LVDBDisk  xfs   20G  6.0G   15G  30% /u01/app/19.0.0.0/grid

    From the above output, you could derive the following table of information to use later in the commands that create the logical volume snapshots and backup files.

    • /dev/mapper/VGExaDb-LVDbSys1: VG Name VGExaDb, LV Name LVDbSys1, Backup Label root, Short Label root_snap
    • /dev/mapper/VGExaDb-LVDbHome: VG Name VGExaDb, LV Name LVDbHome, Backup Label home, Short Label home_snap
    • /dev/mapper/VGExaDb-LVDbVar1: VG Name VGExaDb, LV Name LVDbVar1, Backup Label var, Short Label var_snap
    • /dev/mapper/VGExaDb-LVDbVarLog: VG Name VGExaDb, LV Name LVDbVarLog, Backup Label varlog, Short Label varlog_snap
    • /dev/mapper/VGExaDb-LVDbVarLogAudit: VG Name VGExaDb, LV Name LVDbVarLogAudit, Backup Label varlogaudit, Short Label audit_snap
    • /dev/mapper/VGExaDb-LVDbTmp: VG Name VGExaDb, LV Name LVDbTmp, Backup Label tmp, Short Label tmp_snap
    • /dev/mapper/VGExaDb-LVDbKdump: VG Name VGExaDb, LV Name LVDbKdump, Backup Label crashfiles, Short Label crash_snap
    • /dev/mapper/VGExaDbDisk.u01.5.img-LVDBDisk: VG Name VGExaDbDisk.u01.5.img, LV Name LVDBDisk, Backup Label u01, Short Label u01_snap
    • /dev/mapper/VGExaDbDisk.u02.10.img-LVDBDisk: VG Name VGExaDbDisk.u02.10.img, LV Name LVDBDisk, Backup Label u02, Short Label u02_snap
    • /dev/mapper/VGExaDbDisk.u03.15.img-LVDBDisk: VG Name VGExaDbDisk.u03.15.img, LV Name LVDBDisk, Backup Label u03, Short Label u03_snap
    • /dev/mapper/VGExaDbDisk.grid19.7.0.0.200414.img-LVDBDisk: VG Name VGExaDbDisk.grid19.7.0.0.200414.img, LV Name LVDBDisk, Backup Label u01app19000grid, Short Label grid_snap

    Note:

    • The information gathered from your guest may be significantly different from this example. Ensure that you gather the required information directly from your guest and only use current information.

    • Depending on the currently active system volume, the logical volume for the root (/) file system is LVDbSys1 or LVDbSys2. Likewise, the logical volume for the /var file system is either LVDbVar1 or LVDbVar2.
  4. Create the file system snapshots and backup files.

    Use the table of information for your guest, which you gathered in the previous step.

    Perform the following for each row in your table, substituting the appropriate values in each command. (A worked example for the /home file system appears after this procedure.)

    1. Create the snapshot.
      # lvcreate -L1G -s -n LV-Name_snap /dev/VG-Name/LV-Name
    2. Label the snapshot.
      # xfs_admin -L Short-Label /dev/VG-Name/LV-Name_snap
    3. Mount the snapshot.
      # mkdir -p /root/mnt/Backup-Label
      # mount -o nouuid /dev/VG-Name/LV-Name_snap /root/mnt/Backup-Label
    4. Change to the directory for the backup.
      # cd /root/mnt/Backup-Label
    5. Create the backup file.
      • For the root (/) file system only, use the following command to include the contents of /boot in the backup file:

        # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /root/remote_FS/rootfs-boot.tar.bz2 * /boot > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr
      • Otherwise, use the following command template:

        # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /root/remote_FS/Backup-Label.tar.bz2 * > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr
    6. Check the /tmp/backup_tar.stderr file for any significant errors.

      You can ignore errors about failing to archive open sockets, and other similar errors.

    7. Unmount and remove the snapshot.
      # cd /
      # umount /root/mnt/Backup-Label
      # /bin/rmdir /root/mnt/Backup-Label
      # lvremove -f /dev/VG-Name/LV-Name_snap
  5. Unmount the NFS share.
    # umount /root/remote_FS
  6. Recreate the /dev/VGExaDb/LVDoNotRemoveOrUse logical volume.
    # lvm lvcreate -n LVDoNotRemoveOrUse -L2G VGExaDb -y
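
For example, substituting the values from the /home row of the example table in step 3 (VG Name VGExaDb, LV Name LVDbHome, Backup Label home, Short Label home_snap), the commands in step 4 would be:

  # lvcreate -L1G -s -n LVDbHome_snap /dev/VGExaDb/LVDbHome
  # xfs_admin -L home_snap /dev/VGExaDb/LVDbHome_snap
  # mkdir -p /root/mnt/home
  # mount -o nouuid /dev/VGExaDb/LVDbHome_snap /root/mnt/home
  # cd /root/mnt/home
  # tar --acls --xattrs --xattrs-include=* --format=pax -pjcvf /root/remote_FS/home.tar.bz2 * > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr
  # cd /
  # umount /root/mnt/home
  # /bin/rmdir /root/mnt/home
  # lvremove -f /dev/VGExaDb/LVDbHome_snap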

6.11 Backing Up and Restoring Oracle Databases on KVM Guests

Backing up and restoring Oracle databases on KVM guests is the same as backing up and restoring Oracle databases on physical nodes.

6.12 Modifying the Memory Allocated to a Guest

You can modify the memory allocated to a guest using vm_maker.

This operation requires a guest restart. You can let vm_maker restart the guest after changing the memory configuration.

  1. If you are decreasing the amount of memory used by the guest, then you must first review and adjust Oracle Database memory usage and the operating system huge pages configuration settings in the guest.
    1. Review the SGA size of databases and reduce if necessary.

      If you do not first reduce the memory requirements of the databases running in the guest, then the guest might fail to restart because too much memory is reserved for huge pages when the Oracle Linux operating system attempts to boot. See My Oracle Support Doc ID 361468.1 for details.

    2. Review the operating system configuration and reduce the memory allocation for huge pages if necessary. (Example commands for checking the current huge pages settings appear after this procedure.)
    3. If you modify the huge pages settings in the operating system kernel configuration file (/etc/sysctl.conf), regenerate the initramfs file to reflect the system configuration change.

      You should back up the existing initramfs file and then regenerate it by using the dracut command. For example:

      # ls -l /boot/initramfs-$(uname -r).img
      -rw------- 1 root root 55845440 Jan  8 10:34 /boot/initramfs-4.14.35-2047.508.3.3.el7uek.x86_64.img
      
      # cp /boot/initramfs-$(uname -r).img backup_directory
      
      # dracut --force
  2. Connect to the KVM host.

    The remainder of this procedure is performed in the KVM host.

  3. If you are increasing the amount of memory used by the guest, then use the following command to determine the amount of free memory available:
    # /opt/exadata_ovm/vm_maker --list --memory

    In the output, the lowest value between Available memory (now) and Available memory (delayed) is the limit for free memory.

    Note:

    When assigning free memory to a guest, reserve approximately 1% to 2% of free memory for storing metadata and control structures.
  4. Modify the guest memory allocation and restart the guest.

    For example, to modify db01_guest01.example.com and set a memory allocation of 32 GB, use the following command:

    # /opt/exadata_ovm/vm_maker --set --memory 32G --domain db01_guest01.example.com --restart-domain

    The command shuts down the guest, modifies the memory setting, and restarts the guest.
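
To support the review of huge pages in step 1, the following commands, run inside the guest, display the current huge pages usage and the configured reservation. This is a minimal check; the exact entries in /etc/sysctl.conf depend on your configuration:

  # grep -i HugePages /proc/meminfo
  # grep -i nr_hugepages /etc/sysctl.conf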

6.13 Modifying the Number of Virtual CPUs Allocated to a Guest

You can dynamically modify the number of virtual CPUs allocated to a guest with the vm_maker --set --vcpu command.

All actions to modify the number of vCPUs allocated to a guest are performed in the KVM host.

It is possible to over-commit vCPUs such that the total number of vCPUs assigned to all guests exceeds the number of physical CPUs on the system. However, over-committing CPUs should be done only when competing workloads for oversubscribed resources are well understood and concurrent demand does not exceed physical capacity.
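
For example, on an Oracle Exadata X9M-2 database server with 64 CPU cores (128 vCPUs), the combined vCPU allocation across all guests can be over-provisioned up to 124 cores (248 vCPUs), as shown in Table 6-2, but concurrent CPU demand should still be kept within the physical capacity of 128 vCPUs.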

  1. Determine the number of vCPUs currently allocated to the guest.
    # /opt/exadata_ovm/vm_maker --list --vcpu --domain db01_guest01.example.com
  2. Modify the number of allocated vCPUs.
    The number of vCPUs must be a multiple of 2.

    For example, if you want to change the number of vCPUs allocated to 4 for the db01_guest01.example.com guest, you would use the following command:

    # /opt/exadata_ovm/vm_maker --set --vcpu 4 --domain db01_guest01.example.com

6.14 Increasing the Disk Space in a Guest

The KVM guest local space can be extended after initial deployment by adding local disk images.

6.14.1 Adding a New LVM Disk to a Guest

You can add a new LVM disk to an Oracle Linux KVM guest to increase the amount of usable disk space in a guest.

You might add an LVM disk to a guest so that the size of a file system or swap space can be increased. The system remains online while you perform this procedure.

Note:

During this procedure you perform actions in both the KVM host and in the guest.

Run all steps in this procedure as the root user.

  1. In the KVM host, verify that there is sufficient free disk space in /EXAVMIMAGES. For example:
    # df -h /EXAVMIMAGES
    Filesystem                           Size  Used Avail Use%  Mounted on
    /dev/mapper/VGExaDb-LVDbExaVMImages  1.5T   39G  1.5T   3%  /EXAVMIMAGES
  2. In the KVM host, create a new disk image and attach it to the guest.

    For example, the following command adds a guest-specific disk image named pv2_vgexadb.img to the guest dm01db01vm01.example.com:

    # /opt/exadata_ovm/vm_maker --create --disk-image /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img --attach --domain dm01db01vm01.example.com
    [INFO] Allocating an image for /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img, size 52.000000G...
    [INFO] Running 'qemu-img create /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img 52.000000G '...
    [INFO] Create label gpt on /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img.
    [INFO]  Running 'parted -a none -s /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img mklabel gpt'...
    [INFO] Running 'losetup -P -f /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img'...
    [INFO] Finding loop device...
    [INFO]   loop device is /dev/loop0
    [INFO] Finding number of sectors...
    [INFO]   109051904 sectors
    [INFO] Finding sector size...
    [INFO]   512 bytes per sector
    [INFO] Creating filesystem on /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk...
    [INFO]  Running 'mkfs -t xfs   -b size=4096 -f /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk '...
    [INFO] Checking that we have a file system on /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk...
    [INFO] Releasing loop device /dev/loop0...
    [INFO]   Removing device maps for /dev/loop0...
    [INFO]    Running 'kpartx -d -v /dev/loop0'...
    [INFO]  Removing loop device /dev/loop0...
    [INFO] ##
    [INFO] ## Finished .
    [INFO] ##
    [INFO] Created image /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img
    [INFO] Running 'vgscan --cache'...
    [INFO] -------- MANUAL STEPS TO BE COMPLETED FOR MOUNTING THE DISK WITHIN DOMU dm01db01vm01.example.com --------
    [INFO] 1. Check a disk with name /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk exists.
    [INFO] -  Check for the existence of a disk named: /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk. Use the 'lvdisplay' command and check the output.
    [INFO] 2. Create a mount directory for the new disk
    [INFO] 3. Add the following line to /etc/fstab: /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk <mount_point_from_step_2> <fstype> defaults 1 1
    [INFO] 4. Mount the new disk. Use the 'mount -a' command.
    [INFO] Note: when detaching and re-attaching the same disk multiple times, run the following command after detaching and before attaching in the guest domain:
    [INFO] 'lvm vgchange VGExaDbDisk.pv2_vgexadb.img -a -n' when re-attaching the same disk.

    At this time, do not perform the manual steps described at the end of the output. However, take note of the logical volume path identified in manual step number 1. In general, the logical volume path has the form: /dev/VolumeGroupName/LogicalVolumeName. In the example, the logical volume path is /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk.

  3. On the KVM host, list the available disk images for the guest and verify the creation of the new disk image.

    In the example in the previous step, the disk image file is identified as /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img. This image should now appear in the list of disk images for the guest. For example:

    # /opt/exadata_ovm/vm_maker --list --disk-image --domain dm01db01vm01.example.com
    File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/System.img
    File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/grid19.2.0.0.0.img
    File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/db19.2.0.0.0-3.img
    File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv1_vgexadb.img
    File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/pv2_vgexadb.img
  4. On the guest, identify the newly added disk.

    Use the lvdisplay command along with the logical volume path noted earlier.

    # lvdisplay /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk
      --- Logical volume ---
      LV Path                /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk
      LV Name                LVDBDisk
      VG Name                VGExaDbDisk.pv2_vgexadb.img
      LV UUID                ePC0Qe-PfOX-oCoP-Pd5n-2nDj-z0KU-c9IygG
      LV Write Access        read/write
      LV Creation host, time dm01db01vm01.example.com, 2022-01-10 03:06:18 -0800
      LV Status              available
      # open                 0
      LV Size                50.00 GiB
      Current LE             12800
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:11
  5. In the guest, remove the logical volume and volume group that were created for the added disk.
    You must perform this step in order to use the newly created disk to extend an existing volume group.
    1. Remove the logical volume.

      In this example, the logical volume path is /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk.

      # lvremove /dev/VGExaDbDisk.pv2_vgexadb.img/LVDBDisk
      Do you really want to remove active logical volume VGExaDbDisk.pv2_vgexadb.img/LVDBDisk? [y/n]: y
        Logical volume "LVDBDisk" successfully removed
    2. Remove the volume group that came with the logical volume.

      In this example, the volume group name is VGExaDbDisk.pv2_vgexadb.img.

      # vgremove VGExaDbDisk.pv2_vgexadb.img
        Volume group "VGExaDbDisk.pv2_vgexadb.img" successfully removed
    At this point, all that is left is the physical volume with no logical volume and no volume group.
  6. In the guest, identify the physical volume device for the newly added disk.

    Use the pvdisplay command and look for the new physical volume.

    In the following example, the output is truncated to highlight the new physical volume:

    # pvdisplay
    ...  
      "/dev/sdf1" is a new physical volume of "<50.00 GiB"
      --- NEW Physical volume ---
      PV Name /dev/sdf1
      VG Name
      PV Size <50.00 GiB
      Allocatable NO
      PE Size 0
      Total PE 0
      Free PE 0
      Allocated PE 0
      PV UUID tfb8lM-eHe9-SPch-8UAu-pkHe-dAYx-ez3Sru
    ...
  7. In the guest, use the new physical volume to extend an existing volume group.

    In the following example, the new physical volume (/dev/sdf1) is used to extend the volume group VGExaDb. The vgdisplay output is truncated to highlight VGExaDb.

    # vgdisplay -s
    ...
      "VGExaDb" 88.00 GiB [88.00 GiB used / 0 free]
    ...
    
    # vgextend VGExaDb /dev/sdf1
      Volume group "VGExaDb" successfully extended
    
    # vgdisplay -s
    ...
      "VGExaDb" <139.24 GiB [88.00 GiB used / <51.24 GiB free]
    ...

To increase the size of various file systems using the additional space added to the volume group by this procedure, refer to the following topics:

6.14.2 Increasing the Size of the root File System

This procedure describes how to increase the size of the system partition and / (root) file system.

This procedure is performed while the file system remains online.

Note:

There are two system partitions, LVDbSys1 and LVDbSys2. One partition is active and mounted. The other partition is inactive and used as a backup location during upgrade. The size of both system partitions must be equal.

Keep at least 1 GB of free space in the VGExaDb volume group. The free space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories as described in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.

This task assumes that additional disk space is available to be used. If that is not the case, then complete the task Adding a New LVM Disk to a Guest before starting this procedure.
  1. Collect information about the current environment.
    1. Use the df command to identify the amount of free and used space in the root partition (/).
      # df -h /

      The following is an example of the output from the command:

      Filesystem                     Size  Used  Avail Use% Mounted on
      /dev/mapper/VGExaDb-LVDbSys1   12G   5.1G  6.2G  46%  / 
      

      Note:

      The active root partition may be either LVDbSys1 or LVDbSys2, depending on previous maintenance activities.
    2. Use the lvs command to display the current logical volume configuration.
      # lvs -o lv_name,lv_path,vg_name,lv_size

      The following is an example of the output from the command:

      LV        Path                   VG      LSize 
      LVDbOra1  /dev/VGExaDb/LVDbOra1  VGExaDb 10.00g
      LVDbSwap1 /dev/VGExaDb/LVDbSwap1 VGExaDb  8.00g
      LVDbSys1  /dev/VGExaDb/LVDbSys1  VGExaDb 12.00g
      LVDbSys2  /dev/VGExaDb/LVDbSys2  VGExaDb 12.00g 
      
  2. Verify there is available space in the volume group VGExaDb using the vgdisplay command.
    # vgdisplay VGExaDb -s

    The following is an example of the output from the command:

    "VGExaDb" 53.49 GiB [42.00 GiB used / 11.49 GiB free]

    The volume group must contain enough free space to increase the size of both system partitions, and maintain at least 1 GB of free space for the LVM snapshot created by the dbnodeupdate.sh utility during upgrade. If there is not sufficient free space in the volume group, then add a new disk to LVM.

  3. Resize both LVDbSys1 and LVDbSys2 logical volumes using the lvextend command.
    # lvextend -L +size /dev/VGExaDb/LVDbSys1
    # lvextend -L +size /dev/VGExaDb/LVDbSys2

    In the preceding command, size is the amount of space to be added to the logical volume. The amount of space added to each system partition must be the same.

    The following example extends the logical volumes by 10 GB:

    # lvextend -L +10G /dev/VGExaDb/LVDbSys1
    # lvextend -L +10G /dev/VGExaDb/LVDbSys2
  4. Resize the file system using the xfs_growfs command.
    # xfs_growfs /
  5. Verify the space was extended for the active system partition using the df command.
    # df -h /
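
    Assuming the 10 GB extension shown in the earlier example, the output should now report a larger file system size. The following values are illustrative only:

    # df -h /
    Filesystem                     Size  Used  Avail Use% Mounted on
    /dev/mapper/VGExaDb-LVDbSys1   22G   5.1G  17G   24%  /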

6.14.3 Increasing the Size of the /u01 File System

This procedure describes how to increase the size of the /u01 file system in Oracle Linux KVM.

This procedure is performed while the file system remains online.

Note:

Keep at least 1 GB of free space in the VGExaDb volume group. The free space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories as described in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.
This task assumes that additional disk space is available to be used. If that is not the case, then complete the task Adding a New LVM Disk to a Guest before starting this procedure.
  1. Collect information about the current environment.
    1. Use the df command to identify the amount of free and used space in the /u01 partition.
      # df -h /u01
      

      The following is an example of the output from the command:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/mapper/VGExaDb-LVDbOra1
                            9.9G  1.7G  7.8G  18% /u01
      
    2. Use the lvs command to display the current logical volume configuration used by the /u01 file system.
      # lvs -o lv_name,lv_path,vg_name,lv_size /dev/VGExaDb/LVDbOra1
      

      The following is an example of the output from the command:

      LV        Path                  VG       LSize 
      LVDbOra1 /dev/VGExaDb/LVDbOra1  VGExaDb 10.00g
      
  2. Verify there is available space in the volume group VGExaDb using the vgdisplay command.
    # vgdisplay VGExaDb -s
    

    The following is an example of the output from the command:

    "VGExaDb" 53.49 GiB [42.00 GiB used / 11.49 GiB free]
    

    If the output shows there is less than 1 GB of free space, then neither the logical volume nor file system should be extended. Maintain at least 1 GB of free space in the VGExaDb volume group for the LVM snapshot created by the dbnodeupdate.sh utility during an upgrade. If there is not sufficient free space in the volume group, then add a new disk to LVM.

  3. Resize the logical volume using the lvextend command.
    # lvextend -L +sizeG /dev/VGExaDb/LVDbOra1
    

    In the preceding command, size is the amount of space to be added to the logical volume.

    The following example extends the logical volume by 10 GB:

    # lvextend -L +10G /dev/VGExaDb/LVDbOra1
    
  4. Resize the file system using the xfs_growfs command.
    # xfs_growfs /u01
    
  5. Verify the space was extended using the df command.
    # df -h /u01
    

6.14.4 Increasing the Size of the Grid Infrastructure Home or Database Home File System

You can increase the size of the Oracle Grid Infrastructure or Oracle Database home file system in an Oracle Linux KVM guest.

The Oracle Grid Infrastructure software home and the Oracle Database software home are created as separate disk image files in the KVM host. The disk image files are located in the /EXAVMIMAGES/GuestImages/DomainName/ directory. The disk image files are attached to the guest automatically during virtual machine startup, and mounted as separate file systems in the guest.
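
For example, you can list the software home disk image files for a particular guest on the KVM host. The guest name below is taken from the examples in this section and should be replaced with your own:

    # ls -lh /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/*.img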

This task assumes that additional disk space is available to be used.

  1. Connect to the guest, and check the file system size using the df command. For example:
    # df -h
    Filesystem  
       Size Used Avail Use% Mounted on
    ...
    /dev/mapper/VGExaDbDisk.grid--klone--Linux--x86--64--190000.50.img-LVDBDisk 
       50G  5.9G 45G   12%  /u01/app/19.0.0.0/grid
    /dev/mapper/VGExaDbDisk.db--klone--Linux--x86--64--190000.50.img-LVDBDisk 
       50G  6.5G 44G   13%  /u01/app/oracle/product/19.0.0.0/DbHome_3
    ...
  2. Connect to the KVM host, and then shut down the guest.
    In the example, the guest name is dm01db01vm01.example.com.
    # /opt/exadata_ovm/vm_maker --stop-domain dm01db01vm01.example.com
    
  3. In the KVM host, create a new disk image and attach it to the guest.

    For example, the following command adds the disk image db03.img to the guest dm01db01vm01.example.com:

    # /opt/exadata_ovm/vm_maker --create --disk-image /EXAVMIMAGES/db03.img --attach 
    --domain dm01db01vm01.example.com
    [INFO] Allocating an image for /EXAVMIMAGES/db03.img, size 50.000000G...
    [INFO] Running 'qemu-img create /EXAVMIMAGES/db03.img 50.000000G '...
    [INFO] Create label gpt on /EXAVMIMAGES/db03.img.
    [INFO] Running 'parted -a none -s /EXAVMIMAGES/db03.img mklabel gpt'...
    [INFO] Running 'losetup -P -f /EXAVMIMAGES/db03.img'...
    [INFO] Finding loop device...
    [INFO] loop device is /dev/loop0
    [INFO] Finding number of sectors...
    [INFO] 104857600 sectors
    [INFO] Releasing loop device /dev/loop0...
    [INFO] Removing device maps for /dev/loop0...
    [INFO] Running 'kpartx -d -v /dev/loop0'...
    [INFO] Removing loop device /dev/loop0...
    [INFO] ##
    [INFO] ## Finished .
    [INFO] ##
    [INFO] Created image /EXAVMIMAGES/db03.img
    [INFO] File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/db03.img is a reflink from 
    /EXAVMIMAGES/db03.img and added as disk to domain dm01db01vm01.example.com
    [INFO] -------- MANUAL STEPS TO BE COMPLETED FOR MOUNTING THE DISK WITHIN DOMU dm01db01vm01
    .example.com --------
    [INFO] 1. Check a disk with name /dev/VGExaDbDisk.db03.img/LVDBDisk exists.
    [INFO] - Run the command 'lvdisplay' to verify a disk with name '/dev/VGExaDbDisk.db03.img/
    LVDBDisk' exists.
    [INFO] 2. Create a directory that will to be used for mounting the new disk.
    [INFO] 3. Add the following line to /etc/fstab: /dev/VGExaDbDisk.db03.img/LVDBDisk <mount_
    point_from_step_2> <fstype> defaults 1 1
    [INFO] 4. Mount the newly added disk to mount point through the command: mount -a.

    Do not perform the manual steps described in the output. However, take note of the logical volume path identified in manual step number 1.

    In general, the logical volume path has the form: /dev/VolumeGroupName/LogicalVolumeName.

    In the example, the logical volume path is /dev/VGExaDbDisk.db03.img/LVDBDisk.

  4. Restart the guest.
    For example:
    # /opt/exadata_ovm/vm_maker --start-domain dm01db01vm01.example.com --console
  5. In the guest, confirm the newly added disk device.

    Use the lvdisplay command along with the logical volume path noted earlier.

    # lvdisplay /dev/VGExaDbDisk.db03.img/LVDBDisk
      LV Path /dev/VGExaDbDisk.db03.img/LVDBDisk
      LV Name LVDBDisk
      VG Name VGExaDbDisk.db03.img
      LV UUID u3RBKF-UmCK-JQxc-iFf5-6WqS-GWAw-3nLjdn
      LV Write Access read/write
      LV Creation host, time dm01db01vm01.example.com, 2019-10-28 04:11:28 -0700
      LV Status available
      # open 0
      LV Size <50.00 GiB
      Current LE 12799
      Segments 1
      Allocation inherit
      Read ahead sectors auto
      - currently set to 256
      Block device 252:14
  6. In the guest, remove the logical volume and volume group that were created for the added disk.
    You must perform this step in order to use the newly created disk to extend an existing volume group.
    1. Remove the logical volume.

      In this example, the logical volume path is /dev/VGExaDbDisk.db03.img/LVDBDisk.

      # lvremove /dev/VGExaDbDisk.db03.img/LVDBDisk
      Do you really want to remove active logical volume VGExaDbDisk.db03.img/LVDBDisk? [y/n]: y
        Logical volume "LVDBDisk" successfully removed
    2. Remove the volume group that came with the logical volume.

      In this example, the volume group name is VGExaDbDisk.db03.img.

      # vgremove VGExaDbDisk.db03.img
        Volume group "VGExaDbDisk.db03.img" successfully removed
    At this point, all that is left is the physical volume with no logical volume and no volume group.
  7. In the guest, identify the physical volume device for the newly added disk.

    The physical volume identifies itself as a NEW Physical volume in pvdisplay output. For example:

    # pvdisplay
    ...  
        "/dev/sdf4" is a new physical volume of "<50.00 GiB"
      --- NEW Physical volume ---
      PV Name /dev/sdf4
      VG Name
      PV Size <50.00 GiB
      Allocatable NO
      PE Size 0
      Total PE 0
      Free PE 0
      Allocated PE 0
      PV UUID tfb8lM-eHe9-SPch-8UAu-pkHe-dAYx-ru3Sez
    ...
  8. In the guest, identify the volume group for the file system that you want to extend.
    Use the vgdisplay command. The volume group name contains grid for Oracle Grid Infrastructure or db for Oracle Database. For example:
    # vgdisplay -s
    ...
      "VGExaDbDisk.grid-klone-Linux-x86-64-190000.50.img" <50.00 GiB [<50.00 GiB used / 0 free]
      "VGExaDbDisk.db-klone-Linux-x86-64-190000.50.img" <50.00 GiB [<50.00 GiB used / 0 free]
    ...
  9. In the guest, extend the volume group, then verify the additional space in the volume group.
    Use the vgextend command and specify the volume group name and physical volume device that you identified previously. For example:
    # vgextend VGExaDbDisk.db-klone-Linux-x86-64-190000.50.img /dev/sdf4
      Volume group "VGExaDbDisk.db-klone-Linux-x86-64-190000.50.img" successfully extended
    Use the vgdisplay command to verify that the volume group now contains some free space. For example:
    # vgdisplay -s
    ...
      "VGExaDbDisk.grid-klone-Linux-x86-64-190000.50.img" <50.00 GiB [<50.00 GiB used / 0 free]
      "VGExaDbDisk.db-klone-Linux-x86-64-190000.50.img" <101.24 GiB [<50.00 GiB used / <51.24 GiB free]
    ...
  10. In the guest, resize the logical volume using the following lvextend command:
    # lvextend -L +sizeG LogicalVolumePath

    The following example extends the logical volume by 10 GB:

    # lvextend -L +10G /dev/VGExaDbDisk.db-klone-Linux-x86-64-190000.50.img/LVDBDisk
  11. In the guest, resize the file system using the xfs_growfs command.
    # xfs_growfs /u01/app/oracle/product/19.0.0.0/DbHome_3
  12. In the guest, verify the file system size was increased. For example:
    # df -h
    Filesystem                                                
       Size Used Avail Use% Mounted on
    ...
    /dev/mapper/VGExaDbDisk.db--klone--Linux--x86--64--190000.50.img-LVDBDisk 
       60G  6.5G 53G   10%  /u01/app/oracle/product/19.0.0.0/DbHome_3
    ...
  13. If you created a backup copy of the disk image before resizing, connect to the KVM host and remove the backup image.

    Use a command similar to the following, where pre_resize.db19.0.0.img is the name of the backup image file:

    # cd /EXAVMIMAGES/GuestImages/DomainName
    # rm pre_resize.db19.0.0.img

6.14.5 Increasing the Size of the Swap Area

You can increase the amount of swap configured in a guest.

  1. In the KVM host, create a new disk image and attach it to the guest.

    For example, the following command adds the disk image swap2.img to the guest dm01db01vm01.example.com:

    # /opt/exadata_ovm/vm_maker --create --disk-image /EXAVMIMAGES/swap2.img
     --attach --domain dm01db01vm01.example.com
    [INFO] Allocating an image for /EXAVMIMAGES/swap2.img, size 50.000000G...
    [INFO] Running 'qemu-img create /EXAVMIMAGES/swap2.img 50.000000G '...
    [INFO] Create label gpt on /EXAVMIMAGES/swap2.img.
    [INFO] Running 'parted -a none -s /EXAVMIMAGES/swap2.img mklabel gpt'...
    [INFO] Running 'losetup -P -f /EXAVMIMAGES/swap2.img'...
    [INFO] Finding loop device...
    [INFO] loop device is /dev/loop0
    [INFO] Finding number of sectors...
    [INFO] 104857600 sectors
    [INFO] Releasing loop device /dev/loop0...
    [INFO] Removing device maps for /dev/loop0...
    [INFO] Running 'kpartx -d -v /dev/loop0'...
    [INFO] Removing loop device /dev/loop0...
    [INFO] ##
    [INFO] ## Finished .
    [INFO] ##
    [INFO] Created image /EXAVMIMAGES/swap2.img
    [INFO] File /EXAVMIMAGES/GuestImages/dm01db01vm01.example.com/swap2.img is a reflink from 
    /EXAVMIMAGES/swap2.img and added as disk to domain dm01db01vm01.example.com
    [INFO] -------- MANUAL STEPS TO BE COMPLETED FOR MOUNTING THE DISK WITHIN DOMU dm01db01vm01
    .example.com --------
    [INFO] 1. Check a disk with name /dev/VGExaDbDisk.swap2.img/LVDBDisk exists.
    [INFO] - Run the command 'lvdisplay' to verify a disk with name '/dev/VGExaDbDisk.swap2.img/
    LVDBDisk' exists.
    [INFO] 2. Create a directory that will to be used for mounting the new disk.
    [INFO] 3. Add the following line to /etc/fstab: /dev/VGExaDbDisk.swap2.img/LVDBDisk <mount_
    point_from_step_2> <fstype> defaults 1 1
    [INFO] 4. Mount the newly added disk to mount point through the command: mount -a.

    Do not perform the manual steps described in the output. However, take note of the logical volume path identified in manual step number 1.

    In general, the logical volume path has the form: /dev/VolumeGroupName/LogicalVolumeName.

    In the example, the logical volume path is /dev/VGExaDbDisk.swap2.img/LVDBDisk.

  2. In the guest, configure the new logical volume as a swap device.

    Use the mkswap command, and configure the new logical volume with a unique label, which is not currently in use in the /etc/fstab file.

    In the following example, the swap device label is SWAP2 and the logical volume path is /dev/VGExaDbDisk.swap2.img/LVDBDisk.

    # mkswap -L SWAP2 /dev/VGExaDbDisk.swap2.img/LVDBDisk
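
    Before running mkswap, you can confirm that the chosen label is not already in use by checking the existing swap entries in /etc/fstab and the active swap devices. For example:

    # grep -i swap /etc/fstab
    # swapon -s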
  3. In the guest, enable the new swap device.

    Use the swapon command with the -L option and specify the label of the newly created swap device.

    For example:

    # swapon -L SWAP2
  4. In the guest, verify that the new swap device is enabled by using the swapon -s command.

    For example:

    # swapon -s
    Filename                   Type            Size      Used     Priority
    /dev/dm-3                  partition       8388604   306108   -1
    /dev/VGExaDb/LVDbSwap2     partition       8388604   0        -2
    
  5. In the guest, edit the /etc/fstab file to include the new swap device.

    You can copy the existing swap entry, and then change the LABEL value in the new entry to the label used to create the new swap device.

    In the following example, the new swap device is added to the /etc/fstab file using LABEL=SWAP2.

    # cat /etc/fstab
    LABEL=DBSYS   /                                            ext4    defaults        1 1
    LABEL=BOOT    /boot                                        ext4    defaults,nodev  1 1
    tmpfs         /dev/shm                                     tmpfs   defaults,size=7998m 0 0
    devpts        /dev/pts                                     devpts  gid=5,mode=620  0 0
    sysfs         /sys                                         sysfs   defaults        0 0
    proc          /proc                                        proc    defaults        0 0
    LABEL=SWAP    swap                                         swap    defaults        0 0
    LABEL=SWAP2   swap                                         swap    defaults        0 0
    LABEL=DBORA   /u01                                         ext4    defaults        1 1
    /dev/xvdb     /u01/app/12.1.0.2/grid                       ext4    defaults        1 1
    /dev/xvdc     /u01/app/oracle/product/12.1.0.2/dbhome_1    ext4    defaults        1 1
    

6.15 Expanding /EXAVMIMAGES on the KVM Host

Use this procedure to expand /EXAVMIMAGES using available space on the KVM host.

On Exadata database servers, local disk storage is governed by a volume manager, with all of the available storage space allocated to a single volume group. In turn, the volume group contains numerous logical volumes that support various file systems. On KVM hosts, most of the space is allocated to /EXAVMIMAGES, which is used for guest storage.

Typically, a modest amount of free space is preserved in the volume group so that a file system can be easily extended if required. Additional space is also available by adding the disk expansion kit to a database server. The kit consists of 4 hard drives, which are installed in the unused slots in the database server.

Note:

The disk expansion kit is supported on 2-socket Oracle Exadata Database Machine systems only.

If you installed the disk expansion kit, ensure that you have completed the procedure outlined in Adding the Disk Expansion Kit to Database Servers: X8M-2 and Prior before you proceed with this procedure:

  1. Examine the volume group to confirm the available free space.
    # vgs
       VG      #PV #LV #SN Attr   VSize VFree
       VGExaDb   1  11   0 wz--n- 3.27t <1.73t
  2. Confirm the current space allocation for /EXAVMIMAGES.
    # df /EXAVMIMAGES
    Filesystem                                1K-blocks      Used  Available Use% Mounted on 
    /dev/mapper/VGExaDb-LVDbExaVMImages      1572096000 250734224 1321361776  16% /EXAVMIMAGES
  3. Extend the logical volume associated with /EXAVMIMAGES.
    Use the lvextend command to add space to the logical volume.

    In the following example, all of the available free space is added to the logical volume.

    # lvextend -l +100%FREE /dev/VGExaDb/LVDbExaVMImages
       Size of logical volume VGExaDb/LVDbExaVMImages changed from 1.46 TiB (384000 extents) to 3.19 TiB (837430 extents).
       Logical volume VGExaDb/LVDbExaVMImages successfully resized.

    If you want to retain some free space for future use, then you can use a subset of the available free space. For example, the following command uses 90% of the available free space:

    # lvextend -l +90%FREE /dev/VGExaDb/LVDbExaVMImages

    Or, you can specify the amount of space that you want to add. For example, the following command expands the logical volume by 500 GB:

    # lvextend -L +500G /dev/VGExaDb/LVDbExaVMImages
  4. Extend the file system associated with /EXAVMIMAGES.
    Use the xfs_growfs command to extend the file system into the expanded logical volume.
    # xfs_growfs /EXAVMIMAGES 
    meta-data=/dev/mapper/VGExaDb-LVDbExaVMImages isize=512 agcount=32, agsize=12288000 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0 spinodes=0 rmapbt=0
             =                       reflink=1
    data     =                       bsize=4096   blocks=393216000, imaxpct=5
             =                       sunit=256    swidth=256 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=192000, version=2
             =                       sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 393216000 to 857528320
  5. Confirm the expansion of /EXAVMIMAGES.
    # df /EXAVMIMAGES
    Filesystem                                1K-blocks      Used  Available Use% Mounted on 
    /dev/mapper/VGExaDb-LVDbExaVMImages      3429345280 261835784 3167509496   8% /EXAVMIMAGES

6.16 Adding an Oracle Linux KVM Cluster

You can use Oracle Exadata Deployment Assistant (OEDA) to create a new Oracle Linux KVM cluster on an existing Oracle Exadata.

6.17 Expanding an Oracle RAC Cluster in Oracle Linux KVM Using OEDACLI

You can expand an existing Oracle RAC cluster on Oracle Linux KVM by adding guests using the Oracle Exadata Deployment Assistant command-line interface (OEDACLI).

OEDACLI is the preferred method if you have a known, good version of the OEDA XML file for your cluster.

Note:

During the execution of this procedure, the existing Oracle RAC cluster nodes along with their database instances incur zero downtime.

Note:

During deployment, the cloned guest inherits various configuration attributes from the source guest, including the client network configuration and the backup network configurations (if present).

If all of the KVM hosts have the same network configuration, then the inherited attributes work as expected.

However, if the new KVM host uses a different physical network configuration, then deployment of the cloned guest will fail. This situation is most likely when an Exadata system contains different versions of compute node hardware; for example, when adding an X10M server to an X8M-2 rack.

In this case, you must manually adjust the relevant network definition by using the ALTER NETWORK command before deployment. Contact Oracle Support for details.

Use cases for this procedure include:

  • You have an existing Oracle RAC cluster that uses only a subset of the database servers of an Oracle Exadata Rack, and now the nodes not being used by the cluster have become candidates for use.
  • You have an existing Oracle RAC cluster on Oracle Exadata that was recently extended with additional database servers.
  • You have an existing Oracle RAC cluster that had a complete node failure and the node was removed and replaced with a newly re-imaged node.

Before performing the steps in this section, the new database servers should have been set up as detailed in Adding a New Database Server to the Cluster, including the following:

  • The new database server is installed and configured on the network with a KVM host.
  • Download the latest Oracle Exadata Deployment Assistant (OEDA); ensure the version you download is the July 2019 release, or later.
  • You have an OEDA configuration XML file that accurately reflects the existing cluster configuration. You can validate the XML file by generating an installation template from it and comparing it to the current configuration. See the OEDACLI command SAVE FILES.
  • Review the OEDA Installation Template report for the current system configuration to obtain node names and IP addresses for existing nodes. You will need to have new host names and IP addresses for the new nodes being added. The new host names and IP addresses required are:
    • Administration host names and IP addresses (referred to as ADMINNET) for the KVM host and the guests.
    • Private host names and IP addresses (referred to as PRIVNET) for the KVM host and the guests.
    • Integrated Lights Out Manager (ILOM) host names and IP addresses for the KVM host.
    • Client host names and IP addresses (referred to as CLIENTNET) for the guests.
    • Virtual IP (VIP) host names and IP addresses (referred to as VIPNET) for the guests.
    • Physical rack number and location of the new node in the rack (in terms of U number)
  • Each KVM host has been imaged or patched to the same image in use on the existing database servers. The current system image must match the version of the /EXAVMIMAGES/System.first.boot.*.img file on the new KVM host node.

    Note:

    The ~/dom0_group file referenced below is a text file that contains the host names of the KVM hosts for all existing and new nodes being added.
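
    For example, a minimal sketch of creating this file, using the host names from the examples in this section:

    # cat > ~/dom0_group <<EOF
    exa01adm01
    exa01adm02
    exa01adm03
    EOF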

    Check that the image version is the same across all KVM hosts.

    dcli -g ~/dom0_group -l root "imageinfo -ver"
    
    exa01adm01: 19.2.0.0.0.190225
    exa01adm02: 19.2.0.0.0.190225
    exa01adm03: 19.2.0.0.0.190225

    If any image versions differ, you must upgrade the nodes as needed so that they match.

    Ensure that the System.first.boot version across all KVM hosts matches the image version retrieved in the previous step.

    dcli -g ~/dom0_group -l root "ls  -1 /EXAVMIMAGES/System.first.boot*.img" 
    exa01adm01:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img
    exa01adm02:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img
    exa01adm03:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img

    If any nodes are missing the System.first.boot.img file that corresponds to the current image, then obtain the required file. See the “Supplemental README note” for your Exadata release in My Oracle Support Doc ID 888828.1 and look for the patch file corresponding to this description: “DomU System.img OS image for V.V.0.0.0 VM creation on upgraded dom0s”.

  • Place the klone.zip files (gi-klone*.zip and db-klone*.zip) in the /EXAVMIMAGES location on the freshly imaged KVM host node you are adding to the cluster. These files can be found in the /EXAVMIMAGES directory on the KVM host node from where the system was initially deployed.

The following examples show how to add a new KVM host node named exa01adm03 that will have a new guest named exa01adm03vm01. The steps show how to extend an existing Oracle RAC cluster onto the guest using OEDACLI commands. The existing cluster has KVM host nodes named exa01adm01 and exa01adm02 and guest nodes named exa01adm01vm01 and exa01adm02vm01.

  1. Add the KVM host information to the OEDA XML file using the CLONE COMPUTE command.

    In the following examples, the OEDA XML file is assumed to be in: unzipped_OEDA_location/ExadataConfigurations.

    OEDACLI> LOAD FILE NAME=exa01_original_deployment.xml 
    
    OEDACLI> CLONE COMPUTE SRCNAME=exa01adm01 TGTNAME=exa01adm03
    OEDACLI> SET ADMINNET NAME=exa01adm03,IP=xx.xx.xx.xx
    OEDACLI> SET PRIVNET NAME1=exa01adm03-priv1,IP1=xx.xx.xx.xx,NAME2=exa01adm03-priv2,IP2=xx.xx.xx.xx
    OEDACLI> SET ILOMNET NAME=exa01adm03-c,IP=xx.xx.xx.xx
    OEDACLI> SET RACK NUM=NN,ULOC=XX 
    
    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS FORCE
    OEDACLI> SAVE FILE NAME=exa01_plus_adm03_node.xml

    At this point we have a new XML file (exa01_plus_adm03_node.xml) that has the new compute node KVM host in the configuration. This file will be used in the following steps.

  2. Add the new guest information to the OEDA XML file using the CLONE GUEST command and deploy the guest.
    • The first example shows how to control deployment of the new guest by using a WHERE clause in the CLONE GUEST command to specify the name of each step. If you choose to perform deployment this way, you must run all of the other deployment steps in order as follows:

      OEDACLI> LOAD FILE NAME=exa01_plus_adm03_node.xml 
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=CREATE_GUEST
      OEDACLI> SET PARENT NAME=exa01adm03
      OEDACLI> SET ADMINNET NAME=exa01adm03vm01,IP=xx.xx.xx.xx
      OEDACLI> SET PRIVNET NAME1=exa01db03vm01-priv1,IP1=xx.xx.xx.xx,NAME2=exa01db03vm01-priv2,IP2=xx.xx.xx.xx
      OEDACLI> SET CLIENTNET NAME=exa01client03vm01,IP=xx.xx.xx.xx
      OEDACLI> SET VIPNET NAME=exa01client03vm01-vip,IP=xx.xx.xx.xx
      
      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=CREATE_USERS

      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=CELL_CONNECTIVITY

      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=ADD_NODE

      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=EXTEND_DBHOME

      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 WHERE STEPNAME=ADD_INSTANCE

      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS
    • Alternatively, you can perform all of the deployment steps using one CLONE GUEST command by omitting the WHERE clause. For example:

      OEDACLI> LOAD FILE NAME=exa01_plus_adm03_node.xml 
      
      OEDACLI> CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01
      OEDACLI> SET PARENT NAME=exa01adm03
      OEDACLI> SET ADMINNET NAME=exa01adm03vm01,IP=xx.xx.xx.xx
      OEDACLI> SET PRIVNET NAME1=exa01db03vm01-priv1,IP1=xx.xx.xx.xx,NAME2=exa01db03vm01-priv2,IP2=xx.xx.xx.xx
      OEDACLI> SET CLIENTNET NAME=exa01client03vm01,IP=xx.xx.xx.xx
      OEDACLI> SET VIPNET NAME=exa01client03vm01-vip,IP=xx.xx.xx.xx
      
      OEDACLI> SAVE ACTION
      OEDACLI> MERGE ACTIONS
      OEDACLI> DEPLOY ACTIONS

    Regardless of the deployment method, for each step OEDACLI displays progress information similar to the following:

    Deploying Action ID : 39 CLONE GUEST SRCNAME=exa01adm01vm01 TGTNAME=exa01adm03vm01 where STEPNAME=ADD_INSTANCE 
    Deploying CLONE GUEST 
    Cloning Guest 
    Cloning Guest  :  exa01adm03vm01.example.com_id 
    Adding new instance for database [dbm] on exa01adm03vm01.example.com 
    Setting up Huge Pages for Database..[dbm] 
    Adding instance dbm3 on host exa01adm03vm01.example.com 
    Successfully completed adding database instance on the new node [elapsed Time [Elapsed = 
    249561 mS [4.0  minutes] Fri Jun 28 13:35:52 PDT 2019]] 
    Done...
    Done
  3. Save the current state of the configuration and generate configuration information.
    OEDACLI> SAVE FILES LOCATION=/tmp/exa01_plus_adm03_config

    The above command writes all the configuration files to the directory /tmp/exa01_plus_adm03_config. Save a copy of these files in a safe place since they now reflect the changes made to your cluster.
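
    For example, a simple way to archive the saved configuration files; the archive name and destination are illustrative:

    # tar -czf /root/exa01_plus_adm03_config.tar.gz -C /tmp exa01_plus_adm03_config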

  4. Gather an Oracle EXAchk report and examine it to ensure the cluster is in good health.
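    For example, assuming the exachk utility is installed (typically as part of Oracle Autonomous Health Framework) and available in your PATH, you can run it from one of the database servers:

    # exachk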

6.18 Moving a Guest to a Different Database Server

Guests can move to different database servers.

The target Oracle Exadata database server must meet the following requirements:

  • The target database server must have the same Oracle Exadata System Software release installed with Oracle Linux KVM.

  • The target database server must have the same network visibility.

  • The target database server must have access to the same Oracle Exadata storage servers.

  • The target database server must have sufficient free resources (CPU, memory, and local disk storage) to operate the guest.

    • It is possible to over-commit virtual CPUs such that the total number of virtual CPUs assigned to all domains exceeds the number of physical CPUs on the system. Over-committing CPUs can be done only when the competing workloads for over-subscribed resources are well understood and the concurrent demand does not exceed physical capacity.

    • It is not possible to over-commit memory.

    • Copying disk images to the target database server may increase space allocation of the disk image files because the copied files are no longer able to benefit from the disk space savings gained by using reflinks.

  • The guest name must not be already in use on the target database server.

The following procedure moves a guest to a new database server in the same Oracle Exadata System Software configuration.

  1. In the KVM host, shut down the guest that is being moved.
    # vm_maker --stop-domain GuestName
  2. Copy the guest disk image and configuration files to the target database server.

    In the following examples, replace GuestName with the name of the guest, and replace target with the host name of the target KVM host.

    # scp -r /EXAVMIMAGES/GuestImages/GuestName/ target:/EXAVMIMAGES/GuestImages
  3. Copy the guest XML definition to the target database server.
    # scp /etc/libvirt/qemu/GuestName.xml target:/EXAVMIMAGES/GuestImages
  4. In the target KVM host, define the domain.
    # virsh define /EXAVMIMAGES/GuestImages/GuestName.xml
  5. If you are using Oracle Exadata System Software release 20.1 or later, run the following vm_maker command in the target KVM host to complete the guest migration.
    # vm_maker --update-mac GuestName

    Note:

    • The vm_maker --update-mac command is first introduced in Oracle Exadata System Software release 20.1.4 (November 2020). If you are using an earlier 20.1 release, you must perform an update to get this command.

    • This step is not required on systems using a release of Oracle Exadata System Software prior to 20.1.

  6. Start the migrated guest on the target KVM host.
    # vm_maker --start-domain GuestName

6.19 Recovering a KVM Deployment

A KVM host can be recovered from backups and guests can be recovered from snapshot backups.

A KVM host can be recovered from a snapshot-based backup when severe disaster conditions damage the KVM host, or when the server hardware is replaced to such an extent that it amounts to new hardware.

For example, replacing all hard disks leaves no trace of original software on the system. This is similar to replacing the complete system as far as the software is concerned.

The recovery procedures described in this section do not include backup or recovery of Exadata storage servers or the data in an Oracle Database. Oracle recommends testing the backup and recovery procedures on a regular basis.

6.19.1 Overview of Snapshot-Based Recovery of KVM Hosts

The recovery of a KVM host consists of a series of tasks.

The recovery procedures use the diagnostics.iso image as a virtual CD-ROM to restart the KVM host in rescue mode using the Integrated Lights Out Manager (ILOM). At a high-level, the steps are:

  1. Re-create the following:
    • Boot partitions
    • Physical volumes
    • Volume groups
    • Logical volumes
    • File system
    • Swap partition
  2. Activate the swap partition
  3. Ensure the /boot partition is the active boot partition
  4. Restore the data
  5. Reconfigure GRUB
  6. Restart the server

6.19.2 KVM System Recovery Scenarios

How to recover a KVM system deployment.

The following scenarios are applicable to a KVM system recovery:

6.19.2.1 Recovering a KVM Host and the Guests from Backup

This procedure recovers the KVM host and all of its guests using a snapshot-based backup of the KVM host and backups of the guests taken from the KVM host.

A KVM host can be recovered from a snapshot-based backup using the steps below when severe disaster conditions damage the KVM host, or when the server hardware is replaced to such an extent that it amounts to new hardware.

Prepare an NFS server to host the backup archives created in Backing up the KVM host Using Snapshot-Based Backup.

The NFS server must be accessible by IP address. For example, you might use an NFS server with the IP address nfs_ip, where the directory /Backup contains the backup archives.
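
For example, a minimal sketch of exporting the /Backup directory on a Linux NFS server; the export options shown are illustrative and should be adapted to your environment:

    # cat /etc/exports
    /Backup *(ro,no_root_squash)
    # exportfs -a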

6.19.2.1.1 Recover the KVM Host on Exadata X10M

This procedure describes how to recover the KVM host on an Oracle Exadata X10M database server.

  1. Boot the server and use the system BIOS menus to check the disk controller status. If required, configure the disk controller and set up the disks.
  2. Boot the server in diagnostic mode.
    See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.
  3. Log in to the diagnostics shell as the root user.
    When prompted, enter the diagnostics shell.

    For example:

    Choose from following by typing letter in '()':
    (e)nter interactive diagnostics shell. Must use credentials 
    from Oracle support to login (reboot or power cycle to exit
    the shell),
    (r)estore system from NFS backup archive, 
    Type e to enter the diagnostics shell and log in as the root user.
    If prompted, log in to the system as the root user. If you are prompted for the root user password and do not have it, then contact Oracle Support Services.
  4. If it is mounted, unmount /mnt/cell
    # umount /mnt/cell
  5. Confirm the md devices on the server.

    Confirm that the server contains the devices listed in the following example. If your server differs substantially, do not proceed, and contact Oracle Support.

    # ls -al /dev/md*
    brw-rw---- 1 root disk   9, 24 Jan  5 05:42 /dev/md24
    brw-rw---- 1 root disk 259,  4 Jan  5 05:42 /dev/md24p1
    brw-rw---- 1 root disk 259,  5 Jan  5 05:42 /dev/md24p2
    brw-rw---- 1 root disk   9, 25 Jan  5 05:42 /dev/md25
    
    /dev/md:
    total 0
    drwxr-xr-x  2 root root  120 Jan  5 05:42 .
    drwxr-xr-x 19 root root 3380 Jan  5 05:49 ..
    lrwxrwxrwx  1 root root    7 Jan  5 05:42 24 -> ../md24
    lrwxrwxrwx  1 root root    9 Jan  5 05:42 24p1 -> ../md24p1
    lrwxrwxrwx  1 root root    9 Jan  5 05:42 24p2 -> ../md24p2
    lrwxrwxrwx  1 root root    7 Jan  5 05:42 25 -> ../md25
  6. Remove the logical volumes, the volume group, and the physical volume, in case they still exist after the disaster.
    # lvm vgremove VGExaDb --force
    # lvm pvremove /dev/md25 --force
  7. Remove the existing partitions, then verify all partitions were removed.
    1. Use the following command to remove the existing partitions:
      # for v_partition in $(parted -s /dev/md24 print|awk '/^ / {print $1}')
      do
        parted -s /dev/md24 rm ${v_partition}
      done
    2. Verify by running the following command:
      # parted  -s /dev/md24 unit s print

      The command output should not display any partitions.

  8. Create the boot partition.
    1. Start an interactive session using the parted command.
      # parted /dev/md24
    2. Assign a disk label.
      (parted) mklabel gpt
    3. Set the unit size as sector.
      (parted) unit s
    4. Check the partition table by displaying the existing partitions.
      (parted) print
    5. Remove the partitions listed in the previous step.
      (parted) rm part#
    6. Create a new first partition.
      (parted) mkpart primary 64s 15114206s
    7. Make the new partition bootable.
      (parted) set 1 boot on
  9. Create the second primary (boot) partition.
    1. Create a second primary partition as a UEFI boot partition with fat32.
      (parted) mkpart primary fat32 15114207s 15638494s 
      (parted) set 2 boot on
    2. Write the information to disk, then quit.
      (parted) quit
  10. Create the physical volume and volume group.
    # lvm pvcreate /dev/md25
    # lvm vgcreate VGExaDb /dev/md25

    If the physical volume or volume group already exists, then remove and re-create them as follows:

    # lvm vgremove VGExaDb
    # lvm pvremove /dev/md25
    # lvm pvcreate /dev/md25
    # lvm vgcreate VGExaDb /dev/md25
  11. Create the LVM partitions, then create and mount the file systems.
    1. Create the logical volumes.
      # lvm lvcreate -n LVDbSys1 -L15G VGExaDb -y
      # lvm lvcreate -n LVDbSwap1 -L16G VGExaDb -y
      # lvm lvcreate -n LVDbSys2 -L15G VGExaDb -y
      # lvm lvcreate -n LVDbHome -L4G VGExaDb -y
      # lvm lvcreate -n LVDbVar1 -L2G VGExaDb -y
      # lvm lvcreate -n LVDbVar2 -L2G VGExaDb -y
      # lvm lvcreate -n LVDbVarLog -L18G VGExaDb -y
      # lvm lvcreate -n LVDbVarLogAudit -L1G VGExaDb -y
      # lvm lvcreate -n LVDbTmp -L3G VGExaDb -y
      # lvm lvcreate -n LVDoNotRemoveOrUse -L2G VGExaDb -y
      # lvm lvcreate -n LVDbExaVMImages -L1500G VGExaDb -y
      # lvextend -l +98%FREE /dev/VGExaDb/LVDbExaVMImages
    2. Create the file systems.
      # mkfs.xfs -f /dev/VGExaDb/LVDbSys1
      # mkfs.xfs -f /dev/VGExaDb/LVDbSys2
      # mkfs.xfs -f /dev/VGExaDb/LVDbHome
      # mkfs.xfs -f /dev/VGExaDb/LVDbVar1
      # mkfs.xfs -f /dev/VGExaDb/LVDbVar2
      # mkfs.xfs -f /dev/VGExaDb/LVDbVarLog
      # mkfs.xfs -f /dev/VGExaDb/LVDbVarLogAudit
      # mkfs.xfs -f /dev/VGExaDb/LVDbTmp
      # mkfs.xfs -m crc=1 -m reflink=1 -f /dev/VGExaDb/LVDbExaVMImages
      # mkfs.xfs -f /dev/md24p1
      # mkfs.vfat -v -c -F 32 -s 2 /dev/md24p2
    3. Label the file systems.
      # xfs_admin -L DBSYS /dev/VGExaDb/LVDbSys1
      # xfs_admin -L HOME /dev/VGExaDb/LVDbHome
      # xfs_admin -L VAR /dev/VGExaDb/LVDbVar1
      # xfs_admin -L DIAG /dev/VGExaDb/LVDbVarLog
      # xfs_admin -L AUDIT /dev/VGExaDb/LVDbVarLogAudit
      # xfs_admin -L TMP /dev/VGExaDb/LVDbTmp
      # xfs_admin -L EXAVMIMAGES /dev/VGExaDb/LVDbExaVMImages
      # xfs_admin -L BOOT /dev/md24p1
      # dosfslabel /dev/md24p2 ESP
    4. Create mount points for all the partitions, and mount the respective partitions.

      For example, assuming that /mnt is used as the top level directory for the recovery operation, you could use the following commands to create the directories and mount the partitions:

      # mount -t xfs /dev/VGExaDb/LVDbSys1 /mnt
      # mkdir -p /mnt/home
      # mount -t xfs /dev/VGExaDb/LVDbHome /mnt/home
      # mkdir -p /mnt/var
      # mount -t xfs /dev/VGExaDb/LVDbVar1 /mnt/var
      # mkdir -p /mnt/var/log
      # mount -t xfs /dev/VGExaDb/LVDbVarLog /mnt/var/log
      # mkdir -p /mnt/var/log/audit
      # mount -t xfs /dev/VGExaDb/LVDbVarLogAudit /mnt/var/log/audit
      # mkdir -p /mnt/tmp
      # mount -t xfs /dev/VGExaDb/LVDbTmp /mnt/tmp
      # mkdir -p /mnt/EXAVMIMAGES
      # mount -t xfs /dev/VGExaDb/LVDbExaVMImages /mnt/EXAVMIMAGES
      # mkdir -p /mnt/boot
      # mount -t xfs /dev/md24p1 /mnt/boot
      # mkdir -p /mnt/boot/efi
      # mount -t vfat /dev/md24p2 /mnt/boot/efi
  12. Create the system swap space.

    For example:

    # mkswap -L SWAP /dev/VGExaDb/LVDbSwap1
  13. Bring up the network.
    # ip address add ip_address_for_eth0/netmask_for_eth0 dev eth0
    # ip link set up eth0
    # ip route add default via gateway_address dev eth0
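
    For example, using illustrative addresses from the documentation address range; substitute your own administration network addresses:

    # ip address add 203.0.113.10/24 dev eth0
    # ip link set up eth0
    # ip route add default via 203.0.113.1 dev eth0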
  14. Mount the NFS server containing the backup.

    The following example assumes that the backup is located in the /export directory of the NFS server with IP address nfs_ip.

    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/export /root/mnt
  15. Restore the files from the backup.

    Assuming that the backup was created using the procedure in Backing up the KVM host Using Snapshot-Based Backup, you can restore the files by using the following command:

    # tar --acls --xattrs --xattrs-include='*' --format=pax -pjxvf /root/mnt/myKVMbackup.tar.bz2 -C /mnt
  16. Create the directory for kdump service.
    # mkdir /mnt/EXAVMIMAGES/crashfiles
  17. Check the restored fstab file (at /mnt/etc/fstab), and comment out any line that references /EXAVMIMAGES.
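    For example, a hypothetical one-liner that comments out such lines; review /mnt/etc/fstab manually afterward to confirm the result:

    # sed -i '/\/EXAVMIMAGES/ s/^/#/' /mnt/etc/fstab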
  18. Unmount the restored file systems.

    For example:

    # umount /mnt/tmp
    # umount /mnt/var/log/audit
    # umount /mnt/var/log
    # umount /mnt/var
    # umount /mnt/home
    # umount /mnt/EXAVMIMAGES
    # umount /mnt/boot/efi
    # umount /mnt/boot
    # umount /mnt
  19. Check the boot devices and set the boot order.
    1. Check the available boot devices, and identify the boot device that is associated with Redhat Boot Manager (\EFI\REDHAT\SHIMX64.EFI).

      For example:

      # efibootmgr -v
      BootCurrent: 0019
      Timeout: 1 seconds
      BootOrder:
      0019,0000,0002,0010,0009,0017,000A,000B,0018,0005,0006,0007,0008,0013,0014,0015,0016,0003,0011,0004,0012,001A
      Boot0000* RedHat Boot Manager HD(2,GPT,eec54dfd-8928-4874-833d-5b0b9e914b99,0xe69fdf,0x80000)/File(\EFI\REDHAT\SHIMX64.EFI)
      Boot0002* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection /Pci(0x1c,0x4)/Pci(0x0,0x0)/MAC(0010e0fc6e94,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0003* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef622380a,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0004* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef622380b,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0005* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(3cfdfe915070,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0006* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(3cfdfe915071,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0007* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x2)/MAC(3cfdfe915072,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0008* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x3)/MAC(3cfdfe915073,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0009* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef644519c,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot000A* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot000B* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0010* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection /Pci(0x1c,0x4)/Pci(0x0,0x0)/MAC(0010e0fc6e94,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0011* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef622380a,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0012* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef622380b,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0013* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(3cfdfe915070,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0014* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(3cfdfe915071,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0015* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x2)/MAC(3cfdfe915072,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0016* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x3)/MAC(3cfdfe915073,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0017* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef644519c,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0018* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0019* USB:SP:SUN Remote ISO CDROM1.01 /Pci(0x14,0x0)/USB(7,0)/USB(3,0)/CDROM(1,0x28,0x3100)..BO
      Boot001A* Oracle Linux (grubx64.efi) HD(2,GPT,eec54dfd-8928-4874-833d-5b0b9e914b99,0xe69fdf,0x80000)/File(\EFI\REDHAT\GRUBX64.EFI)..BO
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false
    2. Configure the device that is associated with Redhat Boot Manager (\EFI\REDHAT\SHIMX64.EFI) to be first in the boot order.

      In this example, Redhat Boot Manager is associated with boot device 0000:

      # efibootmgr -o 0000
      BootCurrent: 0019
      Timeout: 1 seconds
      BootOrder: 0000
      Boot0000* RedHat Boot Manager
      Boot0002* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection
      Boot0003* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A
      Boot0004* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B
      Boot0005* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0006* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0007* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0008* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0009* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C
      Boot000A* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot000B* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot0010* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection
      Boot0011* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A
      Boot0012* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B
      Boot0013* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0014* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0015* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0016* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0017* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C
      Boot0018* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot0019* USB:SP:SUN Remote ISO CDROM1.01
      Boot001A* Oracle Linux (grubx64.efi)
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false
  20. Disconnect the diagnostics.iso file.
    See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.
  21. Restart the system.
    # reboot
  22. Log back into the server as the root user.
  23. Recreate the boot device mirroring configuration.
    1. Back up the /etc/mdadm.conf file.

      For example:

      # cp /etc/mdadm.conf /etc/mdadm.conf.backup
    2. Edit /etc/mdadm.conf and remove the lines starting with ARRAY.

      After you edit the file, the remaining contents should be similar to the following example:

      # cat /etc/mdadm.conf
      MAILADDR root 
      AUTO +imsm +1.x -all
    3. Recreate the boot device mirroring configuration.
      # mdadm -Esv | grep ^ARRAY >> /etc/mdadm.conf
    4. Examine /etc/mdadm.conf and verify the addition of new lines starting with ARRAY.

      In particular, verify that the file contains entries for /dev/md/24 and /dev/md/25.

      For example:

      # cat /etc/mdadm.conf 
      MAILADDR root
      AUTO +imsm +1.x -all
      ARRAY /dev/md/24 level=raid1 metadata=1.2 num-devices=2 UUID=2a92373f:572a5a3a:807ae329:b4135cf3 name=localhost:24
      ARRAY /dev/md/25 level=raid1 metadata=1.2 num-devices=2 UUID=cc7b75df:25f3a281:b4b65c44:0b8a2de3 name=localhost:25
  24. Recreate the initramfs image files.
    1. Back up the /boot/initramfs*.img files.

      For example:

      # mkdir /boot/backup
      # cp /boot/initramfs*.img /boot/backup
    2. Recreate the initramfs image files.
      # dracut -f
  25. Restart the system.
    # reboot
  26. Log back into the server as the root user.
  27. Run the imageinfo command and verify that the image status is success.

    For example:

    # imageinfo
    
    Kernel version: 5.4.17-2136.320.7.el8uek.x86_64 #2 SMP Mon Jun 5 14:17:11 PDT 2023 x86_64
    Image kernel version: 5.4.17-2136.320.7.el8uek
    Image version: 23.1.0.0.0.230707
    Image activated: 2023-07-07 17:12:37 -0700
    Image status: success
    Exadata software version: 23.1.0.0.0.230707
    Node type: KVMHOST
    System partition on device: /dev/mapper/VGExaDb-LVDbSys1
The KVM host has been recovered.
6.19.2.1.2 Recover the KVM Host on Exadata X9M-2

This procedure describes how to recover the KVM host on an Oracle Exadata X9M-2 database server.

  1. Boot the server and use the system BIOS menus to check the disk controller status. If required, configure the disk controller and set up the disks.
  2. Boot the server in diagnostic mode.
    See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.
  3. Log in to the diagnostics shell as the root user.
    When prompted, enter the diagnostics shell.

    For example:

    Choose from following by typing letter in '()':
    (e)nter interactive diagnostics shell. Must use credentials 
    from Oracle support to login (reboot or power cycle to exit
    the shell),
    (r)estore system from NFS backup archive, 
    Type e to enter the diagnostics shell and log in as the root user.
    If prompted, log in to the system as the root user. If you are prompted for the root user password and do not have it, then contact Oracle Support Services.
  4. If it is mounted, unmount /mnt/cell
    # umount /mnt/cell
  5. Confirm the md devices on the server.

    Confirm that the server contains the devices listed in the following example. If your server differs substantially, do not proceed, and contact Oracle Support.

    # ls -al /dev/md*
    brw-rw---- 1 root disk   9, 126 Jul 15 06:59 /dev/md126
    brw-rw---- 1 root disk 259,   4 Jul 15 06:59 /dev/md126p1
    brw-rw---- 1 root disk 259,   5 Jul 15 06:59 /dev/md126p2
    brw-rw---- 1 root disk   9, 127 Jul 15 06:28 /dev/md127
    brw-rw---- 1 root disk   9,  25 Jul 15 06:28 /dev/md25
    
    /dev/md:
    total 0
    drwxr-xr-x  2 root root  140 Jul 15 06:59 .
    drwxr-xr-x 18 root root 3400 Jul 15 06:59 ..
    lrwxrwxrwx  1 root root    8 Jul 15 06:59 24_0 -> ../md126
    lrwxrwxrwx  1 root root   10 Jul 15 06:59 24_0p1 -> ../md126p1
    lrwxrwxrwx  1 root root   10 Jul 15 06:59 24_0p2 -> ../md126p2
    lrwxrwxrwx  1 root root    7 Jul 15 06:28 25 -> ../md25
    lrwxrwxrwx  1 root root    8 Jul 15 06:28 imsm0 -> ../md127
  6. Remove the logical volumes, the volume group, and the physical volume, in case they still exist after the disaster.
    # lvm vgremove VGExaDb --force
    # lvm pvremove /dev/md25 --force
  7. Remove the existing partitions, then verify all partitions were removed.
    1. Use the following command to remove the existing partitions:
      # for v_partition in $(parted -s /dev/md126 print|awk '/^ / {print $1}')
      do
        parted -s /dev/md126 rm ${v_partition}
      done
    2. Verify by running the following command:
      # parted  -s /dev/md126 unit s print

      The command output should not display any partitions.

  8. Create the boot partition.
    1. Start an interactive session using the parted command.
      # parted /dev/md126
    2. Assign a disk label.
      (parted) mklabel gpt
    3. Set the unit size as sector.
      (parted) unit s
    4. Check the partition table by displaying the existing partitions.
      (parted) print
    5. Remove the partitions listed in the previous step.
      (parted) rm part#
    6. Create a new first partition.
      (parted) mkpart primary 64s 15114206s
    7. Make the new partition bootable.
      (parted) set 1 boot on
  9. Create the second primary (boot) partition.
    1. Create a second primary partition as a UEFI boot partition with fat32.
      (parted) mkpart primary fat32 15114207s 15638494s 
      (parted) set 2 boot on
    2. Write the information to disk, then quit.
      (parted) quit
  10. Create the physical volume and volume group.
    # lvm pvcreate /dev/md25
    # lvm vgcreate VGExaDb /dev/md25

    If the physical volume or volume group already exists, then remove and re-create them as follows:

    # lvm vgremove VGExaDb
    # lvm pvremove /dev/md25
    # lvm pvcreate /dev/md25
    # lvm vgcreate VGExaDb /dev/md25
  11. Create the LVM partitions, then create and mount the file systems.
    1. Create the logical volumes.
      # lvm lvcreate -n LVDbSys1 -L15G VGExaDb -y
      # lvm lvcreate -n LVDbSwap1 -L16G VGExaDb -y
      # lvm lvcreate -n LVDbSys2 -L15G VGExaDb -y
      # lvm lvcreate -n LVDbHome -L4G VGExaDb -y
      # lvm lvcreate -n LVDbVar1 -L2G VGExaDb -y
      # lvm lvcreate -n LVDbVar2 -L2G VGExaDb -y
      # lvm lvcreate -n LVDbVarLog -L18G VGExaDb -y
      # lvm lvcreate -n LVDbVarLogAudit -L1G VGExaDb -y
      # lvm lvcreate -n LVDbTmp -L3G VGExaDb -y
      # lvm lvcreate -n LVDoNotRemoveOrUse -L2G VGExaDb -y
      # lvm lvcreate -n LVDbExaVMImages -L1500G VGExaDb -y
      # lvextend -l +98%FREE /dev/VGExaDb/LVDbExaVMImages
    2. Create the file systems.
      # mkfs.xfs -f /dev/VGExaDb/LVDbSys1
      # mkfs.xfs -f /dev/VGExaDb/LVDbSys2
      # mkfs.xfs -f /dev/VGExaDb/LVDbHome
      # mkfs.xfs -f /dev/VGExaDb/LVDbVar1
      # mkfs.xfs -f /dev/VGExaDb/LVDbVar2
      # mkfs.xfs -f /dev/VGExaDb/LVDbVarLog
      # mkfs.xfs -f /dev/VGExaDb/LVDbVarLogAudit
      # mkfs.xfs -f /dev/VGExaDb/LVDbTmp
      # mkfs.xfs -m crc=1 -m reflink=1 -f /dev/VGExaDb/LVDbExaVMImages
      # mkfs.xfs -f /dev/md126p1
      # mkfs.vfat -v -c -F 32 -s 2 /dev/md126p2
    3. Label the file systems.
      # xfs_admin -L DBSYS /dev/VGExaDb/LVDbSys1
      # xfs_admin -L HOME /dev/VGExaDb/LVDbHome
      # xfs_admin -L VAR /dev/VGExaDb/LVDbVar1
      # xfs_admin -L DIAG /dev/VGExaDb/LVDbVarLog
      # xfs_admin -L AUDIT /dev/VGExaDb/LVDbVarLogAudit
      # xfs_admin -L TMP /dev/VGExaDb/LVDbTmp
      # xfs_admin -L EXAVMIMAGES /dev/VGExaDb/LVDbExaVMImages
      # xfs_admin -L BOOT /dev/md126p1
      # dosfslabel /dev/md126p2 ESP
    4. Create mount points for all the partitions, and mount the respective partitions.

      For example, assuming that /mnt is used as the top level directory for the recovery operation, you could use the following commands to create the directories and mount the partitions:

      # mount -t xfs /dev/VGExaDb/LVDbSys1 /mnt
      # mkdir -p /mnt/home
      # mount -t xfs /dev/VGExaDb/LVDbHome /mnt/home
      # mkdir -p /mnt/var
      # mount -t xfs /dev/VGExaDb/LVDbVar1 /mnt/var
      # mkdir -p /mnt/var/log
      # mount -t xfs /dev/VGExaDb/LVDbVarLog /mnt/var/log
      # mkdir -p /mnt/var/log/audit
      # mount -t xfs /dev/VGExaDb/LVDbVarLogAudit /mnt/var/log/audit
      # mkdir -p /mnt/tmp
      # mount -t xfs /dev/VGExaDb/LVDbTmp /mnt/tmp
      # mkdir -p /mnt/EXAVMIMAGES
      # mount -t xfs /dev/VGExaDb/LVDbExaVMImages /mnt/EXAVMIMAGES
      # mkdir -p /mnt/boot
      # mount -t xfs /dev/md126p1 /mnt/boot
      # mkdir -p /mnt/boot/efi
      # mount -t vfat /dev/md126p2 /mnt/boot/efi
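      Optionally, verify the mounts with df. This is an illustrative check; your sizes will differ:

      # df -h /mnt /mnt/boot /mnt/boot/efi /mnt/EXAVMIMAGES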
  12. Create the system swap space.

    For example:

    # mkswap -L SWAP /dev/VGExaDb/LVDbSwap1
  13. Bring up the network.
    # ip address add ip_address_for_eth0/netmask_for_eth0 dev eth0
    # ip link set up eth0
    # ip route add default via gateway_address dev eth0
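    For example, using a hypothetical address 10.0.0.10/24 and gateway 10.0.0.1 (substitute your own values), you might run:

    # ip address add 10.0.0.10/24 dev eth0
    # ip link set up eth0
    # ip route add default via 10.0.0.1 dev eth0
    # ping -c 3 10.0.0.1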
  14. Mount the NFS server containing the backup.

    The following example assumes that the backup is located in the /export directory of the NFS server with IP address nfs_ip.

    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/export /root/mnt
  15. Restore the files from the backup.

    Assuming that the backup was created using the procedure in Backing up the KVM host Using Snapshot-Based Backup, you can restore the files by using the following command:

    # tar --acls --xattrs --xattrs-include=* --format=pax -pjxvf /root/mnt/myKVMbackup.tar.bz2 -C /mnt
  16. Create the directory for kdump service.
    # mkdir /mnt/EXAVMIMAGES/crashfiles
  17. Check the restored fstab file (at /mnt/etc/fstab), and comment out any line that references /EXAVMIMAGES.
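    For example, the following sed command is one possible way to comment out such lines (it skips lines that are already commented); review the file afterward to confirm the result:

    # sed -i '/^[^#].*\/EXAVMIMAGES/s/^/#/' /mnt/etc/fstab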
  18. Unmount the restored file systems.

    For example:

    # umount /mnt/tmp
    # umount /mnt/var/log/audit
    # umount /mnt/var/log
    # umount /mnt/var
    # umount /mnt/home
    # umount /mnt/EXAVMIMAGES
    # umount /mnt/boot/efi
    # umount /mnt/boot
    # umount /mnt
  19. Check the boot devices and set the boot order.
    1. Check the available boot devices, and identify the boot device that is associated with Redhat Boot Manager (\EFI\REDHAT\SHIMX64.EFI).

      For example:

      # efibootmgr -v
      BootCurrent: 0019
      Timeout: 1 seconds
      BootOrder:
      0019,0000,0002,0010,0009,0017,000A,000B,0018,0005,0006,0007,0008,0013,0014,0015,0016,0003,0011,0004,0012,001A
      Boot0000* RedHat Boot Manager HD(2,GPT,eec54dfd-8928-4874-833d-5b0b9e914b99,0xe69fdf,0x80000)/File(\EFI\REDHAT\SHIMX64.EFI)
      Boot0002* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection /Pci(0x1c,0x4)/Pci(0x0,0x0)/MAC(0010e0fc6e94,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0003* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef622380a,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0004* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef622380b,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0005* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(3cfdfe915070,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0006* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(3cfdfe915071,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0007* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x2)/MAC(3cfdfe915072,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0008* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x3)/MAC(3cfdfe915073,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0009* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef644519c,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot000A* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot000B* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0010* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection /Pci(0x1c,0x4)/Pci(0x0,0x0)/MAC(0010e0fc6e94,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0011* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef622380a,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0012* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef622380b,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0013* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(3cfdfe915070,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0014* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(3cfdfe915071,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0015* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x2)/MAC(3cfdfe915072,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0016* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter /Pci(0x2,0x0)/Pci(0x0,0x3)/MAC(3cfdfe915073,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0017* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C /Pci(0x2,0x0)/Pci(0x0,0x0)/MAC(b8cef644519c,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0018* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D /Pci(0x2,0x0)/Pci(0x0,0x1)/MAC(b8cef644519d,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO
      Boot0019* USB:SP:SUN Remote ISO CDROM1.01 /Pci(0x14,0x0)/USB(7,0)/USB(3,0)/CDROM(1,0x28,0x3100)..BO
      Boot001A* Oracle Linux (grubx64.efi) HD(2,GPT,eec54dfd-8928-4874-833d-5b0b9e914b99,0xe69fdf,0x80000)/File(\EFI\REDHAT\GRUBX64.EFI)..BO
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false
    2. Configure the device that is associated with Redhat Boot Manager (\EFI\REDHAT\SHIMX64.EFI) to be first in the boot order.

      In this example, Redhat Boot Manager is associated with boot device 0000:

      # efibootmgr -o 0000
      BootCurrent: 0019
      Timeout: 1 seconds
      BootOrder: 0000
      Boot0000* RedHat Boot Manager
      Boot0002* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection
      Boot0003* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A
      Boot0004* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B
      Boot0005* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0006* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0007* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0008* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0009* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C
      Boot000A* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot000B* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot0010* NET0:PXE IPv4 Intel(R) I210 Gigabit  Network Connection
      Boot0011* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0A
      Boot0012* PCIE5:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:22:38:0B
      Boot0013* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0014* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0015* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0016* PCIE3:PXE IPv4 Oracle Quad Port 10GBase-T Adapter
      Boot0017* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9C
      Boot0018* PCIE1:PXE IPv4 Mellanox Network Adapter - B8:CE:F6:44:51:9D
      Boot0019* USB:SP:SUN Remote ISO CDROM1.01
      Boot001A* Oracle Linux (grubx64.efi)
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false
  20. Restart the system.
    # reboot
  21. Disconnect the diagnostics.iso file.
    See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.
  22. Log back into the server as the root user.
  23. Run the imageinfo command and verify that the image status is success.

    For example:

    # imageinfo
    
    Kernel version: 4.14.35-2047.502.5.el7uek.x86_64 #2 SMP Wed Apr 14 15:08:41
    PDT 2021 x86_64
    Uptrack kernel version: 4.14.35-2047.503.1.el7uek.x86_64 #2 SMP Fri Apr 23
    15:20:41 PDT 2021 x86_64
    Image kernel version: 4.14.35-2047.502.5.el7uek
    Image version: 21.2.1.0.0.210608
    Image activated: 2021-07-12 14:58:03 +0900
    Image status: success
    Node type: COMPUTE
    System partition on device: /dev/mapper/VGExaDb-LVDbSys1
The KVM host has been recovered.
6.19.2.1.3 Recover the KVM Host on Exadata X8M-2

This procedure describes how to recover the KVM host on an Oracle Exadata X8M-2 database server.

  1. Boot the server in diagnostic mode.
    See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.
  2. Log in to the diagnostics shell as the root user.
    When prompted, enter the diagnostics shell.

    For example:

    Choose from following by typing letter in '()':
    (e)nter interactive diagnostics shell. Must use credentials 
    from Oracle support to login (reboot or power cycle to exit
    the shell),
    (r)estore system from NFS backup archive, 
    Type e to enter the diagnostics shell and log in as the root user.
    If prompted, log in to the system as the root user. If you are prompted for the root user password and do not have it, then contact Oracle Support Services.
  3. If required, use /opt/MegaRAID/storcli/storcli64 to configure the disk controller and set up the disks.
  4. Remove the logical volumes, the volume group, and the physical volume, in case they still exist after the disaster.
    # lvm vgremove VGExaDb --force
    # lvm pvremove /dev/sda3 --force
  5. Remove the existing partitions, then verify that all partitions were removed. You can use the following script:
    # for v_partition in $(parted -s /dev/sda print|awk '/^ / {print $1}')
    do
      parted -s /dev/sda rm ${v_partition}
    done
     
    # parted  -s /dev/sda unit s print
    Model: AVAGO MR9[ 2783.921605]  sda:361-16i (scsi)
    Disk /dev/sda: 3509760000s
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:  
    
    Number  Start   End  Size  File system Name  Flags
  6. Create three partitions on /dev/sda.
    1. Get the end sector for the disk /dev/sda from a running KVM host and store it in a variable:
      # end_sector_logical=$(parted -s /dev/sda unit s print | perl -ne '/^Disk\s+\S+:\s+(\d+)s/ and print $1')
      # end_sector=$( expr $end_sector_logical - 34 )
      # echo $end_sector
      The start and end sector values in the following commands were taken from an existing KVM host. Because these values can change over time, check them on a running KVM host at the time you perform this procedure. For example, on an Oracle Exadata Database Machine X8M-2 database server, you might see the following:
      # parted -s /dev/sda  unit s print
      Model:  AVAGO MR9361-16i (scsi)
      Disk  /dev/sda: 7025387520s
      Sector  size (logical/physical): 512B/512B
      Partition  Table: gpt
      Disk  Flags:  
      Number   Start     End         Size         File system   Name     Flags  
      1        64s       1048639s    1048576s     xfs           primary  boot  
      2        1048640s  1572927s    524288s      fat32         primary  boot  
      3        1572928s  7025387486s 7023814559s                primary  lvm
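      For example, with the disk size of 7025387520 sectors shown in this sample output, end_sector is 7025387520 - 34 = 7025387486, which matches the end of partition 3.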
    2. Create the boot partition, /dev/sda1.
      # parted -s /dev/sda  mklabel gpt mkpart primary 64s 1048639s set 1 boot on
    3. Create the EFI boot partition, /dev/sda2.
      # parted -s /dev/sda  mkpart primary fat32 1048640s 1572927s set 2 boot on
    4. Create the partition that will hold the logical volumes, /dev/sda3.
      # parted -s /dev/sda mkpart primary 1572928s ${end_sector}s set 3 lvm on
    5. Verify all the partitions have been created.
      # parted -s /dev/sda unit s print
      Model: AVAGO MR9[2991.834796]  sda: sda1 sda2 sda3
      361-16i(scsi)
      Disk /dev/sda:3509760000s
      Sector size(logical/physical): 512B/512B
      Partition Table:gpt
      Disk Flags:
      Number  Start    End            Size          File system    Name    Flags 
      1       64s      1048639s       1048576s      xfs            primary boot 
      2       1048640s 1572927s       524288s       fat32          primary boot   
      3       1572928s 3509759966s    3508187039s                  primary lvm
  7. Create logical volumes and file systems.
    1. Create the physical volume and the volume group.
      # lvm pvcreate /dev/sda3
      # lvm vgcreate VGExaDb /dev/sda3
    2. Create and label the logical volume for the first system partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbSys1 -L15G VGExaDb -y
      # mkfs.xfs -L DBSYS /dev/VGExaDb/LVDbSys1 -f
    3. Create and label the logical volume for the swap directory.
      # lvm lvcreate -n LVDbSwap1 -L16G VGExaDb -y
      # mkswap -L SWAP /dev/VGExaDb/LVDbSwap1
    4. Create the logical volume for the second system partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbSys2 -L15G VGExaDb -y
      # mkfs.xfs /dev/VGExaDb/LVDbSys2
    5. Create and label the logical volume for the HOME partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbHome -L4G VGExaDb -y
      # mkfs.xfs -L HOME /dev/VGExaDb/LVDbHome
    6. Create the logical volume for the tmp partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbTmp -L3G VGExaDb -y
      # mkfs.xfs -L TMP /dev/VGExaDb/LVDbTmp -f
    7. Create the logical volume for the first var partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbVar1 -L2G VGExaDb -y
      # mkfs.xfs -L VAR /dev/VGExaDb/LVDbVar1 -f
    8. Create the logical volume for the second var partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbVar2 -L2G VGExaDb -y
      # mkfs.xfs /dev/VGExaDb/LVDbVar2 -f
    9. Create and label the logical volume for the LVDbVarLog partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbVarLog -L18G VGExaDb -y
      # mkfs.xfs -L DIAG /dev/VGExaDb/LVDbVarLog -f
    10. Create and label the logical volume for the LVDbVarLogAudit partition, and build an xfs file system on it.
      # lvm lvcreate -n LVDbVarLogAudit -L1G VGExaDb -y
      # mkfs.xfs -L AUDIT /dev/VGExaDb/LVDbVarLogAudit -f
    11. Create the LVDoNotRemoveOrUse logical volume.
      # lvm lvcreate -n LVDoNotRemoveOrUse -L2G VGExaDb -y
    12. Create the logical volume for the guest storage repository, and build an xfs file system on it.
      # lvm lvcreate -n LVDbExaVMImages -L1500G VGExaDb -y
      # mkfs.xfs -m crc=1 -m reflink=1 -L EXAVMIMAGES /dev/VGExaDb/LVDbExaVMImages -f
    13. Create a file system on the /dev/sda1 partition, and label it.
      # mkfs.xfs -L BOOT /dev/sda1 -f
    14. Create a file system on the /dev/sda2 partition, and label it.
      # mkfs.vfat -v -c -F 32 -s 2 /dev/sda2
      # dosfslabel /dev/sda2 ESP
  8. Create mount points for all the partitions and mount the respective partitions.
    For example, if /mnt is used as the top-level directory, the list of mounted partitions might look like the following:
    /dev/VGExaDb/LVDbSys1 on /mnt
    /dev/sda1 on /mnt/boot
    /dev/sda2 on /mnt/boot/efi
    /dev/VGExaDb/LVDbHome on /mnt/home
    /dev/VGExaDb/LVDbTmp on /mnt/tmp
    /dev/VGExaDb/LVDbVar1 on /mnt/var
    /dev/VGExaDb/LVDbVarLog on /mnt/var/log
    /dev/VGExaDb/LVDbVarLogAudit on /mnt/var/log/audit
    /dev/VGExaDb/LVDbExaVMImages on /mnt/EXAVMIMAGES
    The following example mounts the system partition, then creates the mount points for the other partitions and mounts them:
    # mount /dev/VGExaDb/LVDbSys1 /mnt -t xfs
    # mkdir /mnt/boot
    # mount /dev/sda1 /mnt/boot -t xfs
    # mkdir /mnt/boot/efi
    # mount /dev/sda2 /mnt/boot/efi -t vfat
    # mkdir /mnt/home
    # mount /dev/VGExaDb/LVDbHome /mnt/home -t xfs
    # mkdir /mnt/tmp
    # mount /dev/VGExaDb/LVDbTmp /mnt/tmp -t xfs
    # mkdir /mnt/var
    # mount /dev/VGExaDb/LVDbVar1 /mnt/var -t xfs
    # mkdir /mnt/var/log
    # mount /dev/VGExaDb/LVDbVarLog /mnt/var/log -t xfs
    # mkdir /mnt/var/log/audit
    # mount /dev/VGExaDb/LVDbVarLogAudit /mnt/var/log/audit -t xfs
    # mkdir /mnt/EXAVMIMAGES
    # mount /dev/VGExaDb/LVDbExaVMImages /mnt/EXAVMIMAGES -t xfs
  9. Bring up the network on eth0, and assign the host's IP address and netmask to it. If the host normally uses DHCP, then you must still configure the IP address manually in this environment.
    # ip link set eth0 up
    # ip address add ip_address_for_eth0/netmask_for_eth0 dev eth0
    # ip route add default via gateway_ip_address dev eth0
  10. Mount the NFS server holding the backup.
    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup  /root/mnt
  11. Restore the files from the backup.

    Assuming that the backup was created using the procedure in Backing up the KVM host Using Snapshot-Based Backup, you can restore the files by using the following command:

    # tar --acls --xattrs --xattrs-include=* --format=pax -pjxvf /root/mnt/myKVMbackup.tar.bz2 -C /mnt
  12. Create the directory for kdump service.
    # mkdir /mnt/EXAVMIMAGES/crashfiles
  13. Set the boot device using efibootmgr.
    1. Disable and delete the Oracle Linux boot device.
      If you see the entry ExadataLinux_1, then remove this entry and re-create it. For example:
      # efibootmgr
      BootCurrent:  0000
      Timeout:  1 seconds
      BootOrder: 0000,0001,000A,000B,0007,0008,0004,0005
      Boot0000*  ExadataLinux_1
      Boot0001*  NET0:PXE IP4 Intel(R) I210 Gigabit Network Connection
      Boot0004*  PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet Adapter
      Boot0005*  PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet  Adapter
      Boot0007*  NET1:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot0008*  NET2:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot000A   PCIE2:PXE IP4 Mellanox Network Adapter - 50:6B:4B:CB:EF:F2
      Boot000B   PCIE2:PXE IP4 Mellanox Network Adapter - 50:6B:4B:CB:EF:F3
      MirroredPercentageAbove4G:  0.00
      MirrorMemoryBelow4GB:  false    
      In this example, ExadataLinux_1 (Boot0000) would be disabled and removed. Use the following commands to disable and delete the boot device.

      Disable old ExadataLinux_1:

      # efibootmgr -b 0000 -A
      Delete old ExadataLinux_1:
      # efibootmgr -b 0000 -B
    2. Recreate the boot entry for ExadataLinux_1 and then view the boot order entries.
      # efibootmgr -c -d /dev/sda  -p 2 -l '\EFI\REDHAT\SHIM.EFI' -L 'ExadataLinux_1'
      # efibootmgr
      BootCurrent:  0000
      Timeout:  1 seconds
      BootOrder: 0000,0001,0007,0008,0004,0005,000B,000C
      Boot0000*  ExadataLinux_1
      Boot0001*  NET0:PXE IP4 Intel(R) I210 Gigabit Network Connection
      Boot0004*  PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet  Adapter
      Boot0005*  PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet  Adapter
      Boot0007*  NET1:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot0008*  NET2:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot000B*  PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:46
      Boot000C*  PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:47
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false
    3. In the output from the efibootmgr command, make note of the boot order number for ExadataLinux_1 and use that value in the following commands.
      # efibootmgr -b entry number -A
      # efibootmgr -b entry number -a
      For example, in the previous output shown in step 13.b, ExadataLinux_1 was listed as (Boot0000). So you would use the following commands:
      # efibootmgr -b 0000 -A
      # efibootmgr -b 0000 -a
    4. Set the correct boot order. Set ExadataLinux_1 as the first boot device.
      The remaining devices should stay in the same boot order.
      # efibootmgr -o 0000,0001,0004,0005,0007,0008,000B,000C
    5. Check the boot order.
      The boot order should now look like the following:
      # efibootmgr
      BootCurrent: 0000
      Timeout: 1 seconds
      BootOrder: 0000,0001,0004,0005,0007,0008,000B,000C
      Boot0000* ExadataLinux_1
      Boot0001* NET0:PXE IP4 Intel(R) I210 Gigabit Network Connection
      Boot0004* PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet Adapter
      Boot0005* PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet Adapter
      Boot0007* NET1:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot0008* NET2:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller
      Boot000B* PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:46
      Boot000C* PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:47
      MirroredPercentageAbove4G: 0.00
      MirrorMemoryBelow4GB: false 
    6. Check the boot order using the ubiosconfig command.
      # ubiosconfig export all -x /tmp/ubiosconfig.xml
      Make sure the ExadataLinux_1 entry is the first child element of boot_order.
      # cat /tmp/ubiosconfig.xml
      <boot_order>
       <boot_device>
        <description>ExadataLinux_1</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>NET0:PXE IP4 Intel(R) I210 Gigabit Network Connection</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet Adapter</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>PCIE1:PXE IP4 Oracle dual 25Gb Ethernet Adapter or dual 10Gb Ethernet Adapter</description>
        <instance>2</instance>
       </boot_device>
       <boot_device>
        <description>NET1:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>NET2:PXE IP4 Oracle Dual Port 10Gb/25Gb SFP28 Ethernet Controller</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:46</description>
        <instance>1</instance>
       </boot_device>
       <boot_device>
        <description>PCIE2:PXE IP4 Mellanox Network Adapter - EC:0D:9A:CC:1E:47</description>
        <instance>1</instance>
       </boot_device>
      </boot_order>
    7. Check the restored fstab file and comment out any reference to /EXAVMIMAGES.

      Navigate to /mnt/etc.

      # cd /mnt/etc

      In the /mnt/etc/fstab file, comment out any line that references /EXAVMIMAGES.

The KVM host has been recovered.
6.19.2.1.4 Reboot the KVM Host

Start the KVM host, and continue administrator operations.

The KVM host partitions have been restored to the server and the boot order now has ExadataLinux_1 as the first device.
  1. Detach the diagnostics.iso file.
    Assuming that the ILOM web interface console was used to connect the CDROM image, click the Disconnect button.
  2. Unmount the restored file systems so that /dev/sda1 can be remounted on /boot when the server restarts.
    # cd /
    # umount /mnt/boot/efi
    # umount /mnt/boot
    # umount /mnt/home
    # umount /mnt/var/log/audit
    # umount /mnt/var/log/
    # umount /mnt/var
    # umount /mnt/EXAVMIMAGES/
    # umount /mnt
    # umount /root/mnt
  3. Restart the system.
    # reboot
    The current /boot partition will be used to start the server.
6.19.2.1.5 Recover and Restart the KVM Guests

From the KVM host recover all of the KVM guests.

This procedure complements the recommended backup procedure described in Method 1: Back Up All of the KVM Guests. It assumes that the KVM host is operational after recovery and all the guests are being recovered.

  1. Mount the backup NFS server that holds the guest storage repository (/EXAVMIMAGES) backup to restore the /EXAVMIMAGES file system.
    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /root/mnt
  2. Mount the /EXAVMIMAGES file system.
    # mount -t xfs /dev/VGExaDb/LVDbExaVMImages /EXAVMIMAGES
  3. Restore the /EXAVMIMAGES file system.

    For example, use the command below to restore all guests:

    # tar --acls --xattrs --xattrs-include=* --format=pax -Spxvf /root/mnt/exavmimages.tar.bz2 -C /
  4. Check each guest XML configuration file.

    During recovery of the KVM host, each guest XML configuration file is restored to /etc/libvirt/qemu/guestname.xml. Additionally, the previous step restores a copy of each guest XML file to /XML/guestname.xml.

    For each guest, compare the XML configuration files. For example:

    # diff /etc/libvirt/qemu/guestname.xml /XML/guestname.xml

    In most cases the XML configuration files will be identical. However, if there are differences, you should examine both files and copy the most up-to-date configuration to /etc/libvirt/qemu/guestname.xml.
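    For example, the following loop is one possible way to compare every restored pair, assuming the guest XML files were restored under /XML as described above:

    # for x in /XML/*.xml
      do
        diff /etc/libvirt/qemu/$(basename $x) $x
      done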

  5. Define each restored guest in the KVM hypervisor.
    # virsh define /etc/libvirt/qemu/guestname.xml
  6. Enable autostart for each restored guest.
    # /opt/exadata_ovm/vm_maker --autostart guestname --disable
    # /opt/exadata_ovm/vm_maker --autostart guestname --enable
  7. Unmount the NFS backup.
    # umount /root/mnt
  8. Restore any reference to /EXAVMIMAGES in the fstab file.

    In the /etc/fstab file, remove comments for any line that references /EXAVMIMAGES.

  9. Bring up each guest.
    # /opt/exadata_ovm/vm_maker --start-domain guestname
At this point all of the guests should come up along with Oracle Grid Infrastructure and the Oracle Database instances.
6.19.2.2 Re-imaging a KVM Host and Restoring the Guests from Backup

This procedure re-images the KVM host and reconstructs all of the guests from a backup of the KVM guests.

The following procedure can be used when the KVM host is damaged beyond repair and no backup exists for the KVM host, but there is a backup available of the storage repository (/EXAVMIMAGES file system) containing all the guests.

  1. Re-image the KVM host with the image used in the other hosts in the rack using the procedure described in Re-Imaging the Oracle Exadata Database Server.
  2. If the recovery is on Oracle Exadata Database Machine eighth rack, then perform the procedure described in Configuring Oracle Exadata Database Machine Eighth Rack Oracle Linux Database Server After Recovery.
  3. Create the bonded interface and bonded network bridge.

    The following command uses vmbondeth0 as the name of the network bridge.

    # /opt/exadata_ovm/vm_maker --add-bonded-bridge vmbondeth0 --first-slave eth1 --second-slave eth2
  4. Mount the backup NFS server containing the guest backups.

    In this procedure, the backup NFS server is mounted on /remote_FS.

    # mkdir -p /remote_FS
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /remote_FS
  5. Restore the /EXAVMIMAGES file system.
    # tar --acls --xattrs --xattrs-include=* --format=pax -Spxvf /remote_FS/backup-of-exavmimages.tar -C /

    Note:

    Following restoration, the guest image files under /EXAVMIMAGES are separate regular files. Consequently, the space usage in /EXAVMIMAGES may increase after the restoration process compared to the original space usage, which may have benefited from the savings associated with reflinked files.

  6. Restore the guest specific files for the KVM hypervisor.
    # /usr/bin/cp /XML/*.xml /etc/libvirt/qemu/

    Note:

    This step assumes that the files restored to /XML were copied into the backup location during the backup procedure.

  7. Define each guest in the KVM hypervisor.
    # virsh define /etc/libvirt/qemu/guestname.xml
  8. Enable autostart for each restored guest.
    # /opt/exadata_ovm/vm_maker --autostart guestname --disable
    # /opt/exadata_ovm/vm_maker --autostart guestname --enable
  9. Bring up each guest.
    # /opt/exadata_ovm/vm_maker --start-domain guestname
    At this point all the guests should start along with the Oracle Grid Infrastructure and the database instances.
6.19.2.3 Recover and Restart a KVM Guest

From the KVM host you can recover a specific guest.

This procedure complements the backup procedure described in Method 2: Back Up an Individual Guest. It assumes that the KVM host is operational and that the guest being recovered does not exist on the KVM host.

  1. Mount the backup NFS server that holds the guest backup.
    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /root/mnt
  2. Restore the guest image files.

    For example, the following command extracts the contents of the guest backup file named exavmimage.tar.bz2:

    # tar --acls --xattrs --xattrs-include=* --format=pax -Spxvf /root/mnt/exavmimage.tar.bz2 -C /
  3. Restore the guest XML configuration file.
    # cp /XML/guestname.xml /etc/libvirt/qemu
  4. Define the guest in the KVM hypervisor.
    # virsh define /etc/libvirt/qemu/guestname.xml
  5. Enable autostart for the restored guest.
    # /opt/exadata_ovm/vm_maker --autostart guestname --disable
    # /opt/exadata_ovm/vm_maker --autostart guestname --enable
  6. Unmount the NFS backup.
    # umount /root/mnt
  7. Bring up the guest.
    # /opt/exadata_ovm/vm_maker --start-domain guestname
6.19.2.4 Restoring and Recovering Guests from Snapshot Backups

This procedure can be used to restore lost or damaged files of a KVM guest using a snapshot-based backup of the guest taken from inside the guest.

Use this procedure to restore lost or damaged files of a KVM guest by using a snapshot-based backup created from within the guest, as described in Method 3: Back Up a Guest Internally.

Before you begin, log in to the KVM guest as the root user.
  1. Mount the backup NFS server to restore the damaged or lost files.
    # mkdir -p /root/mnt
    # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /root/mnt
  2. Extract the damaged or lost files from the backup to a staging area.
    1. Prepare a staging area to hold the extracted files. The backup logical volume LVDbSys2 may be used for this:
      # mkdir /backup-LVM 
      # mount /dev/mapper/VGExaDb-LVDbSys2 /backup-LVM
      # mkdir /backup-LVM/tmp_restore 
    2. Extract the needed files.
      # tar --acls --xattrs --xattrs-include=* --format=pax -pjxvf /root/mnt/tar_file_name -C /backup-LVM/tmp_restore absolute_path_of_file_to_be_restored
  3. Restore the damaged or lost files from the temporary staging area as needed.
  4. Restart the KVM guest if needed.

6.20 Removing a Guest

You can remove a guest in Oracle Linux KVM using either OEDACLI or the vm_maker utility.

6.20.1 Removing a Guest from an Oracle RAC Cluster Using OEDACLI

You can use OEDACLI to remove a guest from a cluster.

The following procedure removes a guest from a cluster. If the guest is not part of a cluster, then you can skip the cluster-related commands.

  1. Load the XML configuration file (es.xml for example) into OEDACLI.
    ./oedacli -c full_path_to_XML_file
  2. Use the following OEDACLI commands to delete the database instance from the cluster node.
    DELETE GUEST WHERE srcname=guest_FQDN stepname=ADD_INSTANCE
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS
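    For example, assuming a hypothetical guest named dm01vm01.example.com, the commands might look like the following:

    DELETE GUEST WHERE srcname=dm01vm01.example.com stepname=ADD_INSTANCE
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS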
  3. Use the following OEDACLI commands to delete the Oracle Database home from the cluster.
    DELETE GUEST WHERE srcname=guest_FQDN stepname=EXTEND_DBHOME
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS
  4. Use the following OEDACLI commands to delete the guest node from the cluster.
    DELETE GUEST WHERE srcname=guest_FQDN stepname=ADD_NODE
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS
  5. Use the following OEDACLI commands to remove connectivity to the storage servers and delete the users on the guest.
    DELETE GUEST WHERE srcname=guest_FQDN stepname=CELL_CONNECTIVITY
    SAVE ACTION
    DELETE GUEST WHERE srcname=guest_FQDN stepname=CREATE_USERS
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS
  6. Use the following OEDACLI commands to remove the guest.
    DELETE GUEST WHERE srcname=guest_FQDN stepname=CREATE_GUEST
    SAVE ACTION
    MERGE ACTIONS
    DEPLOY ACTIONS
  7. Save the updated configuration information.
    Specify the full path to a directory where the updated configuration information will be saved.
    SAVE FILES LOCATION=output_directory
  8. View the list of existing guests on the KVM host.
    # /opt/exadata_ovm/vm_maker --list-domains

    The guest you just removed should not be listed.

6.20.2 Removing a Guest Using vm_maker

You can use the vm_maker utility to remove a guest.

The following procedure removes a guest from a cluster. If the guest is not part of a cluster, then you can skip the commands related to Oracle Clusterware.

  1. If your cluster uses quorum failure groups, then delete the quorum devices first.
    1. List the existing quorum devices.
      # /opt/oracle.SupportTools/quorumdiskmgr --list --target
    2. Delete the quorum devices.
      # /opt/oracle.SupportTools/quorumdiskmgr --delete --target
  2. Remove the database instance on the guest from the cluster node.
    Refer to Deleting Instances from Oracle RAC Databases for the instructions.
  3. Remove the guest node from the cluster.
  4. Delete the guest using vm_maker.
    # /opt/exadata_ovm/vm_maker --remove-domain guest_name
  5. View the list of existing guests on the KVM host.
    # /opt/exadata_ovm/vm_maker --list-domains

    The guest you just removed should not be listed.

6.21 Using Client Network VLAN Tagging with Oracle Linux KVM

This topic describes the implementation of tagged VLAN interfaces for the client network in conjunction with Oracle Linux KVM.

Oracle databases running in Oracle Linux KVM guests on Oracle Exadata are accessed through the client Ethernet network defined in the Oracle Exadata Deployment Assistant (OEDA) configuration tool. Client network configuration in both the KVM host and guests is done automatically when the OEDA installation tool creates the first guest during initial deployment.

The following figure shows a default bonded client network configuration:

Figure 6-1 NIC Layout in an Oracle Virtual Environment

Description of Figure 6-1 follows
Description of "Figure 6-1 NIC Layout in an Oracle Virtual Environment"

The network has the following configuration:

  1. In the KVM host, eth slave interfaces (for example, eth1 and eth2, or eth4 and eth5) that allow access to the guest client network defined in OEDA are discovered, configured, and brought up, but no IP is assigned.

  2. In the KVM host, bondeth0 master interface is configured and brought up, but no IP is assigned.

  3. In the KVM host, bridge interface vmbondeth0 is configured, but no IP is assigned.

  4. In the KVM host, one virtual backend interface (VIF) per guest, which maps to that guest's bondeth0 interface, is configured and brought up, but no IP is assigned. These VIFs are configured on top of the bridge interface vmbondeth0. The mapping between the KVM host VIF interface and its corresponding guest interface bondeth0 is defined in the guest configuration file vm.cfg, located in /EXAVMIMAGES/GuestImages/guest name.

For default installations, a single bondeth0 and a corresponding vmbondeth0 bridge interface is configured in the KVM host as described above. This bondeth0 interface is based on the default Access VLAN. The ports on the switch used by the slave interfaces making up bondeth0 are configured for Access VLAN.

Using VLAN Tagging

If there is a need for virtual deployments on Exadata to access additional VLANs on the client network, such as enabling network isolation across guests, then 802.1Q-based VLAN tagging is a solution. The following figure shows a client network configuration with VLAN tagging.

Figure 6-2 NIC Layout for Oracle Virtual Environments with VLAN Tagging

Description of Figure 6-2 follows
Description of "Figure 6-2 NIC Layout for Oracle Virtual Environments with VLAN Tagging"

Note:

Commencing with the March 2020 OEDA release, the bridge names now have the form vmbethXY.VLANID, where X and Y are the numeric identifiers associated with the slave interface, and VLANID is the VLAN ID.

This avoids a potential naming conflict that could previously occur in some cases.

For example, under the new naming scheme the bridges in the previous diagram would be named vmbeth45.3005, vmbeth45.3006, and vmbeth45.3007.

For instructions on how to manually configure tagged VLAN interfaces in conjunction with Oracle Linux KVM, see My Oracle Support note 2710712.1.

6.22 Using Exadata Secure RDMA Fabric Isolation with Oracle Linux KVM

This topic describes the implementation of Exadata Secure RDMA Fabric Isolation in conjunction with Oracle Linux KVM.

Secure Fabric enables secure consolidation and strict isolation between multiple tenants on Oracle Exadata. Each tenant resides in their own dedicated virtual machine (VM) cluster, using database server VMs running on Oracle Linux KVM.

With Secure Fabric, each database cluster uses a dedicated network partition and VLAN ID for cluster networking between the database servers, which supports Oracle RAC inter-node messaging. In this partition, all of the database servers are full members. They can communicate freely within the partition but cannot communicate with database servers in other partitions.

Another partition, with a separate VLAN ID, supports the storage network partition. The storage servers are full members in the storage network partition, and every database server VM is also a limited member. By using the storage network partition:

  • Each database server can communicate with all of the storage servers.
  • Each storage server can communicate with all of the database servers that they support.
  • Storage servers can communicate directly with each other to perform cell-to-cell operations.

The following diagram illustrates the network partitions that support Exadata Secure RDMA Fabric Isolation. In the diagram, the line connecting the Sales VMs illustrates the Sales cluster network. The Sales cluster network is the dedicated network partition that supports cluster communication between the Sales VMs. The line connecting the HR VMs illustrates the HR cluster network. The HR cluster network is another dedicated network partition that supports cluster communication between the HR VMs. The lines connecting the database server VMs (Sales and HR) to the storage servers illustrate the storage network. The storage network is the shared network partition that supports communications between the database server VMs and the storage servers. But, it does not allow communication between the Sales and HR clusters.

Figure 6-3 Secure Fabric Network Partitions

Description of Figure 6-3 follows
Description of "Figure 6-3 Secure Fabric Network Partitions"

As illustrated in the diagram, each database server (KVM host) can support multiple VMs in separate database clusters. However, Secure Fabric does not support configurations where one database server contains multiple VMs belonging to the same database cluster. In other words, using the preceding example, one database server cannot support multiple Sales VMs or multiple HR VMs.

To support the cluster network partition and the storage network partition, each database server VM is plumbed with 4 virtual interfaces:

  • clre0 and clre1 support the cluster network partition.
  • stre0 and stre1 support the storage network partition.

    Corresponding stre0 and stre1 interfaces are also plumbed on each storage server.

On each server, the RoCE network interface card acts like a switch on the hypervisor, which performs VLAN tag enforcement. Since this is done at the KVM host level, cluster isolation cannot be bypassed by any software exploits or misconfiguration on the database server VMs.

You can only enable Secure Fabric as part of the initial system deployment using Oracle Exadata Deployment Assistant (OEDA). You cannot enable Secure Fabric on an existing system without wiping the system and re-deploying it using OEDA. When enabled, Secure Fabric applies to all servers and clusters that share the same RoCE Network Fabric.

6.23 Adding a Bonded Network Interface to an Oracle Linux KVM Guest

Use this procedure to add a bonded network interface to an existing Oracle Linux KVM guest.

For example, you can use this procedure to add a backup network to an Oracle Linux KVM guest.

To add a bonded network interface to an Oracle Linux KVM guest:

  1. On the KVM host, create a bonded network bridge.

    Use the following vm_maker --add-bonded-bridge command:

    # vm_maker --add-bonded-bridge bond-name  --first-slave ethX --second-slave ethY

    In the command:

    • bond-name specifies the name of the bonded network interface. The recommended naming convention uses the string vmbeth concatenated with the numbers that identify the network interfaces being bonded. For example, when bonding eth5 and eth6, the recommended bond name is vmbeth56.
    • ethX and ethY specify the KVM host network interfaces that are being bonded.
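    For example, following the naming convention described above, bonding eth5 and eth6 might look like this:

    # vm_maker --add-bonded-bridge vmbeth56 --first-slave eth5 --second-slave eth6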
  2. On the KVM host, associate the bonded network bridge with the guest.

    Use the following vm_maker --allocate-bridge command:

    # vm_maker --allocate-bridge bond-name --domain guest-name

    In the command:

    • bond-name specifies the name of the bonded network bridge created in the previous step.
    • guest-name specifies the name of the guest to which you want to add the bonded network interface.

    The output from the vm_maker --allocate-bridge command includes a series of additional manual steps that must be performed to complete the procedure.
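    For example, to associate the vmbeth56 bridge from the previous example with a hypothetical guest named guest01.example.com:

    # vm_maker --allocate-bridge vmbeth56 --domain guest01.example.com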

  3. Perform the additional manual steps outlined in the output from the previous vm_maker --allocate-bridge command.

    In the first manual step, you must select a unique network interface name to use inside the guest. This guest interface maps to the bridged network interface on the KVM host. You can use your naming convention for this interface name, or you can choose to carry through the interface name from the KVM host. Whatever your choice, ensure that the NAME="guest-interface" entry in manual step 2 is adjusted accordingly.

    The following is an example of the manual steps.

    Note:

    Your procedure will be specific to your environment. So, ensure that you perform the additional manual steps outlined in your output from the vm_maker --allocate-bridge command. Do not copy the commands from the following example.

    [INFO] Please perform the following manual steps:
    [INFO] 1. Determine a unique network interface name within the domain to which
    [INFO]    you are attaching this interface. Typically bonded
    [INFO]    interfaces are named bondeth<number>, for example 'bondeth1', and
    [INFO]    non-bonded interfaces are named eth<number>, for example 'eth2'.
    [INFO]    The name must be unique within the domain. In the example below
    [INFO]    the name 'bondeth0' has been chosen.
    [INFO] 2. Add the following line to the file
    [INFO]    '/etc/udev/rules.d/70-persistent-net.rules' within the domain:
    [INFO]    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", KERNELS=="0000:00:PCI-slot-number.0", ATTR{type}=="1", NAME="guest-interface"
    [INFO] You MUST execute steps 4 and 5! You CANNOT simply reboot from within
    [INFO] the domain.
    [INFO] 4. vm_maker --stop-domain guest-name
    [INFO] 5. vm_maker --start-domain guest-name
    [INFO] 6. Login to the domain and run the following command:
    [INFO]    /opt/oracle.cellos/ipconf.pl -nocodes
    [INFO]    This command will ask you for configuration information for this
    [INFO]    new interface.
    [INFO] NOTE: if you have more than one interface to add to this domain,
    [INFO] please execute the instructions above, and then call this command again.
    [INFO] The domain must be stopped and started between invocations of this
    [INFO] command.

6.24 Using Oracle EXAchk in Oracle Linux KVM Environments

Oracle EXAchk version 12.1.0.2.2 and higher supports virtualization on Oracle Exadata.

6.24.1 Running Oracle EXAchk in Oracle Linux KVM Environments

To perform the complete set of Oracle EXAchk audit checks in an Oracle Exadata Oracle Linux KVM environment, Oracle EXAchk must be installed in and run from multiple locations.

  1. Run Oracle EXAchk from one KVM host.
  2. Run Oracle EXAchk from one guest in each Oracle Real Application Clusters (Oracle RAC) cluster running in Oracle Linux KVM.

For example, an Oracle Exadata Quarter Rack with two database servers containing 4 Oracle RAC clusters (2 nodes per cluster for a total of 8 guests across both database servers) requires running Oracle EXAchk five separate times, as follows:

  1. Run Oracle EXAchk in the first guest for the first cluster.

  2. Run Oracle EXAchk in the first guest for the second cluster.

  3. Run Oracle EXAchk in the first guest for the third cluster.

  4. Run Oracle EXAchk in the first guest for the fourth cluster.

  5. Run Oracle EXAchk in the first KVM host.

6.24.2 Audit Checks Performed by Oracle EXAchk

Oracle EXAchk runs different audit checks on the KVM host and the guests.

When you install and run Oracle EXAchk on the KVM host, it performs the following hardware and operating system level checks:

  • Database servers (KVM hosts)
  • Storage servers
  • RDMA Network Fabric
  • RDMA Network Fabric switches

When you install and run Oracle EXAchk on the guest, it performs operating system checks for guests, and checks for Oracle Grid Infrastructure and Oracle Database.

6.24.3 Oracle EXAchk Command Line Options for Oracle Exadata

Oracle EXAchk requires no special command line options. It automatically detects that it is running in an Oracle Exadata Oracle Linux KVM environment. However, you can use command line options to run Oracle EXAchk on a subset of servers or switches.

Oracle EXAchk automatically detects whether it is running in a KVM host or guest and performs the applicable audit checks. For example, in the simplest case, you can run Oracle EXAchk with no command line options:

./exachk

When Oracle EXAchk is run in the KVM host, it performs audit checks on all database servers, storage servers, and RDMA Network Fabric switches accessible through the RDMA Network Fabric network.

To run Oracle EXAchk on a subset of servers or switches, use the following command line options:

Options

  • -clusternodes: Specifies a comma-separated list of database servers.

  • -cells: Specifies a comma-separated list of storage servers.

Example 6-1 Running Oracle EXAchk on a Subset of Nodes and Switches

For example, for an Oracle Exadata Full Rack where only the first Quarter Rack is configured for virtualization, but all components are accessible through the RDMA Network Fabric network, you can run a command similar to the following from the database server dm01adm01:

./exachk -clusternodes dm01adm01,dm01adm02
         -cells dm01celadm01,dm01celadm02,dm01celadm03