6 Managing Oracle VM Domains Using KVM

Oracle VM guests on Oracle Exadata Database Machine running Oracle Exadata System Software release 19.3.0 are managed using the vm_maker utility, which is run from the management domain (the kvmhost).

The Kernel-based Virtual Machine (KVM) is a module of the Linux kernel. The KVM module allows a program to access and use the virtualization capabilities of modern processors by exposing the /dev/kvm interface. QEMU, an open source machine emulator and virtualizer, performs the actual device emulation.
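
As a quick, general Linux check (not specific to Exadata or vm_maker), you can verify that the KVM kernel modules are loaded and that the /dev/kvm interface is present:

    # lsmod | grep kvm
    # ls -l /dev/kvm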

For a full list of Oracle VM administrative commands, run the /opt/exadata_ovm/vm_maker --help command.

Note:

Unless otherwise noted, all commands run in the following procedures are run as the root user.

6.1 Oracle VM and Oracle Exadata Database Machine

When deploying Oracle Exadata Database Machine, you can decide to implement Oracle VM on the database servers.

Oracle VM Server and one or more Oracle VM guests are installed on every database server. You can configure Oracle VM environments on your initial deployment using scripts created by Oracle Exadata Deployment Assistant (OEDA) or you can migrate an existing environment to Oracle VM.

6.1.1 About Oracle VM

Oracle VM enables you to deploy the Oracle Linux operating system and application software within a supported virtual environment that is managed by KVM.

If you use Oracle VM on Oracle Exadata Database Machine, then Oracle VM provides CPU, memory, operating system, and sysadmin isolation for your workloads. You can combine virtual machines (VMs) with network and I/O prioritization to achieve full stack isolation. For consolidation, you can create multiple trusted databases or pluggable databases in an Oracle VM, allowing resources to be shared more dynamically.

Starting with Oracle Exadata System Software release 19.3.0, KVM is the virtualization technology used with Oracle Exadata Database Machine systems configured with RDMA over Converged Ethernet (RoCE) interconnects. An Oracle VM environment consists of a management server (the kvmhost), virtual machines, and resources. A kvmhost is a managed virtual environment providing a lightweight, secure, server platform which runs VMs, also known as guests.

The kvmhost is installed on a bare metal computer. The hypervisor on each kvmhost is an extremely small-footprint VM manager and scheduler. It is designed so that it is the only fully privileged entity in the system. It controls only the most basic resources of the system, including CPU and memory usage, privilege checks, and hardware interrupts.

The hypervisor securely executes multiple VMs on one host computer. Each VM runs in its own guest and has its own operating system. The kvmhost also runs on top of the hypervisor. The kvmhost has privileged access to the hardware and device drivers, and it is the environment from which you manage the guests.

A guest is an unprivileged VM that can access the RoCE interface. The guest is started and managed on an Oracle VM Server by the kvmhost. Because a guest operates independently of other VMs, a configuration change applied to the virtual resources of a guest does not affect any other guests. A failure of the guest does not impact any other guests.

The terms "guest" and "virtual machine" are often used interchangeably.

When using KVM, you can have up to 12 guests on the same kvmhost.

Each guest is started alongside the kvmhost. The guests never interact with the kvmhost directly. Their requirements are handled by the hypervisor itself. The kvmhost only provides a means to administer the hypervisor.

You use Oracle Exadata Deployment Assistant (OEDA) to create and configure Oracle VMs on Oracle Exadata Database Machine.

6.1.2 Maximum Supported Virtual Machines on Oracle Exadata Database Machine

When using RDMA over Converged Ethernet (RoCE), the maximum number of supported virtual machines is 12. On systems with InfiniBand interconnects, the maximum is eight.

For the software prerequisites, refer to My Oracle Support documents 888828.1 and 1270094.1.

6.1.3 Supported Operations in the KVMHost

Manually modifying the kvmhost can result in configuration issues, which can degrade performance or cause a loss of service.

WARNING:

Oracle does not support any changes that are made to the kvmhost beyond what is documented. Third-party applications can be installed on the kvmhost and user domains, but if there are issues with the Oracle software, then Oracle Support Services may request the removal of the third-party software while troubleshooting the cause.

If you are in doubt whether an operation on the kvmhost is supported, contact Oracle Support Services.

6.1.4 Oracle VM Resources

Two fundamental parts of the Oracle VM infrastructure, networking and storage, are configured outside of Oracle VM.

Networking

When specifying the configuration details for your Oracle Exadata Rack using Oracle Exadata Deployment Assistant (OEDA), you provide input on how the required network IP addresses for Oracle VM environments should be created. The generated OEDA setup files are transferred to the Oracle Exadata Rack and used to create the network addresses.

Storage

Oracle VM always requires a location to store environment resources that are essential to the creation and management of virtual machines (VMs). These resources include ISO files (virtual DVD images), VM configuration files, and VM virtual disks. The location of such a group of resources is called a storage repository.

On Oracle Exadata Database Machine, storage for the Oracle VMs uses an XFS file system.

If you need more storage space for Oracle VM, you can purchase a disk expansion kit. The additional disk space can be used to support more Oracle VM guests by expanding /EXAVMIMAGES, or to increase the size of the /u01 partition in each guest.

6.2 Migrating a Bare Metal Oracle RAC Cluster to an Oracle RAC Cluster in Oracle VM

You can move an existing Oracle RAC cluster into a virtual environment that is managed by KVM.

Note:

This topic applies only to two-socket x86 servers. It does not apply to eight-socket servers such as Oracle Exadata Database Machine X8M-8.

The migration of a bare metal Oracle RAC cluster to an Oracle RAC cluster in Oracle VM can be achieved in the following ways:

  • Migrate to Oracle RAC cluster in Oracle VM using the existing bare metal Oracle RAC cluster with zero downtime.

  • Migrate to Oracle RAC cluster in Oracle VM by creating a new Oracle RAC cluster in Oracle VM with minimal downtime.

  • Migrate to Oracle RAC cluster in Oracle VM using Oracle Data Guard with minimal downtime.

  • Migrate to Oracle RAC cluster in Oracle VM using Oracle Recovery Manager (RMAN) backup and restore with complete downtime.

The conversion of a bare metal Oracle RAC cluster to an Oracle RAC cluster in Oracle VM has the following implications:

  • Each of the database servers will be converted to an Oracle VM Server on which a kvmhost is created along with one or more guests, depending on the number of Oracle RAC clusters being deployed. Each guest on a database server will belong to a particular Oracle RAC cluster.

  • As part of the conversion procedure, the bare metal Oracle RAC cluster is initially converted to a single Oracle RAC cluster in Oracle VM, with one guest per database server.

  • At the end of the conversion, the cell disk and grid disk configuration of the storage cells are the same as they were at the beginning of the conversion.

  • The kvmhost will use a small portion of the system resources on each database server. Typically a kvmhost uses 16 GB or 6% of the available machine RAM, whichever is more. A kvmhost also uses 4 virtual CPUs. These resource requirements have to be taken into consideration when sizing the SGA of the databases running on the Oracle RAC cluster in Oracle VM.

  • Refer to My Oracle Support note 2099488.1 for the complete instructions.

6.3 Showing Running Domains

Use the vm_maker utility to list the running domains.

  1. Connect to the kvmhost.
  2. Run the command /opt/exadata_ovm/vm_maker --list-domains to list the domains.
    # /opt/exadata_ovm/vm_maker --list-domains
    dm01db01vm01.us.oracle.com(55)      : running
    dm01db01vm02.us.oracle.com(57)      : running
    dm01db01vm03.us.oracle.com(59)      : running

    To view the memory or vCPU allocations for the domains, use the following commands:

    • /opt/exadata_ovm/vm_maker --list --memory
    • /opt/exadata_ovm/vm_maker --list --vcpu
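
    For example, to check the state of a single guest, you can filter the domain list (a simple sketch using the example names above):

    # /opt/exadata_ovm/vm_maker --list-domains | grep dm01db01vm01
    dm01db01vm01.us.oracle.com(55)      : running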

6.4 Starting a Guest

You can start a guest manually, or configure the guest to start automatically when the kvmhost is started.

  1. Connect to the kvmhost.
  2. To manually start a guest, use vm_maker to start the guest.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --start-domain db01_guest01.example.com
    [INFO] Running 'virsh start db01_guest01.example.com'...
    Domain db01_guest01.example.com started
    [INFO] Attempting to ping db01_guest01.example.com...
    [INFO] Ping successful.
  3. To configure autostart for a guest, use the vm_maker --autostart command.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --autostart db01_guest01.example.com --enable
    [INFO] Running 'virsh autostart db01_guest01.example.com'...
    Domain db01_guest01.example.com marked as autostarted
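
    To confirm the autostart setting, you can also query libvirt directly. The virsh dominfo command is standard libvirt, shown here only as an illustration:

    # virsh dominfo db01_guest01.example.com | grep -i autostart
    Autostart:      enable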

6.5 Monitoring a Guest Console During Startup

To see Oracle Linux boot messages during guest startup, use the --console option with the vm_maker --start-domain command.

  1. Connect as the root user to the kvmhost.
  2. Obtain the guest name using the /opt/exadata_ovm/vm_maker --list-domains command.
  3. Use the following command to attach to the guest console, as part of starting the guest:

    In the following command, GuestName is the name of the guest.

    # /opt/exadata_ovm/vm_maker --start-domain GuestName --console
  4. Press CTRL+] to disconnect from the console.
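
If the guest is already running, you can attach to its console with the standard libvirt command shown below (an illustration, not vm_maker syntax), and likewise disconnect with CTRL+]:

    # virsh console GuestName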

6.6 Disabling Guest Automatic Start

You can disable a guest from automatically starting when the kvmhost is started.

  1. Connect to the kvmhost.
  2. Use vm_maker to disable autostart for the guest.

    In the following example, db01_guest01.example.com is the name of the guest.

    # /opt/exadata_ovm/vm_maker --autostart db01_guest01.example.com --disable
    [INFO] Running 'virsh autostart db01_guest01.example.com --disable'...
    Domain db01_guest01.example.com unmarked as autostarted

6.7 Shutting Down a Guest From Within the Guest

The following procedure describes how to shut down a guest from within the guest:

  1. Connect as the root user to the guest.
  2. Use the following command to shut down the domain:
    # shutdown -h now
    

6.8 Shutting Down a Guest From Within the kvmhost

You can shut down a guest from within the kvmhost.

  1. Connect as the root user to the kvmhost.
  2. Use the following command to shut down the guest, where GuestName is the name of the guest:
    # /opt/exadata_ovm/vm_maker --stop-domain GuestName

    To shut down all guests within the kvmhost, use the following command:

    # /opt/exadata_ovm/vm_maker --stop-domain --all

    The following is an example of the output:

    [INFO] Running 'virsh shutdown db01_guest01.example.com'...
    Domain db01_guest01.example.com is being shutdown
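
    To confirm that the guests are down, list the domains again, as described in Showing Running Domains:

    # /opt/exadata_ovm/vm_maker --list-domains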

6.9 Backing Up and Restoring Oracle Databases on KVM Guests

Backing up and restoring Oracle databases on KVM guests is the same as backing up and restoring Oracle databases on physical nodes.

6.10 Modifying the Memory Allocated to a Guest

You can modify the memory allocated to a guest using vm_maker.

This operation requires a guest restart. You can let vm_maker restart the guest after changing the memory configuration.

  1. Connect to the kvmhost.
  2. If you are increasing the amount of memory used by the guest, then use the following command to determine the amount of free memory available:
    # /opt/exadata_ovm/vm_maker --list --memory

    In the output, the lowest value between Available memory (now) and Available memory (delayed) is the limit for free memory.

    Note:

    When assigning free memory to a guest, reserve approximately 1% to 2% of free memory for storing metadata and control structures. See the sizing sketch at the end of this topic.
  3. If you are decreasing the amount of memory used by the guest, then you must first review and adjust the memory usage of the databases running in the guest.
    1. Review the SGA size of databases and reduce if necessary.
    2. Review the huge pages operating system configuration for the databases and reduce if necessary.
    If you do not first reduce the memory requirements of the databases running in the guest, then the guest might fail to restart because too much memory is reserved for huge pages when the Oracle Linux operating system attempts to boot. See My Oracle Support Doc ID 361468.1 for details.
  4. Specify a new size for the memory.

    For example, if you want to increase the memory used to 32 GB for the db01_guest01.example.com guest, you would use the following command:

    # /opt/exadata_ovm/vm_maker --set --memory 32G --domain db01_guest01.example.com --restart-domain

    This command shuts down the guest, modifies the memory settings, and then restarts the guest.
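
As a rough sizing sketch for step 2 (assuming, for illustration, that the lower of the two reported free-memory values is 100 GB), reserving 2% for metadata and control structures leaves about 98 GB that you can safely assign to the guest:

    # echo $(( 100 * 98 / 100 ))G
    98G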

6.11 Modifying the Number of Virtual CPUs Allocated to a Guest

You can dynamically modify the number of virtual CPUs allocated to a guest with the vm_maker --set --vcpu command.

All actions to modify the number of vCPUs allocated to a guest are performed in the kvmhost.

It is possible to over-commit vCPUs such that the total number of vCPUs assigned to all guests exceeds the number of physical CPUs on the system. However, over-committing CPUs should be done only when competing workloads for oversubscribed resources are well understood and concurrent demand does not exceed physical capacity.

  1. Determine the number of vCPUs currently allocated to the guest.
    # /opt/exadata_ovm/vm_maker --list --vcpu --domain db01_guest01.example.com
  2. Modify the number of allocated vCPUs.

    For example, if you want to change the number of vCPUs allocated to 4 for the db01_guest01.example.com guest, you would use the following command:

    # /opt/exadata_ovm/vm_maker --set --vcpu 4 --domain db01_guest01.example.com
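
To verify the change, list the vCPU allocation again:

    # /opt/exadata_ovm/vm_maker --list --vcpu --domain db01_guest01.example.com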

6.12 Increasing the Disk Space in a Guest

You can increase the size of Logical Volume Manager (LVM) partitions, swap space, and file systems in a guest.

6.12.1 Adding a New LVM Disk to a Guest

You can add a new LVM disk to a guest to increase the amount of usable LVM disk space in a guest.

You might add an LVM disk to a guest so that the size of a file system or swap LVM partition can be increased. This procedure is performed while the system remains online.

Note:

This procedure requires steps to be run in both the kvmhost and the guest.

Run all steps in this procedure as the root user.

  1. In the kvmhost, verify the free disk space in /EXAVMIMAGES.
    # df -h /EXAVMIMAGES

    The following is an example of the output from the command:

    Filesystem            Size  Used Avail Use% Mounted on
     /dev/sda3            721G  111G  611G  16% /EXAVMIMAGES
    
  2. In the kvmhost, select a name for the new disk image, and verify that the name is not already used in the guest.
    # ls -l /EXAVMIMAGES/GuestImages/DomainName/new_disk_image_name
    
    ls: /EXAVMIMAGES/GuestImages/DomainName/new_disk_image_name: No such file or \
    directory
    

    In the preceding command, DomainName is the name of the domain, and new_disk_image_name is the new disk image name.

  3. In the kvmhost, create a new disk image and attach it to the guest.
    #  /opt/exadata_ovm/vm_maker --create --disk-image /EXAVMIMAGES/new_disk_image_name
        --size size --filesystem xfs  --attach --domain DomainName
  4. In the guest, list the available disk images.
    # /opt/exadata_ovm/vm_maker --list --disk-image --domain DomainName
  5. In the guest, partition the new disk device. In the following example, disk device /dev/xvde is partitioned.
    # parted /dev/xvde mklabel gpt
    # parted -s /dev/xvde mkpart primary 0 100%
    # parted -s /dev/xvde set 1 lvm on
    

    The parted mkpart command may report the following message. This message can be ignored:

    Warning: The resulting partition is not properly aligned for best performance.
    
  6. In the guest, create an LVM physical volume on the new disk partition.

    In the following example, an LVM physical volume is created on disk partition /dev/xvde1.

    # pvcreate /dev/xvde1
    
  7. In the guest, extend the volume group and verify the additional space in the volume group. In the following example, the new physical volume /dev/xvde1 is added to the volume group VGExaDb. (A sketch that uses the new space appears at the end of this topic.)
    # vgextend VGExaDb /dev/xvde1
    # vgdisplay -s
    
  8. In the kvmhost, make a backup of the guest configuration file vm.cfg.
    # cp /EXAVMIMAGES/GuestImages/DomainName/vm.cfg   \
         /EXAVMIMAGES/GuestImages/DomainName/vm.cfg.backup
    
  9. In the kvmhost, obtain the UUID of the guest using the following command:
    # grep ^uuid /EXAVMIMAGES/GuestImages/DomainName/vm.cfg
    

    In the following example, the guest UUID is 49ffddce4efe43f5910d0c61c87bba58.

    # grep ^uuid /EXAVMIMAGES/GuestImages/dm01db01vm01/vm.cfg
    uuid = '49ffddce4efe43f5910d0c61c87bba58'
    
  10. In the kvmhost, generate a UUID for the new disk image using the following command:
    # uuidgen | tr -d '-'
    

    In the following example, the new disk UUID is 0d56da6a5013428c97e73266f81c3404.

    # uuidgen | tr -d '-'
    0d56da6a5013428c97e73266f81c3404
    
  11. In the kvmhost, create a symbolic link from /OVS/Repositories to the new disk image using the following command:
    # ln -s /EXAVMIMAGES/GuestImages/DomainName/newDiskImage.img    \
     /OVS/Repositories/user_domain_uuid/VirtualDisks/new_disk_uuid.img
    

    In the following example, a symbolic link is created to the new disk image file pv2_vgexadb.img for guest dm01db01vm01. The UUID for guest dm01db01vm01 is 49ffddce4efe43f5910d0c61c87bba58. The UUID for the new disk image is 0d56da6a5013428c97e73266f81c3404.

    # ln -s /EXAVMIMAGES/GuestImages/dm01db01vm01/pv2_vgexadb.img \
    /OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/   \
    0d56da6a5013428c97e73266f81c3404.img
    
  12. In the kvmhost, append an entry for the new disk to the disk parameter in the guest configuration file vm.cfg. This makes the new disk image attach automatically to the guest during the next startup. The new entry matches the following format:
    'file:/OVS/Repositories/user_domain_uuid/VirtualDisks/new_disk_uuid.img,disk_device,w'
    

    The following is an example of an original disk parameter entry in the vm.cfg file:

    disk=['file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/  \
    76197586bc914d3d9fa9d4f092c95be2.img,xvda,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    78470933af6b4253b9ce27814ceddbbd.img,xvdb,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    20d5528f5f9e4fd8a96f151a13d2006b.img,xvdc,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    058af368db2c4f27971bbe1f19286681.img,xvdd,w']
    

    The following example shows an entry appended to the disk parameter for a new disk image that is accessible within the guest as disk device /dev/xvde:

    disk=['file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/  \
    76197586bc914d3d9fa9d4f092c95be2.img,xvda,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    78470933af6b4253b9ce27814ceddbbd.img,xvdb,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    20d5528f5f9e4fd8a96f151a13d2006b.img,xvdc,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    058af368db2c4f27971bbe1f19286681.img,xvdd,w',                                 \
    'file:/OVS/Repositories/49ffddce4efe43f5910d0c61c87bba58/VirtualDisks/        \
    0d56da6a5013428c97e73266f81c3404.img,xvde,w']
    
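With the volume group extended (step 7), you can grow an existing logical volume and its file system into the new space. The following is a minimal sketch, assuming you want to add 10 GB to the /u01 volume LVDbOra1 with an ext4 file system, as in the topics that follow:

    # lvextend -L +10G /dev/VGExaDb/LVDbOra1
    # resize2fs /dev/VGExaDb/LVDbOra1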

6.12.2 Increasing the Size of the root File System

This procedure describes how to increase the size of the system partition and / (root) file system.

This procedure is performed while the file system remains online.

Note:

There are two system partitions, LVDbSys1 and LVDbSys2. One partition is active and mounted. The other partition is inactive and used as a backup location during upgrade. The size of both system partitions must be equal.

Keep at least 1 GB of free space in the VGExaDb volume group. The free space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories as described in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.

  1. Collect information about the current environment.
    1. Use the df command to identify the amount of free and used space in the root partition (/).
      # df -h /
      

      The following is an example of the output from the command:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/mapper/VGExaDb-LVDbSys1
                             12G  5.1G  6.2G  46% / 
      

      Note:

      The active root partition may be either LVDbSys1 or LVDbSys2, depending on previous maintenance activities.

    2. Use the lvs command to display the current logical volume configuration.
      # lvs -o lv_name,lv_path,vg_name,lv_size
      

      The following is an example of the output from the command:

      LV        Path                   VG      LSize 
      LVDbOra1  /dev/VGExaDb/LVDbOra1  VGExaDb 10.00g
      LVDbSwap1 /dev/VGExaDb/LVDbSwap1 VGExaDb  8.00g
      LVDbSys1  /dev/VGExaDb/LVDbSys1  VGExaDb 12.00g
      LVDbSys2  /dev/VGExaDb/LVDbSys2  VGExaDb 12.00g 
      
  2. Verify there is available space in the volume group VGExaDb using the vgdisplay command.
    # vgdisplay VGExaDb -s
    

    The following is an example of the output from the command:

    "VGExaDb" 53.49 GiB [42.00 GiB used / 11.49 GiB free]
    

    The volume group must contain enough free space to increase the size of both system partitions, and maintain at least 1 GB of free space for the LVM snapshot created by the dbnodeupdate.sh utility during upgrade. If there is not sufficient free space in the volume group, then add a new disk to LVM.

  3. Resize both LVDbSys1 and LVDbSys2 logical volumes using the lvextend command.
    # lvextend -L +size /dev/VGExaDb/LVDbSys1
    # lvextend -L +size /dev/VGExaDb/LVDbSys2
    

    In the preceding command, size is the amount of space to be added to the logical volume. The amount of space added to each system partition must be the same.

    The following example extends the logical volumes by 10 GB:

    # lvextend -L +10G /dev/VGExaDb/LVDbSys1
    # lvextend -L +10G /dev/VGExaDb/LVDbSys2
    
  4. Resize the file system within the logical volume using the resize2fs command.
    # resize2fs /dev/VGExaDb/LVDbSys1
    # resize2fs /dev/VGExaDb/LVDbSys2
    
  5. Verify the space was extended for the active system partition using the df command.
    # df -h /
    

6.12.3 Increasing the Size of the /u01 File System

This procedure describes how to increase the size of the /u01 file system.

This procedure is performed while the file system remains online.

Note:

Keep at least 1 GB of free space in the VGExaDb volume group. The free space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories as described in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.

  1. Collect information about the current environment.
    1. Use the df command to identify the amount of free and used space in the /u01 partition.
      # df -h /u01
      

      The following is an example of the output from the command:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/mapper/VGExaDb-LVDbOra1
                            9.9G  1.7G  7.8G  18% /u01
      
    2. Use the lvs command to display the current logical volume configuration used by the /u01 file system.
      # lvs -o lv_name,lv_path,vg_name,lv_size /dev/VGExaDb/LVDbOra1
      

      The following is an example of the output from the command:

      LV        Path                  VG       LSize 
      LVDbOra1 /dev/VGExaDb/LVDbOra1  VGExaDb 10.00g
      
  2. Verify there is available space in the volume group VGExaDb using the vgdisplay command.
    # vgdisplay VGExaDb -s
    

    The following is an example of the output from the command:

    "VGExaDb" 53.49 GiB [42.00 GiB used / 11.49 GiB free]
    

    If the output shows there is less than 1 GB of free space, then neither the logical volume nor file system should be extended. Maintain at least 1 GB of free space in the VGExaDb volume group for the LVM snapshot created by the dbnodeupdate.sh utility during an upgrade. If there is not sufficient free space in the volume group, then add a new disk to LVM.

  3. Resize the logical volume using the lvextend command.
    # lvextend -L +sizeG /dev/VGExaDb/LVDbOra1
    

    In the preceding command, size is the amount of space to be added to the logical volume.

    The following example extends the logical volume by 10 GB:

    # lvextend -L +10G /dev/VGExaDb/LVDbOra1
    
  4. Resize the file system within the logical volume using the resize2fs command.
    # resize2fs /dev/VGExaDb/LVDbOra1
    
  5. Verify the space was extended using the df command.
    # df -h /u01
    

6.12.4 Increasing the Size of the Grid Infrastructure Home or Database Home File System

You can increase the size of the Oracle Grid Infrastructure or Oracle Database home file system in a guest.

The Oracle Grid Infrastructure software home and the Oracle Database software home are created as separate disk image files in the kvmhost. The disk image files are located in the /EXAVMIMAGES/GuestImages/DomainName/ directory. The disk image files are attached to the guest automatically during virtual machine startup, and mounted as separate, non-LVM file systems in the guest.

  1. Connect to the guest, and check the file system size using the df command, where $ORACLE_HOME is an environment variable that points to the Oracle Database home directory, for example, /u01/app/oracle/product/12.1.0.2/dbhome_1.
    # df -h $ORACLE_HOME
    

    The following is an example of the output from the command:

    Filesystem  Size  Used Avail Use% Mounted on
     /dev/xvdc    20G  6.5G   13G  35% /u01/app/oracle/product/12.1.0.2/dbhome_1
    
  2. Connect to the kvmhost, and then shut down the guest.
    # /opt/exadata_ovm/vm_maker --stop-domain DomainName
    
  3. Create an OCFS2 reflink to serve as a backup of the disk image that will be increased, where version is the release number, for example, 12.1.0.2.1-3.
    # cd /EXAVMIMAGES/GuestImages/DomainName
    
    # reflink dbversion.img before_resize.dbversion.img
    
  4. Create an empty disk image using the qemu-img command, and append it to the database home disk image.

    The empty disk image size is the size to extend the file system. The last command removes the empty disk image after appending to the database home disk image.

    # qemu-img create emptyfile 10G
    # cat emptyfile >> dbversion.img
    # rm emptyfile
    
  5. Check the file system using the e2fsck command.
    # e2fsck -f dbversion.img
    
  6. Resize the file system using the resize2fs command.
    # resize2fs dbversion.img
    
  7. Start the guest.
    # /opt/exadata_ovm/vm_maker --start-domain DomainName --console
    
  8. Connect to the guest, and verify the file system size was increased.
    # df -h $ORACLE_HOME
    

    The following is an example of the output from the command:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvdc        30G  6.5G   22G  23% /u01/app/oracle/product/12.1.0.2/dbhome_1
    
  9. Connect to the kvmhost, and remove the backup image.

    Use a command similar to the following, where before_resize.dbversion.img is the backup image file created in step 3:

    # cd /EXAVMIMAGES/GuestImages/DomainName
    # rm before_resize.dbversion.img
    

6.12.5 Increasing the Size of the Swap Area

You can increase the amount of swap configured in a guest.

  1. Verify there is available space in the volume group VGExaDb using the vgdisplay command.
    # vgdisplay VGExaDb -s
    

    The following is an example of the output from the command:

    "VGExaDb" 53.49 GiB [42.00 GiB used / 11.49 GiB free]
    

    If the command shows that there is less than 1 GB of free space, then neither the logical volume nor file system should be extended. Maintain at least 1 GB of free space in the VGExaDb volume group for the LVM snapshot created by the dbnodeupdate.sh utility during an upgrade. If there is not sufficient free space in the volume group, then add a new disk to LVM.

  2. Using the lvcreate command, create a new logical volume whose size is the amount of swap space you want to add.

    In the following example, a new 8 GB logical volume named LVDbSwap2 is created.

    # lvcreate -L 8G -n LVDbSwap2 VGExaDb
    
  3. Set up the new logical volume as a swap device with a unique label, such as SWAP2, using the mkswap command. The unique label is a device LABEL entry that is currently unused in the /etc/fstab file.
    # mkswap -L SWAP2 /dev/VGExaDb/LVDbSwap2
    
  4. Enable the new swap device using the swapon command.
    # swapon -L SWAP2
    
  5. Verify the new swap device is enabled using the swapon command.
    # swapon -s
    

    The following is an example of the output from the command:

    Filename         Type            Size      Used     Priority
    /dev/dm-3        partition       8388604   306108   -1
    /dev/dm-4        partition       8388604   0         -2
    
  6. Edit the /etc/fstab file to add the new swap device by copying the existing swap entry, and then changing the LABEL value in the new entry to the label used to create the new swap device. In the following example, the new swap device was added to the /etc/fstab file as LABEL=SWAP2.
    # cat /etc/fstab
    LABEL=DBSYS   /                       ext4    defaults        1 1
    LABEL=BOOT    /boot                   ext4    defaults,nodev        1 1
    tmpfs         /dev/shm                tmpfs   defaults,size=7998m 0
    devpts        /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs         /sys                    sysfs   defaults        0 0
    proc          /proc                   proc    defaults        0 0
    LABEL=SWAP    swap                    swap    defaults        0 0
    LABEL=SWAP2   swap                    swap    defaults        0 0
    LABEL=DBORA   /u01                    ext4    defaults        1 1
    /dev/xvdb     /u01/app/12.1.0.2/grid  ext4    defaults        1 1
    /dev/xvdc       /u01/app/oracle/product/12.1.0.2/dbhome_1       ext4   defaults        1 1
    
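As a quick sanity check after editing /etc/fstab, confirm that both swap labels are present in the file and that both swap devices remain enabled:

    # grep -i swap /etc/fstab
    # swapon -s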

6.13 Expanding /EXAVMIMAGES on the kvmhost

You can expand the /EXAVMIMAGES file system on the kvmhost following the addition of a disk expansion kit.

During deployment, all available disk space on a database server will be allocated in the kvmhost with the majority of the space allocated to /EXAVMIMAGES for guest storage. The /EXAVMIMAGES file system is created on /dev/VGExaDb/LVDbExaVMImages.

In the example below, dm01db01 is the name of the kvmhost, and dm01db01vm01 is a guest.

  1. Ensure reclaimdisks.sh has been run in the kvmhost by using the -check option.

    Verify that the last line of the output reads "Layout: DOM0". If reclaimdisks.sh was not run, the last line reads "Layout: DOM0 + Linux".

    [root@dm01db01 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -check
    Model is ORACLE SERVER X6-2
    Number of LSI controllers: 1
    Physical disks found: 4 (252:0 252:1 252:2 252:3)
    Logical drives found: 1
    Linux logical drive: 0
    RAID Level for the Linux logical drive: 5
    Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
    Dedicated Hot Spares for the Linux logical drive: 0
    Global Hot Spares: 0
    Valid. Disks configuration: RAID5 from 4 disks with no global and dedicated hot spare disks.
    Valid. Booted: DOM0. Layout: DOM0.
    
  2. Add the disk expansion kit to the database server.
    The kit consists of 4 additional hard drives to be installed in the 4 available slots. Remove the filler panels and install the drives. The drives may be installed in any order.
  3. Verify that the RAID reconstruction is completed by checking for the warning and clear messages in the alert history.

    This may take several hours to complete. The example below shows that it took approximately 7 hours. Once the clear message (message 1_2 below) is present, the reconstruction is completed and it is safe to proceed.

    [root@dm01db01 ~]# dbmcli -e list alerthistory
    
             1_1     2016-02-15T14:01:00-08:00       warning         "A disk
     expansion kit was installed. The additional physical drives were automatically
     added to the existing RAID5 configuration, and reconstruction of the
     corresponding virtual drive was automatically started."
    
             1_2     2016-02-15T21:01:01-08:00       clear           "Virtual drive
     reconstruction due to disk expansion was completed."
    
  4. Collect information about the current environment.
    [root@dm01db01 ~]# df -h /EXAVMIMAGES
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3             1.6T   44G  1.5T   3% /EXAVMIMAGES
    
    [root@dm01db01 ~]# /opt/exadata_ovm/vm_maker --list --domain domain_name --detail
    
  5. Stop all guests by running vm_maker from the kvmhost.

    After all guests are shut down, only the kvmhost should be listed.

    [root@dm01db01 ~]# /opt/exadata_ovm/vm_maker --stop-domain --all
    All domains terminated
    
    [root@dm01db01 ~]# /opt/exadata_ovm/vm_maker --list --domain
    
  6. Run parted to view the sector start and end values.

    Note the end sector of the last partition (partition 2 in this example) and the total size of the disk. If you see a request to fix the GPT, respond with F.

    [root@dm01db01 ~]# parted /dev/sda
    GNU Parted 2.1
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) unit s 
    (parted) print
    Warning: Not all of the space available to /dev/sda appears to be used, you can
    fix the GPT to use all of the space (an extra 4679680000 blocks) or continue
    with the current setting? Fix/Ignore? F  
    
    Model: LSI MR9361-8i (scsi) 
    Disk /dev/sda: 8189440000s 
    Sector size (logical/physical): 512B/512B 
    Partition Table: gpt 
    
    Number  Start       End           Size         File system  Name     Flags 
    1       64s         1046591s      1046528s     ext3         primary  boot 
    4       1046592s    1048639s      2048s                     primary  bios_grub
    2       1048640s    240132159s    239083520s                primary  lvm 
    
    (parted) q

    The partition table shown above lists partition 2 as ending at sector 240132159 and disk size as 8189440000 sectors. You will use these values in step 7.

  7. Create a fourth partition (it is assigned partition number 3).
    The start sector is the end of the last existing partition (partition 2) from step 6 plus 1 sector (240132159+1=240132160). The end sector of the new partition is the size of the disk minus 34 (8189440000-34=8189439966).
    [root@dm01db01 ~]# parted -s /dev/sda mkpart primary 240132160s 8189439966s 

    This command produces no output.

  8. Set the LVM flag for the new partition, which is numbered 3.
    [root@dm01db01 ~]# parted -s /dev/sda set 3 lvm on
    Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or
     resource busy).  As a result, it may not reflect all of your changes until after reboot.
  9. Review the updated partition table.
    [root@dm01db01 ~]# parted -s /dev/sda unit s print
    Model: LSI MR9361-8i (scsi)
    Disk /dev/sda: 8189440000s
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt 
    Number  Start        End          Size         File system  Name     Flags
    1       64s         1046591s      1046528s     ext4         primary  boot 
    4       1046592s    1048639s      2048s                     primary  bios_grub
    2       1048640s    240132159s    239083520s                primary  lvm 
    3       240132160s  8189439966s   7949307807s               primary  lvm
    
  10. Restart the Exadata server.
    [root@dm01db01 ~]# shutdown -r now
  11. Check the size of the disk against the end of the new partition (number 3).
    [root@dm01db01 ~]# parted -s /dev/sda unit s print
    Model: LSI MR9361-8i (scsi)
    Disk /dev/sda: 8189440000s
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt 
    Number  Start       End           Size         File system  Name     Flags
    1       64s         1046591s      1046528s     ext4         primary  boot 
    4       1046592s    1048639s      2048s                     primary  bios_grub
    2       1048640s    240132159s    239083520s                primary  lvm 
    3       240132160s  8189439966s   7949307807s               primary  lvm
  12. Create an LVM physical volume (PV) on the new partition, /dev/sda3.
    [root@dm01db01 ~]# lvm pvcreate --force /dev/sda3
      Physical volume "/dev/sda3" successfully created
  13. Extend the LVM volume group VGExaDb to include the new partition.
    [root@dm01db01 ~]# lvm vgextend VGExaDb /dev/sda3
      Volume group "VGExaDb" successfully extended
  14. Unmount the /EXAVMIMAGES OCFS2 partition.
    [root@dm01db01 ~]# umount /EXAVMIMAGES/
  15. Extend the logical volume that contains the OCFS2 partition to include the rest of the free space.
    [root@dm01db01 ~]# lvm lvextend -l +100%FREE /dev/VGExaDb/LVDbExaVMImages
    Size of logical volume VGExaDb/LVDbExaVMImages changed from 1.55 TiB (406549 extents) to 
    3.73 TiB (977798 extents).  
    Logical volume LVDbExaVMImages successfully resized.
  16. Resize the OCFS2 file system to the rest of the logical volume.

    The tunefs.ocfs2 command typically runs very quickly and does not produce output.

    [root@dm01db01 ~]# tunefs.ocfs2 -S /dev/VGExaDb/LVDbExaVMImages
    
  17. Mount the OCFS2 partition and then view the file system disk space usage for this partition.
    [root@dm01db01 ~]# mount -a
    
    [root@dm01db01 ~]# ls -al /EXAVMIMAGES/
    total 4518924
    drwxr-xr-x  3 root root        3896 Jul 18 18:01 .
    drwxr-xr-x 26 root root        4096 Jul 24 14:50 ..
    drwxr-xr-x  2 root root        3896 Jul 18 17:51 lost+found
    -rw-r-----  1 root root 26843545600 Jul 18 18:01 System.first.boot.12.2.1.1.8.180510.1.img
    
    [root@dm01db01 ~]# df -h /EXAVMIMAGES/
    Filesystem                            Size  Used Avail Use% Mounted on
    /dev/mapper/VGExaDb-LVDbExaVMImages   3.8T  9.0G  3.8T   1% /EXAVMIMAGES
    
  18. Restart the guests.
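
    For example, using vm_maker and the example guest name from this section:

    [root@dm01db01 ~]# /opt/exadata_ovm/vm_maker --start-domain dm01db01vm01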

6.14 Creating Oracle VM Oracle RAC Clusters

This procedure creates Oracle VM Oracle RAC clusters using the Oracle Exadata Deployment Assistant (OEDA) configuration and deployment tools.

The requirements for adding an Oracle VM Oracle RAC cluster are as follows:

  • The system has already been deployed with one or more Oracle VM Oracle RAC clusters.

  • The system has available resources, such as memory, CPU, local disk space, and Oracle Exadata Storage Server disk space.

  • OEDA deployment files used for initial system configuration are available.

  1. Verify there are sufficient resources to add a new guest in the kvmhost.

    If you are creating an Oracle VM Oracle RAC cluster, then verify resources in all kvmhosts where you are creating a new guest.

  2. Use the following command to verify the Oracle Exadata Storage Server disk space:
    # dcli -l celladmin -g cell_group "cellcli -e 'list celldisk attributes name, \
     diskType, freeSpace where freeSpace>0'"
    
  3. Download the latest OEDA from My Oracle Support note 888828.1, and place it on a system capable of running a graphic-based program.

    By default, database servers in Oracle Exadata Database Machine contain only the packages required to run Oracle Database, and they cannot run the OEDA configuration tool.

  4. Obtain the OEDA template files used to deploy the system.
  5. Run the OEDA configuration tool as follows:
    1. Click Import.
    2. Select and open the XML file used to deploy the system with the name CustomerName-NamePrefix.xml.
    3. Click Next as needed to get to the Define Clusters page, and verify the IP address and host name information as you navigate the pages. If there have been no networking changes since the initial deployment, then no changes are needed.
    4. Increment the number of clusters on the Define Clusters page.
    5. Select the new cluster tab to edit the cluster information. Do not change any other clusters.
    6. Enter a unique cluster name for the cluster.
    7. Select the Oracle VM Server and CELL components for the new cluster, and then click Add.

      Note:

      The recommended practice for best performance and simplest administration is to select all cells.
    8. Click Next as needed to get to the new cluster page. Do not change any other clusters.
    9. Enter the information for the new cluster. Information includes the virtual guest size, disk group details, and database name. The database name must be unique for all databases that use the same Oracle Exadata Storage Servers.
    10. Click Next to get to the Review and Edit page, and verify the information for the new cluster.
    11. Click Next as needed to get to the Generate page.
    12. Click Next to generate the new configuration files.
    13. Select the destination directory for the configuration files.
    14. Click Save.

      Note:

      If the Oracle VM Defaults were altered for this new cluster, then configuration details for existing clusters will be re-written to match the new template settings. For example, if you previously deployed vm01 as SMALL with memory=8GB, and then change the SMALL template to memory=10GB for this new VM, then the new OEDA XML files show vm01 with memory=10GB even though there was no intent to change vm01.
    15. Click Installation Template on the Finish page to review the details of the new cluster.
    16. Click Finish to exit the configuration tool.
  6. Verify the XML file for the new cluster exists and has the name CustomerName-NamePrefix-ClusterName.xml in the destination folder.
  7. Obtain the deployment files for the Oracle Grid Infrastructure and Oracle Database releases selected, and place them in the OEDA WorkDir directory.
  8. Run the OEDA Deployment Tool using the -cf option to specify the XML file for the new cluster, and the -l option to list the steps using the following command:
    $ ./install.sh -cf    \
    ExadataConfigurations/CustomerName-NamePrefix-ClusterName.xml -l
    

    You should see output similar to the following:

    Initializing 
    ||||| 
    1. Validate Configuration File 
    2. Update Nodes for Eighth Rack 
    3. Create Virtual Machine 
    4. Create Users 
    5. Setup Cell Connectivity 
    6. Calibrate Cells 
    7. Create Cell Disks 
    8. Create Grid Disks 
    9. Configure Alerting 
    10. Install Cluster Software 
    11. Initialize Cluster Software 
    12. Install Database Software 
    13. Relink Database with RDS 
    14. Create ASM Diskgroups 
    15. Create Databases 
    16. Apply Security Fixes 
    17. Install Exachk 
    18. Create Installation Summary 
    19. Resecure Machine
  9. Skip the following steps when adding new Oracle VM clusters in an existing Oracle VM environment on Oracle Exadata Database Machine:
    • (For Eighth Rack systems only) 2. Update Nodes for Eighth Rack
    • 6. Calibrate Cells
    • 7. Create Cell Disks
    • 19. Resecure Machine

    Note:

    The step numbers change based on the selected hardware configuration. Use the step names to identify the correct steps on your system.

    For example, to execute step 1, run the following command:

    $ ./install.sh -cf \
    ExadataConfigurations/CustomerName-NamePrefix-ClusterName.xml -s 1
    To make OEDA run only a subset of the steps, you can specify a range, for example:
    $ ./install.sh -cf \
    ExadataConfigurations/CustomerName-NamePrefix-ClusterName.xml -r 3-5
  10. For all other systems, run all steps except for the Configure Alerting step using the XML file for the new cluster.

    To run an individual step, use a command similar to the following, which executes the first step:

    $ ./install.sh -cf \
    ExadataConfigurations/CustomerName-NamePrefix-ClusterName.xml -s 1

6.15 Expanding an Oracle VM Oracle RAC Cluster on Exadata Using OEDACLI

You can expand an existing Oracle RAC cluster on Oracle VM by adding guests using the Oracle Exadata Deployment Assistant command-line interface (OEDACLI).

OEDACLI is the preferred method if you have a known, good version of the OEDA XML file for your cluster.

Note:

During the execution of this procedure, the existing Oracle RAC cluster nodes along with their database instances incur zero downtime.

Use cases for this procedure include:

  • You have an existing Oracle RAC cluster that uses only a subset of the database servers of an Oracle Exadata Rack, and now the nodes not being used by the cluster have become candidates for use.
  • You have an existing Oracle RAC cluster on Oracle Exadata Database Machine that was recently extended with additional database servers.
  • You have an existing Oracle RAC cluster that had a complete node failure and the node was removed and replaced with a newly re-imaged node.

Before performing the steps in this section, the new database servers should have been set up as detailed in Adding a New Database Server to the Cluster, including the following:

  • The new database server is installed and configured on the network with a kvmhost or management domain.
  • Download the latest Oracle Exadata Deployment Assistant (OEDA); ensure the version you download is the July 2019 release, or later.
  • You have an OEDA configuration XML file that accurately reflects the existing cluster configuration. You can validate the XML file by generating an installation template from it and comparing it to the current configuration. See the OEDACLI command SAVE FILES.
  • Review the OEDA Installation Template report for the current system configuration to obtain node names and IP addresses for existing nodes. You will need to have new host names and IP addresses for the new nodes being added. The new host names and IP addresses required are:
    • Administration host names and IP addresses (referred to as ADMINNET) for the management domain (kvmhost) and the user domains (guests).
    • Private host names and IP addresses (referred to as PRIVNET) for the management domain (kvmhost) and the user domains (guests).
    • Integrated Lights Out Manager (ILOM) host names and IP addresses for the management domain (kvmhost).
    • Client host names and IP addresses (referred to as CLIENTNET) for the user domains (guests).
    • Virtual IP (VIP) host names and IP addresses (referred to as VIPNET) for the user domains (guests).
    • Physical rack number and location of the new node in the rack (in terms of U number)
  • Each management domain or kvmhost has been imaged or patched to the same image in use on the existing database servers. The current system image must match the version of the /EXAVMIMAGES/System.first.boot.*.img file on the new management domain (kvmhost) node.

    Note:

    The ~/dom0_group file referenced below is a text file that contains the host names of the management domains or kvmhosts for all existing and new nodes being added.

    Check that the image version is the same across all management domains or kvmhosts.

    dcli -g ~/dom0_group -l root "imageinfo -ver"
    
    exa01adm01: 19.2.0.0.0.190225
    exa01adm02: 19.2.0.0.0.190225
    exa01adm03: 19.2.0.0.0.190225

    If any image versions differ, you must upgrade the nodes as needed so that they match.

    Ensure that the System.first.boot version across all management domains or kvmhosts matches the image version retrieved in the previous step.

    dcli -g ~/dom0_group -l root "ls  -1 /EXAVMIMAGES/System.first.boot*.img" 
    exa01adm01:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img
    exa01adm02:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img
    exa01adm03:  /EXAVMIMAGES/System.first.boot.19.2.0.0.0.190225.img

    If any nodes are missing the System.first.boot.img file that corresponds to the current image, then obtain the required file. See the “Supplemental README note” for your Exadata release in My Oracle Support Doc ID 888828.1 and look for the patch file corresponding to this description, “DomU System.img OS image for V.V.0.0.0 VM creation on upgraded dom0s”

  • Place the klone.zip files (gi-klone*.zip and db-klone*.zip) in the /EXAVMIMAGES location on the freshly imaged management domain or kvmhost node that you are adding to the cluster, as shown in the sketch below. These files can be found in the /EXAVMIMAGES directory on the management domain or kvmhost node from where the system was initially deployed.
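
    For example, a copy sketch assuming the system was first deployed from exa01adm01 and the new node is exa01adm03 (the example host names used below):

    [root@exa01adm01 ~]# scp /EXAVMIMAGES/gi-klone*.zip /EXAVMIMAGES/db-klone*.zip \
    exa01adm03:/EXAVMIMAGES/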

The steps here show how to add a new management domain or kvmhost node called exa01adm03 that will have a new user domain or guest called exa01adm03vm01. The steps show how to extend an existing Oracle RAC cluster onto the user domain (guest) using OEDACLI commands. The existing cluster has management domain (kvmhost) nodes named exa01adm01 and exa01adm02 and user domain (guest) nodes named exa01adm01vm01 and exa01adm02vm01.

  1. Add the management domain (kvmhost) information to the OEDA XML file using the CLONE COMPUTE command.

    In the examples below, the OEDA XML file is assumed to be in: unzipped_OEDA_location/ExadataConfigurations.

    OEDACLI> LOAD FILE NAME=exa01_original_deployment.xml 
    
    OEDACLI> CLONE COMPUTE SRCNAME  = exa01adm01 TGTNAME = exa01adm03
    SET ADMINNET NAME=exa01adm03,IP=xx.xx.xx.xx
    SET PRIVNET NAME1=exa01adm03-priv1,IP1=  xx.xx.xx.xx, 
    SET PRIVNET NAME2=exa01adm03-priv2,IP2=  xx.xx.xx.xx
    SET ILOMNET NAME=exa01adm03-c,IP=xx.xx.xx.xx
    SET RACK NUM=NN,ULOC=XX 
    
    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS FORCE
    OEDACLI> SAVE FILE NAME=exa01_plus_adm03_node.xml

    At this point we have a new XML file that has the new compute node management domain (kvmhost) in the configuration. This file will be used by the subsequent steps.

  2. Add the new guest information to the OEDA XML file using the CLONE GUEST command and deploy the guest.
    OEDACLI> LOAD FILE NAME=exa01_plus_adm03_node.xml 
    
    OEDACLI> CLONE GUEST SRCNAME  = exa01adm01vm01 TGTNAME = exa01adm03vm01
    WHERE STEPNAME=CREATE_GUEST
    SET PARENT NAME = exa01adm03
    SET ADMINNET NAME=exa01adm03vm01,IP=xx.xx.xx.xx
    SET PRIVNET NAME1=exa01db03vm01-priv1,IP1=  xx.xx.xx.xx, 
    SET PRIVNET NAME2=exa01db03vm01-priv2,IP2=  xx.xx.xx.xx
    SET CLIENTNET NAME=exa01client03vm01,IP=xx.xx.xx.xx
    SET VIPNET NAME=exa01client03vm01-vip,IP=xx.xx.xx.xx
    
    
    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS

    If you prefer that OEDACLI runs all steps automatically, omit the WHERE STEPNAME=CREATE_GUEST clause shown above and skip step 3 below.

    At this point we have a guest created on our new compute node.

  3. Use OEDACLI to extend the cluster to the new guest.

    Note:

    Continue using the same XML file, exa01_plus_adm03_node.xml in this example. You will continue to update this file as you proceed through these steps. At the very end of the procedure, this XML file will properly reflect the new state of the clusters.
    OEDACLI> CLONE GUEST TGTNAME=exa01adm03vm01 WHERE STEPNAME = CREATE_USERS

    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS
    
    OEDACLI> CLONE GUEST TGTNAME=exa01adm03vm01 WHERE STEPNAME = CELL_CONNECTIVITY

    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS
    
    OEDACLI> CLONE GUEST TGTNAME=exa01adm03vm01 WHERE STEPNAME = ADD_NODE

    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS
    
    OEDACLI> CLONE GUEST TGTNAME=exa01adm03vm01 WHERE STEPNAME = EXTEND_DBHOME

    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS
    
    OEDACLI> CLONE GUEST TGTNAME=exa01adm03vm01 WHERE STEPNAME = ADD_INSTANCE

    OEDACLI> SAVE ACTION
    OEDACLI> MERGE ACTIONS
    OEDACLI> DEPLOY ACTIONS

    OEDACLI prints out messages similar to the following as each step completes:

    Deploying Action ID : 39 CLONE GUEST TGTNAME=exa01adm03vm01 where STEPNAME = ADD_INSTANCE 
    Deploying CLONE GUEST 
    Cloning Guest 
    Cloning Guest  :  exa01adm03vm01.us.oracle.com_id 
    Adding new instance for database [dbm] on exa01adm03vm01.us.oracle.com 
    Setting up Huge Pages for Database..[dbm] 
    Adding instance dbm3 on host exa01adm03vm01.us.oracle.com 
    Successfully completed adding database instance on the new node [elapsed Time [Elapsed = 
    249561 mS [4.0  minutes] Fri Jun 28 13:35:52 PDT 2019]] 
    Done...
    Done
  4. Save the current state of the configuration and generate configuration information.
    OEDACLI> SAVE FILES LOCATION=/tmp/exa01_plus_adm03_config

    The above command writes all the configuration files to the directory /tmp/exa01_plus_adm03_config. Save a copy of these files in a safe place since they now reflect the changes made to your cluster.

  5. Gather an Oracle EXAchk report and examine it to ensure the cluster is in good health.

6.16 Moving a Guest to a Different Database Server

Guests can move to different database servers.

The target Oracle Exadata Database Machine database server must meet the following requirements:

  • The target database server must have the same Oracle Exadata System Software release installed with Oracle VM.

  • The target database server must have the same network visibility.

  • The target database server must have access to the same Oracle Exadata Database Machine storage servers.

  • The target database server must have sufficient free resources (CPU, memory, and local disk storage) to operate the guest.

    • It is possible to over-commit virtual CPUs such that the total number of virtual CPUs assigned to all domains exceeds the number of physical CPUs on the system. Over-committing CPUs can be done only when the competing workloads for over-subscribed resources are well understood and the concurrent demand does not exceed physical capacity.

    • It is not possible to over-commit memory.

    • Copying disk images to the target database server may increase space allocation of the disk image files because the copied files are no longer able to benefit from the disk space savings gained by using OCFS2 reflinks.

  • The guest name must not be already in use on the target database server.

The following procedure moves a guest to a new database server in the same Oracle Exadata System Software configuration. All steps in this procedure are performed in the kvmhost.

  1. Shut down the guest.
    # /opt/exadata_ovm/vm_maker --stop-domain GuestName
  2. Copy the guest disk image and configuration files to the target database server.

    In the following examples, replace GuestName with the name of the domain.

    # scp -r /EXAVMIMAGES/GuestImages/GuestName/ target:/EXAVMIMAGES/GuestImages
    
  3. Obtain the UUID of the guest.
    # grep ^uuid /EXAVMIMAGES/GuestImages/GuestName/vm.cfg
    

    An example of the guest UUID is 49ffddce4efe43f5910d0c61c87bba58.

  4. Using the UUID of the guest, copy the guest symbolic links from /OVS/Repositories to the target database server.
    # tar cpvf - /OVS/Repositories/UUID/ | ssh target_db_server "tar xpvf - -C /"
    
  5. Start the guest on the target database server.
    # /opt/exadata_ovm/vm_maker --start-domain GuestName
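
    To confirm the move, list the running domains on the target database server, as described in Showing Running Domains:

    # /opt/exadata_ovm/vm_maker --list-domains | grep GuestName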

6.17 Implementing Tagged VLAN Interfaces

This topic describes the implementation of tagged VLAN interfaces in Oracle VM environments on Oracle Exadata Database Machine.

Oracle databases running in Oracle VM guests on Oracle Exadata Database Machine are accessed through the client Ethernet network defined in the Oracle Exadata Deployment Assistant (OEDA) configuration tool. Client network configuration in both the kvmhost and guests is done automatically when the OEDA installation tool creates the first guest during initial deployment.

The following figure shows a default bonded client network configuration:

Figure 6-1 NIC Layout in an Oracle Virtual Environment


The network has the following configuration:

  1. In the kvmhost, eth slave interfaces (for example, eth1 and eth2, or eth4 and eth5) that allow access to the guest client network defined in OEDA are discovered, configured, and brought up, but no IP is assigned.

  2. In the kvmhost, bondeth0 master interface is configured and brought up, but no IP is assigned.

  3. In the kvmhost, bridge interface vmbondeth0 is configured, but no IP is assigned.

  4. In the kvmhost, one virtual backend interface (VIF) per guest that maps to that particular guest's bondeth0 interface is configured and brought up, but no IP is assigned. These VIFs are configured on top of the bridge interface vmbondeth0, and the mapping between the kvmhost VIF interface and its corresponding guest interface bondeth0 is defined in the guest configuration file vm.cfg, located in /EXAVMIMAGES/GuestImages/GuestName (see the illustrative excerpt below).
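
For illustration only, this mapping typically appears as a vif entry in vm.cfg that names the bridge; the exact syntax varies by release, and the MAC address below is a hypothetical placeholder:

    vif = ['mac=00:16:3e:xx:xx:xx,bridge=vmbondeth0']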

For default installations, a single bondeth0 interface and a corresponding vmbondeth0 bridge interface are configured in the kvmhost, as described above. The bondeth0 interface is based on the default Access Virtual Local Area Network (Access VLAN), and the switch ports used by the slave interfaces that make up bondeth0 are configured for the Access VLAN.

Using VLAN Tagging

If virtual deployments on Exadata need access to additional VLANs on the client network, for example to enable network isolation across guests, then 802.1Q-based VLAN tagging is a solution. The following figure shows a client network configuration with VLAN tagging.

Figure 6-2 NIC Layout for Oracle Virtual Environments with VLAN Tagging


For instructions on how to configure and use these additional VLAN tagged interfaces on the client network, see My Oracle Support note 2018550.1. The Access VLAN must remain configured and operational before and after you follow these instructions; never disable the Access VLAN.
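
The detailed procedure is in the My Oracle Support note. For orientation only, a tagged interface on the kvmhost generally pairs a VLAN subinterface of bondeth0 with a matching bridge; the VLAN ID 3005 and the file contents below are hypothetical:

    # /etc/sysconfig/network-scripts/ifcfg-bondeth0.3005 (hypothetical VLAN ID 3005)
    DEVICE=bondeth0.3005
    ONBOOT=yes
    BOOTPROTO=none
    VLAN=yes
    BRIDGE=vmbondeth0.3005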

6.18 About RDMA Network Fabric Partitioning Across Oracle RAC Clusters Running in Oracle VM

An RDMA Network Fabric partition defines a group of RDMA Network Fabric nodes or members that are allowed to communicate with one another.

One of the key requirements of consolidated systems from a security standpoint is network isolation across the multiple environments within a consolidated system. For consolidations achieved using Oracle VM Oracle Real Application Clusters (Oracle RAC) clusters on Oracle Exadata, this means isolation across the different Oracle RAC clusters, such that the network traffic of one Oracle RAC cluster is not accessible to another. For the Ethernet networks, this is accomplished using VLAN tagging as described in My Oracle Support note 2018550.1.

For X8M systems, which use an InfiniBand Transport Layer over a RoCE Network Layer, isolation is accomplished through server-level isolation based on Access VLAN settings. By default, the RDMA Network Fabric ports on the Cisco Nexus 9336c Ethernet leaf switches are set to switchport access vlan 3888. This setting is suitable for most RDMA over Converged Ethernet (RoCE) switch and host configurations. In the rare case where server-level isolation is necessary, the leaf switch ports connected to the hosts that require isolation must be changed to a different access VLAN value, such as switchport access vlan 3889 (a sketch of the switch-side change follows the note below).

Note:

The ports on the Cisco Nexus 9336c Ethernet Leaf Switches that are available to modify (for example, host ports) are listed in RDMA Network Fabric Cabling Tables X8M. Additionally, limit the switchport access vlan IDs to the range 2744-3967 to prevent any other conflicts on the system.
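
For orientation, the switch-side change uses standard Cisco NX-OS configuration commands. The following is a minimal sketch; Ethernet1/10 is a hypothetical host port, and the eligible ports are those listed in the cabling tables:

    switch# configure terminal
    switch(config)# interface Ethernet1/10
    switch(config-if)# switchport access vlan 3889
    switch(config-if)# end
    switch# copy running-config startup-config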

6.19 Using Oracle EXAchk in Oracle VM Environments

Oracle EXAchk version 12.1.0.2.2 and higher supports virtualization on Oracle Exadata Database Machine.

6.19.1 Running Oracle EXAchk in Oracle VM Environments

To perform the complete set of Oracle EXAchk audit checks in an Oracle Exadata Database Machine Oracle VM environment, Oracle EXAchk must be installed in and run from multiple locations.

  1. Run Oracle EXAchk from one kvmhost.
  2. Run Oracle EXAchk from one guest in each Oracle Real Application Clusters (Oracle RAC) cluster running in Oracle VM.

For example, an Oracle Exadata Database Machine Quarter Rack with two database servers containing 4 Oracle VM Oracle RAC clusters (2 nodes per cluster for a total of 8 guests across both database servers) requires running Oracle EXAchk five separate times, as follows:

  1. Run Oracle EXAchk in the first guest for the first cluster.

  2. Run Oracle EXAchk in the first guest for the second cluster.

  3. Run Oracle EXAchk in the first guest for the third cluster.

  4. Run Oracle EXAchk in the first guest for the fourth cluster.

  5. Run Oracle EXAchk in the first kvmhost.

6.19.2 Audit Checks Performed by Oracle EXAchk

Oracle EXAchk runs different audit checks on the kvmhost and the guests.

When you install and run Oracle EXAchk on the kvmhost, it performs hardware and operating system level checks for the following components:

  • Database servers (kvmhosts)
  • Storage servers
  • RDMA Network Fabric
  • RDMA Network Fabric switches

When you install and run Oracle EXAchk on the guest, it performs operating system checks for guests, and checks for Oracle Grid Infrastructure and Oracle Database.

6.19.3 Oracle EXAchk Command Line Options for Oracle Exadata Database Machine

Oracle EXAchk requires no special command line options. It automatically detects that it is running in an Oracle Exadata Database Machine Oracle VM environment. However, you can use command line options to run Oracle EXAchk on a subset of servers or switches.

Oracle EXAchk automatically detects whether it is running in a kvmhost or guest and performs the applicable audit checks. For example, in the simplest case, you can run Oracle EXAchk with no command line options:

./exachk

When Oracle EXAchk is run in the kvmhost, it performs audit checks on all database servers, storage servers, and RDMA Network Fabric switches accessible through the RDMA Network Fabric.

To run Oracle EXAchk on a subset of servers or switches, use the following command line options:

Options

  • -clusternodes: Specifies a comma-separated list of database servers.

  • -cells: Specifies a comma-separated list of storage servers.

  • -ibswitches: Specifies a comma-separated list of RDMA Network Fabric switches.

Example 6-1 Running Oracle EXAchk on a Subset of Nodes and Switches

For example, for an Oracle Exadata Database Machine Full Rack where only the first Quarter Rack is configured for virtualization, but all components are accessible through the RDMA Network Fabric, you can run a command similar to the following from the database server dm01adm01:

./exachk -clusternodes dm01adm01,dm01adm02 \
         -cells dm01celadm01,dm01celadm02,dm01celadm03 \
         -ibswitches dm01sw-ibs0,dm01sw-iba0,dm01sw-ibb0