14 Installing and Running the ME Virtual Machine

This chapter provides information on downloading, installing, and running the ME Virtual Machine (ME VM) software in virtual OS environments. This software is identical to the software used on dedicated (non-virtual) platforms but is packaged specifically as the ME VM for use in virtual OS environments.

Note:

If you are installing a patch set for the Media Engine, refer to the Release Notes and the ReadMe files that accompany the patch set for information on installing it.

The ME virtual machine is designed to be used as an evaluation platform so that potential customers can test the ME software in an environment that does not require them to install the software on a dedicated piece of hardware. In some cases, the virtual machine can also be used in production environments provided that the customer understands the limitations associated with using the ME VM software in a virtual OS environment.

Server-Based Requirements

Before downloading the virtual machine to an x86-based server, ensure that the VM host meets the following hardware and software requirements:

  • x86-based Windows or Linux server with Intel 32- or 64-bit dual-core processors

  • 2 GB minimum (4 GB recommended) of physical memory for each VM instance

  • Minimum of 40 GB hard disk space per VM instance

  • One or two Ethernet interfaces

  • One of the following hypervisors: OVM 3.3.1, VMware ESXi 5.5, or Xen 3.4.3

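Although the requirements above do not call it out, you can quickly confirm on a Linux host that the CPU exposes the hardware virtualization extensions these hypervisors rely on; a non-zero count from the following command means VT-x (vmx) or AMD-V (svm) is present:

# grep -cE 'vmx|svm' /proc/cpuinfo
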
Linux Installations

If you are installing the ME Virtual Machine on a Linux workstation running VMware, Oracle recommends the following technical resources:

Installing the VM

This section describes installing the WebRTC Session Controller Media Engine in a virtual machine environment.

Installing the Media Engine on an Oracle Virtual Machine

The ME is certified to run on Oracle VM (OVM) 3.3.1.

Prerequisites

You must meet the following prerequisites before installing the ME on an OVM.

  • A Network File System (NFS) share has been mounted for VM storage, with an additional storage file server for the repository

  • A server pool has been created

  • Server(s) have been discovered and added to this pool

  • The ISO file has been imported

  • Networks and Virtual MAC range have been created

  • VM Console access (VNC) has been made available

You create the ME VM via the OVM Manager GUI. The OVM Manager binds to the WebLogic server on the Oracle Linux host's 7002 SSL port.

Access the OVM Manager using the following link:

https://x.x.x.x:7002/ovm/console

Where x.x.x.x is the OVM Manager's IP address.

  1. Log into the OVM Manager using the user name and password configured when you set up the OVM.

  2. Create external routable interfaces by selecting the Networking tab, selecting the Networks button, and clicking the plus (+) icon.

  3. Create a new bridge with bonds/ports only and select Virtual Machine in the Network Uses field.

  4. Bind the new bridge to a free port on the VM host.

  5. Optional. Create a heartbeat interface (if you choose to configure clustered VMs) by selecting the Networking tab, selecting the Networks button, and clicking the plus (+) icon.

  6. Create a new bridge with local network only and select Virtual Machine in the Network Uses field.

  7. Create MAC addresses for each Virtual NIC by selecting the Networking tab, selecting the Virtual NICs button, and creating a Dynamic MAC Address Range.

    Note:

    You must create a unique MAC address for each Virtual NIC.

    You must mount an NFS to host the ME.

    When you create a VM, your storage repository must already contain the ISO file so that the new VM can immediately boot from the Virtual DVD.

To boot the VM from the Virtual DVD:

  1. Select the Repositories tab and select the repository you created from the File Server.

  2. Select ISOs.

  3. Import the ME .iso file via HTTP.

    You are now ready to create a VM.

To create a VM:

  1. Select the Servers and VMs tab and choose the server on which you are hosting the VM.

  2. Select Create Virtual Machine and click Next.

  3. Specify a Name and set the Memory and Processors for this VM and click Next.

    Note:

    The default Memory is 1024 and the default number of Processors is 1.
  4. Select your networks and click Next.

    Note:

    The order in which you select the networks affects how the ME Ethernet interfaces align.

    These MAC addresses (whether assigned dynamically or statically) now appear as assigned MAC addresses under the Virtual NICs tab in OVM Manager.

To create the VM virtual disk:

  1. Select the Virtual Disk's Disk Type, select the Create a Virtual Disk icon, and click Next.

  2. Select the previously created Repository and enter a Virtual Disk Name and a Size (Oracle recommends 40 GB) and click OK.

To point the ME ISO code to commission the VM:

  1. Select CD/DVD from the Slot 2 Disk Type drop-down menu, select Select an ISO, and click Next.

  2. Select the previously imported ISO and click OK.

Once the ME ISO code is pointed to commission the VM, set the Boot Options. The first time you boot, the VM uses the CDRom because nothing resides on the Disk yet. All subsequent boots use the Disk and ignore the CDRom.

Note:

If you choose CDRom as the first boot option, the initial boot, as well as all subsequent boots, continue to utilize the CDRom.

To set the Boot Options:

  1. Select Disk.

  2. Select CDRom and click Finish.

    To see the newly created VM, select the Servers and VMs tab and select Virtual Machines from the Perspective drop-down menu. At this point in the installation process, the Status of the VM is Stopped.

To start the VM:

  1. Select Start to start the VM.

  2. Select Launch Console.

    The OVM Console now displays the installation process.

  3. Type y and press <Enter> when prompted to complete the installation process.

    The VM reboots and once the installation is complete you see the ME login prompt. The ME is now ready to be set up and configured.

Configuring OVM Passthrough

On the OVM, there are two ways to directly connect a VM to a physical port: Single Root I/O Virtualization (SR-IOV) and Peripheral Component Interconnect (PCI) Passthrough. You configure hardware passthrough at the OVM Server's CLI.

Note:

Prior to configuring hardware passthrough, you must have a fully built VM; however, any NICs designated for hardware passthrough must not have an associated Network.

SR-IOV is a specification that presents a single physical device as multiple separate Virtual Functions (VFs).

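Although not part of the procedure below, you can list the devices on the OVM server that advertise the SR-IOV PCI capability before you begin; each matching line corresponds to one SR-IOV-capable device:

# lspci -vvv | grep -i 'single root'
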
Note:

In development, SR-IOV was found to be available on 10 Gb ixgbe devices only.

To configure SR-IOV:

  1. Access and log into the OVM Server's CLI.

  2. Install the necessary packages on the OVM server. Example 14-1 shows an example installation.

    Example 14-1 Installing Packages On OVM

    libibumad-1.3.8-2.mlnx1.5.5r2.el5.x86_64.rpm
    libibmad-1.3.9-7.mlnx1.5.5r2.el5.x86_64.rpm
    opensm-libs-3.3.15-6.mlnx1.5.5r2.el5.x86_64.rpm
    kernel-ib-1.5.5.092-2.6.39_300.29.1.el5uek.x86_64.rpm
    infiniband-diags-1.5.13.MLNX_20120708-4.mlnx1.5.5r2.el5.x86_64.rpm
    ovsvf-config-1.0-6.noarch.rpm
    
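    The example above lists only the package files. One way to install them, assuming the RPMs have been copied to a single directory on the OVM server (the directory name here is illustrative), is:

    # cd /root/rpms     # directory holding the packages listed above (assumption)
    # rpm -Uvh *.rpm
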
  3. Create a python script called vnfs.py to view PCI addresses and map them to their interfaces. Example 14-2 shows an example python script.

    Example 14-2 Python Script

    #!/usr/bin/python
    # Copyright (C) 2012 Steve Jordahl
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU Lesser General Public License as published
    # by the Free Software Foundation; version 2.1 only.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU Lesser General Public License for more details.
    #
    # vfns: list SR-IOV virtual functions
     
    import os
     
    info = {}
     
    def catFile(filename):
            readfile = open(filename)
            return readfile.read().strip()
    for dev in os.listdir('/sys/class/net'):
            if dev.startswith('eth'):
                    info[dev] = {}
                    info[dev]['address'] = catFile('/sys/class/net/' + dev + '/address')
     
    for dev in info.keys():
            devLink = os.readlink('/sys/class/net/' + dev + '/device')
            info[dev]['pci address'] = devLink[-7:]
            os.chdir('/sys/class/net/' + dev)
            for devInfo in os.listdir(devLink):
                    if devInfo.startswith('virtfn'):
                            info[dev][devInfo] = os.readlink(os.path.join(devLink, devInfo))[-7:]
     
    for dev in sorted(info.keys()):
            print dev
            for detail in sorted(info[dev].keys()):
                    print "     " + detail + ":  " + info[dev][detail]
    
  4. Create /etc/pciback/pciback.sh. Example 14-3 shows an example file.

    Example 14-3 Example File

    #!/bin/sh
    if [ $# -eq 0 ] ; then
        echo "Require a PCI device as parameter"
        exit 1
    fi
    for pcidev in "$@" ; do
        if [ -h /sys/bus/pci/devices/"$pcidev"/driver ] ; then
            echo "Unbinding $pcidev from" $(basename $(readlink /sys/bus/pci/devices/"$pcidev"/driver))
            echo -n "$pcidev" > /sys/bus/pci/devices/"$pcidev"/driver/unbind
        fi
        echo "Binding $pcidev to pciback"
        echo -n "$pcidev" > /sys/bus/pci/drivers/pciback/new_slot
        echo -n "$pcidev" > /sys/bus/pci/drivers/pciback/bind
    done
    
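    Both helper files must be executable before the later steps invoke them; a minimal sketch, assuming vnfs.py was saved in root's home directory (the later steps run it as ./vnfs.py):

    # chmod +x ~/vnfs.py /etc/pciback/pciback.sh
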
  5. Use an Input/Output Memory Management Unit (IOMMU) to allow both the VM and physical devices access to memory. The IOMMU allows the OVM to limit which memory a device can access and gives the device the same virtualized memory layout that the guest sees. Example 14-4 shows an example IOMMU configuration.

    Example 14-4 IOMMU

    Edit /boot/grub/grub.conf to enable the IOMMU and comment out the existing kernel entry (see the following example).
     
    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    # NOTICE:  You have a /boot partition.  This means that
    #          all kernel and initrd paths are relative to /boot/, eg.
    #          root (hd0,0)
    #          kernel /vmlinuz-version ro root=/dev/sdb2
    #          initrd /initrd-[generic-]version.img
    #boot=/dev/sdb
    default=0
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title Oracle VM Server-ovs (xen-4.3.0 3.8.13-26.4.2.el6uek.x86_64)
    root (hd0,0)
            #kernel /xen.gz console=com1,vga com1=57600,8n1 dom0_mem=max:1776M allowsuperpage dom0_vcpus_pin dom0_max_vcpus=20
            kernel /xen.gz console=com1,vga com1=57600,8n1 dom0_mem=max:1776M allowsuperpage iommu=passthrough,no-qinval,no-intremap
            module /vmlinuz-3.8.13-26.4.2.el6uek.x86_64 ro root=UUID=e2b44279-55a5-48b9-b910-82446b7b8c65 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
            module /initramfs-3.8.13-26.4.2.el6uek.x86_64.img
    
  6. Add SR-IOV support to ovs.conf. Example 14-5 shows SR-IOV ovs.conf support.

    Note:

    The following example configures support for 10 VFs on each of the server's 4 ixgbe interfaces (corresponding to ethernets 9-12).

    Example 14-5 SR-IOV Ovs.conf Support

    [root@meads ~]# vi /etc/modprobe.d/ovs.conf
    options bnx2x disable_tpa=1
    options ipv6 disable=1
    # SRIOV support
    options ixgbe max_vfs="10,10,10,10,0,0,0,0,0,0,0,0,0"
    install ixgbe /sbin/modprobe pciback ; /sbin/modprobe --first-time --ignore-install ixgbe
    
  7. Blacklist the Intel VF driver (ixgbevf) in dom0 so that the dom0 kernel does not try to use the VFs. Example 14-6 shows the Intel VF driver (ixgbevf) blacklisted.

    Example 14-6 Intel VF Driver Blacklisted

    [root@Meads ~]# vi /etc/modprobe.d/blacklist.conf
    #
    # Listing a module here prevents the hotplug scripts from loading it.
    # Usually that'd be so that some other driver will bind it instead,
    # no matter which driver happens to get probed first.  Sometimes user
    # mode tools can also control driver binding.
    #
    # Syntax:  driver name alone (without any spaces) on a line. Other
    # lines are ignored.
    #
     
    # watchdog drivers
    blacklist i8xx_tco
     
    # framebuffer drivers
    blacklist aty128fb
    blacklist atyfb
    blacklist radeonfb
    blacklist i810fb
    blacklist cirrusfb
    blacklist intelfb
    blacklist kyrofb
    blacklist i2c-matroxfb
    blacklist hgafb
    blacklist nvidiafb
    blacklist rivafb
    blacklist savagefb
    blacklist sstfb
    blacklist neofb
    blacklist tridentfb
    blacklist tdfxfb
    blacklist virgefb
    blacklist vga16fb
    # ISDN - see bugs 154799, 159068
    blacklist hisax
    blacklist hisax_fcpcipnp
     
    # intel ixgbe sr-iov vf (virtual function) driver
    blacklist ixgbevf
    
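    Although not part of the original procedure, after the reboot in the next step you can confirm that dom0 left the VF driver unloaded; no output from the following command means the blacklist took effect:

    # lsmod | grep ixgbevf
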
  8. Reboot the OVM server.

  9. Run the vnfs script to view addresses and VF statistics. Example 14-7 shows the vnfs script output.

    Example 14-7 Vnfs Script

    [root@Meads ~]# ./vnfs.py
    eth0
         address:  a0:36:9f:2c:39:74
         pci address:  30:00.0
    eth1
         address:  a0:36:9f:2c:39:75
         pci address:  30:00.1
    eth10
         address:  00:21:28:a1:e2:41
         pci address:  88:00.1
         virtfn0:  88:10.1
         virtfn1:  88:10.3
         virtfn2:  88:10.5
         virtfn3:  88:10.7
         virtfn4:  88:11.1
         virtfn5:  88:11.3
         virtfn6:  88:11.5
         virtfn7:  88:11.7
         virtfn8:  88:12.1
         virtfn9:  88:12.3
    eth11
         address:  00:21:28:a1:e2:42
         pci address:  98:00.0
         virtfn0:  98:10.0
         virtfn1:  98:10.2
         virtfn2:  98:10.4
         virtfn3:  98:10.6
         virtfn4:  98:11.0
         virtfn5:  98:11.2
         virtfn6:  98:11.4
         virtfn7:  98:11.6
         virtfn8:  98:12.0
         virtfn9:  98:12.2
    eth12
         address:  00:21:28:a1:e2:43
         pci address:  98:00.1
         virtfn0:  98:10.1
         virtfn1:  98:10.3
         virtfn2:  98:10.5
         virtfn3:  98:10.7
         virtfn4:  98:11.1
         virtfn5:  98:11.3
         virtfn6:  98:11.5
         virtfn7:  98:11.7
         virtfn8:  98:12.1
         virtfn9:  98:12.3
    eth2
         address:  a0:36:9f:2c:39:76
         pci address:  30:00.2
    eth3
         address:  a0:36:9f:2c:39:77
         pci address:  30:00.3
    eth4
         address:  a0:36:9f:2d:0b:a8
         pci address:  a0:00.0
    eth5
         address:  a0:36:9f:2d:0b:a9
         pci address:  a0:00.1
    eth6
         address:  a0:36:9f:2d:0b:aa
         pci address:  a0:00.2
    eth7
         address:  a0:36:9f:2d:0b:ab
         pci address:  a0:00.3
    eth8
         address:  00:21:28:a1:e2:46
         pci address:  5f:00.0
    eth9
         address:  00:21:28:a1:e2:40
         pci address:  88:00.0
         virtfn0:  88:10.0
         virtfn1:  88:10.2
         virtfn2:  88:10.4
         virtfn3:  88:10.6
         virtfn4:  88:11.0
         virtfn5:  88:11.2
         virtfn6:  88:11.4
         virtfn7:  88:11.6
         virtfn8:  88:12.0
         virtfn9:  88:12.2
    
  10. Load the xen-pciback module. Example 14-8 shows loading the module.

    Example 14-8 Loading the Module

    [root@meads ~]# modprobe xen-pciback
    
  11. Assign devices to pciback using the following format:

    Domain#:Bus#:Device#.Function#

    For example, 0000:88:10.0 is domain 0000, bus 88, device 10, function 0.
    

    Example 14-9 shows an example of adding devices to pciback.

    Note:

    In the following example, the 4 interfaces are VFs on ethernets 9-12.

    Example 14-9 Adding Devices to Pciback

    [root@meads ~]# /etc/pciback/pciback.sh 0000:88:10.0
    Unbinding 0000:88:00.0 from ixgbe
    Binding 0000:88:00.0 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:88:10.1
    Unbinding 0000:88:00.1 from ixgbe
    Binding 0000:88:00.1 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:98:10.0
    Unbinding 0000:98:00.0 from ixgbe
    Binding 0000:98:00.0 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:98:10.1
    Unbinding 0000:98:00.1 from ixgbe
    Binding 0000:98:00.1 to pciback
    
  12. View the list of VMs. Example 14-10 shows a list of configured VMs.

    Example 14-10 Configured VMs

    [root@meads ~]# xm list
    Name                                        ID   Mem VCPUs      State   Time(s)
    0004fb0000060000f9b493a2c24f9549                7  8067    16     -b----  80716.4
    Domain-0                                      0  1775    20     r-----  30379.2
    
  13. View the list of assignable devices. Example 14-11 shows a list of assignable devices.

    Example 14-11 Assignable Devices

    [root@meads ~]# xm pci-list-assignable-devices
    0000:88:10.0
    0000:88:10.1
    0000:98:10.0
    0000:98:10.1
    
  14. Assign these devices to the VM. Example 14-12 shows assigning devices to the VM.

    Note:

    In the following example the VM ID is 7.

    Example 14-12 Assigning Devices to the VM

    [root@meads ~]# xm pci-attach 7 0000:88:10.0
    [root@meads ~]# xm pci-attach 7 0000:88:10.1
    [root@meads ~]# xm pci-attach 7 0000:98:10.0
    [root@meads ~]# xm pci-attach 7 0000:98:10.1
    
  15. View the list of devices for this VM. Example 14-13 shows devices for this VM.

    Example 14-13 Devices For This VM

    [root@Meads ~]# xm pci-list 7
    Vdev Device
    04.0 0000:88:10.0
    05.0 0000:88:10.1
    06.0 0000:98:10.0
    07.0 0000:98:10.1
    

    Now VFs on ethernets 9-12 are assigned to VM ID 7, but there are still VFs available to the host. These interfaces appear when you run the ifconfig command.

  16. Access and log into the ME CLI and execute the echo, run, and restart warm commands. Example 14-14 shows the echo, run, and restart warm commands.

    Example 14-14 Echo, Run, and Restart Warm Commands

    NNOS-E>echo "1" > /sys/bus/pci/rescan
    NNOS-E>run ./install_build_mactab.sh
    NNOS-E>restart warm
    

After the restart has completed, these interfaces are available on the ME.

PCI Passthrough is a specification that allows you to directly connect one VM to one physical device, making the device unavailable to other VMs.

To configure PCI Passthrough:

  1. Access and log into the OVM Server's CLI.

  2. Install the necessary packages on the OVM Server. Example 14-15 shows installing packages on the OVM.

    Example 14-15 Installing Packages On OVM

    libibumad-1.3.8-2.mlnx1.5.5r2.el5.x86_64.rpm
    libibmad-1.3.9-7.mlnx1.5.5r2.el5.x86_64.rpm
    opensm-libs-3.3.15-6.mlnx1.5.5r2.el5.x86_64.rpm
    kernel-ib-1.5.5.092-2.6.39_300.29.1.el5uek.x86_64.rpm
    infiniband-diags-1.5.13.MLNX_20120708-4.mlnx1.5.5r2.el5.x86_64.rpm
    ovsvf-config-1.0-6.noarch.rpm
    
  3. Create a python script called vnfs.py to view PCI addresses and map them to their interfaces. Example 14-16 shows the python script.

    Example 14-16 Python Script

    #!/usr/bin/python
    # Copyright (C) 2012 Steve Jordahl
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU Lesser General Public License as published
    # by the Free Software Foundation; version 2.1 only.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU Lesser General Public License for more details.
    #
    # vfns: list SR-IOV virtual functions
     
    import os
     
    info = {}
     
    def catFile(filename):
            readfile = open(filename)
            return readfile.read().strip()
     
    for dev in os.listdir('/sys/class/net'):
            if dev.startswith('eth'):
                    info[dev] = {}
                    info[dev]['address'] = catFile('/sys/class/net/' + dev + '/address')
     
    for dev in info.keys():
            devLink = os.readlink('/sys/class/net/' + dev + '/device')
            info[dev]['pci address'] = devLink[-7:]
            os.chdir('/sys/class/net/' + dev)
            for devInfo in os.listdir(devLink):
                    if devInfo.startswith('virtfn'):
                            info[dev][devInfo] = os.readlink(os.path.join(devLink, devInfo))[-7:]
     
    for dev in sorted(info.keys()):
            print dev
            for detail in sorted(info[dev].keys()):
                    print "     " + detail + ":  " + info[dev][detail]
    
  4. Create /etc/pciback/pciback.sh. Example 14-17 shows an example file.

    Example 14-17 Example File

    #!/bin/sh
    if [ $# -eq 0 ] ; then
        echo "Require a PCI device as parameter"
        exit 1
    fi
    for pcidev in "$@" ; do
        if [ -h /sys/bus/pci/devices/"$pcidev"/driver ] ; then
            echo "Unbinding $pcidev from" $(basename $(readlink /sys/bus/pci/devices/"$pcidev"/driver))
            echo -n "$pcidev" > /sys/bus/pci/devices/"$pcidev"/driver/unbind
        fi
        echo "Binding $pcidev to pciback"
        echo -n "$pcidev" > /sys/bus/pci/drivers/pciback/new_slot
        echo -n "$pcidev" > /sys/bus/pci/drivers/pciback/bind
    done
    
  5. Use an IOMMU to allow both the VM and physical devices access to memory. The IOMMU allows the OVM to limit which memory a device can access and gives the device the same virtualized memory layout that the guest sees. Example 14-18 shows an example IOMMU configuration.

    Example 14-18 IOMMU

    Edit /boot/grub/grub.conf to enable the IOMMU and comment out the existing kernel entry (see the following example).
     
    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    # NOTICE:  You have a /boot partition.  This means that
    #          all kernel and initrd paths are relative to /boot/, eg.
    #          root (hd0,0)
    #          kernel /vmlinuz-version ro root=/dev/sdb2
    #          initrd /initrd-[generic-]version.img
    #boot=/dev/sdb
    default=0
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title Oracle VM Server-ovs (xen-4.3.0 3.8.13-26.4.2.el6uek.x86_64)
    root (hd0,0)
            #kernel /xen.gz console=com1,vga com1=57600,8n1 dom0_mem=max:1776M allowsuperpage dom0_vcpus_pin dom0_max_vcpus=20
            kernel /xen.gz console=com1,vga com1=57600,8n1 dom0_mem=max:1776M allowsuperpage iommu=passthrough,no-qinval,no-intremap
            module /vmlinuz-3.8.13-26.4.2.el6uek.x86_64 ro root=UUID=e2b44279-55a5-48b9-b910-82446b7b8c65 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
            module /initramfs-3.8.13-26.4.2.el6uek.x86_64.img
    
  6. Reboot the OVM Server.

  7. Run the vnfs script to view addresses and VFs statistics. Example 14-19 shows the vnfs script.

    Example 14-19 Vnfs Script

    [root@meads ~]# ./vnfs.py
    eth0
         address:  a0:36:9f:2c:39:74
         pci address:  30:00.0
    eth1
         address:  a0:36:9f:2c:39:75
         pci address:  30:00.1
    eth10
         address:  00:21:28:a1:e2:41
         pci address:  88:00.1
    eth11
         address:  00:21:28:a1:e2:42
         pci address:  98:00.0
    eth12
         address:  00:21:28:a1:e2:43
         pci address:  98:00.1
    eth2
         address:  a0:36:9f:2c:39:76
         pci address:  30:00.2
    eth3
         address:  a0:36:9f:2c:39:77
         pci address:  30:00.3
    eth4
         address:  a0:36:9f:2d:0b:a8
         pci address:  a0:00.0
    eth5
         address:  a0:36:9f:2d:0b:a9
         pci address:  a0:00.1
    eth6
         address:  a0:36:9f:2d:0b:aa
         pci address:  a0:00.2
    eth7
         address:  a0:36:9f:2d:0b:ab
         pci address:  a0:00.3
    eth8
         address:  00:21:28:a1:e2:46
         pci address:  5f:00.0
    eth9
         address:  00:21:28:a1:e2:40
         pci address:  88:00.0
    
  8. Load the xen-pciback module. Example 14-20 shows loading the module.

    Example 14-20 Loading the Module

    [root@meads ~]# modprobe xen-pciback
    
  9. Assign devices to pciback using the following format:

    Domain#:Bus#:Device#.Function#

    For example, 0000:88:00.0 is domain 0000, bus 88, device 00, function 0.
    

    Example 14-21 shows assigning devices to pciback.

    Note:

    In the following example, the 4 devices are the physical ports for ethernets 9-12.

    Example 14-21 Assigning Devices to Pciback

    [root@meads ~]# /etc/pciback/pciback.sh 0000:88:00.0
    Unbinding 0000:88:00.0 from ixgbe
    Binding 0000:88:00.0 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:88:00.1
    Unbinding 0000:88:00.1 from ixgbe
    Binding 0000:88:00.1 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:98:00.0
    Unbinding 0000:98:00.0 from ixgbe
    Binding 0000:98:00.0 to pciback
     
    [root@meads ~]# /etc/pciback/pciback.sh 0000:98:00.1
    Unbinding 0000:98:00.1 from ixgbe
    Binding 0000:98:00.1 to pciback
    
  10. View the list of VMs. Example 14-22 shows a list of VMs.

    Example 14-22 List of VMs

    [root@meads ~]# xm list
    Name                                        ID   Mem VCPUs      State   Time(s)
    0004fb0000060000f9b493a2c24f9549                7  8067    16     -b----  80716.4
    Domain-0                                      0  1775    20     r-----  30379.2
    
  11. View the list of assignable devices. Example 14-23 shows the list of assignable devices.

    Example 14-23 Assignable Devices

    [root@meads ~]# xm pci-list-assignable-devices
    0000:88:00.0
    0000:88:00.1
    0000:98:00.0
    0000:98:00.1
    
  12. Assign these devices to the VM. Example 14-24 shows assigning devices to the VM.

    Note:

    In the following example the VM ID is 7.

    Example 14-24 Assigning Devices To The VM

    [root@meads ~]# xm pci-attach 7 0000:88:00.0
    [root@meads ~]# xm pci-attach 7 0000:88:00.1
    [root@meads ~]# xm pci-attach 7 0000:98:00.0
    [root@meads ~]# xm pci-attach 7 0000:98:00.1
    
  13. View the list of devices for this VM. Example 14-25 shows devices for this item.

    Example 14-25 Devices For This VM

    [root@Meads ~]# xm pci-list 7
    Vdev Device
    04.0 0000:88:00.0
    05.0 0000:88:00.1
    06.0 0000:98:00.0
    07.0 0000:98:00.1
    

    Now ethernets 9-12 are assigned to VM ID 7 and are no longer available to the host or to other VMs. They also do not show up when you run the ifconfig command.

  14. Access and log into the ME CLI and execute the echo, run, and restart warm commands. Example 14-26 shows the echo, run, and restart warm commands.

    Example 14-26 Echo, Run, and Restart Warm Commands

    NNOS-E>echo "1" > /sys/bus/pci/rescan
    NNOS-E>run ./install_build_mactab.sh
    NNOS-E>restart warm
    

After the restart has completed, these interfaces are available on the ME.

Installing the Media Engine on VMware ESXi

The ME is certified to run on VMware ESXi 5.5.

Oracle recommends the following configuration:

  • vCPUs: 16 (16 sockets, 1 core per socket)

  • RAM: 8 GB

  • Disk: 50 GB

To install the ME on a VMware ESXi:

  1. Copy the ME's ISO file to the Datastore.

  2. Click Inventory.

  3. Create a new VM by clicking the ESXi server on the left.

  4. Select File > New > Virtual Machine from the menu.

    • Configuration: Select Typical to accept the default number of CPUs and amount of memory (1 CPU and 1 GB). Select Custom to change the default values. Click Next.

    • Name and Location: Enter a name for the VM. Click Next.

    • Storage: Select the Datastore. Click Next.

    • Virtual Machine Version: For custom configuration only. Select Virtual Machine Version: 8. Click Next.

    • Guest Operating System: Select Linux for the OS and Oracle Linux 4/5/6 (64-bit) for the Version. Click Next.

    • CPUs: For custom configuration only. Select the number of sockets and the number of cores per socket. Click Next.

    • Memory: For custom configuration only. Select the memory size. Note the minimum, maximum, and recommended sizes for the guest OS you are using. Click Next.

    • Network: Select 3. Data Network. Click Next.

    • SCSI Controller: For custom configuration only. Select LSI Logic Parallel (default). Click Next.

    • Select a Disk: For custom configuration only. Select Create a new virtual disk. Click Next.

    • Create a Disk: Specify the disk capacity in GB and choose Thick Provision Lazy Zeroed and Store with the virtual machine. Click Next.

    • Advanced Options: For custom configuration only. Check the checkbox for SCSI (0:0). Ensure the Independent checkbox remains unchecked. Click Next.

    • Ready to Complete: Click Finish.

  5. Right-click on the VM and select Edit Settings....

    • Select CD/DVD Drive 1.

    • Device Status: Select Connect at power on.

    • Device Type: Select Datastore ISO File and choose <install_release_version>_<build_number>.iso on Datastore1.

    • Click OK.

  6. Power on the VM by clicking the green play button.

  7. Right-click the VM and select Open Console.

The ME is now ready to be set up and configured.

Configuring ESXi Passthrough

On the ESXi, you can directly connect a VM to a physical port via the SR-IOV specification. SR-IOV treats a single physical device as multiple separate Virtual Functions (VFs). To deploy SR-IOV, you must enable VFs at the host level.

To configure SR-IOV on the ESXi, you must have a NIC with an Intel 82599 chipset or newer and a BIOS that both support SR-IOV.

The configuration for SR-IOV on the ESXi consists of two parts: first you configure the ME's VM server, and then you assign individual VFs to specific VMs.

To configure the ME's VM server for SR-IOV:

  1. Enable SR-IOV in the BIOS.

  2. Ensure you have the latest drivers for your Intel NIC (ixgbe) and ESXi version. See https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere_with_operations_management/5_5#drivers_tools for more information on ESXi 5.5 drivers.

  3. Install the appropriate drivers and reboot the host.

  4. Log into the ESXi CLI shell and enter the following command to view a list of all NICs on the server and identify which NICs to use for SR-IOV.

    # lspci | grep -i 'ethernet\|network'
    
  5. Specify the number of VFs you are assigning to each port by executing the following command:

    # esxcfg-module ixgbe -s max_vfs=<P1=n><P2=n><P3=n><P4=n>
    

    where each <Px=n> represents a configured port and its assigned number of VFs, which must be less than or equal to 63. Assigning a value of 0 makes that port unavailable for SR-IOV.

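    For example, following the syntax shown above, enabling 16 VFs on each of the first two ports and disabling SR-IOV on the remaining two might look like this (the values are illustrative only):

    # esxcfg-module ixgbe -s max_vfs=16,16,0,0
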
    Note:

    The SR-IOV specification allows you to partition the Physical Function (PF) into a particular number of VFs that you can then attach to VMs. The maximum number of VFs you can create on a PF depends on the hardware you are using. Typically, for 10GbE chipsets equal to or newer than the 82599, that number is 63.
  6. Verify that you entered the correct values by entering the following command:

    # esxcfg-module -g ixgbe
    
  7. Reboot the server.

  8. View the list of configured VFs by either reentering the following command:

    # lspci | grep -i 'ethernet\|network'
    

    or accessing, via the vSphere GUI, Host > Configuration > Advanced Settings.

To configure a specific ME VM for SR-IOV:

Note:

To attach a VF to a VM, the VM version must be greater than or equal to 10.
  1. Power off the VM.

  2. Select Settings > Hardware > Add.

  3. Select PCI device and select the VF you are adding to the VM.

  4. Repeat this procedure for each VF you are adding to the VM.

    Note:

    If you are prompted to "reserve" resources, you may have to click that button for the VM to power on.

Once a VF is attached to a particular VM, you cannot attach it to any other VM.

Installing the Media Engine as a XEN Virtual Machine

The ME is certified to run on XEN 3.4.3.

Oracle recommends the following configuration:

  • vCPUs: 16 (16 sockets, 1 core per socket)

  • RAM: 8 GB

  • Disk: 50 GB

Note:

Oracle recommends using LVM partitions as disks.
  1. Create a partition and download the XEN image from buildview into that partition. Example 14-27 creates a 50 GB partition.

    Example 14-27 Creating a Partition

    # lvcreate --size=50G --name=asc ol
    

    Logical volume "asc" created.

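    As an optional check (not part of the original steps), lvs from the same LVM toolset confirms that the new volume exists in the ol volume group:

    # lvs ol
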
  2. Download OL7.2 ISO image from the Oracle Software Delivery Cloud and copy the file to the /tmp directory on the server.

  3. Create a config file for the VM at /etc/xen/asc.cfg. Example 14-28 shows an example config file.

    Note:

    The following is an example. Ensure you customize your config file, including changing the MAC addresses, to fit your environment.

    Example 14-28 Sample Config File

    #  -*- mode: python; -*-
    #=====================================================================
    # Python configuration setup for 'xm create'.
    # This script sets the parameters used when a domain is created using
    # 'xm create'. Use a separate script for each domain you create, or
    # you can set the parameters for the domain on the xm command line.
    #=====================================================================
     
    #---------------------------------------------------------------------
    # PV GRUB image file.
    kernel = "/usr/lib/xen/boot/hvmloader"
    builder = 'hvm'
    device_model = '/usr/lib64/xen/bin/qemu-dm'
     
    # Sets path to menu.lst
    extra = "(hd0,1)/grub/menu.lst"
    # can be a TFTP-served path (DHCP will automatically be run)
    # extra = "(nd)/netboot/menu.lst"
    # can be configured automatically by GRUB's DHCP option 150 (see grub manual)
    # extra = ""
     
    # Initial memory allocation (in megabytes) for the new domain.
    #
    # WARNING: Creating a domain with insufficient memory may cause out of
    #          memory errors. The domain needs enough memory to boot kernel
    #          and modules. Allocating less than 32MBs is not recommended.
    memory = 8192
     
    # A name for your domain. All domains must have different names.
    name = "asc"
     
    # 128-bit UUID for the domain.  The default behavior is to generate a new UUID
    # on each call to 'xm create'.
    #uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"
     
    # List of which CPUS this domain is allowed to use, default Xen picks
    #cpus = ""         # leave to Xen to pick
    #cpus = "0"        # all vcpus run on CPU0
    #cpus = "0-3,5,^1" # all vcpus run on cpus 0,2,3,5
    #cpus = ["2", "3"] # VCPU0 runs on CPU2, VCPU1 runs on CPU3
     
    # Number of Virtual CPUS to use, default is 1
    vcpus = 4
    cpus = "4-31" # all vcpus run on cpus >3
     
    #---------------------------------------------------------------------
    # Define network interfaces.
     
    # By default, no network interfaces are configured.  You may have one created
    # with sensible defaults using an empty vif clause:
    #
    # vif = [ '' ]
    #
    # or optionally override backend, bridge, ip, mac, script, type, or vifname:
    #
    # vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
    #
    # or more than one interface may be configured:
    #
    # vif = [ '', 'bridge=xenbr1' ]
     
    vif = [ 'mac=00:16:3E:62:F7:05, bridge=virbr0', 'mac=00:16:3E:72:C9:95, bridge=messaging', 'mac=00:16:3E:06:57:B6, bridge=data' ]
     
    #---------------------------------------------------------------------
    # Define the disk devices you want the domain to have access to, and
    # what you want them accessible as.
    # Each disk entry is of the form phy:UNAME,DEV,MODE
    # where UNAME is the device, DEV is the device name the domain will see,
    # and MODE is r for read-only, w for read-write.
     
    disk = [ 'phy:/dev/mapper/ol-asc,hda,w' ]
     
    #---------------------------------------------------------------------
    # Define frame buffer device.
    #
    # By default, no frame buffer device is configured.
    #
    # To create one using the SDL backend and sensible defaults:
    #
    # vfb = [ 'sdl=1' ]
    #
    # This uses environment variables XAUTHORITY and DISPLAY.  You
    # can override that:
    #
    # vfb = [ 'sdl=1,xauthority=/home/bozo/.Xauthority,display=:1' ]
    #
    # To create one using the VNC backend and sensible defaults:
    #
    # vfb = [ 'vnc=1' ]
    #
    # The backend listens on 127.0.0.1 port 5900+N by default, where N is
    # the domain ID.  You can override both address and N:
    #
    # vfb = [ 'vnc=1,vnclisten=127.0.0.1,vncdisplay=1' ]
    #
    # Or you can bind the first unused port above 5900:
    #
    # vfb = [ 'vnc=1,vnclisten=0.0.0.0,vncunused=1' ]
    #
    # You can override the password:
    #
    # vfb = [ 'vnc=1,vncpasswd=MYPASSWD' ]
    #
    # Empty password disables authentication.  Defaults to the vncpasswd
    # configured in xend-config.sxp.
     
    #---------------------------------------------------------------------
    # Define to which TPM instance the user domain should communicate.
    # The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
    # where INSTANCE indicates the instance number of the TPM the VM
    # should be talking to and DOM provides the domain where the backend
    # is located.
    # Note that no two virtual machines should try to connect to the same
    # TPM instance. The handling of all TPM instances does require
    # some management effort in so far that VM configration files (and thus
    # a VM) should be associated with a TPM instance throughout the lifetime
    # of the VM / VM configuration file. The instance number must be
    # greater or equal to 1.
    #vtpm = [ 'instance=1,backend=0' ]
     
    #---------------------------------------------------------------------
    # Configure the behaviour when a domain exits.  There are three 'reasons'
    # for a domain to stop: poweroff, reboot, and crash.  For each of these you
    # may specify:
    #
    #   "destroy",        meaning that the domain is cleaned up as normal;
    #   "restart",        meaning that a new domain is started in place of the old
    #                     one;
    #   "preserve",       meaning that no clean-up is done until the domain is
    #                     manually destroyed (using xm destroy, for example); or
    #   "rename-restart", meaning that the old domain is not cleaned up, but is
    #                     renamed and a new domain started in its place.
    #
    # In the event a domain stops due to a crash, you have the additional options:
    #
    #   "coredump-destroy", meaning dump the crashed domain's core and then destroy;
    #   "coredump-restart', meaning dump the crashed domain's core and then restart.
    #
    # The default is
    #
    #   on_poweroff = 'destroy'
    #   on_reboot   = 'restart'
    #   on_crash    = 'restart'
    #
    # For backwards compatibility we also support the deprecated option restart
    #
    # restart = 'onreboot' means on_poweroff = 'destroy'
    #                            on_reboot   = 'restart'
    #                            on_crash    = 'destroy'
    #
    # restart = 'always'   means on_poweroff = 'restart'
    #                            on_reboot   = 'restart'
    #                            on_crash    = 'restart'
    #
    # restart = 'never'    means on_poweroff = 'destroy'
    #                            on_reboot   = 'destroy'
    #                            on_crash    = 'destroy'
     
    #on_poweroff = 'destroy'
    #on_reboot   = 'restart'
    #on_crash    = 'restart'
     
    #---------------------------------------------------------------------
    #   Configure PVSCSI devices:
    #
    #vscsi=[ 'PDEV, VDEV' ]
    #
    #   PDEV   gives physical SCSI device to be attached to specified guest
    #          domain by one of the following identifier format.
    #          - XX:XX:XX:XX (4-tuples with decimal notation which shows
    #                          "host:channel:target:lun")
    #          - /dev/sdxx or sdx
    #          - /dev/stxx or stx
    #          - /dev/sgxx or sgx
    #          - result of 'scsi_id -gu -s'.
    #            ex. # scsi_id -gu -s /block/sdb
    #                  36000b5d0006a0000006a0257004c0000
    #
    #   VDEV   gives virtual SCSI device by 4-tuples (XX:XX:XX:XX) as 
    #          which the specified guest domain recognize.
    #
     
    #vscsi = [ '/dev/sdx, 0:0:0:0' ]
     
    #=====================================================================
     
    # Guest VGA console configuration, either SDL or VNC
    #sdl = 1
    vnc = 1
    vncpasswd=""
    vncdisplay=10
    vnclisten="0.0.0.0"
    
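    The note above stresses that the MAC addresses must be unique in your environment. One way to generate a random guest MAC under the Xen OUI (00:16:3E, the prefix used in the sample vif line) from a bash shell:

    # printf '00:16:3E:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
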
  4. Start the VM.

    # xl create /etc/xen/asc.cfg
    
  5. Use VNC to connect to host:10 (display 10, as set by vncdisplay in the config file) to start the OL7 installation process.

Once OL7 is installed, you can begin the ME installation. See "Installing the Media Engine" for more information on installing ME software.

Installing the Media Engine on KVM

The ME is certified to run on KVM 1.5.3 on OL7.

Oracle recommends using the following configuration:

  • vCPUs: 8

  • RAM: 8 GB

  • Disk: 50 GB

Note:

Oracle recommends using LVM partitions as disks.
  1. Install the KVM packages.

    # yum install kvm libvirt
    # yum install python-virtinst virt-top virt-manager virt-v2v virt-viewer
    
  2. Use the virt-manager command to create your networks.

  3. Install the ME guest either by running the following command from the CLI or via virt-manager (right-click localhost (QEMU) and click New).

    virt-install -n asc -r 8192 --os-type=linux --disk /dev/mapper/ol-asc,device=disk,bus=virtio,size=50,sparse=false,format=raw -w network=management,model=virtio -w network=messaging,model=virtio -w network=data,model=virtio -c /mnt/install/<build_version>.iso --vcpus=8
    
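    Once virt-install completes, a quick way to confirm the guest was defined (virsh ships with the libvirt packages installed above):

    # virsh list --all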

Configuring the VM

Once the VM is installed and running, you must configure it to match the SIP application you are supporting. Because the VM does not have a pre-installed base configuration, Oracle provides the config setup script, which you can use to create a base configuration.

Using Config Setup

For Oracle users who are familiar with the ME, the config setup script creates enough configuration on the VM to make it reachable via ICMP (ping), SSH, and HTTPS for further configuration. The script presents a set of questions to help you with the initial system configuration. The information in the script includes the following:

  • Local hostname

  • IP interface names and addresses

  • SSH and Web access

  • Default route and any additional static routes per interface for remote management

  • User-defined CLI prompt

Every Oracle ME system has a minimum of two Ethernet interfaces. Any Ethernet interface on the system can be used for management traffic; however, Oracle recommends using eth1, because eth0 is reserved for fault-tolerant clustering with other ME systems. Management traffic is also supported on any interface that is carrying private or public network traffic, so it is possible, for example, to use eth1 to carry both SIP traffic and management traffic.

CLI Session

NNOS-E-VM> config setup 
set box\hostname: <name>
config box\interface: eth1
set box\interface eth1\ip a\ip-address: <ipAddress/mask>
config box\interface eth1\ip a\ssh (y or n)? n
config box\interface eth1\ip a\web (y or n)? y
config box\interface eth1\ip a\routing\route: <routeName>
set box\interface eth1\ip a\routing\route localGateway\gateway: <ipAddress>
set box\cli\prompt: <newPrompt>
Do you want to commit this setup script (y or n) y
Do you want to update the startup configuration (y or n)? y
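
After committing the setup script, you can verify basic reachability from another host on the management network; this check is not part of the script itself, and <ipAddress> is the address you assigned above:

ping <ipAddress>

If you answered y to Web access, pointing a browser at https://<ipAddress> should also reach the ME Management System.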

Sample VM Configuration

This section describes a base configuration designed to support a standard SBC application where the VM functions with SIP endpoints and a PBX or feature server. The high-level details of this configuration are provided below and additional details are embedded in the configuration file itself at the end of this section.

  • Two interfaces: one "outside" and one "inside."

  • Management ports for ICMP, SSH, and HTTPS open on both interfaces.

  • The IP address associated with a DNS resolver.

  • SIP UDP, TCP, and TLS ports open on both interfaces.

  • NAT traversal and media anchoring enabled.

  • A sample gateway configuration for an attached PBX or feature server.

  • A sample registration plan and dial plan for delegation of SIP traffic to the attached PBX or feature server.

  • A local registration plan to support registrations and calls locally through the VM (for cases where there is no attached PBX or feature server).

Note:

Oracle recognizes that the items in the base configuration will not be 100% applicable to all ME VM deployments. However, by including these items in this sample configuration, new VM users can observe the configuration structure and hierarchy. Any necessary changes to this base configuration can be made using the procedures described in the Oracle manual set. See "Using Oracle Documentation" for more information.

Below is a copy of the base configuration. Note that any changes to the configuration should be made using the ME Management System (see "Enabling the ME Management System").

Note:

Oracle does not recommend editing the configuration file below directly, and then importing it into the VM. While the VM does support this function, it is possible to introduce syntax errors into the configuration file using this method. Modifying the configuration with the CLI or Management System prevents this possibility.

The following section is unique to every VM; you do not need to edit it.

config cluster
 set name acmepacket-nnos-e-vm-demo
 config box 1
  set hostname acmepacket-nnos-e-vm-demo
  set name acmepacket-nnos-e-vm-demo
  set identifier 00:0c:29:c9:7a:e2 

The IP address is configured as part of the configuration script execution.

config interface eth0
 config ip outside
  set ip-address static 172.30.3.128/22
  config ssh
  return
  config web
  return
  config sip
   set nat-translation enabled
   set udp-port 5060 "" "" any 0
   set tcp-port 5060 "" "" any 0
   set tls-port 5061 "" "" any 0
   set certificate vsp\tls\certificate sample
  return
  config icmp
  return
  config media-ports
  return
  config routing
   config route default
    set gateway 172.30.0.1
   return
  return
 return
return

The following section of the configuration provides a DNS resolver entry and is configured as part of the configuration script execution. This is not required for operation, but it can be helpful if you want to use Fully Qualified Domain Names (FQDNs) in the configuration instead of IP addresses.

config dns
  config resolver
   set server 192.168.1.3 UDP 53 100 ALL
  return
 return
return

The following IP is disabled; you can enable it once you change the IP to match your local network conditions.

config interface eth1
 config ip inside
  set admin disabled
  set ip-address static 192.168.1.2/24
  config ssh
  return
  config web
  return
  config sip
   set udp-port 5060 "" "" any 0
   set tcp-port 5060 "" "" any 0
   set tls-port 5061 "" "" any 0
   set certificate vsp\tls\certificate sample
  return
  config icmp
  return
  config media-ports
  return

The following routing configuration is provided as an example; edit it as needed. Also change the NTP server below to match your preferred Network Time Protocol (NTP) server.

  config routing
   config route inside-ntwk
    set destination network 192.168.0.0/16
    set gateway 192.168.1.1
   return
  return
 return
return
config ntp-client
 set server pool.ntp.org
return
config cli
 set prompt nnos-e-vm>
return
return
return

The following section of the configuration contains all of the event log filters and targets.

config services
 config event-log
  config file eventlog
   set filter all error
  return
  config file access-log
   set filter access info
  return
  config file kernelsys
   set filter krnlsys debug
  return
  config file db
   set filter db debug
  return
  config file system
   set filter general info
   set filter system info
  return
  config file access
   set filter access info
  return
  config file dos
   set filter dosSip alert
  return
  config local-database
   set filter all error
  return
 return
return

The following section of the config provides some commonly used default system parameters. For more information on these properties, see the Oracle Communications WebRTC Session Controller Media Engine Objects and Properties Reference Guide.

config master-services
 config database
  set media enabled
 return
return

config vsp
 set admin enabled
 config default-session-config
  config media
   set anchor enabled
   config nat-traversal
    set symmetricRTP true
   return
   set rtp-stats enabled
  return
  config sip-directive
   set directive allow
  return
  config log-alert
  return
 return
 config tls
  config certificate sample
  return
 return

The following section of the configuration provides a sample policy rule to reject calls from a user with a URI that starts with 1000.

config policies
  config session-policies
   set default-policy vsp\policies\session-policies\policy default
   config policy default
    config rule sample-rule
     set description "sample rule to reject calls"
     config condition-list
      set from-uri-condition user match 1000
     return
     config session-config
      config sip-directive
       set directive refuse 400 "Please Pay Your Bill"
      return
     return
    return
   return
  return
 return

The following configuration provides a sample dial plan that takes a call with a Request-URI domain of delegate.com and forwards it to the sample SIP gateway.

 config dial-plan
  config route sample-delegate
   set description "delegate to defined server"
   set peer server "vsp\enterprise\servers\sip-gateway sample-gateway"
   set request-uri-match domain-exact delegate.com
  return
 return

The following configuration provides a sample registration plan that takes a registration attempt with a domain of xyz.com and registers the endpoint locally. This is useful for cases where you want to register an endpoint locally for call testing purposes.

config registration-plan
  config route sample-accept-local
   set description "accept registers locally for this domain"
   set action accept
   set peer server "vsp\enterprise\servers\sip-gateway sample-gateway"
   set to-uri-match domain-exact xyz.com
  return

The following configuration provides a sample registration plan that takes a registration attempt with a domain of delegate.com and proxies the registration to the attached PBX or feature server.

config route sample-delegate
   set description "delegate to the defined server"
   set peer server "vsp\enterprise\servers\sip-gateway sample-gateway"
   set to-uri-match domain-exact delegate.com
  return
 return

The following configuration provides a sample SIP gateway that could be used for an attached PBX or feature server. You must edit the IP address to reflect the actual server IP or Fully Qualified Domain Name (FQDN).

 config enterprise
  config servers
   config sip-gateway sample-gateway
    config server-pool
     config server sample-server
      set host 192.168.1.4
     return
    return
   return
  return
 return

config external-services
return
config preferences
 config cms-preferences
 return
return

The following configuration provides two different sample permission sets. These permission sets can be modified and/or used with user accounts that you create.

config access
 config permissions super-user
  set cli advanced
 return
 config permissions view-only
  set cli disabled
  set ftp disabled
  set config view
  set actions disabled
  set templates disabled
  set web-services disabled
  set debug disabled
 return
return

config features
return

Oracle recommends that the storage-device fail-threshold be set to 200 MB, for example:

config services
 config storage-device
  set fail-threshold 200 MB
 return
return

Enabling the ME Management System

Once you have configured an Ethernet interface, such as eth1, you can use your Web browser or native mobile application to point to the configured IP address of this interface and launch the ME Management System. The ME Management System provides a windows-and-menus user interface for configuring the ME.
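
For example, with the eth1 address used in the sample base configuration above (illustrative only), you would browse to:

https://192.168.1.2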

Bridging to Additional Ethernet Ports

Follow the steps in this section if you need to configure VMware on a Windows platform to use two bridged networks. By default, VMware allows the following functionality:

  • One bridged interface (to the first host network interface)

  • One NAT interface

  • One host-only interface

To create two bridged interfaces, you will need to

  1. add an additional VMnet associated with a second interface, and

  2. edit the VM configuration file to use the new VMnet.

Adding an Additional VMnet

To add an additional VMnet, perform the following steps:

  1. Halt all VMs currently running on this x86-based PC or server.

  2. Launch the vmnetcfg.exe application from the VMware Player installation directory (c:\Program Files\VMware\VMware Player\vmnetcfg.exe).

  3. Select the Host Virtual Network Mapping tab.

  4. Select a VMnet to use for the second network interface card (NIC), such as VMnet3.

  5. From the drop-down menu, select the NIC you wish to connect to this VMnet.

If you want more control over which VMnet connects to the first NIC, perform the following steps:

  1. Select the Automatic Bridging tab.

  2. In the Automatic Bridging box, deselect Automatically choose an available physical network adapter to bridge to VMnet0.

  3. Select the Host Virtual Network Mapping tab.

  4. Select a VMnet to use for the first NIC, such as VMnet2.

  5. From the drop-down menu, select the NIC you wish to connect to this VMnet.

    Note:

    You can use VMnet0 and assign it to a specific NIC. However, avoiding VMnet0 indicates to a later user of the VM's configuration file that specific NICs were assigned to the VM's virtual interfaces, removing any questions about the automatic behavior implied by VMnet0 on any particular system.

Editing the VM Configuration File

You need to edit the VMware configuration file so that the VMware Player uses the second NIC. Perform the following steps:

  1. Halt all VMs currently running on this x86-based PC or server.

  2. Using Windows Explorer, open the Oracle ME folder.

  3. Using a text editor such as Notepad, open the file nnos-e-vm.vmx.

  4. At the bottom of the file add the following lines, substituting the desired VMnets for the Ethernet interfaces:

    • ethernet0.connectionType = "custom"

    • ethernet0.vnet = "vmnet0"

    • ethernet1.connectionType = "custom"

    • ethernet1.vnet = "vmnet3"

  5. Ensure that there are no other lines in the file specifying ethernetX.connectionType = "XXXXX".

Media Engine Virtual Machine Troubleshooting

Oracle makes every effort to test the VM in a variety of customer environments. This section covers issues recently reported by ME VM customers. If you discover an issue with the VM that we need to know about, contact Oracle Customer Support for assistance.