Supported Private Virtual Infrastructures and Public Clouds

You can run the ESBC on the following private virtual infrastructures, which include individual hypervisors as well as private clouds based on architectures such as VMware or OpenStack.

Note:

The ESBC does not support automatic, dynamic disk resizing.

Note:

The virtual ESBC does not support mixing media interfaces of different NIC models. Media interfaces are supported only when all of them are of the same model, belong to the same Ethernet controller, and have the same PCI vendor ID and device ID.

Supported Hypervisors for Private Virtual Infrastructures

Oracle supports installation of the ESBC on the following hypervisors:

  • KVM: Linux kernel version (4.1.12-124 or later), with KVM/QEMU (2.9.0_16 or later) and libvirt (3.9.0_14 or later)
  • VMware: vSphere ESXi (Version 6.5 or later)
  • Microsoft Hyper-V: Microsoft Server (2012 R2 or later)
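Before installing on KVM, you can check the host against these minimums from a shell. The sketch below is illustrative; binary names and paths vary by distribution, and the `/usr/libexec/qemu-kvm` path is an assumption for Oracle Linux/RHEL-style hosts.

```shell
# Check host components against the ESBC minimums
# (kernel 4.1.12-124, QEMU/KVM 2.9.0, libvirt 3.9.0).
uname -r
qemu-system-x86_64 --version 2>/dev/null \
  || /usr/libexec/qemu-kvm --version 2>/dev/null \
  || echo "QEMU not found"
libvirtd --version 2>/dev/null || echo "libvirt not found"
# Confirm hardware virtualization flags are exposed to the host.
grep -E -c 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flags"
```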

Compatibility with OpenStack Private Virtual Infrastructures

Oracle distributes Heat templates for the Newton and Pike versions of OpenStack. Download the source, nnSCZ920_HOT.tar.gz, and follow the OpenStack Heat Template instructions.

The nnSCZ920_HOT.tar.gz file contains two files:

  • nnSCZ920_HOT_pike.tar
  • nnSCZ920_HOT_newton.tar

Use the Newton template when running either the Newton or Ocata versions of OpenStack. Use the Pike template when running Pike or a later version of OpenStack.
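As a sketch, extracting the archive and creating a stack might look like the following. The template and environment file names inside the archives are assumptions here; substitute the names given in the OpenStack Heat Template instructions.

```shell
# Extract the distributed archive, then the template for your release.
tar -xzvf nnSCZ920_HOT.tar.gz
tar -xvf nnSCZ920_HOT_pike.tar      # Pike or later
# On Newton or Ocata, extract nnSCZ920_HOT_newton.tar instead.

# Launch a stack from the extracted template
# (file and stack names below are examples):
openstack stack create --template sbc_template.yaml \
    --environment sbc_env.yaml my-esbc-stack
```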

Supported Public Cloud Platforms

You can run the ESBC on the following public cloud platforms.

  • Oracle Cloud Infrastructure (OCI)

    After deployment, you can change the shape of your machine by, for example, adding disks and interfaces. OCI Cloud Shapes and options validated in this release are listed in the table below.

    Shape                      OCPUs/vCPUs  vNICs  Tx/Rx Queues  Max Forwarding Cores  DoS Protection  Memory (GB)
    VM.Standard2.4             4/8          4      2             2                     Y               60
    VM.Standard2.8             8/16         8      2             2                     Y               120
    VM.Standard2.16            16/32        16     2             2                     Y               240
    VM.Optimized3.Flex-Small   4/8          4      8             6 (1)                 Y               16
    VM.Optimized3.Flex-Medium  8/16         8      15            14 (2)                Y               32
    VM.Optimized3.Flex-Large   16/32        16     15            15                    Y               64

    Footnote 1: This maximum is 5 when DoS Protection is enabled.

    Footnote 2: This maximum is 13 when DoS Protection is enabled.

    Networking in image mode (SR-IOV mode, Native) is supported on OCI. PV and emulated modes are not currently supported.

    Note:

    Although the VM.Optimized3.Flex OCI shape is flexible, allowing you to choose from 1-18 OCPUs and 1-256GB of memory, the vSBC requires a minimum of 4 OCPUs and 16GB of memory per instance on these Flex shapes.
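    As a sketch, a Flex instance meeting that minimum can be launched with the OCI CLI by pinning the shape configuration; all OCIDs and the availability domain below are placeholders.

```shell
# Sketch: launch an ESBC on VM.Optimized3.Flex with the minimum
# supported 4 OCPUs and 16 GB of memory (OCIDs are placeholders).
oci compute instance launch \
    --availability-domain "AD-1" \
    --compartment-id ocid1.compartment.oc1..example \
    --image-id ocid1.image.oc1..example \
    --subnet-id ocid1.subnet.oc1..example \
    --shape VM.Optimized3.Flex \
    --shape-config '{"ocpus": 4, "memoryInGBs": 16}'
```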
  • Amazon Web Services (EC2)

    This table lists the AWS instance sizes that apply to the ESBC.

    Instance Type  vNICs  RAM (GB)  vCPUs  Max Forwarding Cores  DoS Protection
    c5.xlarge      4      8         4      1                     N
    c5.2xlarge     4      16 (3)    8      2                     Y
    c5.4xlarge     8      32        16     6                     Y
    c5n.xlarge     4      8         4      1                     N
    c5n.2xlarge    4      16 (3)    8      2                     Y
    c5n.4xlarge    8      32        16     6                     Y

    Footnote 3: The 16 GB AWS instances effectively provide around 15 GB of system memory to the SBC, while MSRP normally requires a minimum of 16 GB. Because of this AWS behavior, MSRP could not previously be used on 16 GB AWS instances. To address this, the minimum system memory requirement is reduced to 14 GB for vSBC instances on AWS, with correspondingly reduced MSRP capacity.

    Driver support details:

    • ENA is supported on C5/C5n family only.

    Note:

    C5 instances use the Nitro hypervisor.
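    As a sketch, launching an ESBC instance on a supported C5 size with the AWS CLI might look like the following; the AMI ID, key pair, and subnet are placeholders.

```shell
# Sketch: launch an ESBC instance on a supported C5 size.
# AMI ID, key pair, and subnet IDs below are placeholders.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.2xlarge \
    --key-name my-key \
    --subnet-id subnet-0123456789abcdef0
# C5/C5n instances attach ENA network interfaces by default.
```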
  • Microsoft Azure

    The following table lists the Azure instance sizes that you can use for the ESBC.

    Size (Fs series)  vNICs  RAM (GB)  vCPUs  DoS Protection
    Standard_F4s      4      8         4      N
    Standard_F8s      8      16        8      Y
    Standard_F16s     8      32        16     Y

    Size (Fsv2 series)  vNICs  RAM (GB)  vCPUs  DoS Protection
    Standard_F8s_v2     4      16        8      Y
    Standard_F16s_v2    4      32        16     Y

    Size types define architectural differences and cannot be changed after deployment. During deployment, you choose a size for the ESBC from the pre-packaged Azure sizes. After deployment, you can change the details of a size by, for example, adding disks or interfaces. Azure presents multiple size options for multiple size types.

    For higher performance and capacity on media interfaces, use the Azure CLI to create a network interface with accelerated networking. You can also use the Azure GUI to enable accelerated networking.
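    For example, a media NIC with accelerated networking enabled can be created as follows; the resource group, VNet, and subnet names are placeholders.

```shell
# Sketch: create a media NIC with accelerated networking enabled
# (resource names below are placeholders).
az network nic create \
    --resource-group myResourceGroup \
    --name esbc-media-nic1 \
    --vnet-name myVnet \
    --subnet mySubnet \
    --accelerated-networking true
```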

    Note:

    The ESBC does not support Data Disks deployed over any Azure instance sizes.

    Note:

    Azure v2 instances have hyperthreading enabled.
  • Google Cloud Platform

    The following table lists the GCP instance sizes that you can use for the ESBC.

    Table 1-1 GCP Machine Types

    Machine Type    vCPUs  Memory (GB)  vNICs  Egress Bandwidth (Gbps)  Max Tx/Rx Queues per VM
    n2-standard-4   4      16           4      10                       4
    n2-standard-8   8      32           8      16                       8
    n2-standard-16  16     64           8      32                       16

    Use the n2-standard-4 machine type if you're deploying an ESBC that requires one management interface and only two or three media interfaces. Otherwise, use the n2-standard-8 or n2-standard-16 machine types for an ESBC that requires one management interface and four media interfaces. Also use the n2-standard-4, n2-standard-8, or n2-standard-16 machine types if deploying the ESBC in HA mode.

    Before deploying your ESBC, check the available GCP regions and zones to confirm that your region and zone support N2 machine types.

    On GCP, the ESBC must use the virtio network interface card. The ESBC does not work with the GVNIC.
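    A deployment honoring this can pin the NIC type explicitly. A sketch with gcloud follows; the instance, image, and network names are placeholders.

```shell
# Sketch: create an ESBC VM on an N2 machine type with virtio NICs
# (instance, image, and network names are placeholders).
gcloud compute instances create esbc-1 \
    --zone us-central1-a \
    --machine-type n2-standard-4 \
    --image my-esbc-image \
    --network-interface network=mgmt-net,nic-type=VIRTIO_NET \
    --network-interface network=media-net-1,nic-type=VIRTIO_NET
```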

Platform Hyperthreading Support

Some supported platforms enable and expose SMT (simultaneous multithreading) capability by default. Others may not support SMT, may require that you enable it, or may support it only on specific machine sizes or shapes:

  • Of the supported hypervisors, only VMware does not expose SMT capability to the ESBC.
  • Of the supported clouds:
    • AWS—Supports SMT and enables it by default.
    • OCI—Supports SMT and enables it by default.
    • GCP—Supports SMT and enables it by default.
    • Azure—Supports SMT, but requires that you enable it. The exception is the Fsv2 series, which enables SMT by default.

DPDK Reference

The ESBC relies on DPDK for packet processing and related functions. Refer to the Tested Platforms section of the DPDK release notes, available at https://doc.dpdk.org. Use that information in conjunction with these Release Notes to establish a baseline of:

  • CPU
  • Host OS and version
  • NIC driver and version
  • NIC firmware version

Note:

Oracle only qualifies a specific subset of platforms. Not all the hardware listed as supported by DPDK is enabled and supported in this software.

The DPDK version used in this release is:

  • 21.11

Beginning with S-Cz9.2.0p2, the DPDK version is uplifted to:

  • 22.11