1.2 Hardware Components

1.2.1 Management Nodes
1.2.2 Compute Nodes
1.2.3 Storage Appliance
1.2.4 Network Infrastructure

The Oracle Virtual Compute Appliance consists of a Sun Rack II 1242 base, populated with the hardware components identified in Figure 1.1.

Figure 1.1 Components of an Oracle Virtual Compute Appliance Rack

Figure showing the components installed in a fully populated base rack.

Table 1.1 Figure Legend

Item   Quantity   Description
A      1          Oracle ZFS Storage Appliance ZS3-ES
                  (for Release 1.0 base rack: Sun ZFS Storage Appliance 7320)
B      2          Sun Server X4-2, used as management nodes
                  (for Release 1.0 base rack: Sun Server X3-2)
C      2-25       Sun Server X4-2, used as virtualization compute nodes
                  (for Release 1.0 base rack: Sun Server X3-2)
D      2          Oracle Fabric Interconnect F1-15 Director Switch
E      2          NM2-36P Sun Datacenter InfiniBand Expansion Switch
F      2          Oracle Switch ES1-24


1.2.1 Management Nodes

At the heart of each Oracle Virtual Compute Appliance installation is a pair of management nodes. They are installed in rack units 5 and 6 and form a cluster in an active/standby configuration for high availability: both servers are capable of running the same services and have equal access to the system configuration, but one operates as the master while the other stands ready to take over the master functions if a failure occurs. The master management node runs the full set of required services, while the standby management node runs a subset of services until it is promoted to the master role. The master role is determined at boot through OCFS2 Distributed Lock Management on an iSCSI LUN, which both management nodes share on the ZFS storage appliance installed at the bottom of the rack. Because the administrator powers on the server in rack unit 5 first (rack units are numbered from the bottom up), that server typically assumes the master role. It is the only server the administrator must power on in order to bring the entire appliance online.
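
The locking principle can be illustrated with a short sketch. The following Python example is not the appliance's implementation; it merely shows how an exclusive lock on a file residing on shared storage decides which of two nodes becomes master, analogous to the OCFS2 Distributed Lock Management used on the shared iSCSI LUN. The lock file path is hypothetical.

    # Illustration only: master election through an exclusive lock on shared storage.
    # The appliance itself uses OCFS2 Distributed Lock Management on a shared iSCSI LUN;
    # the path below is a hypothetical stand-in.
    import fcntl
    import time

    LOCK_FILE = "/shared/cluster/master.lock"   # hypothetical location on shared storage

    def run_as_master():
        print("This node holds the lock: running the full set of master services.")
        while True:
            time.sleep(60)                      # master duties would run here

    def run_as_standby(handle):
        print("Lock held by the other node: running standby services.")
        fcntl.flock(handle, fcntl.LOCK_EX)      # blocks until the master releases or fails
        run_as_master()

    with open(LOCK_FILE, "w") as handle:
        try:
            fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)   # attempt to become master
            run_as_master()
        except IOError:
            run_as_standby(handle)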

For details about how high availability is achieved with Oracle Virtual Compute Appliance, refer to Section 1.5, “High Availability”.

When you power on the Oracle Virtual Compute Appliance for the first time, you can change the factory default IP configuration of the management node cluster, so that the management nodes can be reached easily from your data center network. The management nodes share a Virtual IP, at which the management web interface can be accessed. (This virtual IP is assigned to whichever server holds the master role at any given time.) During system initialization, after the management cluster is set up successfully, the master management node loads a number of Oracle Linux 6 services (network, sshd, ntpd, the iSCSI initiator, and dhcpd), in addition to Oracle VM and its associated MySQL database, in order to orchestrate the provisioning of all system components. During provisioning, all networking and storage is configured, and all compute nodes are discovered, installed and added to an Oracle VM server pool. All provisioning configurations are preloaded at the factory and should not be modified by the customer.
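
As a hedged example, an administrator could verify that these services are up on the master management node with a sketch like the one below, which calls the standard Oracle Linux 6 service command for each one. The service names, in particular iscsi for the iSCSI initiator, are assumptions and may differ from the init scripts the appliance software actually uses.

    # Report the status of the Oracle Linux 6 services mentioned above.
    # The service names are assumptions; adjust them to the installed init scripts.
    import subprocess

    SERVICES = ["network", "sshd", "ntpd", "iscsi", "dhcpd"]

    for name in SERVICES:
        # 'service <name> status' returns 0 when the service is running.
        code = subprocess.call(["service", name, "status"])
        print("%-8s %s" % (name, "running" if code == 0 else "not running (exit %d)" % code))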

For details about the provisioning process, refer to Section 1.4, “Provisioning and Orchestration”.

1.2.2 Compute Nodes

The compute nodes in the Oracle Virtual Compute Appliance constitute the virtualization platform. The compute nodes provide the processing power and memory capacity for the virtual servers they host. The entire provisioning process is orchestrated by the management nodes: compute nodes are installed with Oracle VM Server 3.2.4 and additional packages for InfiniBand and Software Defined Networking. When provisioning is complete, the Oracle Virtual Compute Appliance software expects all compute nodes in the same rack to be part of the same Oracle VM server pool.

For hardware configuration details of the Sun Server X4-2 and Sun Server X3-2 compute nodes, refer to Server Components in the Oracle Virtual Compute Appliance Installation Guide. Both generations of Sun Servers may be mixed within the same installation. This occurs with Release 1.0 base racks that are upgraded to the Release 1.1 software stack: the management nodes and factory-installed compute nodes are Sun Server X3-2, which may be combined with new Sun Server X4-2 expansion nodes. Compute nodes of different hardware generations operate within the same server pool but belong to different CPU compatibility groups. Since live migration between CPU compatibility groups is not supported, virtual machines have to be cold-migrated between a Sun Server X3-2 and Sun Server X4-2 compute node. For more information about CPU compatibility groups, please refer to the section Server Processor Compatibility Groups in the Oracle VM User's Guide.

The Oracle Virtual Compute Appliance Dashboard allows the administrator to monitor the health and status of the compute nodes, as well as all other rack components, and perform certain system operations. The virtual infrastructure is configured and managed with Oracle VM Manager.

The compute capacity of the Oracle Virtual Compute Appliance can be built up in a modular way, in accordance with business needs. The minimum configuration of the base rack contains just two compute nodes, but it can be expanded by one node at a time up to 25 compute nodes. Apart from the hardware installation, adding compute nodes requires no intervention by the administrator. New nodes are discovered, powered on, installed and provisioned automatically by the master management node. The additional compute nodes are integrated into the existing configuration, and as a result, the Oracle VM server pool offers increased capacity for more or larger virtual machines.

Since it is difficult to quantify compute capacity as a number of virtual machines, a useful rule of thumb is to base capacity calculations on the amount of RAM. With 256GB per compute node and a minimal dom0 overhead, it is safe to assume that each compute node can host 60 virtual machines with 4GB RAM, or 30 virtual machines with 8GB RAM, or 15 virtual machines with 16GB RAM, or any combination whose total RAM remains a few GB below the server's physical memory.
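
This rule of thumb translates into a simple calculation. The sketch below assumes 256GB of physical memory and a fixed dom0 reservation of 16GB; the actual overhead on a given compute node may differ.

    # Rough RAM-based estimate of how many equally sized virtual machines
    # fit on one compute node. The dom0 overhead figure is an assumption.
    PHYSICAL_RAM_GB = 256
    DOM0_OVERHEAD_GB = 16

    available = PHYSICAL_RAM_GB - DOM0_OVERHEAD_GB
    for vm_ram in (4, 8, 16):
        print("%2d GB VMs: %d per compute node" % (vm_ram, available // vm_ram))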

1.2.3 Storage Appliance

The Oracle ZFS Storage Appliance ZS3-ES installed at the bottom of the appliance rack should be considered a 'system disk' for the entire appliance. Its main purpose is to provide storage space for the Oracle Virtual Compute Appliance software. A portion of the disk space is made available for customer use and is sufficient for an Oracle VM storage repository with a limited number of virtual machines, templates and assemblies.

The hardware configuration of the Oracle ZFS Storage Appliance ZS3-ES is as follows:

  • Two clustered storage heads with two 1.6TB SSDs each, used exclusively for cache and logging

  • One fully populated disk chassis with twenty 900GB SAS hard disks

  • RAID-Z2 configuration, for best balance between performance and data protection, with a total usable space of 11.3TB

Note

Oracle Virtual Compute Appliance Release 1.0 base racks, which may be upgraded to the Release 1.1 software stack, use a Sun ZFS Storage Appliance 7320. It offers the same performance, functionality and configuration, but its storage heads use smaller SSDs. The disk shelf and its disks are identical in both models.

The storage appliance is connected to the management subnet (192.168.4.0/24) and the InfiniBand (IPoIB) storage subnet (192.168.40.0/24). Because both heads form a cluster, they share a single IP address in each subnet. The RAID-Z2 storage pool contains two projects, named OVCA and OVM.

The OVCA project contains all LUNs and file systems used by the Oracle Virtual Compute Appliance software:

  • LUNs:

    • Locks (12GB) – to be used exclusively for cluster locking on the two management nodes

    • Manager (200GB) – to be used exclusively as an additional file system on both management nodes

  • File systems:

    • MGMT_ROOT – to be used for storage of all files specific to the Oracle Virtual Compute Appliance

    • Database – to be used for all system databases

    • Incoming (20GB) – to be used for FTP file transfers, primarily for Oracle Virtual Compute Appliance component backups

    • Templates – placeholder file system for future use

    • User – placeholder file system for future use

    • Yum – placeholder file system for future use

The OVM project contains all LUNs and file systems used by Oracle VM:

  • LUNs:

    • iscsi_repository1 (300GB) – to be used as Oracle VM storage repository

    • iscsi_serverpool1 (12GB) – to be used as server pool file system for the Oracle VM clustered server pool

  • File systems:

    • nfs_repository1 (300GB) – to be used as Oracle VM storage repository in case NFS is preferred over iSCSI

    • nfs_serverpool1 (12GB) – to be used as server pool file system for the Oracle VM clustered server pool in case NFS is preferred over iSCSI
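
For reference, the factory storage layout described in the two lists above can be captured as a small data structure, for example to cross-check the shares and LUNs reported by the storage appliance. The sketch below only restates the documented layout; entries without a documented size are marked None.

    # Factory storage layout of the ZFS storage appliance, as documented above.
    # Sizes are in GB; None means no size is listed in this section.
    STORAGE_LAYOUT = {
        "OVCA": {
            "LUNs": {"Locks": 12, "Manager": 200},
            "File systems": {"MGMT_ROOT": None, "Database": None, "Incoming": 20,
                             "Templates": None, "User": None, "Yum": None},
        },
        "OVM": {
            "LUNs": {"iscsi_repository1": 300, "iscsi_serverpool1": 12},
            "File systems": {"nfs_repository1": 300, "nfs_serverpool1": 12},
        },
    }

    for project, groups in sorted(STORAGE_LAYOUT.items()):
        for group, entries in sorted(groups.items()):
            for name, size in sorted(entries.items()):
                size_text = "%d GB" % size if size is not None else "size not listed"
                print("%-4s %-12s %-18s %s" % (project, group, name, size_text))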

In addition to offering storage, the ZFS storage appliance also runs the xinetd and tftpd services. These complement the Oracle Linux services on the master management node in order to orchestrate the provisioning of all Oracle Virtual Compute Appliance system components.

1.2.4 Network Infrastructure

The Oracle Virtual Compute Appliance relies on a combination of Ethernet connectivity and an InfiniBand network fabric. The appliance rack contains redundant network hardware components, which are pre-cabled at the factory to help ensure continuity of service in case a failure should occur.

Ethernet

The Ethernet network relies on two interconnected Oracle Switch ES1-24 switches, to which all other rack components are connected with CAT6 Ethernet cables. This network serves as the appliance management network, in which every component has a predefined IP address in the 192.168.4.0/24 range. In addition, all management and compute nodes have a second IP address in this range, which is used for Oracle Integrated Lights Out Manager (ILOM) connectivity.

While the appliance is initializing, the InfiniBand fabric is not accessible, which means that the management network is the only way to connect to the system. Therefore, one of the Oracle Switch ES1-24 switches has an Ethernet cable attached to port 24, which the administrator should use to connect a workstation with fixed IP address 192.168.4.254. From this workstation, the administrator opens a browser connection to the web server on the master management node at http://192.168.4.3 [1], in order to monitor the initialization process and perform the initial configuration steps when the appliance is powered on for the first time.
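
The initialization can also be monitored without a browser. The Python 2 sketch below, run from the workstation, polls the web server on both possible master addresses and reports which one responds; the "still initializing" message it looks for is quoted from the footnote at the end of this section, so treat the exact wording as an assumption.

    # Poll the management nodes' web servers from the 192.168.4.254 workstation
    # to see which node answers as master and whether initialization has completed.
    import urllib2

    CANDIDATES = ["http://192.168.4.3", "http://192.168.4.4"]

    for url in CANDIDATES:
        try:
            page = urllib2.urlopen(url, timeout=5).read()
        except Exception as error:
            print("%s: no response (%s)" % (url, error))
            continue
        if "still initializing" in page:
            print("%s: appliance is still initializing" % url)
        else:
            print("%s: web interface is available" % url)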

InfiniBand

The Oracle Virtual Compute Appliance rack contains two NM2-36P Sun Datacenter InfiniBand Expansion Switches. These redundant switches have redundant cable connections to both InfiniBand ports in each management node, compute node and storage head. Both InfiniBand switches, in turn, have redundant cable connections to both Oracle Fabric Interconnect F1-15 Director Switches in the rack. All these components combine to form a physical InfiniBand backplane with a 40Gbit (Quad Data Rate) bandwidth.

When the appliance initialization is complete, all necessary Oracle Virtual Compute Appliance software packages, including host drivers and InfiniBand kernel modules, have been installed and configured on each component. At this point, the system is capable of using software defined networking (SDN) configured on top of the physical InfiniBand fabric. SDN is implemented through the Oracle Fabric Interconnect F1-15 Director Switches.
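
Once the host drivers and kernel modules are in place, the state of the physical InfiniBand links can be read directly from sysfs. The sketch below simply walks the port state files that the Linux InfiniBand stack exposes; device names vary per adapter, so none are hard-coded.

    # List the state of every InfiniBand port exposed by the kernel on this host.
    # Requires the InfiniBand host drivers mentioned above to be loaded.
    import glob

    for state_file in glob.glob("/sys/class/infiniband/*/ports/*/state"):
        parts = state_file.split("/")
        device, port = parts[4], parts[6]
        with open(state_file) as handle:
            state = handle.read().strip()       # for example "4: ACTIVE"
        print("%s port %s: %s" % (device, port, state))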

Fabric Interconnect

All Oracle Virtual Compute Appliance network connectivity is managed through the Oracle Fabric Interconnect F1-15 Director Switches. Data is transferred across the physical InfiniBand fabric, but connectivity is implemented in the form of Software Defined Networks (SDN) – sometimes referred to as 'clouds'. The physical InfiniBand backplane is capable of hosting thousands of virtual networks. These Private Virtual Interconnects (PVI) dynamically connect virtual machines and bare metal servers to networks, storage and other virtual machines, while maintaining the traffic separation of hard-wired connections and surpassing their performance.

During the initialization process of the Oracle Virtual Compute Appliance, five essential SDNs are configured: a storage network, an Oracle VM management network, a management Ethernet network, and two VLAN-enabled virtual machine networks.

  • The storage network is a bonded IPoIB connection between the management nodes and the ZFS storage appliance, and uses the 192.168.40.0/24 subnet.

  • The Oracle VM management network is a PVI that connects the management nodes and compute nodes in the 192.168.140.0/24 subnet. It is used for all network traffic inherent to Oracle VM Manager, Oracle VM Server and the Oracle VM Agents.

  • The management Ethernet network is a bonded Ethernet connection between the management nodes and the compute nodes. The primary function of this network is to provide access to the management nodes from the data center network, and to enable the management nodes to run a number of system services. Since all compute nodes are also connected to this network, Oracle VM can use it for virtual machine connectivity, with access to and from the data center's public network. This subnet is configurable through the Network Setup tab in the Oracle Virtual Compute Appliance Dashboard. VLANs are not supported on this network.

  • The public virtual machine network is a bonded Ethernet connection between the compute nodes. Oracle VM uses this network for virtual machine connectivity, where external access is required. VLAN 1 is automatically configured for this network. Customers can add their own VLANs to the Oracle VM network configuration, and define the subnet(s) appropriate for IP address assignment at the virtual machine level. For external connectivity, the next-level data center switches must be configured to accept your tagged VLAN traffic.

  • The private virtual machine network is a bonded Ethernet connection between the compute nodes. Oracle VM uses this network for virtual machine connectivity, where only internal access is required. VLAN 1 is automatically configured for this network. Customers can add VLANs of their choice to the Oracle VM network configuration, and define the subnet(s) appropriate for IP address assignment at the virtual machine level.

Finally, the Oracle Fabric Interconnect F1-15 Director Switches also manage the physical public network connectivity of the Oracle Virtual Compute Appliance. Two 10GbE ports on each Fabric Director switch must be connected to redundant next-level data center switches. At the end of the initialization process, the administrator assigns three reserved IP addresses from the data center (public) network range to the management node cluster of the Oracle Virtual Compute Appliance: one for each management node, and an additional Virtual IP shared by the clustered nodes. From this point forward, the Virtual IP is used to connect to the master management node's web server, which hosts both the Oracle Virtual Compute Appliance Dashboard and the Oracle VM Manager web interface.

Caution

It is critical that both Oracle Fabric Interconnect F1-15 Director Switches have two 10GbE connections each to separate next-level data center switches. This configuration with four cross-connected cables provides redundancy and load splitting at the level of the Fabric Director switches, the 10GbE ports and the data center switches. In addition, it plays a key role in maintaining service continuity during failover scenarios involving Fabric Director switch outages or failures of other components.



[1] It is possible that the other management node, in the rack unit just above, assumes the master role. In this case, the web page continues to display the message "Oracle Virtual Compute Appliance is still initializing... Please wait..." The administrator should connect to http://192.168.4.4 instead.