Hardware, Software, and Networking Requirements

The following sections outline the hardware, software, and networking requirements for using LEC v8.0.

Hardware Requirements

The minimum recommended hardware requirements for each host are:

  • CPU: 8 CPUs
  • Memory: 16 GB RAM
  • Hard Disk Space:
    • At least 40 GB available in the /var directory.

    • At least 256 GB available in the /opt directory.

    • If you are planning to install a multinode cluster, each host also needs at least one configured block device with a raw (that is, unformatted) disk or partition that is at least 256 GB in size. This device will be used to back the cluster's persistent storage (see the verification sketch after this list).

  • Architecture: x86-64
  • Network Interface: 1 Gb Ethernet NIC
  • Filesystem: The root filesystem should be an XFS file system (the default file system for Oracle Linux).
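
The following commands are a minimal sketch for checking these hardware requirements on an Oracle Linux host before installation; adjust the paths and device names for your environment.

Hardware requirement checks

# Confirm available space in /var and /opt
df -h /var /opt

# Confirm the root filesystem type is xfs
df -T /

# Multinode only: list block devices and confirm the device reserved for
# the cluster's persistent storage has no filesystem (empty FSTYPE column)
lsblk -f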


Software Requirements

The minimum recommended software requirements for each host are:

  • Operating System: Oracle Linux 8 (OL8) or Oracle Linux 9 (OL9)

  • Kernel: Unbreakable Enterprise Kernel 6 (UEK6) or Unbreakable Enterprise Kernel 7 (UEK7)

  • Security-Enhanced Linux (SELinux) Mode: Permissive or Enforcing

Note: Each node in the cluster must have the same major OS release (for example, Oracle Linux 8), the same major kernel release (for example, UEK7), and the same SELinux mode set (for example, Permissive).
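
As a quick check on each node, the following commands (a sketch) report the values that must match across the cluster:

OS, kernel, and SELinux mode checks

cat /etc/oracle-release   # Oracle Linux major release
uname -r                  # kernel release (UEK kernels include "uek" in the version string)
getenforce                # current SELinux mode (Permissive or Enforcing)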


Networking Requirements

Inter-node Networking Requirements (Multinode Deployment Only)

If you are installing LEC v8.0 in a multinode deployment, the networking and firewall rules between the nodes in your cluster must allow connections between certain types of nodes on various ports. The following ports need to be open between nodes in a multinode cluster in order to deploy Kubernetes and OCNE (a firewall-cmd sketch follows the note after this list):

  • 2379/tcp: Kubernetes etcd server client API (on control plane nodes in highly available clusters)

  • 2380/tcp: Kubernetes etcd server peer API (on control plane nodes in highly available clusters)

  • 6443/tcp: Kubernetes API server (control plane nodes)

  • 8090/tcp: Platform Agent (control plane and worker nodes)

  • 8091/tcp: Platform API Server (OCNE operator)

  • 8472/udp: Flannel overlay network, VxLAN backend (control plane and worker nodes)

  • 10250/tcp: Kubernetes kubelet API server (control plane and worker nodes)

  • 10251/tcp: Kubernetes kube-scheduler (on control plane nodes in highly available clusters)

  • 10252/tcp: Kubernetes kube-controller-manager (on control plane nodes in highly available clusters)

  • 10255/tcp: Kubernetes kubelet API server for read-only access with no authentication (control plane and worker nodes)

Note: In a three-node LEC v8.0 installation, all three nodes in the Kubernetes cluster act as both control plane nodes and worker nodes. The node on which you run the installer will be the designated OCNE operator. See the following OCNE document for more information about configuring your network for a multinode OCNE cluster: https://docs.oracle.com/en/operating-systems/olcne/1.7/start/prereq.html#network.
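
If you would like to confirm or pre-open these ports yourself (the installer also adjusts firewall-cmd rules, as noted in the host configuration requirements below), a firewalld sketch for a node might look like the following; trim the port set to the node's role.

firewall-cmd port sketch

# Open the Kubernetes/OCNE ports listed above, then reload firewalld
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=8090-8091/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload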

 

LEC Service Networking Requirements

The following LEC services running on the cluster also require certain networking considerations:

  • gRPC Config Service: If you are using the Oracle Utilities Network Management System (NMS) Flex SCADA tool to connect to your LEC v8.0 cluster over gRPC (the typical use case for LEC v8.0), make sure that NMS can initiate a TCP connection to the LEC v8.0 cluster's IP address on the configured gRPC service port. After a typical LEC v8.0 installation, the initial gRPC service listens on port 50051 by default (a reachability-check sketch follows the note after this list).

  • ICCP Service: If you will be using LEC v8.0 as an ICCP VCC that is configured to listen for inbound ICCP associations, then you need to make sure that the remote ICCP peer can initiate a TCP connection for the ICCP association to the LEC v8.0 cluster on the port that you have agreed upon with the peer. Typically, the port that an ICCP server listens on is 102.

  • keepalived Service: If you will be installing a multinode LEC v8.0 cluster, then you can choose to install keepalived on the cluster nodes in order to provide a high availability virtual IP address for the cluster. If you select this installation option, make sure that Virtual Router Redundancy Protocol (VRRP) and gratuitous ARP messages are allowed on the network infrastructure hosting the cluster nodes.

Note: In a single-node installation, the cluster IP from the perspective of users outside the cluster will simply be the routable IP address of the machine used as the single node in the cluster. In a multi-node installation, the cluster IP will be the high availability IP address that you specified in your LEC v8.0 configuration file.
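
As a simple reachability check from the remote systems (a sketch; <cluster-ip> is a placeholder for your cluster IP and the ports are the defaults described above):

Service reachability checks

# From the NMS host: confirm the gRPC config service port is reachable
nc -vz <cluster-ip> 50051

# From the remote ICCP peer: confirm the agreed-upon ICCP port is reachable
nc -vz <cluster-ip> 102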

 

Oracle Linux Host Configuration Requirements

Again, in order to run the Oracle Utilities Live Energy Connect v8.0 installer, Oracle Linux 8 or Oracle Linux 9 must be installed on all nodes in the cluster. The host from which you run the LEC v8.0 installer will be designated as the OCNE operator node. In a multinode deployment, this host connects to the other hosts in the cluster via SSH while the installer is running.

If you are installing Oracle Linux from scratch on a bare metal host or hosts, you can access the Oracle Linux 8 or Oracle Linux 9 installation images from the Oracle Linux Installation Media page. You must select a Full ISO image of the latest available version of an Oracle Linux 8 or Oracle Linux 9 release. See the Oracle Linux Help Center page for full installation instructions.

If you are installing a multinode cluster, each node in the cluster should use the same Oracle Linux major release (OL8 or OL9), the same kernel release (UEK6 or UEK7), and the same SELinux mode (Enforcing or Permissive). Once Oracle Linux 8 or Oracle Linux 9 is installed on all the nodes you plan to use for your cluster, run sudo dnf -y update from a terminal as a user with sudo privileges to update all of the base Oracle Linux packages, and then reboot each node. Before proceeding with the LEC v8.0 installation, confirm the following prerequisites on each node (a consolidated verification sketch follows this list):

  • The user that runs the installer needs to be a non-root user with passwordless sudo privileges during the installation period.

    Note: These sudo privileges are required to install OCNE packages and dependencies via DNF, to modify firewall-cmd rules, and to start certain systemd service units required for OCNE 1.7 and LEC v8.0. For more information on enabling passwordless sudo see: https://docs.oracle.com/en/operating-systems/oracle-linux/8/userauth/userauth-GrantingsudoAccesstoUsers.html#topicudn3jt_hvb.

     

  • The user that runs the installer needs to have passwordless SSH access to each cluster node specified in the config file during the installation period.

    Note: For more information on enabling passwordless SSH see: https://docs.oracle.com/en/operating-systems/oracle-linux/openssh/openssh-WorkingwithSSHKeyPairs.html#remote-access-without-password

     

  • The Bash 4 shell should be set as the default shell for the system and the user.

  • The tar utility needs to be installed to extract the TAR archive from the provided .run file.

  • If you are installing LEC v8.0 on OCI instances, the OCI Operating System Management Service (OSMS) must be disabled on each instance. To determine if OSMS is enabled on an instance you can run the command “ps -elf | grep osms | grep -v grep”. If a process for OSMS is returned and is running, then OSMS is enabled for that instance. Disable OSMS for the instance from the OCI management console.

    Note: OSMS must be disabled on all nodes running OCNE. Unanticipated updates to Kubernetes packages and dependencies pushed through OSMS to a node can place the Kubernetes cluster in an untenable state. All updates to OCNE components (including Kubernetes packages) should be done according to OCNE's upgrade documentation here: https://docs.oracle.com/en/operating-systems/olcne/1.7/upgrade/#Oracle-Cloud-Native-Environment. All other OS updates on nodes should be done according to OCNE's documentation here: https://docs.oracle.com/en/operating-systems/olcne/1.7/upgrade/update-os.html#update-os.

     

  • SELinux mode must be set to Permissive or Enforcing. See the SELinux documentation for details on viewing or changing the current SELinux mode.

  • The firewalld service must be running and enabled.

  • Access to the Oracle Linux Yum server at https://yum.oracle.com (or a mirror of the Oracle Linux Yum server) must be available during the installation period. The repos that the installer requires for installing OCNE 1.7 and LEC v8.0 are:

    • For Oracle Linux 8

      • oracle-olcne-release-el8

      • ol8_olcne17

      • ol8_baseos_latest

      • ol8_appstream

      • ol8_addons

      • ol8_UEKR6 or ol8_UEKR7

      • ol8_kvm_appstream

    • For Oracle Linux 9

      • oracle-olcne-release-el9

      • ol9_olcne17

      • ol9_baseos_latest

      • ol9_appstream

      • ol9_addons

      • ol9_UEKR6 or ol9_UEKR7

         

    Note: The installation script will attempt to enable the correct repos from the lists above with dnf config-manager commands. If it cannot enable these repos or install packages from them during the installation, it will exit before installing OCNE with a message that it could not enable or access a required repo. If you would like to verify your access to the Yum repos and enable the required OCNE DNF repos before you run the installer, refer to the OCNE documentation.

  • Access to the OCNE container images on the Oracle Container Registry at https://container-registry.oracle.com/ords/ocr/ba/olcne (or a mirror of that container registry) must be available during the installation period.

  • Any required proxies for reaching the above-mentioned resources should be defined in the user's Bash environment variables: HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.

  • If you do have HTTP_PROXY or HTTPS_PROXY defined but do not have NO_PROXY defined, then you may need to specify NO_PROXY as: ",localhost,127.0.0.1," For example, depending on their system and networking, a user's .bashrc file may need to contain something like the following snippet:


.bashrc file snippet

HTTP_PROXY=http://www-proxy.company.com:80/
HTTPS_PROXY=https://www-proxy.company.com:443/
NO_PROXY=vm-123,localhost,127.0.0.1,100.50.25.12,.company.corp.com,.company.vcn.com
export HTTP_PROXY HTTPS_PROXY NO_PROXY

 

In the example .bashrc file snippet above, vm-123 is the hostname of the host running the installer and 100.50.25.12 is the IPv4 address assigned to the node on the default network interface.
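
The following commands are a minimal sketch for confirming the prerequisites above on a node before running the installer; <node-hostname> is a placeholder, the Oracle Linux 8 repo names are taken from the list above, and the exact repo set depends on your Oracle Linux release.

Prerequisite verification sketch (run on each node)

# Passwordless sudo for the installing user
sudo -n true && echo "passwordless sudo OK"

# Passwordless SSH to each cluster node named in the configuration file
ssh -o BatchMode=yes <node-hostname> exit && echo "passwordless SSH OK"

# Default shell, Bash version, and tar availability
echo "$SHELL"; bash --version | head -n 1; tar --version | head -n 1

# SELinux mode and firewalld status
getenforce
systemctl is-active firewalld; systemctl is-enabled firewalld

# OCI instances only: check whether an OSMS agent process is running
ps -elf | grep osms | grep -v grep

# Optionally enable the required OCNE DNF repos ahead of time (Oracle Linux 8 shown)
sudo dnf -y install oracle-olcne-release-el8
sudo dnf config-manager --enable ol8_olcne17 ol8_baseos_latest ol8_appstream ol8_addons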