Install Oracle Cloud Native Environment Manually on Oracle Cloud Infrastructure
In this tutorial, you will learn how to create and configure the Oracle Cloud Infrastructure (OCI) resources needed for a quick installation of the Oracle Cloud Native Environment on OCI and then perform the quick installation.
Introduction
Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The core module is the Kubernetes module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.
Oracle Cloud Native Environment Release 1.5.7 introduced the ability to use the Oracle Cloud Native Environment Platform CLI to perform a quick installation of itself. The installation runs using the olcnectl provision command on an installation host (the operator node). The olcnectl provision command can perform the following operations on the target nodes:
- Generate CA Certificates. (This tutorial uses private CA Certificates. It is recommended for a production environment that you use your own CA Certificates.)
- Copy the CA Certificates to each node.
- Set up the operating system on each node, including the network ports.
- Install the Oracle Cloud Native Environment software packages on each node.
- Start the Oracle Cloud Native Environment platform services (Platform API Server and Platform Agent).
- Create an Oracle Cloud Native Environment.
- Create, validate and install a Kubernetes module, which creates the Kubernetes cluster.
- Set up the Platform certificates in ~/.olcne on the operator node so that you can access the environment using the olcnectl command.
To create more complex installation topologies, you can write your own Oracle Cloud Native Environment configuration file and then pass it to the olcnectl provision command using the --config-file option. For more information, see the Platform Command-Line Interface Guide.
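For illustration, a configuration file roughly equivalent to the quick installation performed in this tutorial might look like the sketch below. The overall shape (environments, globals, modules, args) follows the general structure described in the Platform Command-Line Interface Guide, but the host names, ports, and load balancer address here are placeholders; consult the guide for the authoritative schema.

```yaml
# Sketch of an input file for olcnectl provision --config-file.
# All host names and the load balancer address are placeholders.
environments:
  - environment-name: myenvironment
    globals:
      api-server: operator-node:8091
      selinux: enforcing
    modules:
      - module: kubernetes
        name: mycluster
        args:
          control-plane-nodes: control-node-1:8090,control-node-2:8090
          worker-nodes: worker-node-1:8090,worker-node-2:8090
          load-balancer: <load_balancer_public_ip>:6443
```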
Objectives
This lab demonstrates how to:
- Create a Virtual Cloud Network (VCN) to manage resources for hosting Oracle Cloud Native Environment on OCI.
- Configure the VCN by setting up a public subnet, creating an internet gateway, making a route rule, and editing security list rules.
- Provision OCI instances to serve as operator, control, and worker nodes for Oracle Cloud Native Environment.
- Create and configure an external load balancer on OCI.
- Set up a host with the Platform CLI (olcnectl) on the operator node.
- Use the olcnectl provision command to perform a quick installation.
- Install Oracle Cloud Native Environment Release 1.8 on a five-node cluster.
- Verify the installation completed successfully.
Prerequisites
An OCI account and a compartment in OCI.
Set Up Virtual Cloud Network
Configure the VCN on OCI for an Oracle Cloud Native Environment quick installation.
- Log in to the Oracle Cloud Console.
- Create a VCN.
  - Open the navigation menu, click Networking, and then click Virtual cloud networks.
  - Click Create VCN.
  - Enter a name for your virtual cloud network.
  - For Create In Compartment, select the desired compartment.
  - For IPv4 CIDR Blocks, enter a valid CIDR range. For example, 10.0.0.0/16.
  - Click Create VCN.
- Create a subnet for the VCN.
  - On the Virtual cloud networks page, select your newly created VCN.
  - Click Create Subnet.
  - Enter a name for the subnet.
  - For IPv4 CIDR Blocks, enter a valid subset of the VCN's CIDR range. For example, 10.0.1.0/24.
  - Click Create Subnet.
- Create an internet gateway for the VCN.
  - Under Resources, click Internet Gateways.
  - Click Create Internet Gateway.
  - Enter a name for the gateway.
  - Click Create Internet Gateway.
- Create a route rule for the internet gateway.
  - Under Resources, click Route Tables.
  - Select the default route table. It should be named Default Route Table for <VCN name>.
  - Click Add Route Rules.
  - From Target Type, select Internet Gateway.
  - For Destination CIDR Block, enter 0.0.0.0/0.
  - For Target Internet Gateway, select the new internet gateway.
  - Click Add Route Rules.
- Set up the security list rules for the VCN.
  - Open the navigation menu, click Networking, and then click Virtual cloud networks.
  - Select the VCN created earlier.
  - Under Resources, click Security Lists.
  - Select the default security list. It should be named Default Security List for <VCN name>.
  - Select the ICMP protocol rule for source 10.0.1.0/16 and then click Edit.
  - For IP Protocol, select All Protocols.
  - Click Save changes.
  - Select the TCP protocol rule and then click Edit.
  - For Source CIDR, enter <your_public_ip_address>/32. This ensures that only your IP address can connect to the instances over SSH.

    Note: The default Source CIDR value of 0.0.0.0/32 is a placeholder for your public IP. You can find your public IP address by visiting a website such as whatismyip.org.
  - Click Save changes.
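For readers who prefer the command line, the console steps above can be approximated with the OCI CLI. This is a hedged sketch: the OCIDs are placeholders, and the `run` helper only prints each command (a dry run); redefine it as `run() { "$@"; }` and supply real OCIDs to execute the commands against a tenancy.

```shell
# Dry-run sketch of the VCN setup using the OCI CLI.
# The OCIDs below are placeholders, not real resources.
run() { echo "+ $*"; }                   # prints instead of executing

COMP="ocid1.compartment.oc1..example"    # placeholder compartment OCID
VCN="ocid1.vcn.oc1..example"             # placeholder VCN OCID

# Create the VCN with the 10.0.0.0/16 CIDR used in this tutorial.
run oci network vcn create --compartment-id "$COMP" \
    --display-name olcne-vcn --cidr-blocks '["10.0.0.0/16"]'

# Create the 10.0.1.0/24 subnet inside the VCN.
run oci network subnet create --compartment-id "$COMP" \
    --vcn-id "$VCN" --display-name olcne-subnet --cidr-block 10.0.1.0/24

# Create the internet gateway for the VCN.
run oci network internet-gateway create --compartment-id "$COMP" \
    --vcn-id "$VCN" --display-name olcne-igw --is-enabled true
```

The route rule and security list edits can be scripted the same way with `oci network route-table update` and `oci network security-list update`, which take the rules as JSON arguments.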
Set Up Instances
Create a minimum of five OCI instances. You’ll use each as a node for Oracle Cloud Native Environment:
- Operator Node: Requires one instance to use the Platform CLI (olcnectl) to perform the installation and host the Platform API Server.
- Control Node: Requires at least two instances to use as a Kubernetes control plane node.
- Worker Node: Requires at least two instances to use as a Kubernetes worker node.
- Create an instance.

  Note: Use default values, unless otherwise mentioned.

  - In the Oracle Cloud Console, open the navigation menu. Click Compute, and then click Instances.
  - Click Create Instance.
  - Enter a name for the instance that identifies which node it will be (such as operator-node, control-node-1, worker-node-2, and so on).
  - Select the image to use. By default, the image is Oracle Linux 8. You may also choose Oracle Linux 9.

    Important Note: All the instances must use the same shape series. Oracle Cloud Native Environment 1.7 and earlier does not support the ARM architecture.
  - Under Primary VNIC information, select the VCN and subnet created earlier.
  - Under Add SSH keys, click Save private key. You will need the private key later to connect to the instance using SSH.

    Note: For the operator node, disable the OS Management Service Agent. Click Show advanced options at the bottom of the instance creation page, click the Oracle Cloud Agent tab, and then toggle the OS Management Service Agent off.
  - Click Create.
- Repeat until you have created five instances. Make sure to save the private key for each instance.
- Add the public IP addresses of the instances to the security list rules for the VCN.
  - Open the navigation menu, click Networking, and then click Virtual cloud networks.
  - Select the VCN created earlier.
  - Under Resources, click Security Lists.
  - Select the default security list. It should be named something like Default Security List for <VCN name>.
  - Click Add Ingress Rules.
  - For IP Protocol, select All Protocols.
  - For Source CIDR, enter the public IP address of one of your instances. Find the public IP address for each instance on the OCI Instances page.
  - Click Save changes.
  - Repeat until you have added an ingress rule for each instance.
Set Up Load Balancer
Create a load balancer to ensure that the Oracle Cloud Native Environment deployment is highly available.
- Create a load balancer.
  - In the Oracle Cloud Console, open the navigation menu. Click Networking, and then click Load balancer.
  - Click Create load balancer.
  - Select the VCN and subnet created earlier.
  - Click Next.
  - Click Add backends.
  - Select the two control node instances created earlier.
  - Click Add selected backends.
  - Change the port for the instances to 6443.
  - Under Specify health check policy, select TCP and set the port to 6443.
  - Click Next.
  - For Specify the type of traffic that your listener handles, select TCP.
  - For Specify the port your listener monitors for ingress traffic, enter 6443.
  - Click Next.
  - For Log Group, select the default group. Its name should contain something like Default_Group, but this may vary between compartments.
  - Click Submit.
- Add a rule for the load balancer to the security list of the VCN.
  - Open the navigation menu, click Networking, and then click Virtual cloud networks.
  - Select the VCN created earlier.
  - Under Resources, click Security Lists.
  - Select the default security list. It should be named something like Default Security List for <VCN name>.
  - Click Add Ingress Rules.
  - For Source CIDR, enter the load balancer's public IP address. You can find this value on the Load Balancers page.
  - For IP Protocol, select TCP.
  - For Source Port Range and Destination Port Range, enter 6443.
  - Click Add Ingress Rules.
Set Up Passwordless SSH for Nodes
To successfully deploy Oracle Cloud Native Environment, the operator node must be able to SSH into the control and worker nodes.
Important Note: Replace the placeholder values with their actual values when running commands. For example, replace <operator_node_public_ip> with the public IP of your operator node.
- Open a terminal and connect with SSH to the operator node. You can find the public IP address for the node on the Instances page in OCI. The private key is the key file you downloaded when creating the instance.

  ssh opc@<operator_node_public_ip> -i <operator_node_private_key>

- On the operator node, generate an SSH key pair to use for communicating with the worker and control nodes. Do not enter a file in which to save the key, and leave the passphrase empty.

  ssh-keygen -t rsa

  This generates a public and private key in the /home/<username>/.ssh directory, where id_rsa is the private key and id_rsa.pub is the public key.

- Use the cat command to print the contents of the public key. The public key contents are used in the following steps as <public_key_content>.

  cat ~/.ssh/id_rsa.pub

  Use the echo command to append the contents of the public key to the authorized_keys file on the operator node.

  echo '<public_key_content>' >> ~/.ssh/authorized_keys

- For each control and worker node, add the public key:

  SSH into the control or worker node from your local machine. Use the private key downloaded when creating the instance.

  ssh opc@<node_public_ip> -i <node_private_key>

  Use the echo command to append the contents of the public key to the authorized_keys file.

  echo '<public_key_content>' >> ~/.ssh/authorized_keys
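The key-distribution mechanics above can be exercised locally before touching the real nodes. In the sketch below, a temporary directory stands in for a node's ~/.ssh directory (the directory and file paths are purely illustrative); on a real node, the append is what you run after connecting over SSH.

```shell
# Demo of the passwordless-SSH setup mechanics. A temp directory stands
# in for a node's ~/.ssh; on a real node you would append to
# ~/.ssh/authorized_keys after connecting with ssh.
demo=$(mktemp -d)

# Generate a key pair non-interactively (empty passphrase).
ssh-keygen -t rsa -N '' -f "$demo/id_rsa" -q

# Append the public key to an authorized_keys file, as done on each node.
pub=$(cat "$demo/id_rsa.pub")
touch "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
echo "$pub" >> "$demo/authorized_keys"

echo "keys in authorized_keys: $(grep -c 'ssh-rsa' "$demo/authorized_keys")"
```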
Set Up the Install Host (operator node) on Oracle Linux
Configure the Oracle Linux host (operator node) for the quick installation of Oracle Cloud Native Environment. Follow the steps that match the instance operating system.
Oracle Linux 8
- On the operator node, install the oracle-olcne-release-el8 release package.

  sudo dnf -y install oracle-olcne-release-el8

- Enable the current Oracle Cloud Native Environment repository.

  sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7

- Disable all previous repository versions.

  sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12

- Disable any developer yum repositories. First, list the names of all enabled developer repositories.

  sudo dnf repolist --enabled | grep developer

  Disable each of the listed repositories.

  sudo dnf config-manager --disable <repository_name>

- Install the olcnectl software package.

  sudo dnf -y install olcnectl
Oracle Linux 9
- On the operator node, install the oracle-olcne-release-el9 release package.

  sudo dnf -y install oracle-olcne-release-el9

- Enable the current Oracle Cloud Native Environment repository.

  sudo dnf config-manager --enable ol9_olcne18 ol9_addons ol9_baseos_latest ol9_appstream ol9_UEKR7

- Disable any developer yum repositories. First, list the names of all enabled developer repositories.

  sudo dnf repolist --enabled | grep developer

  Disable each of the listed repositories.

  sudo dnf config-manager --disable <repository_name>

- Install the olcnectl software package.

  sudo dnf -y install olcnectl
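The list-then-disable step for developer repositories (on both Oracle Linux 8 and 9) can be scripted. The sketch below parses canned, illustrative `dnf repolist --enabled` output so it is self-contained; on a real host you would capture the live output instead, as shown in the comments.

```shell
# Sketch: pull the repo ids of enabled developer repositories out of
# `dnf repolist --enabled` output so each can be disabled in a loop.
# The text below is a canned, illustrative sample; on a real host use:
#   repolist=$(sudo dnf repolist --enabled)
repolist='repo id                  repo name
ol8_appstream            Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest        Oracle Linux 8 BaseOS Latest (x86_64)
ol8_developer            Oracle Linux 8 Development Packages (x86_64)
ol8_developer_UEKR7      Oracle Linux 8 UEK R7 Developer Preview (x86_64)'

# Skip the header row; keep the first column of rows whose id mentions "developer".
dev_repos=$(echo "$repolist" | awk 'NR > 1 && $1 ~ /developer/ {print $1}')
echo "$dev_repos"

# Each id would then be disabled (commented out here; needs root and dnf):
# for r in $dev_repos; do sudo dnf config-manager --disable "$r"; done
```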
Perform a Quick Install
The following steps describe the fastest method to set up a basic deployment of Oracle Cloud Native Environment and install a Kubernetes cluster. It requires a minimum of five nodes, which are:
- Operator Node: This requires one instance to use the Platform CLI (olcnectl) to perform the installation and host the Platform API Server.
- Control Node: Requires at least two instances to use as a Kubernetes control plane node.
- Worker Node: Requires at least two instances to use as a Kubernetes worker node.
Run olcnectl provision on the Operator Node
On the operator node, use the olcnectl provision command to begin the installation. This operation can take 10-15 minutes to complete, and there will be no visible indication that anything is occurring until it finishes.
olcnectl provision \
--api-server <operator_node_name> \
--control-plane-nodes <control_node1_name>,<control_node2_name> \
--worker-nodes <worker_node1_name>,<worker_node2_name> \
--environment-name <environment_name> \
--name <cluster_name> \
--load-balancer <load_balancer_public_ip>:6443 \
--selinux enforcing
Where:
- --api-server - the FQDN of the node on which the Platform API Server should be set up.
- --control-plane-nodes - the FQDN of the nodes to set up with the Platform Agent and assign the Kubernetes control plane role. For more than one node, use a comma-separated list.
- --worker-nodes - the FQDN of the nodes to set up with the Platform Agent and assign the Kubernetes worker role. For more than one node, use a comma-separated list.
- --environment-name - used to identify the environment.
- --name - used to set the name of the Kubernetes module.
- --load-balancer - the public IP address used by the load balancer on OCI.
- --selinux - sets the mode for SELinux.
Note: When executing this command, a prompt is displayed that lists the changes to be made to the hosts and asks for confirmation. To avoid this prompt, use the --yes option. This option makes the olcnectl provision command assume that the answer to any confirmation prompt is affirmative (yes).
Example Output: This shows example output without using --yes:

[opc@operator-node-1 ~]$ olcnectl provision \
--api-server operator-node-1 \
--control-plane-nodes control-node-1,control-node-2 \
--worker-nodes worker-node-1,worker-node-2 \
--environment-name cne1-ha-env \
--name cne1-ha-cluster \
--load-balancer 129.153.192.253:6443 \
--selinux enforcing
INFO[29/11/23 16:34:59] Using existing certificate authority
INFO[29/11/23 16:34:59] Creating directory "/etc/olcne/certificates/" on operator-node-1
INFO[29/11/23 16:34:59] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on operator-node-1
INFO[29/11/23 16:34:59] Copying local file at "certificates/operator-node-1/node.cert" to "/etc/olcne/certificates/node.cert" on operator-node-1
INFO[29/11/23 16:34:59] Copying local file at "certificates/operator-node-1/node.key" to "/etc/olcne/certificates/node.key" on operator-node-1
INFO[29/11/23 16:34:59] Creating directory "/etc/olcne/certificates/" on control-node-1
...
? Apply api-server configuration on operator-node-1:
* Install oracle-olcne-release
* Enable olcne18 repo
* Install API Server
  Add firewall port 8091/tcp
Proceed? yes/no(default) yes
...
? Apply worker configuration on worker-node-2:
* Install oracle-olcne-release
* Enable olcne18 repo
* Configure firewall rule:
  Add interface cni0 to trusted zone
  Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
Proceed? yes/no(default) yes
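For unattended runs, the provision command can be assembled in a small script so the node names stay consistent across flags. This sketch only echoes the command (a dry run), and every value is a placeholder; on the operator node, drop the echo to execute it.

```shell
# Sketch: build the olcnectl provision command from variables.
# All values are placeholders; this only prints the command.
OPERATOR=operator-node-1
CONTROL=control-node-1,control-node-2
WORKERS=worker-node-1,worker-node-2
ENV_NAME=myenvironment
CLUSTER=mycluster
LB='<load_balancer_public_ip>:6443'

cmd="olcnectl provision --api-server $OPERATOR \
--control-plane-nodes $CONTROL --worker-nodes $WORKERS \
--environment-name $ENV_NAME --name $CLUSTER \
--load-balancer $LB --selinux enforcing --yes"

# Dry run: show the command that would be executed.
echo "$cmd"
```

The --yes flag suits scripted runs because it suppresses the confirmation prompts shown in the example output above.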
Troubleshooting Errors
If you encounter an error creating the kubeadm join configuration when running the provision command, connect to each control node using SSH and run the kubeadm reset command:

sudo kubeadm reset
Confirm the Installation
The Oracle Cloud Native Environment platform and Kubernetes cluster software is now installed and configured on all of the nodes.
On the operator node, confirm the installation:
olcnectl module instances \
--environment-name <environment_name>
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --environment-name myenvironment
INSTANCE             MODULE      STATE
mycluster            kubernetes  installed
worker-node-1:8090   node        installed
worker-node-2:8090   node        installed
control-node-1:8090  node        installed
control-node-2:8090  node        installed
Copy the Certificates
To run olcnectl module commands on the operator node, copy the certificates to the appropriate folder using the cp command.
On the operator node, run:
cp -rp /home/opc/.olcne/certificates/<operator_node_name>:8091/* /home/opc/.olcne/certificates/
This allows you to run olcnectl commands for environment operations and commands for modifying modules.
Get More Deployment Information with the olcnectl module report Command
On the operator node, run:
olcnectl module report \
--environment-name <environment_name> \
--name <cluster_name> \
--children \
--format yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children --format yaml
Environments:
  myenvironment:
    ModuleInstances:
    - Name: mycluster
      Properties:
      - Name: worker:ocne-worker:8090
      - Name: podnetworking
        Value: running
      - Name: extra-node-operations
      - Name: cloud-provider
      - Name: module-operator
        Value: running
      - Name: extra-node-operations-update
        Value: running
      - Name: status_check
        Value: healthy
      - Name: kubectl
      - Name: master:ocne-control:8090
      - Name: kubecfg
        Value: file exist
...
      Properties:
      - Name: kubeadm
        Value: 1.26.6-1
      - Name: kubectl
        Value: 1.26.6-1
      - Name: kubelet
        Value: 1.26.6-1
      - Name: helm
        Value: 3.12.0-1
      - Name: sysctl
        Properties:
        - Name: vm.max_map_count
          Value: "262144"
      - Name: kubecfg
        Value: file exist
Note: It is possible to return this output in a table format instead. However, this requires that the terminal application's encoding be set to UTF-8 (in the terminal application's menu, select Terminal -> Set Encoding -> Unicode -> UTF-8). Then run the command again without the --format yaml option.
olcnectl module report \
--environment-name <environment_name> \
--name <cluster_name> \
--children
Example Output:
[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children
╭──────────────────────────────┬─────────────────────────╮
│ myenvironment                │                         │
├──────────────────────────────┼─────────────────────────┤
│ mycluster                    │                         │
├──────────────────────────────┼─────────────────────────┤
│ Property                     │ Current Value           │
├──────────────────────────────┼─────────────────────────┤
│ kubecfg                      │ file exist              │
│ podnetworking                │ running                 │
│ module-operator              │ running                 │
│ extra-node-operations        │                         │
│ extra-node-operations-update │ running                 │
│ worker:ocne-worker:8090      │                         │
│ externalip-webhook           │ uninstalled             │
│ status_check                 │ healthy                 │
│ kubectl                      │                         │
│ cloud-provider               │                         │
│ master:ocne-control-1:8090   │                         │
├──────────────────────────────┼─────────────────────────┤
│ ocne-control-1:8090          │                         │
├──────────────────────────────┼─────────────────────────┤
...
├──────────────────────────────┼─────────────────────────┤
│ service                      │                         │
├──────────────────────────────┼─────────────────────────┤
│ containerd.service           │ not enabled/not running │
│ crio.service                 │ enabled/running         │
│ kubelet.service              │ enabled                 │
╰──────────────────────────────┴─────────────────────────╯
Set up kubectl
- On one of the control nodes, set up the kubectl command.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=$HOME/.kube/config
  echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
- Verify kubectl works.

  kubectl get nodes

  Example Output:

  [oracle@ocne-control ~]$ kubectl get nodes
  NAME             STATUS   ROLES           AGE   VERSION
  control-node-1   Ready    control-plane   15d   v1.26.6+1.el9
  control-node-2   Ready    control-plane   15d   v1.26.6+1.el9
  worker-node-1    Ready    <none>          15d   v1.26.6+1.el9
  worker-node-2    Ready    <none>          15d   v1.26.6+1.el9
or
kubectl get pods --all-namespaces
Example Output:
[oracle@ocne-control ~]$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                          READY   STATUS    RESTARTS        AGE
kube-system            coredns-965bbb7bb-7qc4n                       1/1     Running   0               4m11s
kube-system            coredns-965bbb7bb-x9wwv                       1/1     Running   0               4m11s
kube-system            etcd-ocne-control                             1/1     Running   0               5m12s
kube-system            kube-apiserver-ocne-control                   1/1     Running   0               5m10s
kube-system            kube-controller-manager-ocne-control          1/1     Running   1 (4m49s ago)   5m10s
kube-system            kube-flannel-ds-dz22m                         1/1     Running   0               3m52s
kube-system            kube-flannel-ds-nw57l                         1/1     Running   0               3m52s
kube-system            kube-proxy-qs4q2                              1/1     Running   0               3m53s
kube-system            kube-proxy-rvknz                              1/1     Running   0               4m11s
kube-system            kube-scheduler-ocne-control                   1/1     Running   1 (4m49s ago)   5m10s
kubernetes-dashboard   kubernetes-dashboard-76b8b45c77-t9lrt         1/1     Running   0               3m51s
verrazzano-install     verrazzano-module-operator-78c95444c7-59qt7   1/1     Running   0               3m51s
This confirms that Oracle Cloud Native Environment is set up and running on the five nodes.
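The Ready check can also be scripted. The sketch below parses node status output; to keep it self-contained, it uses a canned copy of the kubectl output, and the comment shows how you would feed it the live command output on a control node.

```shell
# Sketch: assert that every cluster node reports Ready.
# $nodes is a canned copy of example output; on a control node use:
#   nodes=$(kubectl get nodes --no-headers)
nodes='control-node-1   Ready   control-plane   15d   v1.26.6+1.el9
control-node-2   Ready   control-plane   15d   v1.26.6+1.el9
worker-node-1    Ready   <none>          15d   v1.26.6+1.el9
worker-node-2    Ready   <none>          15d   v1.26.6+1.el9'

total=$(echo "$nodes" | wc -l)
not_ready=$(echo "$nodes" | awk '$2 != "Ready"' | wc -l)
echo "total=$total not_ready=$not_ready"
```

Note that only the four control and worker nodes appear here: the operator node runs the Platform API Server but is not part of the Kubernetes cluster.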
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
F92379-03
July 2024