Install Oracle Cloud Native Environment

These steps describe how to install Oracle Cloud Native Environment on Oracle Private Cloud Appliance.

Install Oracle Cloud Native Environment API Server on the Operator Node

API server 
if ! systemctl status OCNE-api-server.service | grep 'Loaded: loaded'; then
 echo "No platform OCNE-api-server.service seen on `hostname`, so the way is clear to 
install it..."
 pm_action=install
else
 sudo systemctl stop OCNE-api-server.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y OCNEctl OCNE-api-server OCNE-utils
sudo systemctl enable OCNE-api-server.service
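As a quick sanity check, and assuming only the package and unit names used above, you can confirm that the packages are installed and the service is enabled:

verify API server install
rpm -q OCNEctl OCNE-api-server OCNE-utils
systemctl is-enabled OCNE-api-server.service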

Install Oracle Cloud Native Environment Platform Agents on the Control Plane and Worker Nodes

platform agents 
if ! systemctl status OCNE-agent.service | grep 'Loaded: loaded'; then
 echo "No platform OCNE-agent.service seen on `hostname`, so the way is clear to 
install it..."
 pm_action=install
else
 sudo systemctl stop OCNE-agent.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y OCNE-agent OCNE-utils
sudo systemctl enable OCNE-agent.service 
sudo mkdir -p /etc/systemd/system/crio.service.d
cat <<EOD > /tmp/t
[Service]
Environment="HTTP_PROXY=http://proxy-host:proxy-port"
Environment="HTTPS_PROXY=http://proxy-host:proxy-port"
Environment="NO_PROXY=localhost,127.0.0.1,. proxy-hostus.oracle.com,.oraclecorp.com,.oraclevcn.com,10.0.1.0/24,10.0.0.0/24,.svc,/var/run/crio/c
rio.sock,10.96.0.0/12"
EOD
sudo mv /tmp/t /etc/systemd/system/crio.service.d/proxy.conf
if ! systemctl status docker.service 2>&1 | grep -q 'could not be found.'; then
 sudo systemctl disable --now docker.service
fi
if ! systemctl status containerd.service 2>&1 | grep -q 'could not be found.'; then
 sudo systemctl disable --now containerd.service
fi
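Before moving on, you can confirm the proxy drop-in is in place; this check uses only the path written above. If crio.service already exists on the node, a daemon-reload makes systemd pick up the new drop-in:

verify crio proxy drop-in
cat /etc/systemd/system/crio.service.d/proxy.conf
sudo systemctl daemon-reload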

Prepare Firewall for Internal Load Balancer on the Control Plane

control plane firewall for HA
sudo firewall-cmd --add-port=6444/tcp
sudo firewall-cmd --add-port=6444/tcp --permanent
sudo firewall-cmd --add-protocol=vrrp
sudo firewall-cmd --add-protocol=vrrp --permanent
scp /path/to/your/id_rsa ocneoperator.dm.com:/home/opc/.ssh/id_rsa
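To confirm that both the runtime and permanent rules took effect, assuming firewalld is running on the control plane node:

verify firewall rules
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-protocols
sudo firewall-cmd --permanent --list-ports
sudo firewall-cmd --permanent --list-protocols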

Generate X509 Certificates on the Operator Node

generate certificates 
if [ -f /home/opc/.OCNE/certificates/127.0.0.1:8091/node.cert ]; then
 # We could call
 #   OCNEctl --api-server 127.0.0.1:8091 environment report --environment-name myenvironment
 # which looks for this file. Skip that call and just check for the file directly; if we see it,
 # attempt to delete the existing environment:
 echo "signs of myenvironment seen, will try to delete it..."
 OCNEctl --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
else
 echo "no environment seen, as expected. But attempt the delete anyway, in case an environment exists that is not reflected in the file check above"
 OCNEctl --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
fi
cd /etc/OCNE
if systemctl status OCNE-api-server.service; then
 echo running OCNE-api-server.service seen, stopping it...
 sudo systemctl stop OCNE-api-server.service
else
 echo no running OCNE-api-server.service seen, as expected
fi
sudo ./gen-certs-helper.sh --cert-request-organization-unit "Paper Sales" --cert-request-organization "Dunder Mifflin" --cert-request-locality "Scranton" --cert-request-state "WA" --cert-request-country "US" --cert-request-common-name "dm.com" --nodes ocneoperator.dm.com,ocnecontrol.dm.com,ocneworker.dm.com,ocneworker2.dm.com,ocneworker3.dm.com
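To see what the helper generated, assuming it writes its output under /etc/OCNE/configs/certificates (the location the later steps read from):

list generated certificates
sudo find /etc/OCNE/configs/certificates -name '*.cert' -o -name '*.key' | sort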

Start Oracle Cloud Native Environment API Server on the Operator Node

start API server 
sudo bash -x /etc/OCNE/bootstrap-OCNE.sh --secret-manager-type file --OCNE-node-cert-path /etc/OCNE/configs/certificates/production/node.cert --OCNE-ca-path /etc/OCNE/configs/certificates/production/ca.cert --OCNE-node-key-path /etc/OCNE/configs/certificates/production/node.key --OCNE-component api-server
systemctl status OCNE-api-server.service

Start Platform Agents on Control Plane and Worker Nodes

start platform agents 
sudo /etc/OCNE/bootstrap-OCNE.sh --secret-manager-type file --OCNE-node-cert-path /etc/OCNE/configs/certificates/production/node.cert --OCNE-ca-path /etc/OCNE/configs/certificates/production/ca.cert --OCNE-node-key-path /etc/OCNE/configs/certificates/production/node.key --OCNE-component agent
systemctl status OCNE-agent.service

Verify Platform Agents Running

Verify that the platform agents are running on the control plane nodes and the worker nodes.

verify platform agents up 
ps auxww | grep /usr/libexec/OCNE-agent | grep -v grep > /tmp/kk.26597
if [ -s /tmp/kk.26597 ]; then
 echo "OK /usr/libexec/OCNE-agent running on `hostname`"
 if [ -n "" ]; then
 cat /tmp/kk.26597
 fi
else
 echo "FAIL /usr/libexec/OCNE-agent NOT running on `hostname`"
fi
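To run the same check across all nodes in one pass from the operator node, assuming passwordless SSH as the opc user (the key copied earlier) and the hostnames used elsewhere in this guide:

verify platform agents on all nodes
for h in ocnecontrol.dm.com ocneworker.dm.com ocneworker2.dm.com ocneworker3.dm.com; do
 if ssh opc@$h 'ps auxww | grep /usr/libexec/OCNE-agent | grep -v grep > /dev/null'; then
 echo "OK OCNE-agent running on $h"
 else
 echo "FAIL OCNE-agent NOT running on $h"
 fi
done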

Create Oracle Cloud Native Environment on Operator Node

If this is not a freshly allocated cluster, and an Oracle Cloud Native Environment had previously been created on the operator node, then delete that now.

optionally remove previous environment 
OCNEctl --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
create environment 
sudo chown -R opc:opc /etc/OCNE/configs
OCNEctl --api-server 127.0.0.1:8091 environment create --environment-name myenvironment --update-config --secret-manager-type file --OCNE-node-cert-path /etc/OCNE/configs/certificates/production/node.cert --OCNE-ca-path /etc/OCNE/configs/certificates/production/ca.cert --OCNE-node-key-path /etc/OCNE/configs/certificates/production/node.key
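To confirm the environment was created, ask the API server for a report (the same report command referenced in the certificate-generation step above):

verify environment
OCNEctl --api-server 127.0.0.1:8091 environment report --environment-name myenvironment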

Create Kubernetes Module

If this is not a freshly allocated cluster, and an Oracle Cloud Native Environment Kubernetes module had previously been created on the operator node, then remove that module.

For example:

optionally remove previous k8s module 
OCNEctl module uninstall --environment-name myenvironment --module kubernetes --name mycluster
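If you are not sure whether such a module exists, you can probe for it first with the report command used later in this guide; an error from the report simply means there is nothing to uninstall:

check for existing k8s module
OCNEctl module report --environment-name myenvironment --name mycluster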

Create the Kubernetes module on the operator node. For this command, we need to determine which network interface to use. In the example code below, we run ifconfig and pick the first non-loopback network interface. So, for example, if ifconfig produces the following output:

ifconfig output 
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 172.16.8.117  netmask 255.255.252.0  broadcast 172.16.11.255
        inet6 fe80::213:97ff:fe3c:8b34  prefixlen 64  scopeid 0x20<link>
        ether 00:13:97:3c:8b:34  txqueuelen 1000  (Ethernet)
        RX packets 2284  bytes 392817 (383.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1335  bytes 179539 (175.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 416 (416.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 416 (416.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Then the sed expression on the second line below will end up setting iface to be ens3.

For example:

create k8 module 
sudo chown -R opc:opc /etc/OCNE/configs/certificates/restrict_external_ip/production
iface=`ifconfig | sed -n -e '/^ /d' -e '/LOOPBACK/d' -e 's/:.*//p' | head -1`
# substitute the IP of your control node for CONTROL_NODE_IP below:
OCNEctl module create --environment-name myenvironment --module kubernetes --name mycluster --container-registry container-registry.oracle.com/OCNE --virtual-ip CONTROL_NODE_IP --master-nodes ocnecontrol.dm.com:8090 --worker-nodes ocneworker.dm.com:8090,ocneworker2.dm.com:8090,ocneworker3.dm.com:8090 --selinux enforcing --restrict-service-externalip-ca-cert /etc/OCNE/configs/certificates/restrict_external_ip/production/production/ca.cert --restrict-service-externalip-tls-cert /etc/OCNE/configs/certificates/restrict_external_ip/production/production/node.cert --restrict-service-externalip-tls-key /etc/OCNE/configs/certificates/restrict_external_ip/production/production/node.key --pod-network-iface $iface

Add Ingress Rule to Subnet

  1. In the Private Cloud Appliance interface, navigate to Dashboard/Virtual Cloud Networks/your_VCN/your_security_list.
  2. Add a rule for source 0.0.0.0/0 of type TCP, allowing a destination port range of 2379-10255 (see the sketch after this list).
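If you prefer to script this step and have the OCI CLI configured against your Private Cloud Appliance, a sketch along the following lines can be adapted; the security list OCID is a placeholder, and note that the update call replaces the entire ingress rule list, so the merged file must contain the existing rules as well as the new one:

add ingress rule with the OCI CLI (optional sketch)
# fetch the current ingress rules so they can be preserved (OCID is a placeholder):
oci network security-list get --security-list-id ocid1.securitylist...placeholder --query 'data."ingress-security-rules"' > current-rules.json
# edit current-rules.json into merged-rules.json, appending the new rule:
#   {"protocol": "6", "source": "0.0.0.0/0", "tcpOptions": {"destinationPortRange": {"min": 2379, "max": 10255}}}
# then apply the merged list (this REPLACES all ingress rules on the security list):
oci network security-list update --security-list-id ocid1.securitylist...placeholder --ingress-security-rules file://merged-rules.json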

Validate Kubernetes Module on Operator Node

validate k8 
OCNEctl module validate --environment-name myenvironment --name mycluster

Install Kubernetes Module on Operator Node

install k8 
OCNEctl module install --environment-name myenvironment --name mycluster

See Kubernetes Module Report on Operator Node

report on k8
OCNEctl module report --environment-name myenvironment --name mycluster

Show Kubernetes Nodes on Operator Node

Note: To use kubectl within your cluster, you need a kubeconfig on the node where you run it; one way to set this up is sketched below. You can then run kubectl:
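A minimal sketch, assuming you are on a control plane node (for example ocnecontrol.dm.com) and that the cluster's admin kubeconfig is in the standard kubeadm location, /etc/kubernetes/admin.conf; adjust the path if your installation places it elsewhere:

set up kubeconfig
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config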
run kubectl 
kubectl get nodes -o wide