Install Oracle Cloud Native Environment

These steps describe how to install Oracle Cloud Native Environment on Oracle Private Cloud Appliance.

Install the Oracle Cloud Native Environment API Server on the operator node

API server 
if ! systemctl status olcne-api-server.service | grep 'Loaded: loaded'; then
 echo "No platform olcne-api-server.service seen on `hostname`, so the way is clear to install it..."
 pm_action=install
else
 sudo systemctl stop olcne-api-server.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne olcne-api-server olcne-utils
sudo systemctl enable olcne-api-server.service
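Before moving on, it can be worth confirming that the packages landed and the unit is registered; a minimal check (the service does not need to be running yet, only enabled):

verify API server installed 
systemctl is-enabled olcne-api-server.service
rpm -q olcne olcne-api-server olcne-utils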

Install the Oracle Cloud Native Environment platform agents on the control plane and worker nodes

platform agents 
if ! systemctl status olcne-agent.service | grep 'Loaded: loaded'; then
 echo "No platform olcne-agent.service seen on `hostname`, so the way is clear to 
install it..."
 pm_action=install
else
 sudo systemctl stop olcne-agent.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne-agent olcne-utils
sudo systemctl enable olcne-agent.service 
sudo mkdir -p /etc/systemd/system/crio.service.d
cat <<EOD > /tmp/t
[Service]
Environment="HTTP_PROXY=http://proxy-host:proxy-port"
Environment="HTTPS_PROXY=http://proxy-host:proxy-port"
Environment="NO_PROXY=localhost,127.0.0.1,. proxy-host.us.oracle.com,.oraclecorp.com,.oraclevcn.com,10.0.1.0/24,10.0.0.0/24,.svc,/var/run/crio/crio.sock,10.96.0.0/12"
EOD
sudo mv /tmp/t /etc/systemd/system/crio.service.d/proxy.conf
if ! systemctl status docker.service 2>&1 | grep 'could not be found.' > /dev/null 2>&1; then
 sudo systemctl disable --now docker.service
fi
if ! systemctl status containerd.service 2>&1 | grep 'could not be found.' > /dev/null 2>&1; then
 sudo systemctl disable --now containerd.service
fi
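systemd only reads the new crio drop-in after a daemon reload, so it is worth doing one now; a minimal sketch (note that crio itself is only installed later, by the Kubernetes module):

reload systemd for crio drop-in 
sudo systemctl daemon-reload
# crio is deployed later by the Kubernetes module; once it exists, confirm the
# drop-in was picked up with:
systemctl show crio.service --property=Environment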

Prepare the firewall for the internal load balancer on the control plane

control plane firewall for HA
sudo firewall-cmd --add-port=6444/tcp
sudo firewall-cmd --add-port=6444/tcp --permanent
sudo firewall-cmd --add-protocol=vrrp
sudo firewall-cmd --add-protocol=vrrp --permanent
scp /path/to/your/id_rsa ocneoperator.dm.com:/home/opc/.ssh/id_rsa
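The scp line above copies an SSH private key to the operator node, presumably so the operator can reach the other nodes during certificate generation and transfer; adjust the key path and host to your environment. To confirm the firewall changes took effect, a quick check (a sketch):

verify firewall rules 
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-protocols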

Generate X.509 certificates on the operator node

generate certificates 
if [ -f /home/opc/.olcne/certificates/127.0.0.1:8091/node.cert ]; then
 # If we were to call
 #   olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment
 # this is the file it would look for. Skip the call and just check for the file; if we
 # see it, attempt to delete the existing environment:
 echo "signs of myenvironment seen, will try to delete it..."
 olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
else
 echo "no environment seen, as expected. But will attempt to delete anyway to account for the case that the env is not reflected in that file check above"
 olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
fi
cd /etc/olcne
if systemctl status olcne-api-server.service; then
 echo "running olcne-api-server.service seen, stopping it..."
 sudo systemctl stop olcne-api-server.service
else
 echo "no running olcne-api-server.service seen, as expected"
fi
sudo ./gen-certs-helper.sh --cert-request-organization-unit "Paper Sales" --cert-request-organization "Dunder Mifflin" --cert-request-locality "Scranton" --cert-request-state "WA" --cert-request-country "US" --cert-request-common-name "dm.com" --nodes ocneoperator.dm.com,ocnecontrol.dm.com,ocneworker.dm.com,ocneworker2.dm.com,ocneworker3.dm.com
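The helper writes per-node certificates under /etc/olcne/configs/certificates/ on the operator node; the exact layout varies by release, but the bootstrap commands below expect ca.cert, node.cert, and node.key under the production/ directory on each node. A quick sanity check (a sketch):

verify generated certificates 
ls -lR /etc/olcne/configs/certificates/ | head -40
# expect ca.cert, node.cert, and node.key for each node named above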

Start the Oracle Cloud Native Environment API Server on the operator node

start API server 
sudo bash -x /etc/olcne/bootstrap-olcne.sh \
 --secret-manager-type file \
 --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
 --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
 --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
 --olcne-component api-server
systemctl status olcne-api-server.service

Start the platform agents on the control plane and worker nodes

start platform agents 
sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-component agent \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
systemctl status olcne-agent.service

Verify that the platform agents are running

Verify that the platform agents are running on the control plane (that is, the control nodes) and on the worker nodes.

verify platform agents up 
ps auxww | grep /usr/libexec/olcne-agent | grep -v grep > /tmp/kk.26597
if [ -s /tmp/kk.26597 ]; then
 echo "OK /usr/libexec/olcne-agent running on `hostname`"
 # set VERBOSE to a non-empty value to also dump the matching ps line
 if [ -n "${VERBOSE:-}" ]; then
  cat /tmp/kk.26597
 fi
else
 echo "FAIL /usr/libexec/olcne-agent NOT running on `hostname`"
fi
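Rather than logging in to each machine, the same check can be driven from the operator node over SSH; a minimal sketch, where the node list is a placeholder for your own hosts:

verify agents from the operator node 
for h in ocnecontrol.dm.com ocneworker.dm.com ocneworker2.dm.com ocneworker3.dm.com; do
 if ssh "$h" systemctl is-active --quiet olcne-agent.service; then
  echo "OK olcne-agent active on $h"
 else
  echo "FAIL olcne-agent NOT active on $h"
 fi
done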

Create the Oracle Cloud Native Environment on the operator node

If this is not a freshly allocated cluster and an Oracle Cloud Native Environment was previously created on the operator node, delete it now.

#optionally remove previous environment 
olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
create environment 
sudo chown -R opc:opc /etc/olcne/configs
olcne --api-server 127.0.0.1:8091 environment create --environment-name myenvironment \
 --update-config --secret-manager-type file \
 --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
 --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
 --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
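To confirm the environment exists before adding modules, the report subcommand already mentioned in the certificate step above can be used (a sketch):

verify environment 
olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment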

Create the Kubernetes module

If this is not a freshly allocated cluster and an Oracle Cloud Native Environment Kubernetes module was previously created on the operator node, remove that module.

For example:

#optionally remove previous k8s module 
olcne module uninstall --environment-name myenvironment --module kubernetes --name mycluster

Create the Kubernetes module on the operator node. For this command you need to determine which network interface to use. In this example code, we run ifconfig and select the first non-loopback network interface. For example, if ifconfig produces the following output:

ifconfig output 
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
 inet 172.16.8.117 netmask 255.255.252.0 broadcast 172.16.11.255
 inet6 fe80::213:97ff:fe3c:8b34 prefixlen 64 scopeid 0x20<link>
 ether 00:13:97:3c:8b:34 txqueuelen 1000 (Ethernet)
 RX packets 2284 bytes 392817 (383.6 KiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 1335 bytes 179539 (175.3 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1000 (Local Loopback)
 RX packets 6 bytes 416 (416.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 6 bytes 416 (416.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Then the sed expression on the second line below ends up setting iface to ens3.

For example:

create k8 module 
sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/production
iface=$(ifconfig | sed -n -e '/^ /d' -e '/LOOPBACK/d' -e 's/:.*//p' | head -1)
# substitute the IP of your control node for CONTROL_NODE_IP below:
olcne module create --environment-name myenvironment --module kubernetes --name mycluster \
 --container-registry container-registry.oracle.com/olcne \
 --virtual-ip CONTROL_NODE_IP \
 --master-nodes ocnecontrol.dm.com:8090 \
 --worker-nodes ocneworker.dm.com:8090,ocneworker2.dm.com:8090,ocneworker3.dm.com:8090 \
 --selinux enforcing \
 --restrict-service-externalip-ca-cert /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
 --restrict-service-externalip-tls-cert /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
 --restrict-service-externalip-tls-key /etc/olcne/configs/certificates/restrict_external_ip/production/node.key \
 --pod-network-iface $iface
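If module create complains about the pod network, first confirm what the sed pipeline actually selected. Also note that ifconfig comes from the net-tools package, which is not installed everywhere; a sketch of a sanity check plus an iproute2 alternative:

sanity-check iface 
echo "pod network interface: $iface"  # expect ens3 for the sample output above
# iproute2 alternative if ifconfig is unavailable:
iface=$(ip -o link show | awk -F': ' '$2 != "lo" {print $2; exit}')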

Add an ingress rule to the subnet

  1. In the Private Cloud Appliance interface, navigate to Dashboard/Virtual Cloud Networks/your_VCN/your_security_list.
  2. Add a rule for source 0.0.0.0/0 of type TCP, allowing a destination port range of 2379-10255 (see the CLI sketch after this list).
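If you prefer the command line, Private Cloud Appliance exposes an OCI-compatible CLI; a hypothetical equivalent, where the security list OCID is a placeholder and the update replaces the entire ingress rule list (so include any existing rules):

add ingress rule via CLI 
oci network security-list update \
 --security-list-id ocid1.securitylist...your_security_list \
 --ingress-security-rules '[{"source": "0.0.0.0/0", "protocol": "6",
  "tcpOptions": {"destinationPortRange": {"min": 2379, "max": 10255}}}]'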

Validate the Kubernetes module on the operator node

validate k8 
olcne module validate --environment-name myenvironment --name mycluster

Install the Kubernetes module on the operator node

install k8 
olcne module install --environment-name myenvironment --name mycluster

View the Kubernetes module report on the operator node

report on k8 
olcne module report --environment-name myenvironment --name mycluster

Show the Kubernetes nodes from the operator node

Note: To use kubectl within your cluster, you first need the cluster's kubeconfig on the node where you run it.
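One way to fetch it from the operator node is through the module's kubecfg property; a minimal sketch, assuming the property name used by recent Oracle Cloud Native Environment releases:

fetch kubeconfig 
olcne module property get --environment-name myenvironment --name mycluster \
 --property kubecfg | base64 -d > $HOME/kubeconfig.yaml
export KUBECONFIG=$HOME/kubeconfig.yaml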
run kubectl 
kubectl get nodes -o wide