Install Oracle Cloud Native Environment

These steps describe how to install Oracle Cloud Native Environment on Oracle Private Cloud Appliance.

Install the Oracle Cloud Native Environment API Server on the Operator Node

API server 
if ! systemctl status olcne-api-server.service | grep 'Loaded: loaded'; then
 echo "No platform olcne-api-server.service seen on `hostname`, so the way is clear to install it..."
 pm_action=install
else
 sudo systemctl stop olcne-api-server.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne olcne-api-server olcne-utils
sudo systemctl enable olcne-api-server.service
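Before moving on, you can optionally confirm that the packages landed and the service is enabled. This sanity check is not part of the original procedure and assumes an rpm-based Oracle Linux host:

```shell
# Confirm the olcne packages installed by the dnf step above are present:
rpm -q olcne olcne-api-server olcne-utils
# Should print "enabled" once the systemctl enable step has run:
systemctl is-enabled olcne-api-server.service
```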

Install Oracle Cloud Native Environment Platform Agents on the Control Plane and Worker Nodes

platform agents 
if ! systemctl status olcne-agent.service | grep 'Loaded: loaded'; then
 echo "No platform olcne-agent.service seen on `hostname`, so the way is clear to install it..."
 pm_action=install
else
 sudo systemctl stop olcne-agent.service
 pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne-agent olcne-utils
sudo systemctl enable olcne-agent.service 
sudo mkdir -p /etc/systemd/system/crio.service.d
cat <<EOD > /tmp/t
[Service]
Environment="HTTP_PROXY=http://proxy-host:proxy-port"
Environment="HTTPS_PROXY=http://proxy-host:proxy-port"
Environment="NO_PROXY=localhost,127.0.0.1,.proxy-host.us.oracle.com,.oraclecorp.com,.oraclevcn.com,10.0.1.0/24,10.0.0.0/24,.svc,/var/run/crio/crio.sock,10.96.0.0/12"
EOD
sudo mv /tmp/t /etc/systemd/system/crio.service.d/proxy.conf
if ! systemctl status docker.service 2>&1 | grep 'could not be found.' > /dev/null 2>&1; then
 sudo systemctl disable --now docker.service
fi
if ! systemctl status containerd.service 2>&1 | grep 'could not be found.' > /dev/null 2>&1; then
 sudo systemctl disable --now containerd.service
fi
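systemd only picks up new drop-in files such as proxy.conf after its unit files are reloaded, a step the block above does not show. A likely-needed follow-up (an assumption on my part; verify against your environment) is:

```shell
# Reload unit files so the new crio.service.d/proxy.conf drop-in is seen,
# then print the environment the crio service will actually get:
sudo systemctl daemon-reload
systemctl show crio.service --property=Environment
```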

Prepare the Firewall for the Internal Load Balancer on the Control Plane

control plane firewall for HA
sudo firewall-cmd --add-port=6444/tcp
sudo firewall-cmd --add-port=6444/tcp --permanent
sudo firewall-cmd --add-protocol=vrrp
sudo firewall-cmd --add-protocol=vrrp --permanent
scp /path/to/your/id_rsa ocneoperator.dm.com:/home/opc/.ssh/id_rsa
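You can optionally confirm the firewall changes took effect; this check is not in the original steps:

```shell
# Expect 6444/tcp in the ports list and vrrp in the protocols list:
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-protocols
```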

Generate X509 Certificates on the Operator Node

generate certificates 
if [ -f /home/opc/.olcne/certificates/127.0.0.1:8091/node.cert ]; then
 # We could call
 # olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment
 # which searches for this file. Skip the call and just check for the file; if we see it, attempt to delete the existing environment:
 echo "signs of myenvironment seen, will try to delete it..."
 olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
else
 echo "no environment seen, as expected. But attempt the delete anyway, in case the environment is not reflected in the file check above"
 olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
fi
cd /etc/olcne
if systemctl status olcne-api-server.service; then
 echo running olcne-api-server.service seen, stopping it...
 sudo systemctl stop olcne-api-server.service
else
 echo no running olcne-api-server.service seen, as expected
fi
sudo ./gen-certs-helper.sh --cert-request-organization-unit "Paper Sales" --cert-request-organization "Dunder Mifflin" --cert-request-locality "Scranton" --cert-request-state "WA" --cert-request-country "US" --cert-request-common-name "dm.com" --nodes olcneoperator.dm.com,olcnecontrol.dm.com,olcneworker.dm.com,olcneworker2.dm.com,olcneworker3.dm.com
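Once gen-certs-helper.sh finishes, you can optionally inspect a generated certificate with openssl; the path below is taken from the bootstrap steps that follow and may differ in your layout:

```shell
# Print the subject and validity window of the generated node certificate:
openssl x509 -noout -subject -dates -in /etc/olcne/configs/certificates/production/node.cert
```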

Start the Oracle Cloud Native Environment API Server on the Operator Node

start API server 
sudo bash -x /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
--olcne-component api-server
systemctl status olcne-api-server.service

Start Platform Agents on the Control Plane and Worker Nodes

start platform agents 
sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-component agent \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
systemctl status olcne-agent.service

Verify That the Platform Agents Are Running

Verify that the platform agents are running on the control plane (that is, the control nodes) and on the worker nodes.

verify platform agents up 
ps auxww | grep /usr/libexec/olcne-agent | grep -v grep > /tmp/kk.26597
if [ -s /tmp/kk.26597 ]; then
 echo "OK /usr/libexec/olcne-agent running on `hostname`"
 if [ -n "$VERBOSE" ]; then # set VERBOSE=1 beforehand to also print the matching ps lines
 cat /tmp/kk.26597
 fi
else
 echo "FAIL /usr/libexec/olcne-agent NOT running on `hostname`"
fi
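A complementary check is to look for the agent's listening socket. This is an assumption on my part: the module-create step below addresses the agents on TCP port 8090, so something should be listening there on each node:

```shell
# Hypothetical check: the platform agent should be listening on TCP 8090,
# the port used when the Kubernetes module is created.
if ss -ltn 2>/dev/null | grep -q ':8090 '; then
 echo "OK something is listening on 8090 on `hostname`"
else
 echo "WARN nothing listening on 8090 on `hostname`"
fi
```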

Create the Oracle Cloud Native Environment on the Operator Node

If this is not a freshly allocated cluster and an Oracle Cloud Native Environment was previously created on the operator node, delete it now.

#optionally remove previous environment 
olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
create environment 
sudo chown -R opc:opc /etc/olcne/configs
olcne --api-server 127.0.0.1:8091 environment create --environment-name myenvironment \
--update-config --secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
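After creating the environment, you can confirm the API server sees it with the report subcommand (the same call mentioned in the certificate-generation step above):

```shell
# Should list myenvironment and its current state:
olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment
```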

Create the Kubernetes Module

If this is not a freshly allocated cluster and an Oracle Cloud Native Environment Kubernetes module was previously created on the operator node, remove that module.

For example:

#optionally remove previous k8s module 
olcne module uninstall --environment-name myenvironment --module kubernetes --name mycluster

Create the Kubernetes module on the operator node. For this command, we need to determine which network interface to use. In this example code, we run ifconfig and choose the first non-loopback network interface. So, for example, if ifconfig produces the following output:

ifconfig output 
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
 inet 172.16.8.117 netmask 255.255.252.0 broadcast 172.16.11.255
 inet6 fe80::213:97ff:fe3c:8b34 prefixlen 64 scopeid 0x20<link>
 ether 00:13:97:3c:8b:34 txqueuelen 1000 (Ethernet)
 RX packets 2284 bytes 392817 (383.6 KiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 1335 bytes 179539 (175.3 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1000 (Local Loopback)
 RX packets 6 bytes 416 (416.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 6 bytes 416 (416.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Then the sed expression on the second line below will end up setting iface to ens3.

For example:

create k8 module 
sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/production
iface=`ifconfig | sed -n -e '/^ /d' -e '/LOOPBACK/d' -e 's/:.*//p' | head -1`
# substitute the IP of your control node for CONTROL_NODE_IP below:
olcne module create --environment-name myenvironment --module kubernetes --name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip CONTROL_NODE_IP \
--master-nodes ocnecontrol.dm.com:8090 \
--worker-nodes ocneworker.dm.com:8090,ocneworker2.dm.com:8090,ocneworker3.dm.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/configs/certificates/restrict_external_ip/production/node.key \
--pod-network-iface $iface
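As a quick sanity check (not part of the procedure), the sed pipeline above can be exercised against the sample ifconfig output to confirm it extracts ens3:

```shell
# Demonstration only (assumption: POSIX shell and sed): a trimmed copy of
# the sample ifconfig output shown earlier.
sample='ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
 inet 172.16.8.117 netmask 255.255.252.0 broadcast 172.16.11.255
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0'
# Delete indented detail lines and loopback entries, strip everything after
# the interface name, and keep the first remaining interface:
iface=$(printf '%s\n' "$sample" | sed -n -e '/^ /d' -e '/LOOPBACK/d' -e 's/:.*//p' | head -1)
echo "$iface"   # prints ens3
```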

Add an Ingress Rule to the Subnet

  1. In the Private Cloud Appliance interface, navigate to Dashboard/Virtual Cloud Networks/your_VCN/your_security_list.
  2. Add a rule for source 0.0.0.0/0 of type TCP, allowing a destination port range of 2379-10255.

Validate the Kubernetes Module on the Operator Node

validate k8 
olcne module validate --environment-name myenvironment --name mycluster

Install the Kubernetes Module on the Operator Node

install k8 
olcne module install --environment-name myenvironment --name mycluster

View the Kubernetes Module Report on the Operator Node

report on k8 
olcne module report --environment-name myenvironment --name mycluster

Show Kubernetes Nodes on the Operator Node

Note: To use kubectl within your cluster:
run kubectl 
kubectl get nodes -o wide