Installing Oracle Cloud Native Environment
These steps describe how to install Oracle Cloud Native Environment on Oracle Private Cloud Appliance.
Installing the Oracle Cloud Native Environment API Server on the operator node
API server
if ! systemctl status olcne-api-server.service | grep 'Loaded: loaded'; then
echo "No platform olcne-api-server.service seen on `hostname`, so the way is clear to install it..."
pm_action=install
else
sudo systemctl stop olcne-api-server.service
pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne olcne-api-server olcne-utils
sudo systemctl enable olcne-api-server.service
Installing the Oracle Cloud Native Environment Platform Agents on the control plane and worker nodes
platform agents
if ! systemctl status olcne-agent.service | grep 'Loaded: loaded'; then
echo "No platform olcne-agent.service seen on `hostname`, so the way is clear to
install it..."
pm_action=install
else
sudo systemctl stop olcne-agent.service
pm_action=reinstall
fi
sudo dnf --best --setopt=keepcache=1 --allowerasing $pm_action -y olcne-agent olcne-utils
sudo systemctl enable olcne-agent.service
sudo mkdir -p /etc/systemd/system/crio.service.d
cat <<EOD > /tmp/t
[Service]
Environment="HTTP_PROXY=http://proxy-host:proxy-port"
Environment="HTTPS_PROXY=http://proxy-host:proxy-port"
Environment="NO_PROXY=localhost,127.0.0.1,. proxy-host.us.oracle.com,.oraclecorp.com,.oraclevcn.com,10.0.1.0/24,10.0.0.0/24,.svc,/var/run/crio/crio.sock,10.96.0.0/12"
EOD
sudo mv /tmp/t /etc/systemd/system/crio.service.d/proxy.conf
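If crio.service already exists on the node, systemd has to re-read its unit files before the new drop-in takes effect; reloading is harmless even when the service is not installed yet:
sudo systemctl daemon-reload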
if ! systemctl status docker.service 2>&1 | grep -l 'could not be found.' > /dev/null 2>&1; then
sudo systemctl disable --now docker.service
fi
if ! systemctl status containerd.service 2>&1 | grep -l 'could not be found.' > /dev/null 2>&1; then
sudo systemctl disable --now containerd.service
fi
Preparing the firewall for the internal load balancer on the control plane
control plane firewall for HA
sudo firewall-cmd --add-port=6444/tcp
sudo firewall-cmd --add-port=6444/tcp --permanent
sudo firewall-cmd --add-protocol=vrrp
sudo firewall-cmd --add-protocol=vrrp --permanent
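To confirm that both rules took effect, list the active configuration; 6444/tcp and vrrp should appear in the output:
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-protocols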
Copy your SSH private key to the operator node so that the certificates generated there in the next step can be transferred to the other nodes:
scp /path/to/your/id_rsa ocneoperator.dm.com:/home/opc/.ssh/id_rsa
Generating X.509 certificates on the operator node
generate certificates
if [ -f /home/opc/.olcne/certificates/127.0.0.1:8091/node.cert ]; then
# If we were to call
# olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment
# this file is the one that would be searched for. Skip the call and just check for the
# file; if we see it, attempt to delete the existing env:
echo "signs of myenvironment seen, will try to delete it..."
olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
else
echo "no environment seen, as expected. But will attempt to delete anyway to account for the case that the env is not reflected in that file check above"
olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
fi
cd /etc/olcne
if systemctl status olcne-api-server.service; then
echo running olcne-api-server.service seen, stopping it...
sudo systemctl stop olcne-api-server.service
else
echo no running olcne-api-server.service seen, as expected
fi
sudo ./gen-certs-helper.sh --cert-request-organization-unit "Paper Sales" --cert-request-organization "Dunder Mifflin" --cert-request-locality "Scranton" --cert-request-state "PA" --cert-request-country "US" --cert-request-common-name "dm.com" --nodes ocneoperator.dm.com,ocnecontrol.dm.com,ocneworker.dm.com,ocneworker2.dm.com,ocneworker3.dm.com
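Before bootstrapping, confirm that the node certificate, CA certificate, and key were written to the paths the bootstrap commands below reference:
ls -l /etc/olcne/configs/certificates/production/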
Starting the Oracle Cloud Native Environment API Server on the operator node
start API server
sudo bash -x /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
--olcne-component api-server
systemctl status olcne-api-server.service
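The API server listens on port 8091, the address that the olcne commands in the following steps use; a quick check that it is accepting connections (assumes ss from iproute2 is available):
ss -ltn | grep -w 8091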
Starting the Platform Agents on the control plane and worker nodes
start platform agents
sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-component agent \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
systemctl status olcne-agent.service
Verifying that the Platform Agents are running
Verify that the Platform Agents are running on the control plane (that is, the control nodes) and on the worker nodes.
verify platform agents up
ps auxww | grep /usr/libexec/olcne-agent | grep -v grep > /tmp/kk.26597
if [ -s /tmp/kk.26597 ]; then
echo "OK /usr/libexec/olcne-agent running on `hostname`"
if [ -n "" ]; then
cat /tmp/kk.26597
fi
else
echo "FAIL /usr/libexec/olcne-agent NOT running on `hostname`"
fi
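As a complement to the process check, each agent should be listening on port 8090, the port that the module create step below uses for every node; for example:
ss -ltn | grep -w 8090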
Creating the Oracle Cloud Native Environment on the operator node
If this is not a freshly provisioned cluster and an Oracle Cloud Native Environment was previously created on the operator node, delete it now.
#optionally remove previous environment
olcne --api-server 127.0.0.1:8091 environment delete --environment-name myenvironment
create environment
sudo chown -R opc:opc /etc/olcne/configs
olcne --api-server 127.0.0.1:8091 environment create --environment-name myenvironment \
--update-config --secret-manager-type file \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
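At this point the environment should be registered with the API server; the report subcommand mentioned in the certificate-generation step can confirm it:
olcne --api-server 127.0.0.1:8091 environment report --environment-name myenvironment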
Creating the Kubernetes module
If this is not a freshly provisioned cluster and an Oracle Cloud Native Environment Kubernetes module was previously created on the operator node, remove that module.
For example:
#optionally remove previous k8s module
olcne module uninstall --environment-name myenvironment --module kubernetes --name mycluster
Create the Kubernetes module on the operator node. For this command, we need to determine which network interface to use. In this sample code, we run ifconfig and select the first non-loopback network interface. For example, if ifconfig produces the following output:
ifconfig output
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 172.16.8.117 netmask 255.255.252.0 broadcast 172.16.11.255
inet6 fe80::213:97ff:fe3c:8b34 prefixlen 64 scopeid 0x20<link>
ether 00:13:97:3c:8b:34 txqueuelen 1000 (Ethernet)
RX packets 2284 bytes 392817 (383.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1335 bytes 179539 (175.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 6 bytes 416 (416.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 416 (416.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then the sed expression on the second line below will end up setting iface to ens3.
For example:
create k8s module
sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/production
iface=`ifconfig | sed -n -e '/^ /d' -e '/LOOPBACK/d' -e 's/:.*//p' | head -1`
# substitute the IP of your control node for CONTROL_NODE_IP below:
olcne module create --environment-name myenvironment --module kubernetes --name mycluster \
--container-registry container-registry.oracle.com/olcne --virtual-ip CONTROL_NODE_IP \
--master-nodes ocnecontrol.dm.com:8090 \
--worker-nodes ocneworker.dm.com:8090,ocneworker2.dm.com:8090,ocneworker3.dm.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/configs/certificates/restrict_external_ip/production/node.key \
--pod-network-iface $iface
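On hosts where net-tools (which provides ifconfig) is not installed, the same interface selection can be sketched with ip(8); this assumes, as above, that the first non-loopback interface is the one carrying the pod network:
# print the name of the first interface that is not lo, then stop
iface=`ip -o link show | awk -F': ' '$2 != "lo" {print $2; exit}'`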
Adding an ingress rule to a subnet
- In the Private Cloud Appliance interface, navigate to Dashboard/Virtual Cloud Networks/your_VCN/your_security_list.
- Add a rule for source 0.0.0.0/0 of type TCP, allowing a destination port range of 2379-10255 (a CLI sketch follows this list).
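Where the OCI-compatible CLI is configured against the appliance, an equivalent rule can be added from the command line. This is a sketch only: the security list OCID is hypothetical, and this call replaces the whole ingress rule list, so any existing rules must be included as well:
# protocol "6" is TCP in the OCI security rule model
oci network security-list update --security-list-id ocid1.securitylist.your_list_id \
  --ingress-security-rules '[{"protocol": "6", "source": "0.0.0.0/0", "tcpOptions": {"destinationPortRange": {"min": 2379, "max": 10255}}}]'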
Validating the Kubernetes module on the operator node
validate k8s
olcne module validate --environment-name myenvironment --name mycluster
Installing the Kubernetes module on the operator node
install k8s
olcne module install --environment-name myenvironment --name mycluster
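Once the install finishes, a common smoke test is to query the cluster from one of the control plane nodes. This sketch assumes the admin kubeconfig is at /etc/kubernetes/admin.conf, the standard location for kubeadm-based clusters such as the one the Kubernetes module deploys:
# copy the admin kubeconfig into the current user's home and query the cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes -o wide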