The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Chapter 3 Installing Oracle Cloud Native Environment
This chapter discusses how to prepare the nodes to be used in an Oracle Cloud Native Environment deployment. When the nodes are prepared, they must be installed with the Oracle Cloud Native Environment software packages. When the nodes are set up with the software, you can use the Platform CLI to perform a deployment of a Kubernetes cluster and optionally a service mesh.
This chapter shows you how to perform the steps to set up the hosts and install the Oracle Cloud Native Environment software, ready to perform a deployment of modules. When you have set up the nodes, deploy the Kubernetes module to install a Kubernetes cluster using the steps in Container Orchestration.
3.1 Installation Overview
This section provides a high-level overview of setting up Oracle Cloud Native Environment.
- Prepare the operator node: An operator node is a host that is used to perform and manage the deployment of environments. The operator node must be set up with the Platform API Server and the Platform CLI (olcnectl).
- Prepare the Kubernetes nodes: The Kubernetes control plane and worker nodes must be set up with the Platform Agent.
- Set up a load balancer: If you are deploying a highly available Kubernetes cluster, set up a load balancer. You can set up your own load balancer, or use the container-based load balancer deployed by the Platform CLI.
- Set up X.509 Certificates: X.509 Certificates are used to provide secure communication between the Kubernetes nodes. You must set up the certificates before you create an environment and perform a deployment.
- Start the services: Start the Platform API Server and Platform Agent services on nodes using the X.509 Certificates.
- Create an environment: Create an environment into which you can install the Kubernetes module and any other optional modules.
- Deploy modules: Deploy the Kubernetes module and any other optional modules.
3.2 Setting up the Nodes
This section discusses setting up nodes to use in an Oracle Cloud Native Environment. The nodes are used to form a Kubernetes cluster.
An operator node should be used to perform the deployment of the Kubernetes cluster using the Platform CLI and the Platform API Server. An operator node may be a node in the Kubernetes cluster, or a separate host. In examples in this book, the operator node is a separate host, and not part of the Kubernetes cluster.
On each Kubernetes node (both control plane and worker nodes) the Platform Agent must be installed. Before you set up the Kubernetes nodes, you must prepare them. For information on preparing the nodes, see Chapter 2, Oracle Cloud Native Environment Prerequisites.
During the installation of the required packages, an olcne user is created. This user is used to start the Platform API Server or Platform Agent services and has the minimum operating system privileges required to perform that task. The olcne user should not be used for any other purpose.
3.2.1 Setting up the Operator Node
This section discusses setting up the operator node. The operator node is a host that is used to perform and manage the deployment of environments, including deploying the Kubernetes cluster.
- On the operator node, install the Platform CLI, Platform API Server, and utilities.
On Oracle Linux 7 enter:
sudo yum install olcnectl olcne-api-server olcne-utils
On Oracle Linux 8 enter:
sudo dnf install olcnectl olcne-api-server olcne-utils
- Enable the olcne-api-server service, but do not start it. The olcne-api-server service is started when you configure the X.509 Certificates.
sudo systemctl enable olcne-api-server.service
For information on configuration options for the Platform API Server, see Section 4.1, “Configuring the Platform API Server”.
3.2.2 Setting up Kubernetes Nodes
This section discusses setting up the nodes to use in a Kubernetes cluster. Perform these steps on both Kubernetes control plane and worker nodes.
- On each node to be added to the Kubernetes cluster, install the Platform Agent package and utilities.
On Oracle Linux 7 enter:
sudo yum install olcne-agent olcne-utils
On Oracle Linux 8 enter:
sudo dnf install olcne-agent olcne-utils
- Enable the olcne-agent service, but do not start it. The olcne-agent service is started when you configure the X.509 Certificates.
sudo systemctl enable olcne-agent.service
For information on configuration options for the Platform Agent, see Section 4.2, “Configuring the Platform Agent”.
- If you use a proxy server, configure it with CRI-O. On each Kubernetes node, create a CRI-O systemd configuration directory:
sudo mkdir /etc/systemd/system/crio.service.d
Create a file named proxy.conf in the directory, and add the proxy server information. For example:
[Service]
Environment="HTTP_PROXY=proxy.example.com:3128"
Environment="HTTPS_PROXY=proxy.example.com:3128"
Environment="NO_PROXY=mydomain.example.com"
" -
If the
docker
service is running, stop and disable it.sudo systemctl disable --now docker.service
- If the containerd service is running, stop and disable it.
sudo systemctl disable --now containerd.service
3.3 Setting up a Load Balancer for Highly Available Clusters
A highly available (HA) cluster needs a load balancer to provide high availability of control plane nodes. A load balancer communicates with the Kubernetes API server on the control plane nodes.
There are two methods of setting up a load balancer to create an HA cluster:
- Using your own external load balancer instance
- Using the load balancer that can be deployed by the Platform CLI on the control plane nodes
3.3.1 Setting up Your Own Load Balancer
If you want to use your own load balancer implementation, it must be set up and ready to use before you perform an HA cluster deployment. The load balancer hostname and port are entered as options when you create the Kubernetes module. The load balancer should be set up with the following configuration:
- The listener listening on TCP port 6443.
- The distribution set to round robin.
- The target set to TCP port 6443 on the control plane nodes.
- The health check set to TCP.
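For example, with HAProxy (one possible implementation; this documentation does not mandate a particular load balancer) the configuration above might look like the following sketch. The control plane hostnames are illustrative, and the file is staged locally rather than written to /etc/haproxy/haproxy.cfg directly.

```shell
# Sketch: an HAProxy configuration matching the settings above
# (TCP listener on 6443, round robin distribution, TCP health checks,
# targets on TCP port 6443 of the control plane nodes).
cat > haproxy-k8s-api.cfg <<'EOF'
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-control-plane

backend k8s-control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server control1 control1.example.com:6443 check
    server control2 control2.example.com:6443 check
    server control3 control3.example.com:6443 check
EOF
```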
For more information on setting up your own load balancer, see the Oracle® Linux 7: Administrator's Guide, or Oracle® Linux 8: Setting Up Load Balancing.
If you are deploying to Oracle Cloud Infrastructure, set up a load balancer.
- Create a load balancer.
- Add a backend set to the load balancer using weighted round robin. Set the health check to be TCP port 6443.
- Add the control plane nodes to the backend set. Set the port for the control plane nodes to port 6443.
- Create a listener for the backend set using TCP port 6443.
For more information on setting up a load balancer in Oracle Cloud Infrastructure, see the Oracle Cloud Infrastructure documentation.
3.3.2 Setting up the In-built Load Balancer
If you want to use the in-built load balancer that can be deployed by the Platform CLI, you need to perform the following steps to prepare the control plane nodes. These steps should be performed on each control plane node.
- Set up the control plane nodes as described in Section 3.2.2, “Setting up Kubernetes Nodes”.
- Nominate a virtual IP address that can be used for the primary control plane node. This IP address must not be in use on any node, and is assigned dynamically to the control plane node assigned as the primary controller by the load balancer. If the primary node fails, the load balancer reassigns the virtual IP address to another control plane node, and that node, in turn, becomes the primary node. The virtual IP address used in examples in this documentation is 192.0.2.100.
- Open port 6444. When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443, receives the requests, and passes them to the Kubernetes API server.
sudo firewall-cmd --add-port=6444/tcp
sudo firewall-cmd --add-port=6444/tcp --permanent
- Enable the Virtual Router Redundancy Protocol (VRRP):
sudo firewall-cmd --add-protocol=vrrp
sudo firewall-cmd --add-protocol=vrrp --permanent
3.4 Setting up X.509 Certificates for Kubernetes Nodes
Communication between the Kubernetes nodes is secured using X.509 certificates.
Before you deploy Kubernetes, you need to configure the X.509 certificates used to manage the communication between the nodes. There are a number of ways to manage and deploy the certificates. You can use:
- Vault: The certificates are managed using the HashiCorp Vault secrets manager. Certificates are created during the deployment of the Kubernetes module. You need to create a token authentication method for Oracle Cloud Native Environment.
- CA Certificates: Use your own certificates, signed by a trusted Certificate Authority (CA), and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually.
- Private CA Certificates: Use generated certificates, signed by a private CA you set up, and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually. A script is provided to help you set this up.
A software-based secrets manager is recommended to manage these certificates. The HashiCorp Vault secrets manager can be used to generate, assign and manage the certificates. Oracle recommends you implement your own instance of Vault, setting up the appropriate security for your environment.
For more information on installing and setting up Vault, see the HashiCorp documentation at:
https://learn.hashicorp.com/vault/operations/ops-deployment-guide
If you do not want to use Vault, you can use your own certificates, signed by a trusted CA, and copied to each node. A script is provided to generate a private CA which allows you to generate certificates for each node. This script also gives you the commands needed to copy the certificates to the nodes.
3.4.1 Setting up Vault Authentication
To configure Vault for use with Oracle Cloud Native Environment, set up a Vault token with the following properties:
- A PKI secret engine with a CA certificate or intermediate, located at olcne_pki_intermediary.
- A role under that PKI, named olcne, configured to not require a common name, and to allow any name.
- A token authentication method and policy that attaches to the olcne role and can request certificates.
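The requirements above can be sketched with the Vault CLI. This is a minimal, hedged outline only: the policy name olcne-policy is an assumption, the PKI engine still needs a CA certificate generated or imported under the mount (omitted here), and the vault commands only run where the CLI is installed and VAULT_ADDR and VAULT_TOKEN point at your instance.

```shell
# Sketch of the Vault setup described above. The policy name is an
# assumption; the mount path and role name come from the list above.
cat > olcne-policy.hcl <<'EOF'
path "olcne_pki_intermediary/issue/olcne" {
  capabilities = ["create", "update"]
}
EOF

# Guarded so the sketch is a no-op where the Vault CLI is absent.
if command -v vault >/dev/null 2>&1; then
  # PKI secrets engine at the expected mount path. A CA certificate or
  # intermediate must also be set up under this mount (not shown).
  vault secrets enable -path=olcne_pki_intermediary pki
  # Role that does not require a common name and allows any name.
  vault write olcne_pki_intermediary/roles/olcne \
      require_cn=false allow_any_name=true
  # Token authentication: a policy that can request certificates,
  # and a token attached to it.
  vault policy write olcne-policy olcne-policy.hcl
  vault token create -policy=olcne-policy
fi
```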
For information on setting up the Vault PKI secrets engine to generate dynamic X.509 certificates, see:
https://www.vaultproject.io/docs/secrets/pki/index.html
For information on creating Vault tokens, see the Vault documentation on token authentication.
3.4.2 Setting up CA Certificates
This section shows you how to use your own certificates, signed by a trusted CA, without using a secrets manager such as Vault. To use your own certificates, copy them to all Kubernetes nodes, and to the Platform API Server node.
To make sure the Platform Agent on each Kubernetes node, and the Platform API Server, have access to certificates, make sure you copy them into the /etc/olcne/certificates/ directory on each node. The path to the certificates is used when setting up the Platform Agent and Platform API Server, and when creating an environment.
The examples in this book use the /etc/olcne/configs/certificates/production/ directory for certificates.
For example:
- CA Certificate: /etc/olcne/configs/certificates/production/ca.cert
- Node Key: /etc/olcne/configs/certificates/production/node.key
- Node Certificate: /etc/olcne/configs/certificates/production/node.cert
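If you just want placeholder files with this layout for testing (this is not the gen-certs-helper.sh method described in Section 3.4.3, and is not suitable for production), a throwaway CA and node certificate can be sketched with openssl. The subject names below are illustrative.

```shell
# Sketch (testing only): a throwaway private CA and a node
# certificate signed by it, producing the three files listed above.
mkdir -p production

# Private CA key and self-signed CA certificate.
openssl genrsa -out production/ca.key 2048
openssl req -x509 -new -key production/ca.key -days 365 \
    -subj "/CN=example-ca" -out production/ca.cert

# Node key, certificate signing request, and CA-signed certificate.
openssl genrsa -out production/node.key 2048
openssl req -new -key production/node.key \
    -subj "/CN=operator.example.com" -out production/node.csr
openssl x509 -req -in production/node.csr \
    -CA production/ca.cert -CAkey production/ca.key \
    -CAcreateserial -days 365 -out production/node.cert

# The node certificate should verify against the CA.
openssl verify -CAfile production/ca.cert production/node.cert
```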
3.4.3 Setting up Private CA Certificates
This section shows you how to create a private CA, and use that to generate signed certificates for the nodes. This section also contains information on copying the certificates to the nodes. Additionally this section contains information on generating additional certificates for nodes that you want to scale into a Kubernetes cluster.
3.4.3.1 Creating and Copying Certificates
This section shows you how to create a private CA, and use that to generate signed certificates for the nodes.
- (Optional) You can set up keyless SSH between the operator node and the Kubernetes nodes to make it easier to copy the certificates to the nodes. For information on setting up keyless SSH, see Oracle® Linux: Connecting to Remote Systems With OpenSSH.
- Use the /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes.

Tip: The gen-certs-helper.sh script saves the certificate files to the directory from which you run the script. The gen-certs-helper.sh script also creates a script you can use to copy the certificates to each Kubernetes node (olcne-transfer-certs.sh). If you run the gen-certs-helper.sh script from the /etc/olcne directory, it uses the default certificate directory used in this book (/etc/olcne/certificates/) when creating the olcne-transfer-certs.sh script. This means you can start up the Platform API Server, and the Platform Agent on Kubernetes nodes, using the default certificate directory locations as shown in this book. You could also use the --cert-dir option to specify the location to save the certificates and transfer script.

Provide the nodes for which you want to create certificates using the --nodes option. You should create a certificate for each node that runs the Platform API Server or Platform Agent, that is, for the operator node and each Kubernetes node. If you are deploying a highly available Kubernetes cluster using a virtual IP address, you do not need to create a certificate for the virtual IP address.

Provide the private CA information using the --cert-request options (some, but not all, of these options are shown in the example). You can get a list of all command options using the gen-certs-helper.sh --help command.

For example:

cd /etc/olcne
sudo ./gen-certs-helper.sh \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name cloud.example.com \
  --nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com,\
worker3.example.com
The certificates and keys for each node are generated and saved to the directory:

/path/configs/certificates/tmp-olcne/node/

Where path is the directory from which you ran the gen-certs-helper.sh script, or the location you set with the --cert-dir option; and node is the name of the node for which the certificate was generated.

The private CA certificate and key files are saved to the directory:

/path/configs/certificates/production/
Copy the certificate generated for a node from the
/
directory to that node.path
/configs/certificates/tmp-olcne/node
/To make sure the Platform Agent on each Kubernetes node, and the Platform API Server have access to certificates, make sure you copy them into the
/etc/olcne/certificates/
directory on each node. The path to the certificates is used when setting up the Platform Agent and Platform API Server, and when creating an environment.The examples in this book use the
/etc/olcne/configs/certificates/production/
directory as the location for certificates on nodes.A script is created to help you copy the certificates to the nodes,
/
. You can use this script and modify it to suit your needs, or transfer the certificates to the nodes using some other method.path
/configs/certificates/olcne-transfer-certs.shImportantIf you set up keyless SSH, change the
USER
variable in this script to the user you set up with keyless SSH.Run the script to copy the certificates to the nodes:
bash -ex /
path
/configs/certificates/olcne-tranfer-certs.sh -
Make sure the
olcne
user on each node that runs the Platform API Server or Platform Agent is able to read the directory in which you copy the certificates. If you used the default path for certificates of/etc/olcne/certificates/
, theolcne
user has read access.If you used a different path, check the
olcne
user can read the certificate path. On the operator node, and each Kubernetes node, run:sudo -u olcne ls /
path
/configs/certificates/production
ca.cert node.cert node.keyYou should see a list of the certificates and key for the node.
3.4.3.2 Creating Additional Certificates
This section contains information about generating certificates for any additional nodes that you want to add to a Kubernetes cluster.
- On the operator node, generate new certificates for the nodes using the /etc/olcne/gen-certs-helper.sh script. For example:

cd /etc/olcne
sudo ./gen-certs-helper.sh \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name cloud.example.com \
  --nodes control4.example.com,control5.example.com \
  --byo-ca-cert /path/configs/certificates/production/ca.cert \
  --byo-ca-key /path/configs/certificates/production/ca.key

The private key used to generate the new certificates is specified with the --byo-ca-key option, and the CA certificate with the --byo-ca-cert option. In this example, the private CA certificate and key files are located in the directory:

/path/configs/certificates/production/

Where path is the directory from which you originally ran the gen-certs-helper.sh script (most likely /etc/olcne), or the location you set with the --cert-dir option if you used that option.
When you have generated the new certificates, copy them to the nodes. A script is created to help you copy the certificates to the nodes,
/
. You can use this script and modify it to suit your needs, or transfer the certificates to the nodes using some other method.path
/configs/certificates/olcne-transfer-certs.shRun the script to copy the certificates to the nodes:
bash -ex /
path
/configs/certificates/olcne-tranfer-certs.sh
3.5 Setting up X.509 Certificates for the externalIPs Kubernetes Service
You do not need to perform the steps in this section if you are using Oracle Cloud Native Environment Release 1.2.0. The setup steps in this section are for Release 1.2.2 or later.
When you deploy Kubernetes, a service is deployed to the cluster that controls access to externalIPs in Kubernetes services. The service is named externalip-validation-webhook-service and runs in the externalip-validation-system namespace.

This Kubernetes service requires X.509 certificates be set up prior to deploying Kubernetes. You can use Vault to generate the certificates, or use your own certificates for this purpose. You can also generate certificates using the gen-certs-helper.sh script. The certificates must be available on the operator node. The examples in this book use the /etc/olcne/configs/certificates/restrict_external_ip/production/ directory for these certificates.
3.5.1 Setting up Vault Certificates
You can use Vault to generate certificates for the externalIPs Kubernetes service. The Vault instance must be configured in the same way as described in Section 3.4.1, “Setting up Vault Authentication”.

You need to generate certificates for two nodes, named:

externalip-validation-webhook-service.externalip-validation-system.svc
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local

The certificate information should be generated in PEM format.
For example:
vault write olcne_pki_intermediary/issue/olcne \
  alt_names=externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
  format=pem_bundle
The output is displayed. Look for the section that starts with certificate. This section contains the certificates for the node names (set with the alt_names option). Save the output in this section to a file named node.cert. The file should look something like:
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIID2TCCAsGgAwIBAgIUZ/M/D7bAjhyGx7DivsjBb9oeLhAwDQYJKoZIhvcNAQEL
...
9bRwnen+JrxUn4GV59GtsTiqzY6R2OKPm+zLl8E=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----
Look for the section that starts with issuing_ca. This section contains the CA certificate. Save the output in this section to a file named ca.cert. The file should look something like:
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----
Look for the section that starts with private_key. This section contains the private key for the node certificates. Save the output in this section to a file named node.key. The file should look something like:
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
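The three sections can also be extracted mechanically from saved output. The following is a hedged sketch, assuming the key and both certificates have been saved to one file (bundle.pem, an illustrative name) in the order shown above; a shortened stand-in bundle is created here so the extraction can be demonstrated.

```shell
# Sketch: split a saved PEM bundle into node.key, node.cert, and
# ca.cert. Assumes one private key followed by the node certificate
# and then the CA certificate, as in the example output above.

# Stand-in bundle with truncated base64 content, for illustration.
cat > bundle.pem <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIID2TCCAsGgAwIBAgIUZ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUM
-----END CERTIFICATE-----
EOF

# The private key block.
awk '/BEGIN RSA PRIVATE KEY/,/END RSA PRIVATE KEY/' bundle.pem > node.key
# The first certificate block (the node certificate).
awk '/BEGIN CERTIFICATE/{n++} n==1' bundle.pem > node.cert
# The second certificate block (the issuing CA).
awk '/BEGIN CERTIFICATE/{n++} n==2' bundle.pem > ca.cert
```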
Copy the three files (node.cert, ca.cert, and node.key) to the operator node and set the ownership of the files as described in Section 3.5.2, “Setting up CA Certificates”.
3.5.2 Setting up CA Certificates
If you are using your own certificates, you should copy them to a directory under /etc/olcne/certificates/ on the operator node. For example:
- CA Certificate: /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert
- Node Key: /etc/olcne/configs/certificates/restrict_external_ip/production/node.key
- Node Certificate: /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert
You should copy these certificates to a different location on the operator node than the certificates and keys used for the Kubernetes nodes as set up in Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”. This makes sure you do not overwrite those certificates and keys. You need to generate certificates for two nodes, named:
externalip-validation-webhook-service.externalip-validation-system.svc
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local
The certificates for these two nodes should be saved as a single file named node.cert.
Make sure the permissions of the output directory where the certificates are located allow it to be read by the user on the operator node that you intend to use to run the olcnectl commands to install Kubernetes. In this example the opc user is used on the operator node, so ownership of the directory is set to the opc user:
sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/
3.5.3 Setting up Private CA Certificates
You can use the gen-certs-helper.sh script to generate the certificates. Run the script on the operator node and enter the options required for your environment.

The --cert-dir option sets the location where the certificates are to be saved.

The --nodes option must be set to the name of the Kubernetes service, as shown:

--nodes externalip-validation-webhook-service.externalip-validation-system.svc,externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local

Use the --one-cert option to save the certificates for the two service names to a single file.
cd /etc/olcne
sudo ./gen-certs-helper.sh \
  --cert-dir /etc/olcne/configs/certificates/restrict_external_ip/ \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name cloud.example.com \
  --nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
  --one-cert
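To confirm that a single certificate covers both service names, you can inspect its Subject Alternative Names. The sketch below creates a throwaway certificate with openssl (1.1.1 or later, for the -addext option) purely to demonstrate the check; for a real deployment, inspect the node.cert produced by gen-certs-helper.sh with --one-cert instead.

```shell
# Sketch: a throwaway certificate carrying both service names as
# Subject Alternative Names, then a check that openssl reports them.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout webhook.key -out webhook.cert \
    -subj "/CN=externalip-validation-webhook-service" \
    -addext "subjectAltName=DNS:externalip-validation-webhook-service.externalip-validation-system.svc,DNS:externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local"

# Both DNS names should appear in the output.
openssl x509 -in webhook.cert -noout -text | grep -A1 'Subject Alternative Name'
```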
You can use the same CA certificate and private key you used to generate the Kubernetes node certificates by using the --byo-ca-cert and --byo-ca-key options. For example, add the following lines to the command:

--byo-ca-cert /path/configs/certificates/production/ca.cert \
--byo-ca-key /path/configs/certificates/production/ca.key
Make sure the permissions of the output directory where the certificates are located allow it to be read by the user on the operator node that you intend to use to run the olcnectl commands to install Kubernetes. In this example the opc user is used on the operator node, so ownership of the directory is set to the opc user:
sudo chown -R opc:opc /etc/olcne/configs/certificates/restrict_external_ip/
3.6 Starting the Platform API Server and Platform Agent Services
This section discusses using certificates to set up secure communication between the Platform API Server and the Platform Agent on nodes in the cluster. You can set up secure communication using certificates managed by Vault, or using your own certificates copied to each node. You must configure the Platform API Server and the Platform Agent to use the certificates when you start the services.
For information on setting up the certificates with Vault, see Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”.
For information on creating a private CA to sign certificates that can be used during testing, see Section 3.4.3, “Setting up Private CA Certificates”.
3.6.1 Starting the Services Using Vault
This section shows you how to set up the Platform API Server and Platform Agent services to use certificates managed by Vault.
- On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to retrieve and use a Vault certificate. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200 \
  --force-download-certs \
  --olcne-component api-server

The certificates are generated and downloaded from Vault. By default, the certificates are saved to the /etc/olcne/certificates/ directory. You can alternatively specify a path for the certificates, for example, by including the following options in the bootstrap-olcne.sh command:

--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:
systemctl status olcne-api-server.service
- On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to retrieve and use a certificate. For example:

sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200 \
  --force-download-certs \
  --olcne-component agent
The certificates are generated and downloaded from Vault.
The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:
systemctl status olcne-agent.service
3.6.2 Starting the Services Using Certificates
This section shows you how to set up the Platform API Server and
Platform Agent services to use your own certificates, which
have been copied to each node. This example assumes the
certificates are available on all nodes in the
/etc/olcne/configs/certificates/production/
directory.
- On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to use the certificates. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component api-server
The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:
systemctl status olcne-api-server.service
- On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to use the certificates. For example:

sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component agent
The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:
systemctl status olcne-agent.service
3.7 Creating an Environment
The first step to creating a Kubernetes cluster is to create an environment. You can create multiple environments, with each environment potentially containing multiple modules. Naming each environment and module makes it easier to manage the deployed components of Oracle Cloud Native Environment.
You should not use the same node in more than one environment.
Use the olcnectl environment create command on the operator node to create an environment. For more information on the syntax for the olcnectl environment create command, see Platform Command-Line Interface.
This section shows you how to create an environment using Vault, and using your own certificates copied to the file system on each node. For information on setting up X.509 certificates, see Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”.
3.7.1 Creating an Environment using Certificates Managed by Vault
This section shows you how to create an environment using Vault to provide and manage the certificates.
On the operator node, use the olcnectl environment create command to create an environment. For example, to create an environment named myenvironment using certificates generated from a Vault instance located at https://192.0.2.20:8200:

olcnectl --api-server 127.0.0.1:8091 environment create \
  --environment-name myenvironment \
  --update-config \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200
The --update-config option saves the certificate generated by Vault on the local host. When you use this option, you do not need to enter the certificate information again when managing the environment. This option also saves the connection information for the Platform API Server. You do not need to provide this information again when connecting to the environment. That is, the next time you connect to the environment you do not need to provide the --api-server option. For more information on setting the Platform API Server see Platform Command-Line Interface.
The --secret-manager-type option sets the certificate manager to Vault. Replace --vault-token with the token to access Vault. Replace --vault-address with the location of your Vault instance.
By default, the certificate is saved to $HOME/.olcne/certificates/environment_name/. If you want to specify a different location to save the certificate, use the --olcne-node-cert-path, --olcne-ca-path, and --olcne-node-key-path options. For example, add the following options to the olcnectl environment create command:

--olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
--olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
--olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
3.7.2 Creating an Environment using Certificates
This section shows you how to create an environment using your own certificates, copied to each node. This example assumes the certificates are available on all nodes in the /etc/olcne/configs/certificates/production/ directory.
On the operator node, create the environment using the olcnectl environment create command. For example:
olcnectl --api-server 127.0.0.1:8091 environment create \
  --environment-name myenvironment \
  --update-config \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key
The --update-config option saves the connection information for the Platform API Server. You do not need to provide this information again when connecting to the environment. That is, the next time you connect to the environment you do not need to provide the --api-server option. For more information on setting the Platform API Server see Platform Command-Line Interface.

The --secret-manager-type option sets the certificate manager to use file-based certificates.
You can optionally set the location for the certificate files using environment variables; olcnectl uses these if they are set.
The environment variables map to the olcnectl environment create command options:
- $OLCNE_SM_CERT_PATH sets the value used with the --olcne-node-cert-path option.
- $OLCNE_SM_CA_PATH sets the value used with the --olcne-ca-path option.
- $OLCNE_SM_KEY_PATH sets the value used with the --olcne-node-key-path option.
For example:
export OLCNE_SM_CA_PATH=/etc/olcne/configs/certificates/production/ca.cert
export OLCNE_SM_CERT_PATH=/etc/olcne/configs/certificates/production/node.cert
export OLCNE_SM_KEY_PATH=/etc/olcne/configs/certificates/production/node.key
olcnectl --api-server 127.0.0.1:8091 environment create \
  --environment-name myenvironment \
  --update-config \
  --secret-manager-type file
3.8 Installing Modules
After you create an environment, you can add any modules you want to the environment.
3.8.1 Creating a Kubernetes Module
A base installation requires a Kubernetes module which is used to create a Kubernetes cluster. For information on creating and installing a Kubernetes module, see Container Orchestration.
3.8.2 Creating an Istio Module
When you have created and installed a Kubernetes module, you can optionally install a service mesh using the Istio module. For information on installing the Istio module to create a service mesh, see Service Mesh.
3.8.3 Creating an Operator Lifecycle Manager Module
When you have created and installed a Kubernetes module, you can optionally install the Operator Lifecycle Manager module to manage the installation and lifecycle management of operators in a Kubernetes cluster. For information on installing the Operator Lifecycle Manager module, see Container Orchestration.