Chapter 3 Installing Oracle Cloud Native Environment

This chapter discusses how to prepare the nodes to be used in an Oracle Cloud Native Environment deployment. After the nodes are prepared, install the Oracle Cloud Native Environment software packages on them. When the nodes are set up with the software, you can use the Platform CLI to deploy a Kubernetes cluster and, optionally, a service mesh.

This chapter shows you how to set up the hosts and install the Oracle Cloud Native Environment software so that the nodes are ready for a deployment of modules. When you have set up the nodes, deploy the Kubernetes module to install a Kubernetes cluster using the steps in Container Orchestration.

3.1 Installation Overview

This section provides a high-level overview of setting up Oracle Cloud Native Environment.

To install Oracle Cloud Native Environment:
  1. Prepare the operator node: An operator node is a host that is used to perform and manage the deployment of environments. The operator node must be set up with the Platform API Server, and the Platform CLI (olcnectl).

  2. Prepare the Kubernetes nodes: The Kubernetes control plane and worker nodes must be set up with the Platform Agent.

  3. Set up a load balancer: If you are deploying a highly available Kubernetes cluster, set up a load balancer. You can set up your own load balancer, or use the container-based load balancer deployed by the Platform CLI.

  4. Set up X.509 Certificates: X.509 Certificates are used to provide secure communication between the Kubernetes nodes. You must set up the certificates before you create an environment and perform a deployment.

  5. Start the services: Start the Platform API Server and Platform Agent services on nodes using the X.509 Certificates.

  6. Create an environment: Create an environment into which you can install the Kubernetes module and any other optional modules.

  7. Deploy modules: Deploy the Kubernetes module and any other optional modules.

3.2 Setting up the Nodes

This section discusses setting up nodes to use in an Oracle Cloud Native Environment. The nodes are used to form a Kubernetes cluster.

An operator node should be used to perform the deployment of the Kubernetes cluster using the Platform CLI and the Platform API Server. An operator node may be a node in the Kubernetes cluster, or a separate host. In examples in this book, the operator node is a separate host, and not part of the Kubernetes cluster.

On each Kubernetes node (both control plane and worker nodes) the Platform Agent must be installed. Before you set up the Kubernetes nodes, you must prepare them. For information on preparing the nodes, see Chapter 2, Oracle Cloud Native Environment Prerequisites.

During the installation of the required packages, an olcne user is created. This user is used to start the Platform API Server or Platform Agent services and has the minimum operating system privileges needed to perform that task. Do not use the olcne user for any other purpose.
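
For example, after the packages are installed you can confirm that this service account exists. This is an optional check, not a required installation step:

id olcne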

3.2.1 Setting up the Operator Node

This section discusses setting up the operator node. The operator node is a host that is used to perform and manage the deployment of environments, including deploying the Kubernetes cluster.

To set up the operator node:
  1. On the operator node, install the Platform CLI, Platform API Server, and utilities.

    On Oracle Linux 8 enter:

    sudo dnf install olcnectl olcne-api-server olcne-utils

    On Oracle Linux 7 enter:

    sudo yum install olcnectl olcne-api-server olcne-utils
  2. Enable the olcne-api-server service, but do not start it. The olcne-api-server service is started when you configure the X.509 Certificates.

    sudo systemctl enable olcne-api-server.service 

    For information on configuration options for the Platform API Server, see Section 4.1, “Configuring the Platform API Server”.

3.2.2 Setting up Kubernetes Nodes

This section discusses setting up the nodes to use in a Kubernetes cluster. Perform these steps on both Kubernetes control plane and worker nodes.

To set up the Kubernetes nodes:
  1. On each node to be added to the Kubernetes cluster, install the Platform Agent package and utilities.

    On Oracle Linux 8 enter:

    sudo dnf install olcne-agent olcne-utils

    On Oracle Linux 7 enter:

    sudo yum install olcne-agent olcne-utils
  2. Enable the olcne-agent service, but do not start it. The olcne-agent service is started when you configure the X.509 Certificates.

    sudo systemctl enable olcne-agent.service 

    For information on configuration options for the Platform Agent, see Section 4.2, “Configuring the Platform Agent”.

  3. If you use a proxy server, configure it with CRI-O. On each Kubernetes node, create a CRI-O systemd configuration directory:

    sudo mkdir /etc/systemd/system/crio.service.d

    Create a file named proxy.conf in that directory and add the proxy server information (a scripted way to create this file is shown after this procedure). For example:

    [Service]
    Environment="HTTP_PROXY=proxy.example.com:3128"
    Environment="HTTPS_PROXY=proxy.example.com:3128"
    Environment="NO_PROXY=mydomain.example.com"
  4. If the docker service is running, stop and disable it.

    sudo systemctl disable --now docker.service
  5. If the containerd service is running, stop and disable it.

    sudo systemctl disable --now containerd.service
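
If you prefer to script step 3, the following is a minimal sketch that creates the CRI-O proxy drop-in file in one command. The proxy host, port, and NO_PROXY values are placeholders; substitute the values for your environment:

sudo mkdir -p /etc/systemd/system/crio.service.d
printf '%s\n' '[Service]' \
    'Environment="HTTP_PROXY=proxy.example.com:3128"' \
    'Environment="HTTPS_PROXY=proxy.example.com:3128"' \
    'Environment="NO_PROXY=mydomain.example.com"' \
    | sudo tee /etc/systemd/system/crio.service.d/proxy.conf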

3.3 Setting up a Load Balancer for Highly Available Clusters

A highly available (HA) cluster needs a load balancer to provide high availability of control plane nodes. A load balancer communicates with the Kubernetes API server on the control plane nodes.

There are two methods of setting up a load balancer to create an HA cluster:

  • Using your own external load balancer instance

  • Using the load balancer that can be deployed by the Platform CLI on the control plane nodes

3.3.1 Setting up Your Own Load Balancer

If you want to use your own load balancer implementation, it should be set up and ready to use before you perform an HA cluster deployment. The load balancer hostname and port are entered as options when you create the Kubernetes module. Set up the load balancer with the following configuration (a minimal example configuration follows the list):

  • The listener listening on TCP port 6443.

  • The distribution set to round robin.

  • The target set to TCP port 6443 on the control plane nodes.

  • The health check set to TCP.
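
For example, the following is a minimal HAProxy configuration sketch that implements this layout. It assumes you choose HAProxy as the load balancer and that the control plane nodes are control1.example.com, control2.example.com, and control3.example.com; adjust the server entries and timeouts for your environment:

defaults
    mode tcp
    timeout connect 10s
    timeout client 1m
    timeout server 1m

frontend kubernetes-api
    bind *:6443
    default_backend control-plane-nodes

backend control-plane-nodes
    balance roundrobin
    option tcp-check
    server control1 control1.example.com:6443 check
    server control2 control2.example.com:6443 check
    server control3 control3.example.com:6443 check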

For more information on setting up your own load balancer, see Oracle® Linux 8: Setting Up Load Balancing, or Oracle® Linux 7: Administrator's Guide.

If you are deploying to Oracle Cloud Infrastructure, set up a load balancer using the Oracle Cloud Infrastructure Load Balancer service.

To set up a load balancer on Oracle Cloud Infrastructure:
  1. Create a load balancer.

  2. Add a backend set to the load balancer using weighted round robin. Set the health check to be TCP port 6443.

  3. Add the control plane nodes to the backend set. Set the port for the control plane nodes to port 6443.

  4. Create a listener for the backend set using TCP port 6443.

For more information on setting up a load balancer in Oracle Cloud Infrastructure, see the Oracle Cloud Infrastructure documentation.

3.3.2 Setting up the In-built Load Balancer

If you want to use the in-built load balancer that can be deployed by the Platform CLI, you need to perform the following steps to prepare the control plane nodes. These steps should be performed on each control plane node.

To prepare control plane nodes for the load balancer deployed by the Platform CLI:
  1. Set up the control plane nodes as described in Section 3.2.2, “Setting up Kubernetes Nodes”.

  2. Nominate a virtual IP address that can be used for the primary control plane node. This IP address should not be in use on any node and is assigned dynamically to the control plane node assigned as the primary controller by the load balancer. If the primary node fails, the load balancer reassigns the virtual IP address to another control plane node, and that, in turn, becomes the primary node. The virtual IP address used in examples in this documentation is 192.0.2.100.

    Tip

    If you are deploying to Oracle Cloud Infrastructure virtual instances, you can assign a secondary private IP address to the VNIC on a control plane node to create a virtual IP address. Make sure you list this control plane node first when creating the Kubernetes module. For more information on secondary private IP addresses, see the Oracle Cloud Infrastructure documentation.

  3. Open port 6444. When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443, receives the requests, and passes them to the Kubernetes API server on port 6444.

    sudo firewall-cmd --add-port=6444/tcp
    sudo firewall-cmd --add-port=6444/tcp --permanent
  4. Enable the Virtual Router Redundancy Protocol (VRRP) protocol:

    sudo firewall-cmd --add-protocol=vrrp
    sudo firewall-cmd --add-protocol=vrrp --permanent

3.4 Setting up X.509 Certificates for Kubernetes Nodes

Communication between the Kubernetes nodes is secured using X.509 certificates.

Before you deploy Kubernetes, you need to configure the X.509 certificates used to manage the communication between the nodes. There are a number of ways to manage and deploy the certificates. You can use:

  • Vault: The certificates are managed using the HashiCorp Vault secrets manager. Certificates are created during the deployment of the Kubernetes module. You need to create a token authentication method for Oracle Cloud Native Environment.

  • CA Certificates: Use your own certificates, signed by a trusted Certificate Authority (CA), and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually.

  • Private CA Certificates: Use generated certificates, signed by a private CA that you set up, and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually. A script is provided to help you set this up.

A software-based secrets manager is recommended to manage these certificates. The HashiCorp Vault secrets manager can be used to generate, assign and manage the certificates. Oracle recommends you implement your own instance of Vault, setting up the appropriate security for your environment.

For more information on installing and setting up Vault, see the HashiCorp documentation at:

https://learn.hashicorp.com/vault/operations/ops-deployment-guide

If you do not want to use Vault, you can use your own certificates, signed by a trusted CA, and copied to each node. A script is provided to generate a private CA which allows you to generate certificates for each node. This script also gives you the commands needed to copy the certificates to the nodes.

3.4.1 Setting up Vault Authentication

To configure Vault for use with Oracle Cloud Native Environment, set up a Vault token with the following properties:

  • A PKI secrets engine with a CA certificate or intermediate, located at olcne_pki_intermediary.

  • A role under that PKI, named olcne, configured to not require a common name, and allow any name.

  • A token authentication method and policy that attaches to the olcne role and can request certificates.
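
The following is a hedged sketch of Vault CLI commands that set up these properties. It assumes a new PKI secrets engine with an internally generated CA; in production you would typically use an intermediate CA signed by your existing root, and you should adjust the TTLs and the policy name (olcne-policy here) to suit your environment:

vault secrets enable -path=olcne_pki_intermediary pki
vault secrets tune -max-lease-ttl=87600h olcne_pki_intermediary
vault write olcne_pki_intermediary/root/generate/internal \
    common_name="OLCNE CA" ttl=87600h
vault write olcne_pki_intermediary/roles/olcne \
    require_cn=false allow_any_name=true
printf '%s\n' 'path "olcne_pki_intermediary/issue/olcne" {' \
    '  capabilities = ["create", "update"]' '}' > olcne-policy.hcl
vault policy write olcne-policy olcne-policy.hcl
vault token create -policy=olcne-policy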

For information on setting up the Vault PKI secrets engine to generate dynamic X.509 certificates, see:

https://www.vaultproject.io/docs/secrets/pki/index.html

For information on creating Vault tokens, see:

https://www.vaultproject.io/docs/commands/token/create.html

3.4.2 Setting up CA Certificates

This section shows you how to use your own certificates, signed by a trusted CA, without using a secrets manager such as Vault. To use your own certificates, copy them to all Kubernetes nodes, and to the Platform API Server node.

To make sure the Platform Agent on each Kubernetes node and the Platform API Server have access to the certificates, copy them into the /etc/olcne/certificates/ directory on each node. The path to the certificates is used when setting up the Platform Agent and Platform API Server, and when creating an environment.

The examples in this book use the /etc/olcne/certificates/ directory for certificates. For example:

  • CA Certificate: /etc/olcne/certificates/ca.cert

  • Node Key: /etc/olcne/certificates/node.key

  • Node Certificate: /etc/olcne/certificates/node.cert
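
For example, the following is a minimal sketch of copying the files from the operator node to each node over SSH. The host names and the per-host source file names ($host.cert and $host.key) are placeholders, and the final chown is one way to make sure the olcne user that runs the Platform services can read the files:

for host in operator.example.com control1.example.com worker1.example.com; do
    scp ca.cert $host.cert $host.key $host:/tmp/
    ssh $host sudo mkdir -p /etc/olcne/certificates
    ssh $host sudo mv /tmp/ca.cert /etc/olcne/certificates/ca.cert
    ssh $host sudo mv /tmp/$host.cert /etc/olcne/certificates/node.cert
    ssh $host sudo mv /tmp/$host.key /etc/olcne/certificates/node.key
    ssh $host sudo chown -R olcne:olcne /etc/olcne/certificates
done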

3.4.3 Setting up Private CA Certificates

This section shows you how to create a private CA and use it to generate signed certificates for the nodes, how to copy the certificates to the nodes, and how to generate additional certificates for any nodes you later scale into a Kubernetes cluster.

3.4.3.1 Creating and Copying Certificates

This section shows you how to create a private CA, and use that to generate signed certificates for the nodes.

To generate certificates using a private CA:
  1. (Optional) You can set up keyless SSH between the operator node and the Kubernetes nodes to make it easier to copy the certificates to the nodes. For information on setting up keyless SSH, see Oracle® Linux: Connecting to Remote Systems With OpenSSH.

  2. Use the /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes.

    The gen-certs-helper.sh script saves the certificate files to the directory from which you run the script. The gen-certs-helper.sh script also creates a script you can use to copy the certificates to each Kubernetes node (olcne-tranfer-certs.sh). If you run the gen-certs-helper.sh script from the /etc/olcne directory, it uses the directory /etc/olcne/configs/certificates/ to save generated files.

    Note

    You can optionally use the --cert-dir option to specify the location to save the certificates and transfer script. If you use the --cert-dir option, make sure you change the path in this section to the path you specify.

    Provide the nodes for which you want to create certificates using the --nodes option. You should create a certificate for each node that runs the Platform API Server or Platform Agent. That is, for the operator node, and each Kubernetes node. If you are deploying a highly available Kubernetes cluster using a virtual IP address, you do not need to create a certificate for a virtual IP address.

    Provide the private CA information using the --cert-request* options (some, but not all, of these options are shown in the example). You can get a list of all command options using the gen-certs-helper.sh --help command.

    For example:

    cd /etc/olcne
    sudo ./gen-certs-helper.sh \
    --cert-request-organization-unit "My Company Unit" \
    --cert-request-organization "My Company" \
    --cert-request-locality "My Town" \
    --cert-request-state "My State" \
    --cert-request-country US \
    --cert-request-common-name cloud.example.com \
    --nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com,\
    worker3.example.com

    The certificates and keys for each node are generated and saved to the directory:

    /etc/olcne/configs/certificates/tmp-olcne/node/

    Where node is the name of the node for which the certificate was generated.

    The private CA certificate and key files are saved to the directory:

    /etc/olcne/configs/certificates/production/

  3. Copy the certificate generated for a node from the /etc/olcne/configs/certificates/tmp-olcne/node/ directory to that node.

    The examples in this book use the /etc/olcne/certificates/ directory as the location for certificates on nodes. This is the recommended location for the certificates on nodes. The path to the certificates is used when setting up the Platform Agent or Platform API Server on each node, and when creating an environment.

    A script is created to help you copy the certificates to the nodes, /etc/olcne/configs/certificates/olcne-tranfer-certs.sh. You can use this script and modify it to suit your needs, or transfer the certificates to the nodes using some other method.

    Important

    If you set up keyless SSH, change the USER variable in the olcne-tranfer-certs.sh script to the user you set up with keyless SSH.

    Run the script to copy the certificates to the nodes:

    bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh

    This script copies the certificates for each node to the following directory on nodes:

    /etc/olcne/configs/certificates/production/

    Important

    If you use the olcne-tranfer-certs.sh script to copy the certificate files, they are copied to a different directory than is used in examples in this documentation.

    Make sure you use this path (/etc/olcne/configs/certificates/production/) when starting the Platform API Server and Platform Agent services, and when creating an environment. This path differs from the standard path of /etc/olcne/certificates/ which is used in examples in this documentation.

  4. Make sure the olcne user on each node that runs the Platform API Server or Platform Agent is able to read the directory in which you copy the certificates. If you used the default path for certificates of /etc/olcne/certificates/, the olcne user has read access.

    If you used a different path, check the olcne user can read the certificate path. On the operator node, and each Kubernetes node, run:

    sudo -u olcne ls /etc/olcne/configs/certificates/production/
    ca.cert node.cert node.key

    You should see a list of the certificates and key for the node.

3.4.3.2 Creating Additional Certificates

This section contains information about generating certificates for any additional nodes that you want to add to a Kubernetes cluster. This section shows you how to generate additional certificates using the /etc/olcne/gen-certs-helper.sh script on the operator node.

To generate additional certificates using a private CA:
  1. On the operator node, generate new certificates for the nodes using the /etc/olcne/gen-certs-helper.sh script. For example:

    cd /etc/olcne
    sudo ./gen-certs-helper.sh \
    --cert-request-organization-unit "My Company Unit" \
    --cert-request-organization "My Company" \
    --cert-request-locality "My Town" \
    --cert-request-state "My State" \
    --cert-request-country US \
    --cert-request-common-name cloud.example.com \
    --nodes control4.example.com,control5.example.com \
    --byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
    --byo-ca-key /etc/olcne/configs/certificates/production/ca.key

    The CA private key used to sign the new certificates is specified with the --byo-ca-key option, and the CA certificate with the --byo-ca-cert option. In this example, the private CA certificate and key files are located in the directory:

    /etc/olcne/configs/certificates/production/

    The location may be different if you used the --cert-dir option of the gen-certs-helper.sh script when creating the original certificates.

  2. When you have generated the new certificates, copy them to the nodes. A script is created to help you copy the certificates to the nodes, olcne-tranfer-certs.sh. You can use this script and modify it to suit your needs, or transfer the certificates to the nodes using some other method.

    Run the script to copy the certificates to the nodes:

    bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh

3.5 Setting up X.509 Certificates for the externalIPs Kubernetes Service

When you deploy Kubernetes, a service is deployed to the cluster that controls access to externalIPs in Kubernetes services. The service is named externalip-validation-webhook-service and runs in the externalip-validation-system namespace. This Kubernetes service requires X.509 certificates be set up prior to deploying Kubernetes. You can use Vault to generate the certificates, or use your own certificates for this purpose. You can also generate certificates using the gen-certs-helper.sh script. The certificates must be available on the operator node.

The examples in this book use the /etc/olcne/certificates/restrict_external_ip/ directory for these certificates.

3.5.1 Setting up Vault Certificates

You can use Vault to generate certificates for the externalIPs Kubernetes service. The Vault instance must be configured in the same way as described in Section 3.4.1, “Setting up Vault Authentication”.

You need to generate certificates for two nodes, named:

externalip-validation-webhook-service.externalip-validation-system.svc

externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local

The certificate information should be generated in PEM format.

For example:

vault write olcne_pki_intermediary/issue/olcne \
    alt_names=externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
    format=pem_bundle

The output is displayed. Look for the section that starts with certificate. This section contains the certificates for the node names (set with the alt_names option). Save the output in this section to a file named node.cert. The file should look something like:

-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIID2TCCAsGgAwIBAgIUZ/M/D7bAjhyGx7DivsjBb9oeLhAwDQYJKoZIhvcNAQEL
...
9bRwnen+JrxUn4GV59GtsTiqzY6R2OKPm+zLl8E=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----

Look for the section that starts with issuing_ca. This section contains the CA certificate. Save the output in this section to a file named ca.cert. The file should look something like:

-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----

Look for the section that starts with private_key. This section contains the private key for the node certificates. Save the output in this section to a file named node.key. The file should look something like:

-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
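
If you prefer not to copy these sections by hand, a hedged alternative is to request JSON output from Vault and extract the fields named above. This sketch assumes the jq utility is installed on the host where you run the command:

vault write -format=json olcne_pki_intermediary/issue/olcne \
    alt_names=externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
    format=pem_bundle > response.json
jq -r '.data.certificate' response.json > node.cert
jq -r '.data.issuing_ca' response.json > ca.cert
jq -r '.data.private_key' response.json > node.key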

Copy the three files (node.cert, ca.cert and node.key) to the operator node and set the ownership of the files as described in Section 3.5.2, “Setting up CA Certificates”.

3.5.2 Setting up CA Certificates

If you are using your own certificates, you should copy them to a directory under /etc/olcne/certificates/ on the operator node. For example:

  • CA Certificate: /etc/olcne/certificates/restrict_external_ip/ca.cert

  • Node Key: /etc/olcne/certificates/restrict_external_ip/node.key

  • Node Certificate: /etc/olcne/certificates/restrict_external_ip/node.cert

You should copy these certificates to a different location on the operator node than the certificates and keys used for the Kubernetes nodes as set up in Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”. This makes sure you do not overwrite those certificates and keys. You need to generate certificates for two nodes, named:

externalip-validation-webhook-service.externalip-validation-system.svc

externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local

The certificates for these two nodes should be saved in a single file named node.cert.

Make sure the directory where the certificates are located can be read by the user on the operator node that you intend to use to run the olcnectl commands to install Kubernetes. In this example, the opc user is used on the operator node, so ownership of the directory is set to the opc user:

sudo chown -R opc:opc /etc/olcne/certificates/restrict_external_ip/

3.5.3 Setting up Private CA Certificates

You can use the gen-certs-helper.sh script to generate the certificates. Run the script on the operator node and enter the options required for your environment.

The --cert-dir option sets the location where the certificates are to be saved.

The --nodes option must be set to the name of the Kubernetes service, as shown:

--nodes externalip-validation-webhook-service.externalip-validation-system.svc,externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local

Use the --one-cert option to save the certificates for the two service names to a single file.

cd /etc/olcne
sudo ./gen-certs-helper.sh \
--cert-dir /etc/olcne/certificates/restrict_external_ip/ \
--cert-request-organization-unit "My Company Unit" \
--cert-request-organization "My Company" \
--cert-request-locality "My Town" \
--cert-request-state "My State" \
--cert-request-country US \
--cert-request-common-name cloud.example.com \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--one-cert

You can use the same CA certificate and private key you used to generate the Kubernetes node certificates by using the --byo-ca-cert and --byo-ca-key options. For example, if you used the gen-certs-helper.sh script to generate the node certificates, add the following lines to the command:

--byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
--byo-ca-key /etc/olcne/configs/certificates/production/ca.key

In this example, the certificates are created and located in the directory:

/etc/olcne/certificates/restrict_external_ip/production

Make sure the output directory where the certificates are located can be read by the user on the operator node that you intend to use to run the olcnectl commands to install Kubernetes. In this example, the opc user is used on the operator node, so ownership of the directory is set to the opc user. For example:

sudo chown -R opc:opc /path/

If you used the gen-certs-helper.sh script as shown in this section, run:

sudo chown -R opc:opc /etc/olcne/certificates/restrict_external_ip/production

3.6 Starting the Platform API Server and Platform Agent Services

This section discusses using certificates to set up secure communication between the Platform API Server and the Platform Agent on nodes in the cluster. You can set up secure communication using certificates managed by Vault, or using your own certificates copied to each node. You must configure the Platform API Server and the Platform Agent to use the certificates when you start the services.

For information on setting up the certificates with Vault, see Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”.

For information on creating a private CA to sign certificates that can be used during testing, see Section 3.4.3, “Setting up Private CA Certificates”.

3.6.1 Starting the Services Using Vault

This section shows you how to set up the Platform API Server and Platform Agent services to use certificates managed by Vault.

To set up and start the services using Vault:
  1. On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to retrieve and use a Vault certificate. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

    sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type vault \
    --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
    --vault-address https://192.0.2.20:8200 \
    --force-download-certs \
    --olcne-component api-server

    The certificates are generated and downloaded from Vault.

    By default, the certificates are saved to the /etc/olcne/certificates/ directory. You can alternatively specify a path for the certificates, for example, by including the following options in the bootstrap-olcne.sh command:

    --olcne-ca-path /path/ca.cert \
    --olcne-node-cert-path /path/node.cert \
    --olcne-node-key-path /path/node.key \

    The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:

    systemctl status olcne-api-server.service 
  2. On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to retrieve and use a certificate. For example:

    sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type vault \
    --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
    --vault-address https://192.0.2.20:8200 \
    --force-download-certs \
    --olcne-component agent

    The certificates are generated and downloaded from Vault.

    By default, the certificates are saved to the /etc/olcne/certificates/ directory. You can alternatively specify a path for the certificates, for example, by including the following options in the bootstrap-olcne.sh command:

    --olcne-ca-path /path/ca.cert \
    --olcne-node-cert-path /path/node.cert \
    --olcne-node-key-path /path/node.key \

    The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:

    systemctl status olcne-agent.service 

3.6.2 Starting the Services Using Certificates

This section shows you how to set up the Platform API Server and Platform Agent services to use your own certificates, which have been copied to each node. This example assumes the certificates are available on all nodes in the /etc/olcne/certificates/ directory.

To set up and start the services using certificates:
  1. On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to use the certificates. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

    sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type file \
    --olcne-component api-server

    If your certificates are in a directory other than /etc/olcne/certificates/, add the location of the certificates using the following options, for example:

    --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
    --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
    --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \

    The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:

    systemctl status olcne-api-server.service 
  2. On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to use the certificates. For example:

    sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type file \
    --olcne-component agent

    If your certificates are in a directory other than /etc/olcne/certificates/, add the location of the certificates using the following options, for example:

    --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
    --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
    --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \

    The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:

    systemctl status olcne-agent.service 

3.7 Creating an Environment

The first step to creating a Kubernetes cluster is to create an environment. You can create multiple environments, with each environment potentially containing multiple modules. Naming each environment and module makes it easier to manage the deployed components of Oracle Cloud Native Environment.

Note

You should not use the same node in more than one environment.

Use the olcnectl environment create command on the operator node to create an environment. For more information on the syntax for the olcnectl environment create command, see Platform Command-Line Interface.

Tip

You can also use a configuration file to create an environment. The configuration file is a YAML file that contains the information about the environments and modules you want to deploy. Using a configuration file reduces the information you need to provide with olcnectl commands. For information on creating and using a configuration file, see Platform Command-Line Interface.
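
For example, a minimal configuration file might look like the following sketch. The field names and layout shown here are indicative only and the node names are placeholders; see Platform Command-Line Interface for the authoritative configuration file syntax:

environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          control-plane-nodes: control1.example.com:8090
          worker-nodes: worker1.example.com:8090,worker2.example.com:8090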

This section shows you how to create an environment using Vault, and using your own certificates copied to the file system on each node. For information on setting up X.509 certificates, see Section 3.4, “Setting up X.509 Certificates for Kubernetes Nodes”.

3.7.1 Creating an Environment using Certificates Managed by Vault

This section shows you how to create an environment using Vault to provide and manage the certificates.

On the operator node, use the olcnectl environment create command to create an environment. For example, to create an environment named myenvironment using certificates generated from a Vault instance located at https://192.0.2.20:8200:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type vault \
--vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
--vault-address https://192.0.2.20:8200 \
--update-config 

The --api-server option sets the location of the Platform API Server service. In this example, the Platform API Server is running on the operator node (the localhost) and listening on port 8091.

The --environment-name option sets the name of the environment, which in this example is myenvironment.

The --secret-manager-type option sets the certificate manager to Vault.

Replace --vault-token with the token to access Vault.

Replace --vault-address with the location of your Vault instance.

By default, the certificate generated by Vault is saved to $HOME/.olcne/certificates/environment_name/. If you want to specify a different location to save the certificate, use the --olcne-node-cert-path, --olcne-ca-path, and --olcne-node-key-path options. For example, add the following options to the olcnectl environment create command:

--olcne-node-cert-path /path/node.cert \
--olcne-ca-path /path/ca.cert \
--olcne-node-key-path /path/node.key 

The --update-config option writes information about the environment to a local configuration file at $HOME/.olcne/olcne.conf, and this configuration is used for future calls to the Platform API Server. If you use this option, you do not need to specify the Platform API Server (using the --api-server option) in future olcnectl commands. For more information on setting the Platform API Server see Platform Command-Line Interface.

3.7.2 Creating an Environment using Certificates

This section shows you how to create an environment using your own certificates, copied to each node. This example assumes the certificates are available on all nodes in the /etc/olcne/certificates/ directory.

On the operator node, create the environment using the olcnectl environment create command. For example:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/certificates/node.cert \
--olcne-ca-path /etc/olcne/certificates/ca.cert \
--olcne-node-key-path /etc/olcne/certificates/node.key \
--update-config

The --api-server option sets the location of the Platform API Server service. In this example, the Platform API Server is running on the operator node (the localhost) and listening on port 8091.

The --environment-name option sets the name of the environment, which in this example is myenvironment.

The --secret-manager-type option sets the certificate manager to use file-based certificates.

The --olcne-node-cert-path, --olcne-ca-path, and --olcne-node-key-path options set the location of the certificate files. You can optionally set the location of the certificate files using environment variables; olcnectl uses these if they are set. The following environment variables map to the olcnectl environment create command options:

Table 3.1 Certificate Options

Command Option            Environment Variable    Purpose
--olcne-node-cert-path    $OLCNE_SM_CERT_PATH     The path to the node certificate.
--olcne-ca-path           $OLCNE_SM_CA_PATH       The path to the Certificate Authority certificate.
--olcne-node-key-path     $OLCNE_SM_KEY_PATH      The path to the key for the node's certificate.


For example, to set the certificate information using environment variables for the same environment, you could use:

export OLCNE_SM_CA_PATH=/etc/olcne/certificates/ca.cert
export OLCNE_SM_CERT_PATH=/etc/olcne/certificates/node.cert
export OLCNE_SM_KEY_PATH=/etc/olcne/certificates/node.key

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type file \
--update-config 

The --update-config option writes information about the environment to a local configuration file at $HOME/.olcne/olcne.conf, and this configuration is used for future calls to the Platform API Server. If you use this option, you do not need to specify the Platform API Server (using the --api-server option) in future olcnectl commands. For more information on setting the Platform API Server see Platform Command-Line Interface.

3.8 Installing Modules

After you create an environment, you can add any modules you want to the environment.

3.8.1 Creating a Kubernetes Module

A base installation requires a Kubernetes module which is used to create a Kubernetes cluster. For information on creating and installing a Kubernetes module, see Container Orchestration.
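
As a preview, creating and installing the Kubernetes module follows the general pattern below. The module arguments shown are placeholders and the available options depend on your release and configuration, so follow the steps in Container Orchestration for the authoritative procedure:

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--control-plane-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090

olcnectl module validate \
--environment-name myenvironment \
--name mycluster

olcnectl module install \
--environment-name myenvironment \
--name mycluster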

3.8.2 Creating an Istio Module

When you have created and installed a Kubernetes module, you can optionally install a service mesh using the Istio module. For information on installing the Istio module to create a service mesh, see Service Mesh.

3.8.3 Creating an Operator Lifecycle Manager Module

When you have created and installed a Kubernetes module, you can optionally install the Operator Lifecycle Manager module to manage the installation and lifecycle management of operators in a Kubernetes cluster. For information on installing the Operator Lifecycle Manager module, see Container Orchestration.

3.8.4 Creating an Oracle Cloud Infrastructure Container Storage Interface Module

When you have created and installed a Kubernetes module, you can optionally install the Oracle Cloud Infrastructure Container Storage Interface module to set up access to Oracle Cloud Infrastructure storage. This allows you to use Oracle Cloud Infrastructure block volumes to provide persistent storage for Kubernetes applications. For information on installing the Oracle Cloud Infrastructure Container Storage Interface module, see Storage.

3.8.5 Creating a Gluster Container Storage Interface Module

When you have created and installed a Kubernetes module, you can optionally install the Gluster Container Storage Interface module to set up access to Gluster storage. This allows you to use a Gluster cluster to provide persistent storage for Kubernetes applications. For information on installing the Gluster Container Storage Interface module, see Storage.