8 Quick Install using Configuration File

Install Oracle Cloud Native Environment on bare metal hosts or virtual machines, including a Kubernetes cluster, using a configuration file.

This sets up a basic deployment of Oracle Cloud Native Environment on bare metal hosts or virtual machines, including a Kubernetes cluster.

Nodes Required: At least three nodes.

  • Operator Node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (olcnectl), and to host the Platform API Server.

  • Kubernetes control plane: At least one node to use as a Kubernetes control plane node.

  • Kubernetes worker: At least one node to use as a Kubernetes worker node.

Before you begin: Complete the prerequisite set up. See Prerequisites.

To do a quick install using a configuration file:

  1. On the operator node, create an Oracle Cloud Native Environment configuration file for the deployment. For information on creating an Oracle Cloud Native Environment configuration file, see Platform Command-Line Interface. This example uses the file name myenvironment.yaml for the configuration file.

    A basic example configuration file is:

    environments:
      - environment-name: myenvironment
        globals:
          api-server: operator.example.com:8091
          selinux: enforcing
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes: 
                - control1.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090

    This example configuration file uses the default settings to create a Kubernetes cluster with a single control plane node, and two worker nodes. Change the nodes listed to reflect the ones in your environment.

    Private CA Certificates (using default settings) are automatically created and distributed to each node to secure the communication. To use your own CA Certificates, specify the location of the certificates using the olcne-ca-path, olcne-node-cert-path, and olcne-node-key-path options. The certificates must be in place on the nodes before you provision them using the configuration file. For example, the globals section would look similar to:
        globals:
          api-server: operator.example.com:8091
          selinux: enforcing
          olcne-ca-path: /etc/olcne/certificates/ca.cert
          olcne-node-cert-path: /etc/olcne/certificates/node.cert
          olcne-node-key-path: /etc/olcne/certificates/node.key

    Tip:

    You can use the olcnectl certificates distribute command to generate certificates and copy them to the nodes. See also the olcnectl certificates generate and olcnectl certificates copy commands.
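
    For example, a minimal invocation might look similar to the following. The node list is an assumption based on the hosts used in this example; adjust it for your environment:

    # Generate certificates and copy them to the listed nodes
    olcnectl certificates distribute \
    --nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com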

    By default, a Kubernetes service is deployed that controls access to externalIPs in Kubernetes services. Private CA Certificates are also automatically generated for this purpose, using default values. To use your own CA Certificates, include the location using the restrict-service-externalip-ca-cert, restrict-service-externalip-tls-cert, and restrict-service-externalip-tls-key options in the args section for the Kubernetes module. You can also restrict the external IP addresses that Kubernetes services can use by specifying CIDR blocks with the restrict-service-externalip-cidrs option. For example, the args section would look similar to:
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes: 
                - control1.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090
              restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
              restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
              restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key 
              restrict-service-externalip-cidrs: 192.0.2.0/24,198.51.100.0/24           
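
    For instance, with the CIDR blocks in this example, a Service manifest similar to the following hypothetical sketch would be admitted, because its external IP falls within the 192.0.2.0/24 range:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
      # externalIPs must fall within the CIDR blocks set with restrict-service-externalip-cidrs
      externalIPs:
        - 192.0.2.10
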
    If you don't want to deploy this service, use the restrict-service-externalip: false option in the configuration file. For example, the args section would look similar to:
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes: 
                - control1.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090
              restrict-service-externalip: false           

    For more information on setting access to externalIPs in Kubernetes services, see Kubernetes Module.

    To include other modules to deploy with the Kubernetes module, add them to the configuration file. A more complex example of a configuration file, which includes an external load balancer, and installs other modules, is:

    environments:
      - environment-name: myenvironment
        globals:
          api-server: operator.example.com:8091
          selinux: enforcing
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              load-balancer: lb.example.com:6443
              control-plane-nodes: 
                - control1.example.com:8090
                - control2.example.com:8090
                - control3.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090
                - worker3.example.com:8090
          - module: operator-lifecycle-manager
            name: myolm
            args:
              olm-kubernetes-module: mycluster
          - module: istio
            name: myistio
            args:
              istio-kubernetes-module: mycluster
  2. On the operator node, use the olcnectl provision command with the --config-file option to start the installation. For example:

    olcnectl provision --config-file myenvironment.yaml

    Several other command options might be required, such as the SSH login credentials, proxy server information, and the --yes option to automatically accept all prompts. For information on the syntax options for the olcnectl provision command, see Platform Command-Line Interface.
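
    For example, a fuller invocation might look similar to the following sketch. The SSH login name and identity file path are placeholders; adjust them for your environment:

    # Provision using the configuration file, connecting to the nodes over SSH
    olcnectl provision \
    --config-file myenvironment.yaml \
    --ssh-login-name oracle \
    --ssh-identity-file ~/.ssh/id_rsa \
    --yes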

  3. A list of the steps to be performed on each node is displayed, along with a prompt to proceed. For example, on a control plane node, the changes might look similar to:

    ? Apply control-plane configuration on control1.example.com:
    * Install oracle-olcne-release
    ...
    * Install and enable olcne-agent
    
    Proceed? yes/no(default) yes                          

    Enter yes to continue. The node is set up.

    Information about the changes on each node is displayed, and you need to confirm the setup steps for each node.

    Tip:

    To avoid accepting the changes on each node, use the --yes command option with the olcnectl provision command.

  4. The nodes are set up with the Oracle Cloud Native Environment platform and the modules are installed. You can show information about the environment using the syntax:

    olcnectl module instances \
    --api-server host_name:8091 \
    --environment-name name

    Tip:

    To avoid using the --api-server option in future olcnectl commands, add the --update-config option.

    For example:

    olcnectl module instances \
    --api-server operator.example.com:8091 \
    --environment-name myenvironment \
    --update-config

    The output looks similar to:

    INFO[...] Global flag configuration for myenvironment has been written to the 
    local Platform config and you don't need to specify them for any future calls 
    INSTANCE                   MODULE      STATE    
    control1.example.com:8090  node        installed
    ...
    mycluster                  kubernetes  installed

    To see more information about the deployment, use the olcnectl module report command. For example:

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children
  5. Set up the Kubernetes CLI (kubectl) on a control plane node. The kubectl command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

    Log in to a control plane node and copy and paste these commands into a terminal in the user's home directory:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

    Verify that you can use the kubectl command using any kubectl command such as:

    kubectl get pods --all-namespaces

    The output looks similar to:

    NAMESPACE              NAME                          READY   STATUS    RESTARTS   AGE
    externalip-validat...  externalip-validation-...     1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            coredns-...                   1/1     Running   0          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            etcd-...                      1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-apiserver-...            1/1     Running   2          1h
    kube-system            kube-controller-manager-...   1/1     Running   5 (1h ago) 1h
    kube-system            kube-controller-manager-...   1/1     Running   2          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-flannel-...              1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-proxy-...                1/1     Running   0          1h
    kube-system            kube-scheduler-...            1/1     Running   5 (1h ago) 1h
    kube-system            kube-scheduler-...            1/1     Running   2          1h
    kubernetes-dashboard   kubernetes-dashboard-...      1/1     Running   0          1h

    Note:

    After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named kubeconfig.environment_name.cluster_name and contains information about the Kubernetes cluster. This file is created for convenience and isn't required to set up kubectl on the control plane nodes.

    You might want to use this file to add to a larger Kubernetes configuration file if you have multiple clusters. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
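
    For example, one common approach (a sketch using standard kubectl behavior, and assuming the generated file is named kubeconfig.myenvironment.mycluster) is to merge the file into an existing configuration by listing both files in the KUBECONFIG environment variable and flattening the result:

    # Merge the generated cluster file into the existing kubectl configuration
    KUBECONFIG=$HOME/.kube/config:kubeconfig.myenvironment.mycluster \
      kubectl config view --flatten > $HOME/.kube/merged-config
    export KUBECONFIG=$HOME/.kube/merged-config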

Tip:

Adding and Removing Nodes to Scale a Kubernetes Cluster

To change the nodes in the Kubernetes cluster, run the olcnectl provision command again with updated control plane and worker node lists: any nodes you omit from the new node lists are removed from the cluster, while any new nodes you specify are added to it.

If you're adding nodes, new certificates are automatically generated for you and installed on the new nodes, the Oracle Cloud Native Environment software is installed, and the nodes are added to the Kubernetes cluster. However, you still need to ensure that all new nodes have been set up with the required prerequisites (see Prerequisites), and that any new control plane nodes have been added to the load balancer if you're using an external load balancer.