8 Quick Install using Configuration File
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described by this documentation as soon as possible.
Install Oracle Cloud Native Environment on bare metal hosts or virtual machines, including a Kubernetes cluster, using a configuration file. This sets up a basic deployment of Oracle Cloud Native Environment, including a Kubernetes cluster.
Nodes Required: At least three nodes:

- Operator node: One node to use as the operator node, which is used to perform the installation using the Platform CLI (olcnectl), and to host the Platform API Server.
- Kubernetes control plane: At least one node to use as a Kubernetes control plane node.
- Kubernetes worker: At least one node to use as a Kubernetes worker node.
Before you begin: Complete the prerequisite setup. See Prerequisites.
To do a quick install using a configuration file:
1. On the operator node, create an Oracle Cloud Native Environment configuration file for the deployment. For information on creating an Oracle Cloud Native Environment configuration file, see Platform Command-Line Interface. This example uses the file name myenvironment.yaml for the configuration file.

   A basic example configuration file is:

     environments:
       - environment-name: myenvironment
         globals:
           api-server: operator.example.com:8091
           selinux: enforcing
         modules:
           - module: kubernetes
             name: mycluster
             args:
               container-registry: container-registry.oracle.com/olcne
               control-plane-nodes:
                 - control1.example.com:8090
               worker-nodes:
                 - worker1.example.com:8090
                 - worker2.example.com:8090

   This example configuration file uses the default settings to create a Kubernetes cluster with a single control plane node and two worker nodes. Change the nodes listed to those of your own hosts.
   Private CA Certificates, using default settings, are automatically created and distributed to each node to secure the communication. If you want to use your own pre-generated certificates, specify the location of the certificates using the olcne-ca-path, olcne-node-cert-path, and olcne-node-key-path options. The certificates must be in place on the nodes before you provision them using the configuration file. For example, the globals section would look similar to:

     globals:
       api-server: operator.example.com:8091
       selinux: enforcing
       olcne-ca-path: /etc/olcne/certificates/ca.cert
       olcne-node-cert-path: /etc/olcne/certificates/node.cert
       olcne-node-key-path: /etc/olcne/certificates/node.key

   Tip: You can use the olcnectl certificates distribute command to generate certificates using your own key material and copy them to the nodes, as shown in the sketch below. See also the olcnectl certificates generate and olcnectl certificates copy commands.
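   For example, a minimal sketch of distributing certificates to the nodes before provisioning. The node list here is an assumption based on the hosts used in this example; check the olcnectl reference in Platform Command-Line Interface for the full syntax and defaults in your release:

     # Generate and copy certificates to every node in the deployment
     # (node list is illustrative; use your own host names)
     olcnectl certificates distribute \
       --nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com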
   By default, a Kubernetes service is deployed that controls access to externalIPs in Kubernetes services. Private CA Certificates are also automatically generated for this purpose, using default values. If you want to use your own certificates, include the location using the restrict-service-externalip-ca-cert, restrict-service-externalip-tls-cert, and restrict-service-externalip-tls-key options in the args section for the kubernetes module. You can also set the IP addresses that can be accessed by Kubernetes services using the restrict-service-externalip-cidrs option. For example, the args section would look similar to:

     args:
       container-registry: container-registry.oracle.com/olcne
       control-plane-nodes:
         - control1.example.com:8090
       worker-nodes:
         - worker1.example.com:8090
         - worker2.example.com:8090
       restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
       restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
       restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
       restrict-service-externalip-cidrs: 192.0.2.0/24,198.51.100.0/24

   If you do not want to deploy this service, use the restrict-service-externalip: false option in the configuration file. For example, the args section would look similar to:

     args:
       container-registry: container-registry.oracle.com/olcne
       control-plane-nodes:
         - control1.example.com:8090
       worker-nodes:
         - worker1.example.com:8090
         - worker2.example.com:8090
       restrict-service-externalip: false
   For more information on setting access to externalIPs in Kubernetes services, see Container Orchestration.

   If you want to include other modules to deploy with the Kubernetes module, add them to the configuration file. A more complex example of a configuration file, which includes an external load balancer and installs other modules, is:

     environments:
       - environment-name: myenvironment
         globals:
           api-server: operator.example.com:8091
           selinux: enforcing
         modules:
           - module: kubernetes
             name: mycluster
             args:
               container-registry: container-registry.oracle.com/olcne
               load-balancer: lb.example.com:6443
               control-plane-nodes:
                 - control1.example.com:8090
                 - control2.example.com:8090
                 - control3.example.com:8090
               worker-nodes:
                 - worker1.example.com:8090
                 - worker2.example.com:8090
                 - worker3.example.com:8090
           - module: operator-lifecycle-manager
             name: myolm
             args:
               olm-kubernetes-module: mycluster
           - module: istio
             name: myistio
             args:
               istio-kubernetes-module: mycluster
2. On the operator node, use the olcnectl provision command with the --config-file option to start the installation. For example:

     olcnectl provision --config-file myenvironment.yaml

   There are a number of other command options that you might need, such as the SSH login credentials, proxy server information, and the --yes option to automatically accept any prompts, as in the sketch below. For information on the syntax options for the olcnectl provision command, see Platform Command-Line Interface.
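   For example, a hedged sketch that supplies SSH credentials and accepts all prompts. The login name and key path are assumptions for illustration; confirm the option names for your release in Platform Command-Line Interface:

     # Provision using a dedicated SSH user and key, accepting all prompts
     olcnectl provision \
       --config-file myenvironment.yaml \
       --ssh-login-name oracle \
       --ssh-identity-file ~/.ssh/id_rsa \
       --yes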
3. A list of the steps to be performed on each node is displayed, along with a prompt to proceed. For example, on a control plane node, the changes may look similar to:

     ? Apply control-plane configuration on control1.example.com:
     * Install oracle-olcne-release
     ...
     * Install and enable olcne-agent

     Proceed? yes/no(default) yes

   Enter yes to continue. The node is set up. Information about the changes on each node is displayed; you need to confirm the setup steps for each node.
   Tip: To avoid accepting the changes on each node, use the --yes command option with the olcnectl provision command.
4. The nodes are set up with the Oracle Cloud Native Environment platform and the modules are installed. You can show information about the environment using the syntax:

     olcnectl module instances \
       --api-server host_name:8091 \
       --environment-name name

   Tip: To avoid having to enter the --api-server option in future olcnectl commands, add the --update-config option.

   For example:

     olcnectl module instances \
       --api-server operator.example.com:8091 \
       --environment-name myenvironment \
       --update-config

   The output looks similar to:

     INFO[...] Global flag configuration for myenvironment has been written to the local Platform config and you don't need to specify them for any future calls

     INSTANCE                   MODULE      STATE
     control1.example.com:8090  node        installed
     ...
     mycluster                  kubernetes  installed

   If you want to see more information about the deployment, use the olcnectl module report command. For example:

     olcnectl module report \
       --environment-name myenvironment \
       --name mycluster \
       --children
5. Set up the Kubernetes CLI (kubectl) on a control plane node. The kubectl command is installed on each control plane node in the cluster. To use it to access the cluster, you need to configure it using the Kubernetes configuration file.

   Log in to a control plane node and copy and paste these commands to a terminal in your home directory:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
     export KUBECONFIG=$HOME/.kube/config
     echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

   Verify that you can use the kubectl command, using any kubectl command such as:

     kubectl get pods --all-namespaces

   The output looks similar to:

     NAMESPACE              NAME                          READY  STATUS   RESTARTS     AGE
     externalip-validat...  externalip-validation-...     1/1    Running  0            1h
     kube-system            coredns-...                   1/1    Running  0            1h
     kube-system            coredns-...                   1/1    Running  0            1h
     kube-system            etcd-...                      1/1    Running  2            1h
     kube-system            etcd-...                      1/1    Running  2            1h
     kube-system            kube-apiserver-...            1/1    Running  2            1h
     kube-system            kube-apiserver-...            1/1    Running  2            1h
     kube-system            kube-controller-manager-...   1/1    Running  5 (1h ago)   1h
     kube-system            kube-controller-manager-...   1/1    Running  2            1h
     kube-system            kube-flannel-...              1/1    Running  0            1h
     kube-system            kube-flannel-...              1/1    Running  0            1h
     kube-system            kube-flannel-...              1/1    Running  0            1h
     kube-system            kube-proxy-...                1/1    Running  0            1h
     kube-system            kube-proxy-...                1/1    Running  0            1h
     kube-system            kube-proxy-...                1/1    Running  0            1h
     kube-system            kube-scheduler-...            1/1    Running  5 (1h ago)   1h
     kube-system            kube-scheduler-...            1/1    Running  2            1h
     kubernetes-dashboard   kubernetes-dashboard-...      1/1    Running  0            1h

   Note: After the deployment, a Kubernetes configuration file is created in the local directory of the operator node. The file is named kubeconfig.environment_name.cluster_name and contains information about the Kubernetes cluster. This file is created for your convenience and is not required to set up kubectl on the control plane nodes.

   You may want to add this file to a larger Kubernetes configuration file if you have multiple clusters, as in the sketch below. See the upstream Kubernetes documentation for more information on configuring access to multiple clusters.
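   For example, a minimal sketch of using the generated file alongside an existing configuration. The file name below assumes the environment and cluster names used in this example (myenvironment and mycluster); KUBECONFIG merging and the kubectl config subcommands are standard Kubernetes behavior:

     # Merge the generated file with an existing config for this session
     export KUBECONFIG=$HOME/.kube/config:$PWD/kubeconfig.myenvironment.mycluster

     # List the available contexts, then switch to the one for this cluster
     kubectl config get-contexts
     kubectl config use-context <context-name>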
Tip: Adding and Removing Nodes to Scale a Kubernetes Cluster
If you want to change the nodes in the Kubernetes cluster, run the olcnectl provision command again with updated control plane and worker node lists: any nodes you leave out of a new node list are removed from the cluster, while any new nodes you specify are added to it. A sketch of adding a worker node follows below.
If you are adding nodes, new certificates are automatically generated for you and installed on the new nodes, the Oracle Cloud Native Environment software is installed, and the nodes are added to the Kubernetes cluster. However, you still need to make sure that all new nodes have been set up with the required Prerequisites, and that any new control plane nodes have been added to your load balancer if you are using an external load balancer.
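For example, a sketch of scaling up by one worker node. The host name worker3.example.com is an assumption for illustration; keep the existing nodes in the lists so they are not removed from the cluster. Update the worker-nodes list in myenvironment.yaml:

  worker-nodes:
    - worker1.example.com:8090
    - worker2.example.com:8090
    - worker3.example.com:8090   # hypothetical new worker; must meet the Prerequisites

Then run the provisioning step again with the updated file:

  olcnectl provision --config-file myenvironment.yaml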