2 Using a Configuration File
Important:
The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.
We recommend that you upgrade the software described by this documentation as soon as possible.
To simplify creating and managing environments and modules, you can use a configuration file. The configuration file includes all information about the environments and modules you want to create. Using a configuration file saves you from repeatedly entering Platform CLI command options.
You can use a configuration file with any Platform CLI command by specifying the --config-file option, which is a global command option. When you use the --config-file option with a Platform CLI command, any other command line options are ignored, with the exception of the --force option. Only the information contained in the configuration file is used with an olcnectl command.
The following sections contain information on writing a configuration file and using a configuration file to create and remove environments and modules. There are more uses for the configuration file than this chapter describes. The use cases described in this chapter are the most common ways to use a configuration file.
Creating a Configuration File
The configuration file must be valid YAML with a file extension of yaml or yml. The basic format of components in the configuration file is:
environments:
  - environment-name: name
    globals:
      key: value
    modules:
      - module: name
        name: name
        args:
          key: value
      - module: name
        name: name
        args:
          key: value
  - environment-name: name
    globals:
      key: value
    modules:
      - module: name
        name: name
        args:
          key: value
      - module: name
        name: name
        args:
          key: value
The olcnectl template command is useful to create a YAML file that contains some basic configuration options to start a configuration file for your environment.
olcnectl template
This command creates a file named config-file-template.yaml in the local directory. You can edit this file to suit your needs.
The configuration file should contain key: value pairs for olcnectl command options. For example, when creating an environment, you might use an olcnectl command like:
olcnectl environment create \
  --api-server 127.0.0.1:8091 \
  --environment-name myenvironment \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200 \
  --update-config
To represent this same information in YAML format in the configuration file, you would use:
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: vault
      vault-token: s.3QKNuRoTqLbjXaGBOmO6Psjh
      vault-address: https://192.0.2.20:8200
      update-config: true
Notice that the olcnectl environment create command options to create the environment map directly to the YAML key: value pairs.
When you write the modules section, you can include any olcnectl module command option that can be used with a module. The args section for a module should contain only the options available with the olcnectl module create command; any other options should be placed under the main module set of options.
In this example, the --generate-scripts and --force options are not valid with the olcnectl module create command, but they are valid options for the olcnectl module validate or olcnectl module uninstall commands. The generate-scripts and force options should therefore not be added as module args; instead, they should be listed under the module: kubernetes section.
...
modules:
  - module: kubernetes
    name: mycluster
    generate-scripts: true
    force: true
    args:
      kube-version: 1.24.15
      container-registry: container-registry.oracle.com/olcne
      load-balancer: lb.example.com:6443
      master-nodes:
        - control1.example.com:8090
        - control2.example.com:8090
        - control3.example.com:8090
      worker-nodes:
        - worker1.example.com:8090
        - worker2.example.com:8090
        - worker3.example.com:8090
      selinux: enforcing
      restrict-service-externalip: true
      restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
      restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
      restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
If you do not provide all mandatory options for a command, you are prompted for them when you use the configuration file with olcnectl. If you do not supply a value for a key, the default for that olcnectl command option is used, or if there is no default value, that key is ignored. If you add key values that are not valid, an error is displayed to help you correct the invalid option. If you add keys that are not valid, they are ignored.
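A minimal sketch of this behavior, using a Kubernetes module entry; the key made-up-option below is a hypothetical, invalid key used only for illustration and is not a real olcnectl option:

```yaml
modules:
  - module: kubernetes
    name: mycluster
    args:
      # No value supplied: the default for this olcnectl option is used.
      container-registry:
      # Hypothetical key that olcnectl does not recognize: it is ignored.
      made-up-option: somevalue
```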
Do not include the --config-file option for any olcnectl commands in the configuration file. This option is ignored and cannot be used in a configuration file.
The order of the components in the YAML file is important. The components should be in the same order as you would create them using the Platform CLI.
For example, this file creates two environments. The first environment includes only the Kubernetes module. The second environment includes the Kubernetes module, the Helm module, the Operator Lifecycle Manager module (which requires Helm), the Istio module (which also requires Helm), and finally the Oracle Cloud Infrastructure Cloud Controller Manager module (which also requires Helm). Both environments and all modules can be created and installed using a single set of olcnectl commands.
environments:
  - environment-name: myenvironment1
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster1
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: lb.example.com:6443
          master-nodes:
            - control1.example.com:8090
            - control2.example.com:8090
            - control3.example.com:8090
          worker-nodes:
            - worker1.example.com:8090
            - worker2.example.com:8090
            - worker3.example.com:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
  - environment-name: myenvironment2
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster2
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: lb.example.com:6443
          master-nodes:
            - control4.example.com:8090
            - control5.example.com:8090
            - control6.example.com:8090
          worker-nodes:
            - worker4.example.com:8090
            - worker5.example.com:8090
            - worker6.example.com:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
      - module: helm
        name: myhelm
        args:
          helm-kubernetes-module: mycluster2
      - module: operator-lifecycle-manager
        name: myolm
        args:
          olm-helm-module: myhelm
      - module: istio
        name: myistio
        args:
          istio-helm-module: myhelm
      - module: oci-ccm
        name: myoci
        args:
          oci-ccm-helm-module: myhelm
          oci-region: us-ashburn-1
          oci-tenancy: ocid1.tenancy.oc1..unique_ID
          oci-compartment: ocid1.compartment.oc1..unique_ID
          oci-user: ocid1.user.oc1..unique_ID
          oci-fingerprint: b5:52:...
          oci-private-key: /home/opc/.oci/oci_api_key.pem
          oci-vcn: ocid1.vcn.oc1..unique_ID
          oci-lb-subnet1: ocid1.subnet.oc1..unique_ID
Installing Using a Configuration File
This section contains an example of using a configuration file to create an environment and deploy Kubernetes into it.
The configuration file for this example is named myenvironment.yaml and contains:
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: lb.example.com:6443
          master-nodes:
            - control1.example.com:8090
            - control2.example.com:8090
            - control3.example.com:8090
          worker-nodes:
            - worker1.example.com:8090
            - worker2.example.com:8090
            - worker3.example.com:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
Use the same commands as you would usually use to create an environment and deploy the Kubernetes module, but instead of passing all the command options on the command line, provide the location of the configuration file.
To create the environment and deploy Kubernetes, on the operator node:
- Use the olcnectl environment create command with the --config-file option:

  olcnectl environment create \
    --config-file myenvironment.yaml
The environment is created and ready to use to install the Kubernetes module. If you have multiple environments set up in your configuration file, they are all created using this one step.
- Use the olcnectl module create command to create the Kubernetes module:

  olcnectl module create \
    --config-file myenvironment.yaml
If you have multiple modules set up in your configuration file, they are all created using this one step.
- Validate that the module can be installed on the nodes using the olcnectl module validate command:

  olcnectl module validate \
    --config-file myenvironment.yaml
If you have multiple modules set up in your configuration file, they are all validated.
- Install the module using the olcnectl module install command:

  olcnectl module install \
    --config-file myenvironment.yaml
If you have multiple modules set up in your configuration file, they are all installed.
- Verify the Kubernetes module is deployed and the nodes are set up using the olcnectl module instances command:

  olcnectl module instances \
    --config-file myenvironment.yaml

  INSTANCE                   MODULE      STATE
  control1.example.com:8090  node        installed
  control2.example.com:8090  node        installed
  control3.example.com:8090  node        installed
  worker1.example.com:8090   node        installed
  worker2.example.com:8090   node        installed
  worker3.example.com:8090   node        installed
  mycluster                  kubernetes  installed
Adding Modules or Environments Using a Configuration File
If you want to add modules or environments to your deployment, add them to your configuration file, then run the olcnectl commands to add them to your deployment. For example, to add the Operator Lifecycle Manager module to an existing Kubernetes deployment, create a file similar to the following. This file is the same as that used in Installing Using a Configuration File to create an environment and deploy Kubernetes, with the addition of the Helm and Operator Lifecycle Manager modules.
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: lb.example.com:6443
          master-nodes:
            - control1.example.com:8090
            - control2.example.com:8090
            - control3.example.com:8090
          worker-nodes:
            - worker1.example.com:8090
            - worker2.example.com:8090
            - worker3.example.com:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
      - module: helm
        name: myhelm
        args:
          helm-kubernetes-module: mycluster
      - module: operator-lifecycle-manager
        name: myolm
        args:
          olm-helm-module: myhelm
Install the Helm module and the Operator Lifecycle Manager module using the olcnectl module commands.
olcnectl module create \
  --config-file myenvironment.yaml

olcnectl module validate \
  --config-file myenvironment.yaml

olcnectl module install \
  --config-file myenvironment.yaml
The additional Helm and Operator Lifecycle Manager modules are installed into the existing Kubernetes cluster in the environment.
Uninstalling Specific Modules or Environments Using a Configuration File
The Platform API Server acts on all the information contained in a configuration file. To remove specific components from your deployment while leaving other components in place, you therefore need to create a separate configuration file that includes only the information about the environment and the modules you want to uninstall.
For example, to remove the Helm and Operator Lifecycle Manager modules and not the Kubernetes module in an environment, create a file similar to the following. This file is the same as used in Adding Modules or Environments Using a Configuration File, without the information about the Kubernetes module. Specify the environment in which the modules are deployed, and only the modules you want to remove.
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: helm
        name: myhelm
        args:
          helm-kubernetes-module: mycluster
      - module: operator-lifecycle-manager
        name: myolm
        args:
          olm-helm-module: myhelm
The filename in this example is myenvironment-olm.yaml.
Important:
To maintain the integrity of your deployment, confirm the configuration file is correct before you use it to uninstall modules.
Uninstall the Helm and Operator Lifecycle Manager modules using the olcnectl module uninstall command. Remember to use the --force option to make sure the Platform API Server removes the modules in the correct order.
olcnectl module uninstall \
  --config-file myenvironment-olm.yaml \
  --force
The Helm and Operator Lifecycle Manager modules are uninstalled from the environment, while leaving the Kubernetes module untouched.
Scaling a Cluster Using a Configuration File
This section shows you how to scale a Kubernetes cluster using a configuration file. For more information about scaling a cluster and preparing nodes, see Container Orchestration.
To scale a Kubernetes cluster using a configuration file, change the nodes listed in the Kubernetes module and use the olcnectl module update command to apply the changes to the module. For example, to add nodes to an existing cluster that has the following listed in the configuration file:
...
modules:
  - module: kubernetes
    name: mycluster
    args:
      container-registry: container-registry.oracle.com/olcne
      load-balancer: lb.example.com:6443
      master-nodes:
        - control1.example.com:8090
        - control2.example.com:8090
        - control3.example.com:8090
      worker-nodes:
        - worker1.example.com:8090
        - worker2.example.com:8090
        - worker3.example.com:8090
...
Add the new nodes to the configuration file. In this case there are two additional control plane nodes and one additional worker node.
...
modules:
  - module: kubernetes
    name: mycluster
    args:
      container-registry: container-registry.oracle.com/olcne
      load-balancer: lb.example.com:6443
      master-nodes:
        - control1.example.com:8090
        - control2.example.com:8090
        - control3.example.com:8090
        - control4.example.com:8090
        - control5.example.com:8090
      worker-nodes:
        - worker1.example.com:8090
        - worker2.example.com:8090
        - worker3.example.com:8090
        - worker4.example.com:8090
...
Use the olcnectl module update command to scale up the cluster.
olcnectl module update \
  --config-file myenvironment.yaml
The Platform API Server backs up the cluster and adds the new nodes.
To scale down a cluster, perform the same steps, except delete the information about the nodes you want to remove from the cluster from the configuration file.
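Continuing this example, the following sketch shows the Kubernetes module entry after removing the control4, control5, and worker4 nodes that were added above; running olcnectl module update with this file would scale the cluster back down to its original size:

```yaml
...
modules:
  - module: kubernetes
    name: mycluster
    args:
      container-registry: container-registry.oracle.com/olcne
      load-balancer: lb.example.com:6443
      # control4 and control5 removed from the control plane node list
      master-nodes:
        - control1.example.com:8090
        - control2.example.com:8090
        - control3.example.com:8090
      # worker4 removed from the worker node list
      worker-nodes:
        - worker1.example.com:8090
        - worker2.example.com:8090
        - worker3.example.com:8090
...
```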
Updating and Upgrading Using a Configuration File
You can use the configuration file when you update or upgrade modules. For more information about updating or upgrading modules, see Updates and Upgrades.
To update all modules to the latest available errata release, use the olcnectl module update command.
olcnectl module update \
  --config-file myenvironment.yaml
To upgrade modules to the latest available release, set the version for the module in the configuration file and use the olcnectl module update command. For example, to upgrade the Kubernetes module to the latest version, add kube-version: 1.24.15, and for the Istio module, add istio-version: 1.15.7:
...
modules:
  - module: kubernetes
    name: mycluster
    args:
      container-registry: container-registry.oracle.com/olcne
      load-balancer: lb.example.com:6443
      kube-version: 1.24.15
      master-nodes:
        - control1.example.com:8090
        - control2.example.com:8090
        - control3.example.com:8090
      worker-nodes:
        - worker1.example.com:8090
        - worker2.example.com:8090
        - worker3.example.com:8090
      selinux: enforcing
      restrict-service-externalip: true
      restrict-service-externalip-ca-cert: /etc/olcne/certificates/restrict_external_ip/ca.cert
      restrict-service-externalip-tls-cert: /etc/olcne/certificates/restrict_external_ip/node.cert
      restrict-service-externalip-tls-key: /etc/olcne/certificates/restrict_external_ip/node.key
  - module: helm
    name: myhelm
    args:
      helm-kubernetes-module: mycluster
  - module: istio
    name: myistio
    args:
      istio-helm-module: myhelm
      istio-version: 1.15.7
Use the olcnectl module update command to upgrade the modules listed in the configuration file.
olcnectl module update \
  --config-file myenvironment.yaml
Uninstalling Using a Configuration File
To use a configuration file to uninstall environments and modules, use the same olcnectl commands you would use to remove modules without a configuration file. That is, remove the modules first, then remove the environment.
Use the --force option of the olcnectl module uninstall command to make sure the module dependency order is maintained internally by the Platform API Server when you remove modules from an environment.
olcnectl module uninstall \
  --config-file myenvironment.yaml \
  --force
All the modules in the configuration file are removed.
Remove the environment using:
olcnectl environment delete \
  --config-file myenvironment.yaml
The environment is removed.