The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 4 Platform CLI Commands

This chapter contains the syntax for each olcnectl command option, including usage and examples.

4.1 Environment Create

Creates an empty environment.

The first step to deploying Oracle Cloud Native Environment is to create an empty environment. You can create an environment using certificates provided by Vault, or using existing certificates on the nodes.

Syntax

olcnectl environment create 
{-E|--environment-name} environment_name 
[{-h|--help}]
[globals]

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is:

{-a|--api-server} api_server_address:8091

The Platform API Server for the environment. This is the host running the olcne-api-server service in an environment. The value of api_server_address is the IP address or hostname of the Platform API Server. The port number is the port on which the olcne-api-server service is available. The default port is 8091.

If a Platform API Server is not specified, a local instance is used. If no local instance is set up, it is set up and configured in the $HOME/.olcne/olcne.conf file.

For more information on setting the Platform API Server see Section 1.2, “Setting the Platform API Server”.

This option maps to the $OLCNE_API_SERVER_BIN environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--config-file path

The location of a YAML file that contains the configuration information for environments and modules. The filename extension must be either yaml or yml. When you use this option, any other command line options are ignored, with the exception of the --force option; only the information contained in the configuration file is used.
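
The general shape of such a file can be sketched as follows. The environment name, module, and argument values here are illustrative, and the argument keys mirror the command line options without the leading dashes; consult the documentation on writing configuration files for the authoritative schema:

```yaml
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          master-nodes: control1.example.com:8090
          worker-nodes: worker1.example.com:8090,worker2.example.com:8090
```

You would then pass the file with --config-file path/to/myenvironment.yaml instead of repeating the options on the command line.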

--secret-manager-type {file|vault}

The secrets manager type. The options are file or vault. Use file for certificates saved on the nodes and use vault for certificates managed by Vault.

--update-config

Writes the global arguments for an environment to a local configuration file, which is used for future calls to the Platform API Server. If this option has not been used previously, global arguments must be specified for every Platform API Server call.

The global arguments configuration information is saved to $HOME/.olcne/olcne.conf on the local host.

If you use Vault to generate certificates for nodes, the certificate is saved to $HOME/.olcne/certificates/environment_name/ on the local host.

--olcne-ca-path ca_path

The path to a predefined Certificate Authority certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/ca.cert, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_CA_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-node-cert-path node_cert_path

The path to a predefined certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/node.cert, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_CERT_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-node-key-path node_key_path

The path to a predefined key, or the destination of the key if using a secrets manager to download the key. The default is /etc/olcne/certificates/node.key, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_KEY_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-cipher-suites ciphers

The TLS cipher suites to use for Oracle Cloud Native Environment services (the Platform Agent and Platform API Server). Enter one or more in a comma separated list. The options are:

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_ECDSA_WITH_RC4_128_SHA

  • TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_RSA_WITH_RC4_128_SHA

  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA256

  • TLS_RSA_WITH_AES_128_GCM_SHA256

  • TLS_RSA_WITH_AES_256_CBC_SHA

  • TLS_RSA_WITH_AES_256_GCM_SHA384

  • TLS_RSA_WITH_RC4_128_SHA

For example:

--olcne-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

This option maps to the $OLCNE_TLS_CIPHER_SUITES environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-max-version version

The TLS maximum version for Oracle Cloud Native Environment components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

This option maps to the $OLCNE_TLS_MAX_VERSION environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-min-version version

The TLS minimum version for Oracle Cloud Native Environment components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

This option maps to the $OLCNE_TLS_MIN_VERSION environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--vault-address vault_address

The IP address of the Vault instance. The default is https://127.0.0.1:8200, or gathered from the local configuration if the --update-config option is used.

--vault-cert-sans vault_cert_sans

Subject Alternative Names (SANs) to pass to Vault to generate the Oracle Cloud Native Environment certificate. The default is 127.0.0.1, or gathered from the local configuration if the --update-config option is used.

--vault-token vault_token

The Vault authentication token.

Examples

Example 4.1 Creating an environment using Vault

To create an environment named myenvironment using certificates generated from a Vault instance, use the --secret-manager-type vault option:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type vault \
--vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
--vault-address https://192.0.2.20:8200 \
--update-config 

Example 4.2 Creating an environment using certificates

To create an environment named myenvironment using certificates on the node's file system, use the --secret-manager-type file option:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/certificates/node.cert \
--olcne-ca-path /etc/olcne/certificates/ca.cert \
--olcne-node-key-path /etc/olcne/certificates/node.key \
--update-config 

4.2 Environment Delete

Deletes an existing environment.

You must uninstall any modules from an environment before you can delete it.

Syntax

olcnectl environment delete 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.3 Deleting an environment

To delete an environment named myenvironment:

olcnectl environment delete \
--environment-name myenvironment

4.3 Environment Report

Reports summary and detailed information about environments.

Syntax

olcnectl environment report 
[{-E|--environment-name} environment_name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}] 
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

--children

When added to the command, this option recursively displays the properties for all children of a module instance. The default is false.

--exclude pattern

An RE2 regular expression selecting the properties to exclude from the report. This option may specify more than one property as a comma-separated list.

--include pattern

An RE2 regular expression selecting the properties to include in the report. This option may specify more than one property as a comma-separated list. By default, all properties are displayed; using this option one or more times overrides this behavior.
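
As an illustration of how such a pattern selects property names, the following sketch filters a list of hypothetical property names with grep (whose POSIX ERE syntax approximates RE2 for simple patterns like this); the real property names depend on the modules deployed in the environment:

```shell
# Hypothetical property names, filtered the way --include '^kube' would
# select matching properties in a report.
printf 'kubecfg\nkubeversion\nexternalip-webhook\n' | grep -E '^kube'
```

The same pattern would be passed to the report command as, for example, --include '^kube'.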

--format {yaml|table}

The format in which to generate the report, either YAML or table. The default format is table.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.4 Reporting summary details about an environment

To report a summary about the environment named myenvironment:

olcnectl environment report \
--environment-name myenvironment

Example 4.5 Reporting details about an environment

To report details about the environment named myenvironment:

olcnectl environment report \
--environment-name myenvironment \
--children

4.4 Module Backup

Backs up a module in an environment.

Syntax

olcnectl module backup 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.6 Backing up control plane nodes

To back up the configuration for the Kubernetes control plane nodes in a kubernetes module named mycluster in an environment named myenvironment:

olcnectl module backup \
--environment-name myenvironment \
--name mycluster

4.5 Module Create

Adds and configures a module in an environment.

Syntax

olcnectl module create 
{-E|--environment-name} environment_name 
{-M|--module} module 
{-N|--name} name
[{-h|--help}]
[module_args ...]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-M|--module} module

The module type to create in an environment. The value of module is the name of a module type. The available module types are:

  • kubernetes

  • helm

  • prometheus

  • grafana

  • istio

  • operator-lifecycle-manager

  • gluster

  • oci-csi

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where module_args is:

The value of module_args is one or more arguments to configure a module in an environment.

module_args for the kubernetes module:

{-o|--apiserver-advertise-address} IP_address

The IP address on which to advertise the Kubernetes API server to members of the Kubernetes cluster. This address must be reachable by the cluster nodes. If no value is provided, the interface on the control plane node specified with the --master-nodes argument is used.

This option is not used in a highly available (HA) cluster with multiple control plane nodes.

Important

This argument has been deprecated. Use the --master-nodes argument instead.

{-b|--apiserver-bind-port} port

The Kubernetes API server bind port. The default is 6443.

{-B|--apiserver-bind-port-alt} port

The port on which the Kubernetes API server listens when you use a virtual IP address for the load balancer. The default is 6444. This is optional.

When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443 and receives the requests and passes them to the Kubernetes API server. If you want to change the Kubernetes API server port in this situation from 6444, you can use this option to do so.

{-e|--apiserver-cert-extra-sans} api_server_sans

The Subject Alternative Names (SANs) to use for the Kubernetes API server serving certificate. This value can contain both IP addresses and DNS names.

{-r|--container-registry} container_registry

The container registry that contains the Kubernetes images. Use container-registry.oracle.com/olcne to pull the Kubernetes images from the Oracle Container Registry.

If you do not provide this value, you are prompted for it by the Platform CLI.

{-x|--kube-proxy-mode} {userspace|iptables|ipvs}

The routing mode for the Kubernetes proxy. The default is iptables. The available proxy modes are:

  • userspace: This is an older proxy mode.

  • iptables: This is the fastest proxy mode. This is the default mode.

  • ipvs: This is an experimental mode.

If no value is provided, the default of iptables is used. If the system's kernel or iptables version is insufficient, the userspace proxy is used.

{-v|--kube-version} version

The version of Kubernetes to install. The default is the latest version. For information on the latest version number, see Release Notes.

{-t|--kubeadm-token} token

The token to use for establishing bidirectional trust between Kubernetes nodes and control plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16}, for example, abcdef.0123456789abcdef.
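
A quick way to confirm a token matches the required format before passing it to --kubeadm-token is to test it against the pattern, for example:

```shell
# Validate a bootstrap token against the required [a-z0-9]{6}.[a-z0-9]{16}
# format; the token value is the example from the text above.
token="abcdef.0123456789abcdef"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "valid"
else
  echo "invalid"
fi
```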

--kube-tls-cipher-suites ciphers

The TLS cipher suites to use for Kubernetes components. Enter one or more in a comma separated list. The options are:

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_ECDSA_WITH_RC4_128_SHA

  • TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_RSA_WITH_RC4_128_SHA

  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA256

  • TLS_RSA_WITH_AES_128_GCM_SHA256

  • TLS_RSA_WITH_AES_256_CBC_SHA

  • TLS_RSA_WITH_AES_256_GCM_SHA384

  • TLS_RSA_WITH_RC4_128_SHA

For example:

--kube-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

--kube-tls-min-version version

The TLS minimum version for Kubernetes components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

{-l|--load-balancer} load_balancer

The Kubernetes API server load balancer hostname or IP address, and port. The default port is 6443. For example, 192.0.2.100:6443.

{-m|--master-nodes} nodes ...

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes, including the port number for the Platform Agent. For example, control1.example.com:8090,control2.example.com:8090.

If you do not provide this value, you are prompted for it by the Platform CLI.

{-g|--nginx-image} container_location

The location for an NGINX container image to use in a highly available (HA) cluster with multiple control plane nodes. This is optional.

You can use this option if you do not provide your own load balancer using the --load-balancer option. This option may be useful if you are using a mirrored container registry. For example:

--nginx-image mirror.example.com:5000/olcne/nginx:1.17.7

By default, podman is used to pull the NGINX image that is configured in /usr/libexec/pull_olcne_nginx. If you set the --nginx-image option to use another NGINX container image, the location of the image is written to /etc/olcne-nginx/image, and overrides the default image.

--node-labels label

The label to add to Kubernetes nodes on Oracle Cloud Infrastructure instances to set the Availability Domain for pods. This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi). The label should be in the format:

failure-domain.beta.kubernetes.io/zone=region-identifier-AD-availability-domain-number

For example:

--node-labels failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-1

For a list of the Availability Domains, see the Oracle Cloud Infrastructure documentation.

--node-ocids OCIDs

A comma separated list of Kubernetes nodes (both control plane and worker nodes) with their Oracle Cloud Identifiers (OCIDs). This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi). The format for the list is:

FQDN=OCID,...

For example:

--node-ocids control1.example.com=ocid1.instance...,worker1.example.com=ocid1.instance...,worker2.example.com=ocid1.instance...

For information about OCIDs, see the Oracle Cloud Infrastructure documentation.

{-p|--pod-cidr} pod_CIDR

The Kubernetes pod CIDR. The default is 10.244.0.0/16. This is the range from which each Kubernetes pod network interface is assigned an IP address.
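
As a quick sizing check, the prefix length determines how many pod addresses the range provides: a /16 leaves 32 - 16 = 16 host bits, that is 2^16 addresses. A minimal shell sketch of the arithmetic:

```shell
# Number of addresses in the default 10.244.0.0/16 pod CIDR:
# 32 - 16 = 16 host bits, so 2^16 addresses.
prefix=16
echo "addresses in a /${prefix}: $(( 1 << (32 - prefix) ))"
```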

{-n|--pod-network} network_fabric

The network fabric for the Kubernetes cluster. The default is flannel.

{-P|--pod-network-iface} network_interface

The name of the network interface on the nodes to use for the Kubernetes data plane network communication. The data plane network is used by the pods running on Kubernetes. If you use regex to set the interface name, the first matching interface returned by the kernel is used. For example:

--pod-network-iface "ens[1-5]|eth5"

--selinux {enforcing|permissive}

Whether to use SELinux enforcing or permissive mode. permissive is the default.

You should use this option if SELinux is set to enforcing on the control plane and worker nodes. SELinux is set to enforcing mode by default on the operating system and is the recommended mode.

{-s|--service-cidr} service_CIDR

The Kubernetes service CIDR. The default is 10.96.0.0/12. This is the range from which each Kubernetes service is assigned an IP address.

{-i|--virtual-ip} virtual_ip

The virtual IP address for the load balancer. This is optional.

You should use this option if you do not specify your own load balancer using the --load-balancer option. When you specify a virtual IP address, it is used as the primary IP address for control plane nodes.

{-w|--worker-nodes} nodes ...

A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes, including the port number for the Platform Agent. If a worker node is behind a NAT gateway, use the public IP address for the node. The worker node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that is reachable by the Kubernetes cluster. The /32 subnet restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address (for more information about configuring NAT, see Getting Started). The default port number is 8090. For example, worker1.example.com:8090,worker2.example.com:8090.

If you do not provide this value, you are prompted for it by the Platform CLI.

--restrict-service-externalip {true|false}

Sets whether to restrict access to external IP addresses for Kubernetes services. The default is true, which restricts access to external IP addresses.

This option deploys a Kubernetes service named externalip-validation-webhook-service to validate externalIPs set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using the externalIPs option in the spec section.
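
For illustration, a service manifest that sets externalIPs in the spec section has the following shape (the service name, selector, and address here are hypothetical); manifests like this are what the validation webhook inspects:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical service name
spec:
  selector:
    app: my-app             # hypothetical pod selector
  ports:
    - port: 80
  externalIPs:
    - 192.0.2.10            # rejected unless allowed when restriction is enabled
```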

--restrict-service-externalip-ca-cert path

The path to a CA certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/ca.cert.

--restrict-service-externalip-tls-cert path

The path to a TLS certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.cert.

--restrict-service-externalip-tls-key path

The path to the private key for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.key.

--restrict-service-externalip-cidrs allowed_cidrs

Enter one or more comma separated CIDR blocks if you want to allow only IP addresses from the specified CIDR blocks. For example, 192.0.2.0/24,198.51.100.0/24.

module_args for the helm module:

--helm-kubernetes-module kubernetes_module

The name of the kubernetes module that Helm should be associated with. Each instance of Kubernetes can have one instance of Helm associated with it.

--helm-version version

The version of Helm to install. The default is the latest version. For information on the latest version number, see Release Notes.

module_args for the prometheus module:

--prometheus-helm-module helm_module

The name of the helm module that Prometheus should be associated with.

--prometheus-version version

The version of Prometheus to install. The default is the latest version. For information on the latest version number, see Release Notes.

--prometheus-image container_registry

The container image registry and tag to use when installing Prometheus. The default is container-registry.oracle.com/olcne/prometheus.

--prometheus-namespace namespace

The Kubernetes namespace in which to install Prometheus. The default namespace is default.

--prometheus-persistent-storage {true|false}

If this value is false, Prometheus writes its data into an emptyDir volume on the host where the pod is running. If the pod migrates, metric data is lost.

If this value is true, Prometheus requests a Kubernetes PersistentVolumeClaim so that its data persists across destruction or migration of the pod.

The default is false.

--prometheus-alerting-rules path

The path to a configuration file for Prometheus alerts.

--prometheus-recording-rules path

The path to a configuration file for Prometheus recording rules.

--prometheus-scrape-configuration path

The path to a configuration file for Prometheus metrics scraping.

module_args for the grafana module:

--grafana-helm-module helm_module

The name of the helm module that Grafana should be associated with.

--grafana-version version

The version of Grafana to install. The default is the latest version. For information on the latest version number, see Release Notes.

--grafana-container-registry container_registry

The container image registry and tag to use when installing Grafana. The default is container-registry.oracle.com/olcne.

--grafana-namespace namespace

The Kubernetes namespace in which to install Grafana. The default namespace is default.

--grafana-dashboard-configmaps configmap

The name of the ConfigMap reference that contains the Grafana dashboards.

--grafana-dashboard-providers path

The location of the file that contains the configuration for the Grafana dashboard providers.

--grafana-datasources path

The location of the file that contains the configuration for the Grafana data sources.

--grafana-existing-secret-name secret

The name of the existing secret containing the Grafana admin password.

--grafana-notifiers path

The location of the file that contains the configuration for the Grafana notifiers.

--grafana-pod-annotations annotations

A comma separated list of annotations to be added to the Grafana pods.

--grafana-pod-env env_vars

A comma separated list of environment variables to be passed to Grafana deployment pods.

--grafana-service-port port

The port number for the Grafana service. The default is 3000.

--grafana-service-type service

The service type to access Grafana. The default is ClusterIP.

module_args for the istio module:

--istio-helm-module helm_module

The name of the helm module that Istio should be associated with.

--istio-version version

The version of Istio to install. The default is the latest version. For information on the latest version number, see Release Notes.

--istio-container-registry container_registry

The container image registry to use when deploying Istio. The default is container-registry.oracle.com/olcne.

--istio-mutual-tls {true|false}

Sets whether to enable Mutual Transport Layer Security (mTLS) for communication between the control plane pods for Istio, and for any pods deployed into the Istio service mesh.

The default is true.

Important

It is strongly recommended that this value is not set to false, especially in production environments.

module_args for the operator-lifecycle-manager module:

--olm-helm-module helm_module

The name of the helm module that Operator Lifecycle Manager should be associated with.

--olm-version version

The version of Operator Lifecycle Manager to install. The default is the latest version. For information on the latest version number, see Release Notes.

--olm-container-registry container_registry

The container image registry to use when deploying the Operator Lifecycle Manager. The default is container-registry.oracle.com/olcne.

--olm-enable-operatorhub {true|false}

Sets whether to enable the Operator Lifecycle Manager to use the OperatorHub registry as a catalog source.

The default is true.

module_args for the gluster module:

--gluster-helm-module helm_module

The name of the helm module that the Gluster Container Storage Interface module should be associated with.

--gluster-server-url URL

The URL of the Heketi API server endpoint. The default is http://127.0.0.1:8080.

--gluster-server-user user

The username of the Heketi API server admin user. The default is admin.

--gluster-existing-secret-name secret

The name of the existing secret containing the admin password. The default is heketi-admin.

--gluster-secret-key secret

The secret containing the admin password. The default is secret.

--gluster-namespace namespace

The Kubernetes namespace in which to install the Gluster Container Storage Interface module. The default is default.

--gluster-sc-name class_name

The StorageClass name for the Glusterfs StorageClass. The default is hyperconverged.

--gluster-server-rest-auth {true|false}

Whether the Heketi API server accepts REST authorization. The default is true.

module_args for the oci-csi module:

--oci-csi-helm-module helm_module

The name of the helm module that the Oracle Cloud Infrastructure Container Storage Interface module should be associated with.

--oci-tenancy OCID

The OCID for the Oracle Cloud Infrastructure tenancy.

--oci-region region_identifier

The Oracle Cloud Infrastructure region identifier. The default is us-ashburn-1.

For a list of the region identifiers, see the Oracle Cloud Infrastructure documentation.

--oci-compartment OCID

The OCID for the Oracle Cloud Infrastructure compartment.

--oci-user OCID

The OCID for the Oracle Cloud Infrastructure user.

--oci-private-key path

The location of the private key for the Oracle Cloud Infrastructure API signing key. This must be located on the primary control plane node. The default is /root/.ssh/id_rsa.

Important

The private key must be available on the primary control plane node. This is the first control plane node listed in the --master-nodes option when you create the Kubernetes module.

--oci-fingerprint fingerprint

The fingerprint of the public key for the Oracle Cloud Infrastructure API signing key.

--oci-passphrase passphrase

The passphrase for the private key for the Oracle Cloud Infrastructure API signing key, if one is set.

--oci-vcn OCID

The OCID for the Oracle Cloud Infrastructure Virtual Cloud Network on which the Kubernetes cluster is available.

--oci-lb-subnet1 OCID

The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.

Alternatively, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-subnet2 OCID

The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-security-mode {All|Frontend|None}

Sets whether the Oracle Cloud Infrastructure CSI plug-in manages security lists for load balancer services, and the configuration mode to use for security lists managed by the Kubernetes Cloud Controller Manager. The default is None.

For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.

--oci-container-registry container_registry

The container image registry to use when deploying the Oracle Cloud Infrastructure cloud provisioner image. The default is iad.ocir.io/oracle.

--csi-container-registry container_registry

The container image registry to use when deploying the CSI component images. The default is quay.io/k8scsi.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.7 Creating a module for an HA cluster with an external load balancer

This example creates an HA cluster with an external load balancer, available on the host lb.example.com and running on port 6443.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--load-balancer lb.example.com:6443 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key

Example 4.8 Creating a module for an HA cluster with an internal load balancer

This example creates an HA Kubernetes cluster using the load balancer deployed by the Platform CLI. The --virtual-ip option sets the virtual IP address to 192.0.2.100, which is the IP address of the primary control plane node. The primary control plane node is the first node in the list of control plane nodes. This cluster contains three control plane nodes and three worker nodes.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key

Example 4.9 Creating a module for a cluster to allow access to service IP address ranges

This example creates a Kubernetes cluster that sets the external IP addresses that can be accessed by Kubernetes services. The allowed IP ranges are within the 192.0.2.0/24 and 198.51.100.0/24 CIDR blocks.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24

Example 4.10 Creating a module for a cluster to allow access to all service IP addresses

This example creates a Kubernetes cluster that allows access to all external IP addresses for Kubernetes services. This disables the deployment of the externalip-validation-webhook-service Kubernetes service, which means no validation of external IP addresses is performed for Kubernetes services, and access is allowed for all CIDR blocks.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip false 

Example 4.11 Creating a module for a cluster with a single control plane node

This example creates a Kubernetes module to deploy a Kubernetes cluster with a single control plane node. The --module option is set to kubernetes to create a Kubernetes module. This cluster contains one control plane node and two worker nodes.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--master-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key 

Example 4.12 Creating a module for a service mesh

This example creates a service mesh using the Istio module. The --module option is set to istio to create an Istio module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Istio module named myistio.

The --istio-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module istio \
--name myistio \
--istio-helm-module myhelm

Example 4.13 Creating a module for Operator Lifecycle Manager

This example creates a module that can be used to install Operator Lifecycle Manager. The --module option is set to operator-lifecycle-manager to create an Operator Lifecycle Manager module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Operator Lifecycle Manager module named myolm.

The --olm-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module operator-lifecycle-manager \
--name myolm \
--olm-helm-module myhelm

Example 4.14 Creating a module for Gluster Storage

This example creates a module that creates a Kubernetes StorageClass provisioner to access Gluster storage. The --module option is set to gluster to create a Gluster Container Storage Interface module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and a Gluster Container Storage Interface module named mygluster.

The --gluster-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module gluster \
--name mygluster \
--gluster-helm-module myhelm

Example 4.15 Creating a module for Oracle Cloud Infrastructure Storage

This example creates a module that creates a Kubernetes StorageClass provisioner to access Oracle Cloud Infrastructure storage. The --module option is set to oci-csi to create an Oracle Cloud Infrastructure Container Storage Interface module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Oracle Cloud Infrastructure Container Storage Interface module named myoci.

The --oci-csi-helm-module option sets the name of the Helm module to use.

You should also provide the information required to access Oracle Cloud Infrastructure using the options as shown in this example, such as:

  • --oci-tenancy

  • --oci-compartment

  • --oci-user

  • --oci-fingerprint

  • --oci-private-key

You may need to provide more options to access Oracle Cloud Infrastructure, depending on your environment.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module oci-csi \
--name myoci \
--oci-csi-helm-module myhelm \
--oci-tenancy ocid1.tenancy.oc1... \
--oci-compartment ocid1.compartment.oc1... \
--oci-user ocid1.user.oc1... \
--oci-fingerprint b5:52:... \
--oci-private-key /home/opc/.oci/oci_api_key.pem 

4.6 Module Install

Installs a module in an environment. When you install a module, the nodes are checked to make sure they are set up correctly to run the module. If the nodes are not set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.

Syntax

olcnectl module install 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-g|--generate-scripts}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.16 Installing a module

To install a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module install \
--environment-name myenvironment \
--name mycluster

4.7 Module Instances

Lists the installed modules in an environment.

Syntax

olcnectl module instances 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]  

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.17 Listing the deployed modules in an environment

To list the deployed modules for an environment named myenvironment:

olcnectl module instances \
--environment-name myenvironment

4.8 Module List

Lists the available modules for an environment.

Syntax

olcnectl module list 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]  

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.18 Listing available modules in an environment

To list the modules for an environment named myenvironment:

olcnectl module list \
--environment-name myenvironment

4.9 Module Property Get

Lists the value of a module property.

Syntax

olcnectl module property get 
{-E|--environment-name} environment_name 
{-N|--name} name
{-P|--property} property_name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-P|--property} property_name

The name of the property. You can get a list of the available properties using the olcnectl module property list command.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.19 Listing module properties

To list the value of the kubecfg property for a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module property get \
--environment-name myenvironment \
--name mycluster \
--property kubecfg

4.10 Module Property List

Lists the available properties for a module in an environment.

Syntax

olcnectl module property list 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.20 Listing module properties

To list the properties for a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module property list \
--environment-name myenvironment \
--name mycluster

4.11 Module Report

Reports summary and detailed information about modules and their properties in an environment.

Syntax

olcnectl module report 
{-E|--environment-name} environment_name 
[{-N|--name} name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}] 
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment. When no name is specified, the output of the command contains information about all modules deployed in the selected environment.

--children

When added to the command, this option recursively displays the properties for all children of a module instance. The default value is 'false'.

--exclude pattern

An RE2 regular expression selecting the properties to exclude from the report. To specify more than one pattern, provide a comma-separated list.

--include pattern

An RE2 regular expression selecting the properties to include in the report. To specify more than one pattern, provide a comma-separated list. By default, all properties are displayed; using this option one or more times overrides this behavior.
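As an illustration only (not using olcnectl itself), simple alternation patterns of the kind --include and --exclude accept behave much like extended regular expressions in other tools. The following snippet uses grep -E, whose syntax is similar to RE2 for basic alternation, to show how such a pattern selects matching property names; the property names shown are hypothetical:

```shell
# Illustrative only: select property names matching an alternation
# pattern, similar to what --include or --exclude accept.
printf 'kubecfg\nexternalip-webhook\nmodule-operator\n' | grep -E 'kubecfg|webhook'
# prints:
# kubecfg
# externalip-webhook
```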

--format {yaml|table}

Sets the format of the report, either yaml or table. The default format is table.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.21 Reporting summary details about an environment

To report a summary of all modules deployed in the environment named myenvironment:

olcnectl module report \
--environment-name myenvironment

Example 4.22 Reporting summary details about a Kubernetes module

To report summary details about a Kubernetes module named mycluster:

olcnectl module report \
--environment-name myenvironment \
--name mycluster

Example 4.23 Reporting comprehensive details about a Kubernetes module

To report comprehensive details about a Kubernetes module named mycluster:

olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--children

4.12 Module Restore

Restores a module from a back up in an environment.

Syntax

olcnectl module restore 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-g|--generate-scripts}]
[{-F|--force}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.24 Restoring control plane nodes from a back up

To restore the Kubernetes control plane nodes from a back up in a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module restore \
--environment-name myenvironment \
--name mycluster

4.13 Module Uninstall

Uninstalls a module from an environment. Uninstalling the module also removes the module configuration from the Platform API Server.

Syntax

olcnectl module uninstall 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-F|--force}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.25 Uninstalling a module

To uninstall a Kubernetes module named mycluster from an environment named myenvironment:

olcnectl module uninstall \
--environment-name myenvironment \
--name mycluster

In this example, the Kubernetes containers are stopped and deleted on each node, and the Kubernetes cluster is removed.


4.14 Module Update

Updates a module in an environment. The module configuration is automatically retrieved from the Platform API Server. This command can be used to:

  • Update the Kubernetes release on nodes to the latest errata release

  • Upgrade the Kubernetes release on nodes to the latest release

  • Update or upgrade other modules and components

  • Scale up a Kubernetes cluster (add control plane and/or worker nodes)

  • Scale down a Kubernetes cluster (remove control plane and/or worker nodes)

Important

Before you update or upgrade the Kubernetes cluster, make sure you have updated or upgraded Oracle Cloud Native Environment to the latest release. For information on updating or upgrading Oracle Cloud Native Environment, see Updates and Upgrades.

Syntax

olcnectl module update 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-r|--container-registry} container_registry]
[{-k|--kube-version} version]
[{-m|--master-nodes} nodes ...] 
[{-w|--worker-nodes} nodes ...]
[--nginx-image container_location]
[--helm-version version]
[--prometheus-version version]
[--prometheus-container-registry container_registry]
[--grafana-version version]
[--grafana-container-registry container_registry]
[--istio-version version]
[--istio-container-registry container_registry]
[--olm-version version]
[--restrict-service-externalip {true|false}]
[--restrict-service-externalip-ca-cert path]
[--restrict-service-externalip-tls-cert path]
[--restrict-service-externalip-tls-key path]
[--restrict-service-externalip-cidrs allowed_cidrs]
[--selinux {enforcing|permissive}]
[{-g|--generate-scripts}]
[{-F|--force}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-k|--kube-version} version

Sets the Kubernetes version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

If this option is not provided, any available Kubernetes errata updates are installed.

{-r|--container-registry} container_registry

The container registry that contains the Kubernetes images when performing an update or upgrade. Use the Oracle Container Registry or a local registry to pull the Kubernetes images.

This option allows you to update or upgrade using a different container registry. The registry you specify becomes the default container registry for all subsequent updates or upgrades, so you need to use this option only when changing the default.

{-m|--master-nodes} nodes ...

A comma-separated list of the hostnames or IP addresses of the Kubernetes control plane nodes that should remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any control plane nodes not included in this list are removed from the cluster.

The default port number for the Platform Agent is 8090. For example, control1.example.com:8090,control2.example.com:8090.

{-w|--worker-nodes} nodes ...

A comma-separated list of the hostnames or IP addresses of the Kubernetes worker nodes that should remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any worker nodes not included in this list are removed from the cluster.

The default port number for the Platform Agent is 8090. For example, worker1.example.com:8090,worker2.example.com:8090.

--nginx-image container_location

The location of the NGINX container image to update. This is optional.

This option pulls the NGINX container image from the container registry location you specify to update NGINX on the control plane nodes. For example:

--nginx-image container-registry.oracle.com/olcne/nginx:1.17.7

--helm-version version

Sets the Helm version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--prometheus-version version

Sets the Prometheus version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--prometheus-container-registry container_registry

The container registry that contains the Prometheus images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Prometheus images.

--grafana-version version

Sets the Grafana version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--grafana-container-registry container_registry

The container registry that contains the Grafana images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Grafana images.

--istio-version version

Sets the Istio version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--istio-container-registry container_registry

The container registry that contains the Istio images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Istio images.

--olm-version version

Sets the Operator Lifecycle Manager version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--restrict-service-externalip {true|false}

Sets whether to restrict access to external IP addresses for Kubernetes services. The default is true, which restricts access to external IP addresses.

This option deploys a Kubernetes service named externalip-validation-webhook-service to validate externalIPs set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using the externalIPs option in the spec section.
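For reference, external IP addresses are declared in a Kubernetes service manifest as in the following sketch; the service name, selector, and addresses are hypothetical:

```yaml
# Hypothetical service manifest; the externalIPs values must fall within
# the CIDR blocks allowed by --restrict-service-externalip-cidrs.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
  externalIPs:
    - 192.0.2.10
```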

--restrict-service-externalip-ca-cert path

The path to a CA certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/ca.cert.

--restrict-service-externalip-tls-cert path

The path to the TLS certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.cert.

--restrict-service-externalip-tls-key path

The path to the private key for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.key.

--restrict-service-externalip-cidrs allowed_cidrs

Enter one or more comma-separated CIDR blocks to allow only IP addresses from the specified CIDR blocks. For example, 192.0.2.0/24,198.51.100.0/24.

--selinux {enforcing|permissive}

Sets whether to use SELinux enforcing or permissive mode. The default is permissive.

You should use this option if SELinux is set to enforcing on the control plane and worker nodes. SELinux is set to enforcing mode by default on the operating system and is the recommended mode.

{-g|--generate-scripts}

Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.26 Scaling a cluster

To scale up a cluster, list all nodes to be included in the cluster. If an existing cluster includes two control plane nodes and two worker nodes, and you want to add a new control plane node and a new worker node, list all the nodes to include. For example, to add a control3.example.com control plane node and a worker3.example.com worker node to a Kubernetes module named mycluster:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090

To scale down a cluster, list all the nodes to be included in the cluster. To remove the control3.example.com control plane node and the worker3.example.com worker node from the Kubernetes module named mycluster:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--master-nodes control1.example.com:8090,control2.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090

As the control3.example.com control plane node and worker3.example.com worker node are not listed in the --master-nodes and --worker-nodes options, the Platform API Server removes those nodes from the cluster.


Example 4.27 Updating the Kubernetes release for errata updates

To update a Kubernetes module named mycluster in an environment named myenvironment to the latest Kubernetes errata release, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster

The nodes in the environment are updated to the latest Kubernetes errata release.


Example 4.28 Updating using a different container registry

To update a Kubernetes module named mycluster in an environment named myenvironment to the latest Kubernetes errata release using a different container registry than the default specified when creating the Kubernetes module, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/

The nodes in the environment are updated to the latest Kubernetes errata release contained on the mirror container registry.


Example 4.29 Upgrading the Kubernetes release

To upgrade a Kubernetes module named mycluster in an environment named myenvironment to Kubernetes Release 1.21.14-3, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--kube-version 1.21.14-3

The --kube-version option specifies the release to which you want to upgrade. This example uses release number 1.21.14-3.

Make sure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.

The nodes in the environment are updated to Kubernetes Release 1.21.14-3.


Example 4.30 Upgrading using a different container registry

To upgrade a Kubernetes module named mycluster in an environment named myenvironment to Kubernetes Release 1.21.14-3 using a different container registry than the current default container registry, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/ \
--kube-version 1.21.14-3

The --kube-version option specifies the release to which you want to upgrade. This example uses release number 1.21.14-3. The specified container registry becomes the new default container registry for all subsequent updates or upgrades.

Make sure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.

The nodes in the environment are updated to Kubernetes 1.21.14-3.


Example 4.31 Setting access to external IP addresses for Kubernetes services

This example sets the range of external IP addresses that Kubernetes services can access.

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24

Example 4.32 Modifying host SELinux settings

This example updates the configuration held by the Platform API Server to specify that nodes in the Kubernetes cluster have SELinux enforcing mode enabled.

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--selinux enforcing

4.15 Module Validate

Validates a module for an environment. When you validate a module, the nodes are checked to make sure they are set up correctly to run the module. If the nodes are not set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.

Syntax

olcnectl module validate 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-g|--generate-scripts}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Section 1.3, “Using Global Flags”.

Examples

Example 4.33 Validating a module

To validate a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module validate \
--environment-name myenvironment \
--name mycluster

4.16 Template

Generates a simple configuration file template. The template file is named config-file-template.yaml and created in the local directory.

Syntax

olcnectl template 
[{-h|--help}]

Where:

{-h|--help}

Lists information about the command and the available options.

Examples

Example 4.34 Creating a sample configuration template

To create a sample configuration template:

olcnectl template
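The generated template can be edited and then passed to other olcnectl commands with the --config-file option. As a rough sketch only (the exact fields in the generated file may differ between releases), a configuration file follows this general shape, with placeholder values:

```yaml
# Sketch of a configuration file; all values are placeholders.
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          master-nodes: control1.example.com:8090
          worker-nodes: worker1.example.com:8090,worker2.example.com:8090
```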