4 Platform CLI Commands

Important:

The software described in this documentation is either in Extended Support or Sustaining Support. See Oracle Open Source Support Policies for more information.

We recommend that you upgrade the software described by this documentation as soon as possible.

This chapter contains the syntax for each olcnectl command option, including usage and examples.

Certificates Copy

Copies the generated CA Certificates and private keys to the Kubernetes nodes.

Copies and installs the CA Certificates and private keys for a set of nodes from a pre-generated set. The files must be located within a specific directory structure.

The Certificate Authority bundle must be located at <cert-dir>/ca.cert.

The certificate for each node must be located at <cert-dir>/<node-address>/node.cert.

The node key must be located at <cert-dir>/<node-address>/node.key.
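The expected layout can be prepared and checked with a short shell sketch. The directory name and node names below are hypothetical examples, not values required by the command:

```shell
# Sketch of the directory layout "olcnectl certificates copy" expects.
# CERT_DIR and the node names are hypothetical examples.
CERT_DIR=certificates
NODES="control1.example.com worker1.example.com"

mkdir -p "$CERT_DIR"
touch "$CERT_DIR/ca.cert"                     # CA bundle
for node in $NODES; do
  mkdir -p "$CERT_DIR/$node"
  touch "$CERT_DIR/$node/node.cert"           # per-node certificate
  touch "$CERT_DIR/$node/node.key"            # per-node private key
done

# Verify every node directory is complete before running the copy.
for node in $NODES; do
  for f in node.cert node.key; do
    [ -f "$CERT_DIR/$node/$f" ] || echo "missing: $CERT_DIR/$node/$f"
  done
done
```

In practice the files would be the real certificates and keys (for example, as produced by olcnectl certificates generate), not empty placeholders.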

Syntax

olcnectl certificates copy
[--cert-dir certificate-directory]
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]

Where:

--cert-dir certificate-directory

The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.
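As a stand-in illustration (this sketch is not the Platform CLI itself), the node address and the command to run are appended after the configured remote command, so the effective invocation has the form <remote-command> <node-address> <command>:

```shell
# Stand-in demonstration of how the final invocation is composed.
# NODE and CMD are hypothetical examples.
REMOTE_COMMAND="ssh -i ~/.ssh/myidfile -l myuser"   # as passed to -R
NODE="worker1.example.com"
CMD="systemctl status olcne-agent"

# Effective form: <remote-command> <node-address> <command>
echo "$REMOTE_COMMAND $NODE $CMD"
```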

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-1 Copy certificates to nodes

This example copies the certificates to the nodes listed.

olcnectl certificates copy \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com

Certificates Distribute

Generates and distributes CA Certificates and keys to the nodes.

Syntax

olcnectl certificates distribute
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-y|--yes}]

Where:

--byo-ca-cert certificate-path

The path to an existing public CA Certificate.

--byo-ca-key key-path

The path to an existing private key.

--cert-dir certificate-directory

The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name

The Certificate Common Name suffix. The default is example.com.

--cert-request-country country

The two letter country code where your company is located, for example, US for the United States, GB for the United Kingdom and CN for China. The default is US.

--cert-request-locality locality

The name of the city where your company is located. The default is Redwood City.

--cert-request-organization organization

The name of your company. The default is OLCNE.

--cert-request-organization-unit organization-unit

The name of the department within your company. The default is OLCNE.

--cert-request-state state

The name of the state or province where your company is located. The default is California.

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

--one-cert

Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option is not set.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-2 Distribute certificates to nodes

This example distributes the certificates to the nodes listed.

olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com

Example 4-3 Distribute certificates for nodes with Certificate and SSH login information

This example distributes the certificates for the nodes listed, and sets the Certificate and SSH login information. It also accepts all prompts.

olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--cert-request-common-name cloud.example.com \
--cert-request-country US \
--cert-request-locality "My Town" \
--cert-request-organization "My Company" \
--cert-request-organization-unit "My Company Unit" \
--cert-request-state "My State" \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--yes

Example 4-4 Distribute certificates for nodes using an existing CA Certificate and private key

This example distributes the certificate for the nodes listed using an existing CA Certificate and private key. This is useful for generating and copying the certificate information to nodes you want to add to an existing Kubernetes cluster.

olcnectl certificates distribute \
--nodes worker3.example.com,worker4.example.com \
--byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
--byo-ca-key /etc/olcne/configs/certificates/production/ca.key

Example 4-5 Distribute certificates for nodes using an existing CA Certificate and private key and SSH login information

This example distributes the certificate for the nodes listed using an existing CA Certificate and private key.

olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--byo-ca-cert /etc/olcne/certificates/ca.cert \
--byo-ca-key /etc/olcne/certificates/ca.key

Certificates Generate

Generates CA Certificates for the nodes.

Creates the CA Certificates and keys required to authenticate the Platform CLI, Platform API Server, and Platform Agent for a set of hosts.

If a Certificate Authority is created, its key material is placed in the directory specified with the --cert-dir option. The CA Certificate is written to the file ca.cert in that directory, and the private key is written to ca.key.
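One way to confirm that a CA Certificate and private key are a matching pair is to compare their public keys with OpenSSL. This sketch assumes OpenSSL is installed and generates a throwaway self-signed CA standing in for the ca.cert and ca.key files described above:

```shell
# Sketch: confirm a CA certificate and private key are a matching pair
# by comparing their public keys. The files here are a throwaway
# self-signed CA standing in for ca.cert/ca.key.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.cert -days 1 \
  -subj "/CN=example.com" 2>/dev/null

cert_pub=$(openssl x509 -in ca.cert -noout -pubkey)
key_pub=$(openssl pkey -in ca.key -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "ca.cert and ca.key match"
fi
```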

Syntax

olcnectl certificates generate
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
{-n|--nodes} nodes
[--one-cert]

Where:

--byo-ca-cert certificate-path

The path to an existing public CA Certificate.

--byo-ca-key key-path

The path to an existing private key.

--cert-dir certificate-directory

The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name

The Certificate Common Name suffix. The default is example.com.

--cert-request-country country

The two letter country code where your company is located, for example, US for the United States, GB for the United Kingdom and CN for China. The default is US.

--cert-request-locality locality

The name of the city where your company is located. The default is Redwood City.

--cert-request-organization organization

The name of your company. The default is OLCNE.

--cert-request-organization-unit organization-unit

The name of the department within your company. The default is OLCNE.

--cert-request-state state

The name of the state or province where your company is located. The default is California.

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

--one-cert

Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option is not set.

Examples

Example 4-6 Generate certificates for nodes

This example generates the certificates for the nodes listed.

olcnectl certificates generate \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com

Example 4-7 Generate certificates for nodes with certificate information

This example generates the certificates for the nodes listed, and sets the certificate information.

olcnectl certificates generate \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--cert-request-common-name cloud.example.com \
--cert-request-country US \
--cert-request-locality "My Town" \
--cert-request-organization "My Company" \
--cert-request-organization-unit "My Company Unit" \
--cert-request-state "My State"

Example 4-8 Generate certificates for the Kubernetes ExternalIPs service with existing key information

This example generates the certificates for the Kubernetes ExternalIPs service using an existing CA Certificate and private key.

olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir /etc/olcne/certificates/restrict_external_ip/ \
--byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
--byo-ca-key /etc/olcne/configs/certificates/production/ca.key \
--one-cert

Example 4-9 Generate certificates for the Kubernetes ExternalIPs service

This example generates the certificates for the Kubernetes ExternalIPs service using a CA Certificate and private key that are generated at the time the command is run.

olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir /etc/olcne/certificates/restrict_external_ip/ \
--cert-request-organization-unit "My Company Unit" \
--cert-request-organization "My Company" \
--cert-request-locality "My Town" \
--cert-request-state "My State" \
--cert-request-country US \
--cert-request-common-name cloud.example.com \
--one-cert

Environment Create

Creates an empty environment.

The first step to deploying Oracle Cloud Native Environment is to create an empty environment. You can create an environment using certificates provided by Vault, or using existing certificates on the nodes.

Syntax

olcnectl environment create 
{-E|--environment-name} environment_name 
[{-h|--help}]
[globals]

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is:

{-a|--api-server} api_server_address:8091

The Platform API Server for the environment. This is the host running the olcne-api-server service in an environment. The value of api_server_address is the IP address or hostname of the Platform API Server. The port number is the port on which the olcne-api-server service is available. The default port is 8091.

If a Platform API Server is not specified, a local instance is used. If no local instance is set up, it is configured in the $HOME/.olcne/olcne.conf file.

For more information on setting the Platform API Server see Setting the Platform API Server.

This option maps to the $OLCNE_API_SERVER_BIN environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--config-file path

The location of a YAML file that contains the configuration information for the environment(s) and module(s). The filename extension must be either yaml or yml. When you use this option, any other command line options are ignored, with the exception of the --force option. Only the information contained in the configuration file is used.

--secret-manager-type {file|vault}

The secrets manager type. The options are file or vault. Use file for certificates saved on the nodes and use vault for certificates managed by Vault.

--update-config

Writes the global arguments for an environment to a local configuration file which is used for future calls to the Platform API Server. If this option has not been used previously, global arguments must be specified for every Platform API Server call.

The global arguments configuration information is saved to $HOME/.olcne/olcne.conf on the local host.

If you use Vault to generate certificates for nodes, the certificate is saved to $HOME/.olcne/certificates/environment_name/ on the local host.

--olcne-ca-path ca_path

The path to a predefined Certificate Authority certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/ca.cert, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_CA_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-node-cert-path node_cert_path

The path to a predefined certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/node.cert, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_CERT_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-node-key-path node_key_path

The path to a predefined key, or the destination of the key if using a secrets manager to download the key. The default is /etc/olcne/certificates/node.key, or gathered from the local configuration if the --update-config option is used.

This option maps to the $OLCNE_SM_KEY_PATH environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-cipher-suites ciphers

The TLS cipher suites to use for Oracle Cloud Native Environment services (the Platform Agent and Platform API Server). Enter one or more in a comma separated list. The options are:

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_ECDSA_WITH_RC4_128_SHA

  • TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_RSA_WITH_RC4_128_SHA

  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA256

  • TLS_RSA_WITH_AES_128_GCM_SHA256

  • TLS_RSA_WITH_AES_256_CBC_SHA

  • TLS_RSA_WITH_AES_256_GCM_SHA384

  • TLS_RSA_WITH_RC4_128_SHA

For example:

--olcne-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

This option maps to the $OLCNE_TLS_CIPHER_SUITES environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-max-version version

The TLS maximum version for Oracle Cloud Native Environment components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

This option maps to the $OLCNE_TLS_MAX_VERSION environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-min-version version

The TLS minimum version for Oracle Cloud Native Environment components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

This option maps to the $OLCNE_TLS_MIN_VERSION environment variable. If this environment variable is set it takes precedence over and overrides the Platform CLI setting.

--vault-address vault_address

The IP address of the Vault instance. The default is https://127.0.0.1:8200, or gathered from the local configuration if the --update-config option is used.

--vault-cert-sans vault_cert_sans

Subject Alternative Names (SANs) to pass to Vault to generate the Oracle Cloud Native Environment certificate. The default is 127.0.0.1, or gathered from the local configuration if the --update-config option is used.

--vault-token vault_token

The Vault authentication token.

Examples

Example 4-10 Creating an environment using Vault

To create an environment named myenvironment using certificates generated from a Vault instance, use the --secret-manager-type vault option:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type vault \
--vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
--vault-address https://192.0.2.20:8200 \
--update-config 

Example 4-11 Creating an environment using certificates

To create an environment named myenvironment using certificates on the node's file system, use the --secret-manager-type file option:

olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/certificates/node.cert \
--olcne-ca-path /etc/olcne/certificates/ca.cert \
--olcne-node-key-path /etc/olcne/certificates/node.key \
--update-config 

Environment Delete

Deletes an existing environment.

You must uninstall any modules from an environment before you can delete it.

Syntax

olcnectl environment delete 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-12 Deleting an environment

To delete an environment named myenvironment:

olcnectl environment delete \
--environment-name myenvironment

Environment Update

Updates or upgrades the Platform Agent on nodes in an existing environment.

Syntax

olcnectl environment update 
olcne
{-E|--environment-name} environment_name
[{-N|--name} name] 
[{-h|--help}]
[globals] 

Where:

olcne

Specifies to update the Platform Agent on each node in an environment.

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

The Platform Agent is updated on only the nodes used in this module.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-13 Updating Platform Agents

To update the Platform Agent on each node in an environment named myenvironment:

olcnectl environment update olcne \
--environment-name myenvironment

Example 4-14 Updating Platform Agents in a module

To update the Platform Agent on each node in a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl environment update olcne \
--environment-name myenvironment \
--name mycluster

Environment Report

Reports summary and detailed information about environments.

Syntax

olcnectl environment report 
[{-E|--environment-name} environment_name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}] 
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

--children

When added to the command, this option recursively displays the properties for all children of a module instance. The default is false.

--exclude pattern

An RE2 regular expression selecting the properties to exclude from the report. This option may specify more than one property as a comma separated list.

--include pattern

An RE2 regular expression selecting the properties to include in the report. This option may specify more than one property as a comma separated list. By default, all properties are displayed. Using this option one or more times overrides this behavior.

--format {yaml|table}

The format of the report. The options are yaml or table. The default is table.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-15 Reporting summary details about an environment

To report a summary about the environment named myenvironment:

olcnectl environment report \
--environment-name myenvironment

Example 4-16 Reporting details about an environment

To report details about the environment named myenvironment:

olcnectl environment report \
--environment-name myenvironment \
--children

Module Backup

Backs up a module in an environment.

Syntax

olcnectl module backup 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-17 Backing up control plane nodes

To back up the configuration for the Kubernetes control plane nodes in a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module backup \
--environment-name myenvironment \
--name mycluster

Module Create

Adds and configures a module in an environment.

Syntax

olcnectl module create 
{-E|--environment-name} environment_name 
{-M|--module} module 
{-N|--name} name
[{-h|--help}]
[module_args ...]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-M|--module} module

The module type to create in an environment. The value of module is the name of a module type. The available module types are:

  • kubernetes

  • helm

  • oci-ccm

  • oci-csi (Deprecated)

  • metallb

  • gluster

  • operator-lifecycle-manager

  • prometheus

  • grafana

  • istio

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where module_args is:

The value of module_args is one or more arguments to configure a module in an environment.

module_args for the kubernetes module:

{-o|--apiserver-advertise-address} IP_address

The IP address on which to advertise the Kubernetes API server to members of the Kubernetes cluster. This address must be reachable by the cluster nodes. If no value is provided, the interface on the control plane node specified with the --master-nodes option is used.

This option is not used in a highly available (HA) cluster with multiple control plane nodes.

Important:

This argument has been deprecated. Use the --master-nodes argument instead.

{-b|--apiserver-bind-port} port

The Kubernetes API server bind port. The default is 6443.

{-B|--apiserver-bind-port-alt} port

The port on which the Kubernetes API server listens when you use a virtual IP address for the load balancer. The default is 6444. This is optional.

When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443 and receives the requests and passes them to the Kubernetes API server. If you want to change the Kubernetes API server port in this situation from 6444, you can use this option to do so.

{-e|--apiserver-cert-extra-sans} api_server_sans

The Subject Alternative Names (SANs) to use for the Kubernetes API server serving certificate. This value can contain both IP addresses and DNS names.

--compact {true|false}

Sets whether to allow non-system Kubernetes workloads to run on control plane nodes. The default is false.

If you set this to true, the Platform API Server does not taint the control plane node(s). This allows non-system Kubernetes workloads to be scheduled and run on control plane nodes.

Important:

For production environments, this option must be set to false (the default).

{-r|--container-registry} container_registry

The container registry that contains the Kubernetes images. Use container-registry.oracle.com/olcne to pull the Kubernetes images from the Oracle Container Registry.

If you do not provide this value, you are prompted for it by the Platform CLI.

{-x|--kube-proxy-mode} {userspace|iptables|ipvs}

The routing mode for the Kubernetes proxy. The default is iptables. The available proxy modes are:

  • userspace: This is an older proxy mode.

  • iptables: This is the fastest proxy mode. This is the default mode.

  • ipvs: This is an experimental mode.

If no value is provided, the default of iptables is used. If the system's kernel or iptables version is insufficient, the userspace proxy is used.

{-v|--kube-version} version

The version of Kubernetes to install. The default is the latest version. For information on the latest version number, see Release Notes.

{-t|--kubeadm-token} token

The token to use for establishing bidirectional trust between Kubernetes nodes and control plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16}, for example, abcdef.0123456789abcdef.
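A token matching this format can be generated with standard tools. This is a sketch using OpenSSL (hex digits are a subset of [a-z0-9]), not the mechanism the Platform CLI uses when no token is supplied:

```shell
# Sketch: generate a token in the required [a-z0-9]{6}.[a-z0-9]{16} format.
# "openssl rand -hex 3" yields 6 lowercase hex characters; "-hex 8" yields 16.
TOKEN="$(openssl rand -hex 3).$(openssl rand -hex 8)"
echo "$TOKEN"

# Validate the format before passing it to --kubeadm-token.
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "valid token format"
```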

--kube-tls-cipher-suites ciphers

The TLS cipher suites to use for Kubernetes components. Enter one or more in a comma separated list. The options are:

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_ECDSA_WITH_RC4_128_SHA

  • TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305

  • TLS_ECDHE_RSA_WITH_RC4_128_SHA

  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA

  • TLS_RSA_WITH_AES_128_CBC_SHA256

  • TLS_RSA_WITH_AES_128_GCM_SHA256

  • TLS_RSA_WITH_AES_256_CBC_SHA

  • TLS_RSA_WITH_AES_256_GCM_SHA384

  • TLS_RSA_WITH_RC4_128_SHA

For example:

--kube-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

--kube-tls-min-version version

The TLS minimum version for Kubernetes components. The default is VersionTLS12. Options are:

  • VersionTLS10

  • VersionTLS11

  • VersionTLS12

  • VersionTLS13

{-l|--load-balancer} load_balancer

The Kubernetes API server load balancer hostname or IP address, and port. The default port is 6443. For example, 192.0.2.100:6443.

{-m|--master-nodes} nodes ...

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes, including the port number for the Platform Agent. For example, control1.example.com:8090,control2.example.com:8090.

If you do not provide this value, you are prompted for it by the Platform CLI.

{-g|--nginx-image} container_location

The location for an NGINX container image to use in a highly available (HA) cluster with multiple control plane nodes. This is optional.

You can use this option if you do not provide your own load balancer using the --load-balancer option. This option may be useful if you are using a mirrored container registry. For example:

--nginx-image mirror.example.com:5000/olcne/nginx:1.17.7

By default, podman is used to pull the NGINX image that is configured in /usr/libexec/pull_olcne_nginx. If you set the --nginx-image option to use another NGINX container image, the location of the image is written to /etc/olcne-nginx/image, and overrides the default image.

--node-labels label

Important:

This option, and the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi) that requires it, are deprecated in Release 1.5.

The label to add to Kubernetes nodes on Oracle Cloud Infrastructure instances to set the Availability Domain for pods. This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi). The label should be in the format:

failure-domain.beta.kubernetes.io/zone=region-identifier-AD-availability-domain-number

For example:

--node-labels failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-1

For a list of the Availability Domains, see the Oracle Cloud Infrastructure documentation.

--node-ocids OCIDs

Important:

This option, and the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi) that requires it, are deprecated in Release 1.5.

A comma separated list of Kubernetes nodes (both control plane and worker nodes) with their Oracle Cloud Identifiers (OCIDs). This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi). The format for the list is:

FQDN=OCID,...

For example:

--node-ocids control1.example.com=ocid1.instance...,worker1.example.com=ocid1.instance...,worker2.example.com=ocid1.instance...

For information about OCIDs, see the Oracle Cloud Infrastructure documentation.

{-p|--pod-cidr} pod_CIDR

The Kubernetes pod CIDR. The default is 10.244.0.0/16. This is the range from which each Kubernetes pod network interface is assigned an IP address.

{-n|--pod-network} network_fabric

The network fabric for the Kubernetes cluster. The default is flannel.

{-P|--pod-network-iface} network_interface

The name of the network interface on the nodes to use for the Kubernetes data plane network communication. The data plane network is used by the pods running on Kubernetes. If you use regex to set the interface name, the first matching interface returned by the kernel is used. For example:

--pod-network-iface "ens[1-5]|eth5"

--selinux {enforcing|permissive}

Whether to use SELinux enforcing or permissive mode. permissive is the default.

You should use this option if SELinux is set to enforcing on the control plane and worker nodes. SELinux is set to enforcing mode by default on the operating system and is the recommended mode.

{-s|--service-cidr} service_CIDR

The Kubernetes service CIDR. The default is 10.96.0.0/12. This is the range from which each Kubernetes service is assigned an IP address.

{-i|--virtual-ip} virtual_ip

The virtual IP address for the load balancer. This is optional.

You should use this option if you do not specify your own load balancer using the --load-balancer option. When you specify a virtual IP address, it is used as the primary IP address for control plane nodes.

{-w|--worker-nodes} nodes ...

A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes, including the port number for the Platform Agent. If a worker node is behind a NAT gateway, use the public IP address for the node. The worker node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that is reachable by the Kubernetes cluster. The /32 subnet restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address (for more information about configuring NAT, see Getting Started). The default port number is 8090. For example, worker1.example.com:8090,worker2.example.com:8090.

If you do not provide this value, you are prompted for it by the Platform CLI.

--restrict-service-externalip {true|false}

Sets whether to restrict access to external IP addresses for Kubernetes services. The default is true, which restricts access to external IP addresses.

This option deploys a Kubernetes service named externalip-validation-webhook-service to validate externalIPs set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using the externalIPs option in the spec section.
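
For example, a Kubernetes service that requests an external IP address declares it in the externalIPs field of the spec section. The manifest below is a hypothetical sketch (the service name, selector, ports, and address are illustrative); it is this externalIPs entry that the webhook validates:

```yaml
# Hypothetical service manifest; the externalIPs entries are what
# the externalip-validation-webhook-service validates on admission.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 192.0.2.50
```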

--restrict-service-externalip-ca-cert path

The path to a CA certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/ca.cert.

--restrict-service-externalip-tls-cert path

The path to the TLS certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.cert.

--restrict-service-externalip-tls-key path

The path to the private key for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.key.

--restrict-service-externalip-cidrs allowed_cidrs

One or more comma separated CIDR blocks, used to allow external IP addresses only from the specified ranges. For example, 192.0.2.0/24,198.51.100.0/24.

module_args for the helm module:

--helm-kubernetes-module kubernetes_module

The name of the kubernetes module that Helm should be associated with. Each instance of Kubernetes can have one instance of Helm associated with it.

--helm-version version

The version of Helm to install. The default is the latest version. For information on the latest version number, see Release Notes.

module_args for the oci-ccm module:

--oci-ccm-helm-module helm_module

The name of the helm module that the Oracle Cloud Infrastructure Cloud Controller Manager module should be associated with.

--oci-tenancy OCID

The OCID for the Oracle Cloud Infrastructure tenancy.

--oci-region region_identifier

The Oracle Cloud Infrastructure region identifier. The default is us-ashburn-1.

For a list of the region identifiers, see the Oracle Cloud Infrastructure documentation.

--oci-compartment OCID

The OCID for the Oracle Cloud Infrastructure compartment.

--oci-user OCID

The OCID for the Oracle Cloud Infrastructure user.

--oci-private-key path

The location of the private key for the Oracle Cloud Infrastructure API signing key. This must be located on the primary control plane node. The default is /root/.ssh/id_rsa.

Important:

The private key must be available on the primary control plane node. This is the first control plane node listed in the --master-nodes option when you create the Kubernetes module.

--oci-fingerprint fingerprint

The fingerprint of the public key for the Oracle Cloud Infrastructure API signing key.

--oci-passphrase passphrase

The passphrase for the private key for the Oracle Cloud Infrastructure API signing key, if one is set.

--oci-vcn OCID

The OCID for the Oracle Cloud Infrastructure Virtual Cloud Network on which the Kubernetes cluster is available.

--oci-lb-subnet1 OCID

The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.

Alternatively, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-subnet2 OCID

The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-security-mode {All|Frontend|None}

Sets whether the Oracle Cloud Infrastructure Cloud Controller Manager module manages security lists for load balancer services, and the configuration mode to use for the security lists managed by the Kubernetes Cloud Controller Manager. The default is None.

For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.

--oci-use-instance-principals {true|false}

Sets whether to enable an instance to make API calls in Oracle Cloud Infrastructure services. The default is false.

--oci-container-registry container_registry

The container image registry to use when deploying the Oracle Cloud Infrastructure cloud provisioner image. The default is an empty string. The Platform API Server determines the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that is to be installed. Alternatively, you can use a private registry.

--ccm-container-registry container_registry

The container image registry to use when deploying the Oracle Cloud Infrastructure Cloud Controller Manager component images. The default is an empty string. The Platform API Server determines the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that is to be installed. Alternatively, you can use a private registry.

--oci-ccm-version version

The version of Oracle Cloud Infrastructure Cloud Controller Manager to install. The default is the latest version. For information on the latest version number, see Release Notes.

module_args for the oci-csi module:

Important:

The oci-csi module is deprecated in Release 1.5. You should instead use the oci-ccm module from Release 1.5 onwards. If you have upgraded from Release 1.4 to 1.5, the oci-csi module is automatically renamed to oci-ccm. You must also perform another step after the upgrade to make sure the module is correctly configured. For information on upgrading the oci-csi module to the oci-ccm module, see Updates and Upgrades.

--oci-csi-helm-module helm_module

The name of the helm module that the Oracle Cloud Infrastructure Container Storage Interface module should be associated with.

--oci-tenancy OCID

The OCID for the Oracle Cloud Infrastructure tenancy.

--oci-region region_identifier

The Oracle Cloud Infrastructure region identifier. The default is us-ashburn-1.

For a list of the region identifiers, see the Oracle Cloud Infrastructure documentation.

--oci-compartment OCID

The OCID for the Oracle Cloud Infrastructure compartment.

--oci-user OCID

The OCID for the Oracle Cloud Infrastructure user.

--oci-private-key path

The location of the private key for the Oracle Cloud Infrastructure API signing key. This must be located on the primary control plane node. The default is /root/.ssh/id_rsa.

Important:

The private key must be available on the primary control plane node. This is the first control plane node listed in the --master-nodes option when you create the Kubernetes module.

--oci-fingerprint fingerprint

The fingerprint of the public key for the Oracle Cloud Infrastructure API signing key.

--oci-passphrase passphrase

The passphrase for the private key for the Oracle Cloud Infrastructure API signing key, if one is set.

--oci-vcn OCID

The OCID for the Oracle Cloud Infrastructure Virtual Cloud Network on which the Kubernetes cluster is available.

--oci-lb-subnet1 OCID

The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.

Alternatively, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-subnet2 OCID

The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-security-mode {All|Frontend|None}

Sets whether the Oracle Cloud Infrastructure CSI plug-in manages security lists for load balancer services, and the configuration mode to use for the security lists managed by the Kubernetes Cloud Controller Manager. The default is None.

For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.

--oci-container-registry container_registry

The container image registry to use when deploying the Oracle Cloud Infrastructure cloud provisioner image. The default is iad.ocir.io/oracle.

--csi-container-registry container_registry

The container image registry to use when deploying the CSI component images. The default is quay.io/k8scsi.

module_args for the metallb module:

--metallb-helm-module helm_module

The name of the helm module that MetalLB should be associated with.

--metallb-config path

The location of the file that contains the configuration information for MetalLB. This file must be located on the operator node.
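
As a sketch, a minimal MetalLB configuration file for Layer 2 mode might look like the following (the pool name and address range are hypothetical; use addresses routable on the cluster network):

```yaml
# Hypothetical Layer 2 address pool; MetalLB assigns load balancer
# services an address from this range.
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.0.2.240-192.0.2.250
```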

--metallb-namespace namespace

The Kubernetes namespace in which to install MetalLB. The default namespace is metallb-system.

--metallb-version version

The version of MetalLB to install. The default is the latest version. For information on the latest version number, see Release Notes.

--metallb-container-registry container_registry

The container image registry and optional tag to use when installing MetalLB. The default is container-registry.oracle.com/olcne.

module_args for the gluster module:

--gluster-helm-module helm_module

The name of the helm module that the Gluster Container Storage Interface module should be associated with.

--gluster-server-url URL

The URL of the Heketi API server endpoint. The default is http://127.0.0.1:8080.

--gluster-server-user user

The username of the Heketi server admin user. The default is admin.

--gluster-existing-secret-name secret

The name of the existing secret containing the admin password. The default is heketi-admin.

--gluster-secret-key secret

The secret containing the admin password. The default is secret.

--gluster-namespace namespace

The Kubernetes namespace in which to install the Gluster Container Storage Interface module. The default is default.

--gluster-sc-name class_name

The StorageClass name for the Glusterfs StorageClass. The default is hyperconverged.

--gluster-server-rest-auth {true|false}

Whether the Heketi API server accepts REST authorization. The default is true.

module_args for the operator-lifecycle-manager module:

--olm-helm-module helm_module

The name of the helm module that Operator Lifecycle Manager should be associated with.

--olm-version version

The version of Operator Lifecycle Manager to install. The default is the latest version. For information on the latest version number, see Release Notes.

--olm-container-registry container_registry

The container image registry to use when deploying the Operator Lifecycle Manager. The default is container-registry.oracle.com/olcne.

--olm-enable-operatorhub {true|false}

Sets whether to enable the Operator Lifecycle Manager to use the OperatorHub registry as a catalog source.

The default is true.

module_args for the prometheus module:

--prometheus-helm-module helm_module

The name of the helm module that Prometheus should be associated with.

--prometheus-version version

The version of Prometheus to install. The default is the latest version. For information on the latest version number, see Release Notes.

--prometheus-image container_registry

The container image registry and tag to use when installing Prometheus. The default is container-registry.oracle.com/olcne/prometheus.

--prometheus-namespace namespace

The Kubernetes namespace in which to install Prometheus. The default namespace is default.

--prometheus-persistent-storage {true|false}

If this value is false, Prometheus writes its data into an emptyDir volume on the host where the pod is running. If the pod migrates, metric data is lost.

If this value is true, Prometheus requisitions a Kubernetes PersistentVolumeClaim so that its data persists, despite destruction or migration of the pod.

The default is false.

--prometheus-alerting-rules path

The path to a configuration file for Prometheus alerts.
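
For example, a minimal alerting rules file in the standard Prometheus rule format (the group and alert names here are hypothetical) might contain:

```yaml
groups:
- name: example-alerts
  rules:
  - alert: InstanceDown
    # Fires when a scrape target has been unreachable for 5 minutes.
    expr: up == 0
    for: 5m
    labels:
      severity: critical
```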

--prometheus-recording-rules path

The path to a configuration file for Prometheus recording rules.
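
A recording rules file uses the same Prometheus rule format; for example (the rule and group names are hypothetical):

```yaml
groups:
- name: example-recording
  rules:
  # Precompute a 5 minute CPU usage rate under a new series name.
  - record: instance:node_cpu:rate5m
    expr: rate(node_cpu_seconds_total[5m])
```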

--prometheus-scrape-configuration path

The path to a configuration file for Prometheus metrics scraping.
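
A scrape configuration file follows the Prometheus scrape_configs format; for example (the job name and target address are hypothetical):

```yaml
scrape_configs:
- job_name: my-app
  # Scrape a single static target that exposes Prometheus metrics.
  static_configs:
  - targets:
    - 192.0.2.10:9100
```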

module_args for the grafana module:

--grafana-helm-module helm_module

The name of the helm module that Grafana should be associated with.

--grafana-version version

The version of Grafana to install. The default is the latest version. For information on the latest version number, see Release Notes.

--grafana-container-registry container_registry

The container image registry and tag to use when installing Grafana. The default is container-registry.oracle.com/olcne.

--grafana-namespace namespace

The Kubernetes namespace in which to install Grafana. The default namespace is default.

--grafana-dashboard-configmaps configmap

The name of the ConfigMap reference that contains the Grafana dashboards.

--grafana-dashboard-providers path

The location of the file that contains the configuration for the Grafana dashboard providers.

--grafana-datasources path

The location of the file that contains the configuration for the Grafana data sources.
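
For example, a data sources file in the standard Grafana provisioning format that points Grafana at a Prometheus service might contain the following (the in-cluster service URL is hypothetical):

```yaml
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  # Hypothetical in-cluster Prometheus service URL.
  url: http://prometheus-server.default.svc.cluster.local:80
  isDefault: true
```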

--grafana-existing-secret-name secret

The name of the existing secret containing the Grafana admin password.

--grafana-notifiers path

The location of the file that contains the configuration for the Grafana notifiers.

--grafana-pod-annotations annotations

A comma separated list of annotations to be added to the Grafana pods.

--grafana-pod-env env_vars

A comma separated list of environment variables to be passed to Grafana deployment pods.

--grafana-service-port port

The port number for the Grafana service. The default is 3000.

--grafana-service-type service

The service type to access Grafana. The default is ClusterIP.

module_args for the istio module:

--istio-helm-module helm_module

The name of the helm module that Istio should be associated with.

--istio-version version

The version of Istio to install. The default is the latest version. For information on the latest version number, see Release Notes.

--istio-container-registry container_registry

The container image registry to use when deploying Istio. The default is container-registry.oracle.com/olcne.

--istio-enable-grafana {true|false}

Sets whether to deploy the Grafana module to visualize the metrics stored in Prometheus for Istio. The default is true.

--istio-enable-prometheus {true|false}

Sets whether to deploy the Prometheus module to store the metrics for Istio. The default is true.

--istio-mutual-tls {true|false}

Sets whether to enable Mutual Transport Layer Security (mTLS) for communication between the control plane pods for Istio, and for any pods deployed into the Istio service mesh.

The default is true.

Important:

It is strongly recommended that this value is not set to false, especially in production environments.

--istio-parent name

The name of the istio module to use with a custom profile. When used with the --istio-profile option, allows multiple instances of the istio module to attach Istio platform components to a single Istio control plane. When this option is set, the default Istio profile is replaced with a mostly empty profile. The only contents of the profile are the container image hub location, and tags that correspond to the currently installed version of the istio module.

--istio-profile path

The path to the file that contains the spec section of an IstioOperator resource from the install.istio.io/v1alpha1 Kubernetes API. The values in this resource are laid over top of, and override, the default profile for Istio.

For information on the IstioOperator resource file, see the upstream Istio documentation.
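
As a sketch, assuming the file holds the contents of the spec section of an IstioOperator resource, a profile that enables an egress gateway might look like:

```yaml
# Hypothetical overlay; these values are laid over, and override,
# the default Istio profile.
components:
  egressGateways:
  - name: istio-egressgateway
    enabled: true
```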

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-18 Creating a module for an HA cluster with an external load balancer

This example creates an HA cluster with an external load balancer, available on the host lb.example.com and running on port 6443.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--load-balancer lb.example.com:6443 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key

Example 4-19 Creating a module for an HA cluster with an internal load balancer

This example creates an HA Kubernetes cluster using the load balancer deployed by the Platform CLI. The --virtual-ip option sets the virtual IP address to 192.0.2.100, which is the IP address of the primary control plane node. The primary control plane node is the first node in the list of control plane nodes. This cluster contains three control plane nodes and three worker nodes.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key

Example 4-20 Creating a module for a cluster to allow access to service IP address ranges

This example creates a Kubernetes cluster that sets the external IP addresses that can be accessed by Kubernetes services. The IP ranges that are allowed are within the 192.0.2.0/24 and 198.51.100.0/24 CIDR blocks.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24

Example 4-21 Creating a module for a cluster to allow access to all service IP addresses

This example creates a Kubernetes cluster that allows access to all external IP addresses for Kubernetes services. This disables the deployment of the externalip-validation-webhook-service Kubernetes service, which means no validation of external IP addresses is performed for Kubernetes services, and access is allowed for all CIDR blocks.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip false 

Example 4-22 Creating a module for a cluster with a single control plane node

This example creates a Kubernetes module to deploy a Kubernetes cluster with a single control plane node. The --module option is set to kubernetes to create a Kubernetes module. This cluster contains one control plane and two worker nodes.

You must also include the location of the certificates for the externalip-validation-webhook-service Kubernetes service.

olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--master-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key 

Example 4-23 Creating a module for a service mesh

This example creates a service mesh using the Istio module. The --module option is set to istio to create an Istio module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Istio module named myistio.

The --istio-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module istio \
--name myistio \
--istio-helm-module myhelm

Example 4-24 Creating a module for Operator Lifecycle Manager

This example creates a module that can be used to install Operator Lifecycle Manager. The --module option is set to operator-lifecycle-manager to create an Operator Lifecycle Manager module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Operator Lifecycle Manager module named myolm.

The --olm-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module operator-lifecycle-manager \
--name myolm \
--olm-helm-module myhelm

Example 4-25 Creating a module for Gluster Storage

This example creates a module that creates a Kubernetes StorageClass provisioner to access Gluster storage. The --module option is set to gluster to create a Gluster Container Storage Interface module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and a Gluster Container Storage Interface module named mygluster.

The --gluster-helm-module option sets the name of the Helm module to use.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module gluster \
--name mygluster \
--gluster-helm-module myhelm

Example 4-26 Creating a module for Oracle Cloud Infrastructure

This example creates a module that creates a Kubernetes StorageClass provisioner to access Oracle Cloud Infrastructure storage, and enables the provision of highly available application load balancers. The --module option is set to oci-ccm to create an Oracle Cloud Infrastructure Cloud Controller Manager module. This example uses a Kubernetes module named mycluster, a Helm module named myhelm, and an Oracle Cloud Infrastructure Cloud Controller Manager module named myoci.

The --oci-ccm-helm-module option sets the name of the Helm module to use.

You should also provide the information required to access Oracle Cloud Infrastructure using the options as shown in this example, such as:

  • --oci-tenancy

  • --oci-compartment

  • --oci-user

  • --oci-fingerprint

  • --oci-private-key

  • --oci-vcn

  • --oci-lb-subnet1

  • --oci-lb-subnet2

You may need to provide more options to access Oracle Cloud Infrastructure, depending on your environment.

If you do not include all the required options when adding the modules, you are prompted to provide them.

olcnectl module create \
--environment-name myenvironment \
--module oci-ccm \
--name myoci \
--oci-ccm-helm-module myhelm \
--oci-tenancy ocid1.tenancy.oc1..unique_ID \
--oci-compartment ocid1.compartment.oc1..unique_ID \
--oci-user ocid1.user.oc1..unique_ID \
--oci-fingerprint b5:52:... \
--oci-private-key /home/opc/.oci/oci_api_key.pem \
--oci-vcn ocid1.vcn.oc1..unique_ID \
--oci-lb-subnet1 ocid1.subnet.oc1..unique_ID \
--oci-lb-subnet2 ocid1.subnet.oc1..unique_ID

Module Install

Installs a module in an environment. When you install a module, the nodes are checked to make sure they are set up correctly to run the module. If the nodes are not set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.

Syntax

olcnectl module install 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-g|--generate-scripts}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts that contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-27 Installing a module

To install a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module install \
--environment-name myenvironment \
--name mycluster

Module Instances

Lists the installed modules in an environment.

Syntax

olcnectl module instances 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]  

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-28 Listing the deployed modules in an environment

To list the deployed modules for an environment named myenvironment:

olcnectl module instances \
--environment-name myenvironment

Module List

Lists the available modules for an environment.

Syntax

olcnectl module list 
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]  

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-29 Listing available modules in an environment

To list the modules for an environment named myenvironment:

olcnectl module list \
--environment-name myenvironment

Module Property Get

Lists the value of a module property.

Syntax

olcnectl module property get 
{-E|--environment-name} environment_name 
{-N|--name} name
{-P|--property} property_name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-P|--property} property_name

The name of the property. You can get a list of the available properties using the olcnectl module property list command.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-30 Listing module properties

To list the value of the kubecfg property for a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module property get \
--environment-name myenvironment \
--name mycluster \
--property kubecfg

Module Property List

Lists the available properties for a module in an environment.

Syntax

olcnectl module property list 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-31 Listing module properties

To list the properties for a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module property list \
--environment-name myenvironment \
--name mycluster

Module Report

Reports summary and detailed information about modules and their properties in an environment.

Syntax

olcnectl module report 
{-E|--environment-name} environment_name 
[{-N|--name} name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}] 
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment. When no name is specified, the output of the command contains information about all modules deployed in the selected environment.

--children

When added to the command, this option recursively displays the properties for all children of a module instance. The default value is 'false'.

--exclude pattern

An RE2 regular expression selecting the properties to exclude from the report. This option may specify more than one property as a comma-separated list.

--include pattern

An RE2 regular expression selecting the properties to include in the report. This option may specify more than one property as a comma-separated list. By default, all properties are displayed. Using this option one or more times overrides this behavior.

--format {yaml|table}

To generate reports in YAML or table format, use this option. The default format is table.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-32 Reporting summary details about an environment

To report a summary of all modules deployed in the environment named myenvironment:

olcnectl module report \
--environment-name myenvironment

Example 4-33 Reporting summary details about a Kubernetes module

To report summary details about a Kubernetes module named mycluster:

olcnectl module report \
--environment-name myenvironment \
--name mycluster

Example 4-34 Reporting comprehensive details about a Kubernetes module

To report comprehensive details about a Kubernetes module named mycluster:

olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--children
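
To narrow a report, the --include and --format options can be combined. For example, to show only properties whose names match a pattern, in YAML format (the pattern kube is illustrative):

olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--include kube \
--format yaml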

Module Restore

Restores a module from a backup in an environment.

Syntax

olcnectl module restore 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-g|--generate-scripts}]
[{-F|--force}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts that contain the commands required to fix any setup errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-35 Restoring control plane nodes from a backup

To restore the Kubernetes control plane nodes from a backup in a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module restore \
--environment-name myenvironment \
--name mycluster

Module Uninstall

Uninstalls a module from an environment. Uninstalling the module also removes the module configuration from the Platform API Server.

Syntax

olcnectl module uninstall 
{-E|--environment-name} environment_name 
{-N|--name} name
[{-F|--force}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-36 Uninstalling a module

To uninstall a Kubernetes module named mycluster from an environment named myenvironment:

olcnectl module uninstall \
--environment-name myenvironment \
--name mycluster

In this example, the Kubernetes containers are stopped and deleted on each node, and the Kubernetes cluster is removed.

Module Update

Updates a module in an environment. The module configuration is automatically retrieved from the Platform API Server. This command can be used to:

  • Update the Kubernetes release on nodes to the latest errata release

  • Upgrade the Kubernetes release on nodes to the latest release

  • Update or upgrade other modules and components

  • Scale up a Kubernetes cluster (add control plane and/or worker nodes)

  • Scale down a Kubernetes cluster (remove control plane and/or worker nodes)

Important:

Before you update or upgrade the Kubernetes cluster, make sure you have updated or upgraded Oracle Cloud Native Environment to the latest release. For information on updating or upgrading Oracle Cloud Native Environment, see Updates and Upgrades.

Syntax

olcnectl module update 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-g|--generate-scripts}]
[{-F|--force}]
[{-h|--help}]
[module_args ...]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts that contain the commands required to fix any setup errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-F|--force}

Skips the confirmation prompt.

{-h|--help}

Lists information about the command and the available options.

Where module_args is:

The value of module_args is one or more arguments to update a module in an environment.

module_args for the kubernetes module:

{-k|--kube-version} version

Sets the Kubernetes version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

If this option is not provided, any available Kubernetes errata updates are installed.

{-r|--container-registry} container_registry

The container registry that contains the Kubernetes images when performing an update or upgrade. Use the Oracle Container Registry or a local registry to pull the Kubernetes images.

This option allows you to update or upgrade using a different container registry. This option sets the default container registry during all subsequent updates or upgrades and need only be used when changing the default container registry.

{-m|--master-nodes} nodes ...

A comma-separated list of the hostnames or IP addresses of the Kubernetes control plane nodes that should remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any control plane nodes not included in this list are removed from the cluster.

The default port number for the Platform Agent is 8090. For example, control1.example.com:8090,control2.example.com:8090.

{-w|--worker-nodes} nodes ...

A comma-separated list of the hostnames or IP addresses of the Kubernetes worker nodes that should remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any worker nodes not included in this list are removed from the cluster.

The default port number for the Platform Agent is 8090. For example, worker1.example.com:8090,worker2.example.com:8090.

--compact {true|false}

Sets whether to allow non-system Kubernetes workloads to run on control plane nodes. The default is false.

If you set this to true, the Platform API Server untaints control plane nodes if they are tainted. This allows non-system Kubernetes workloads to be scheduled and run on control plane nodes.

If you set this to false, the Platform API Server taints the control plane nodes if they are untainted. This prevents non-system Kubernetes workloads from being scheduled and run on control plane nodes.

Important:

For production environments, this option must be set to false (the default).

--nginx-image container_location

The location of the NGINX container image to update. This is optional.

This option pulls the NGINX container image from the container registry location you specify to update NGINX on the control plane nodes. For example:

--nginx-image container-registry.oracle.com/olcne/nginx:1.17.7

--helm-version version

Sets the Helm version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--restrict-service-externalip {true|false}

Sets whether to restrict access to external IP addresses for Kubernetes services. The default is true, which restricts access to external IP addresses.

This option deploys a Kubernetes service named externalip-validation-webhook-service to validate externalIPs set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using the externalIPs option in the spec section.

--restrict-service-externalip-ca-cert path

The path to a CA certificate file for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/ca.cert.

--restrict-service-externalip-tls-cert path

The path to the TLS certificate for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.cert.

--restrict-service-externalip-tls-key path

The path to the private key for the externalip-validation-webhook-service application that is deployed when the --restrict-service-externalip option is set to true. For example, /etc/olcne/certificates/restrict_external_ip/node.key.
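
The three certificate-related options above are typically supplied together when --restrict-service-externalip is set to true. A sketch, using the example paths from the option descriptions:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip true \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key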

--restrict-service-externalip-cidrs allowed_cidrs

Enter one or more comma-separated CIDR blocks to allow only IP addresses from the specified CIDR blocks. For example, 192.0.2.0/24,198.51.100.0/24.

--selinux {enforcing|permissive}

Whether to use SELinux enforcing or permissive mode. permissive is the default.

You should use this option if SELinux is set to enforcing on the control plane and worker nodes. SELinux is set to enforcing mode by default on the operating system and is the recommended mode.

module_args for the oci-ccm module:

--oci-ccm-version version

Sets the Oracle Cloud Infrastructure Cloud Controller Manager version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--oci-container-registry container_registry

The container image registry to use when updating the Oracle Cloud Infrastructure cloud provisioner image. The default is an empty string. The Platform API Server determines the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that is to be upgraded. Alternatively, you can use a private registry.

--ccm-container-registry container_registry

The container image registry to use when updating the Oracle Cloud Infrastructure Cloud Controller Manager component images. The default is an empty string. The Platform API Server determines the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that is to be upgraded. Alternatively, you can use a private registry.

--oci-lb-subnet1 OCID

The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.

Alternatively, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-subnet2 OCID

The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.

--oci-lb-security-mode {All|Frontend|None}

Sets whether the Oracle Cloud Infrastructure Cloud Controller Manager module manages security lists for load balancer services, and which configuration mode the Kubernetes Cloud Controller Manager uses for those security lists. The default is None.

For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.
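
As a sketch, upgrading an Oracle Cloud Infrastructure Cloud Controller Manager module to the latest version follows the same pattern as the other module updates (the module name myoci is illustrative; omitting --oci-ccm-version selects the latest version):

olcnectl module update \
--environment-name myenvironment \
--name myoci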

module_args for the metallb module:

--metallb-version version

Sets the MetalLB version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--metallb-container-registry container_registry

The container image registry and optional tag to use when upgrading MetalLB. The default is container-registry.oracle.com/olcne.

module_args for the operator-lifecycle-manager module:

--olm-version version

Sets the Operator Lifecycle Manager version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

module_args for the prometheus module:

--prometheus-version version

Sets the Prometheus version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--prometheus-container-registry container_registry

The container registry that contains the Prometheus images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Prometheus images.

module_args for the grafana module:

--grafana-version version

Sets the Grafana version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--grafana-container-registry container_registry

The container registry that contains the Grafana images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Grafana images.

module_args for the istio module:

--istio-version version

Sets the Istio version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.

--istio-container-registry container_registry

The container registry that contains the Istio images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Istio images.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-37 Scaling a cluster

To scale up a cluster, list all nodes to be included in the cluster. If an existing cluster includes two control plane and two worker nodes, and you want to add a new control plane and a new worker, list all the nodes to include. For example, to add a control3.example.com control plane node, and a worker3.example.com worker node to a Kubernetes module named mycluster:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--master-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090

To scale down a cluster, list all the nodes to be included in the cluster. To remove the control3.example.com control plane node, and worker3.example.com worker node from the kubernetes module named mycluster:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--master-nodes control1.example.com:8090,control2.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090

As the control3.example.com control plane node and worker3.example.com worker node are not listed in the --master-nodes and --worker-nodes options, the Platform API Server removes those nodes from the cluster.

Example 4-38 Updating the Kubernetes release for errata updates

To update a Kubernetes module named mycluster in an environment named myenvironment to the latest Kubernetes errata release, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster

The nodes in the environment are updated to the latest Kubernetes errata release.

Example 4-39 Updating using a different container registry

To update a Kubernetes module named mycluster in an environment named myenvironment to the latest Kubernetes errata release using a different container registry than the default specified when creating the Kubernetes module, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/

The nodes in the environment are updated to the latest Kubernetes errata release contained on the mirror container registry.

Example 4-40 Upgrading the Kubernetes release

To upgrade a Kubernetes module named mycluster in an environment named myenvironment to Kubernetes Release 1.24.15, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--kube-version 1.24.15

The --kube-version option specifies the release to which you want to upgrade. This example uses release number 1.24.15.

Make sure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.

The nodes in the environment are updated to Kubernetes Release 1.24.15.

Example 4-41 Upgrading using a different container registry

To upgrade a Kubernetes module named mycluster in an environment named myenvironment to Kubernetes Release 1.24.15 using a different container registry than the current default container registry, enter:

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/ \
--kube-version 1.24.15

The --kube-version option specifies the release to which you want to upgrade. This example uses release number 1.24.15. The specified container registry becomes the new default container registry for all subsequent updates or upgrades.

Make sure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.

The nodes in the environment are updated to Kubernetes 1.24.15.

Example 4-42 Setting access to external IP addresses for Kubernetes services

This example sets the range of external IP addresses that Kubernetes services can access.

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24

Example 4-43 Modifying host SELinux settings

This example updates the configuration with the Platform API Server to record that nodes in the Kubernetes cluster have SELinux enforcing mode enabled.

olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--selinux enforcing

Module Validate

Validates a module for an environment. When you validate a module, the nodes are checked to make sure they are set up correctly to run the module. If the nodes are not set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.

Syntax

olcnectl module validate 
{-E|--environment-name} environment_name 
{-N|--name} name 
[{-g|--generate-scripts}]
[{-h|--help}]
[globals] 

Where:

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

{-g|--generate-scripts}

Generates a set of scripts that contain the commands required to fix any setup errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named hostname:8090.sh.

{-h|--help}

Lists information about the command and the available options.

Where globals is one or more of the global options as described in Using Global Flags.

Examples

Example 4-44 Validating a module

To validate a Kubernetes module named mycluster in an environment named myenvironment:

olcnectl module validate \
--environment-name myenvironment \
--name mycluster

Node Install-Agent

Installs the Platform Agent software packages on Kubernetes nodes.

Syntax

olcnectl node install-agent 
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]

Where:

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-45 Install the Platform Agent on nodes

This example installs the Platform Agent on the nodes listed.

olcnectl node install-agent \
--nodes control1.example.com,worker1.example.com,worker2.example.com

Node Install-Api-Server

Installs the Platform API Server software packages on the operator node.

Syntax

olcnectl node install-api-server 
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]

Where:

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-46 Install the Platform API Server on a node

This example installs the Platform API Server on the node listed.

olcnectl node install-api-server \
--nodes operator.example.com

Node Install-Certificates

Installs the CA Certificates and key for the Platform API Server and Platform Agent to the nodes, with the appropriate file ownership.

The certificates and key:

  • Are copied to /etc/olcne/certificates on the nodes.
  • Have ownership of the certificate files (using chown) set to olcne:olcne.
  • Have permissions of the certificate files (using chmod) set to 0440.

Syntax

olcnectl node install-certificates 
[{-c|--certificate} path]
[{-C|--certificate-authority-chain} path]
[{-h|--help}]
[{-K|--key} path]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]

Where:

{-c|--certificate} path

The path to the node.cert certificate.

{-C|--certificate-authority-chain} path

The path to the ca.cert Certificate Authority certificate.

{-h|--help}

Lists information about the command and the available options.

{-K|--key} path

The path to the node.key key.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-47 Install certificates on nodes

This example installs the certificates on the node listed.

olcnectl node install-certificates \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com
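
Certificate locations can also be given explicitly when the files are not in their default locations. A sketch (the /tmp/certificates directory is illustrative):

olcnectl node install-certificates \
--certificate-authority-chain /tmp/certificates/ca.cert \
--certificate /tmp/certificates/node.cert \
--key /tmp/certificates/node.key \
--nodes control1.example.com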

Node Setup-Kubernetes

Sets up nodes to prepare for an installation of the Kubernetes module.

Configures hosts so they can be used as either Kubernetes control plane or worker nodes. Performs operations such as configuring firewall rules and opening network ports. Before changes are made to the hosts, a prompt lists the changes to be made and asks for confirmation.

Syntax

olcnectl node setup-kubernetes 
{-a|--api-server} api-server-address
[--control-plane-ha-nodes nodes ]
[--control-plane-nodes nodes]
[{-d|--debug}]
[{-h|--help}]
[{-m|--master-nodes} nodes]
[{-n|--nodes} nodes]
[{-R|--remote-command} remote-command]
[{-r|--role} role]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-w|--worker-nodes} nodes]
[{-y|--yes}]

Where:

{-a|--api-server} api-server-address

The hostname or IP address of the Platform API Server host.

--control-plane-ha-nodes nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes in a High Availability cluster. For example, control1.example.com,control2.example.com,control3.example.com.

--control-plane-nodes nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com.

{-d|--debug}

Enable debug logging.

{-h|--help}

Lists information about the command and the available options.

{-m|--master-nodes} nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com,control3.example.com.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-r|--role} role

The role of a host. The roles are one of:

  • api-server: The Platform API Server.

  • control-plane: A Kubernetes control plane node for a cluster that has only one.

  • control-plane-ha: A Kubernetes control plane node for a cluster that has more than one.

  • worker: A Kubernetes worker node.

--selinux {permissive|enforcing}

Sets whether SELinux should be set to enforcing or permissive. The default is permissive.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

{-w|--worker-nodes} nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example, worker1.example.com,worker2.example.com.

{-y|--yes}

Sets whether to assume the answer to a confirmation prompt is affirmative (yes).
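
For example, to prepare a set of hosts for a Kubernetes module with one control plane node and two worker nodes, skipping the confirmation prompt (a sketch; the hostnames follow the conventions used elsewhere in this chapter):

olcnectl node setup-kubernetes \
--api-server operator.example.com \
--control-plane-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com \
--yes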

Node Setup-Package-Repositories

Sets up the software package repositories on nodes.

Syntax

olcnectl node setup-package-repositories
[{-d|--debug}]
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]

Where:

{-d|--debug}

Enable debug logging.

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

Examples

Example 4-48 Set up software repositories

This example sets up the software package repositories for the nodes listed.

olcnectl node setup-package-repositories \
--nodes control1.example.com,worker1.example.com,worker2.example.com

Node Setup-Platform

Installs the Oracle Cloud Native Environment platform (Platform API Server and Platform Agent) and starts the platform services.

Completely configures, installs, and starts the Oracle Cloud Native Environment platform components on a set of hosts.

  1. Configures the yum software package repositories.

  2. Configures networking ports.

  3. Installs the Platform API Server and Platform Agent.

  4. Generates and installs CA Certificates.

  5. Starts the platform services (olcne-api-server.service and olcne-agent.service).

Syntax

olcnectl node setup-platform 
{-a|--api-server} api-server-address
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[--control-plane-ha-nodes nodes ]
[--control-plane-nodes nodes]
[{-d|--debug}]
[{-h|--help}]
[--http-proxy proxy-server]
[--https-proxy proxy-server]
[--no-proxy no_proxy]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-r|--role} role]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-w|--worker-nodes} nodes]
[{-y|--yes}]

Where:

{-a|--api-server} api-server-address

The hostname or IP address of the Platform API Server host.

--byo-ca-cert certificate-path

The path to an existing public CA Certificate.

--byo-ca-key key-path

The path to an existing private key.

--cert-dir certificate-directory

The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name

The Certificate Common Name suffix. The default is example.com.

--cert-request-country country

The two-letter country code where your company is located, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US.

--cert-request-locality locality

The name of the city where your company is located. The default is Redwood City.

--cert-request-organization organization

The name of your company. The default is OLCNE.

--cert-request-organization-unit organization-unit

The name of the department within your company. The default is OLCNE.

--cert-request-state state

The name of the state or province where your company is located. The default is California.

--control-plane-ha-nodes nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes in a High Availability cluster. For example, control1.example.com,control2.example.com,control3.example.com.

--control-plane-nodes nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com.

{-d|--debug}

Enable debug logging.

{-h|--help}

Lists information about the command and the available options.

--http-proxy proxy-server

The location of the HTTP proxy server if required.

--https-proxy proxy-server

The location of the HTTPS proxy server if required.

--no-proxy no_proxy

The list of hosts to exclude from the proxy server settings.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

--one-cert

Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option is not set.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-r|--role} role

The role of the host. The role is one of:

  • api-server: The Platform API Server.

  • control-plane: A Kubernetes control plane node for a cluster that has only one.

  • control-plane-ha: A Kubernetes control plane node for a cluster that has more than one.

  • worker: A Kubernetes worker node.

--selinux {permissive|enforcing}

Sets whether SELinux should be set to enforcing or permissive. The default is permissive.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

{-w|--worker-nodes} nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example, worker1.example.com,worker2.example.com.

{-y|--yes}

Sets whether to assume the answer to a confirmation prompt is affirmative (yes).

Examples

Example 4-49 Install the platform using default options

This example installs the Oracle Cloud Native Environment platform using default options.

olcnectl node setup-platform \
--api-server operator.example.com \
--control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com

Example 4-50 Install the platform for an HA deploy using default options

This example installs the Oracle Cloud Native Environment platform for a High Availability deployment using default options.

olcnectl node setup-platform \
--api-server operator.example.com \
--control-plane-ha-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com

Example 4-52 Install the Platform API Server

This example installs the Platform API Server and accepts all prompts.

olcnectl node setup-platform \
--nodes operator.example.com \
--role api-server \
--yes

Example 4-53 Install the Platform Agent on HA control plane nodes

This example installs the Platform Agent on Kubernetes control plane nodes for an HA install. This accepts all prompts.

olcnectl node setup-platform \
--nodes control1.example.com,control2.example.com,control3.example.com \
--role control-plane-ha \
--yes

Example 4-54 Install the Platform Agent on worker nodes with a proxy server and SSH login information

This example installs the Platform Agent on Kubernetes worker nodes. This uses proxy server information, provides SSH login information, and accepts all prompts.

olcnectl node setup-platform \
--nodes worker1.example.com,worker2.example.com,worker3.example.com \
--role worker \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--http-proxy "http://www-proxy.example.com:80" \
--https-proxy "https://www-proxy.example.com:80" \
--no-proxy ".example.com" \
--yes
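The --byo-ca-cert and --byo-ca-key options described above have no numbered example. A hedged sketch of installing the platform with an existing CA; the file paths are illustrative, not defaults:

```shell
# Install the platform using a bring-your-own CA certificate and
# private key (the /etc/olcne paths are illustrative placeholders).
olcnectl node setup-platform \
  --api-server operator.example.com \
  --control-plane-nodes control1.example.com \
  --worker-nodes worker1.example.com,worker2.example.com \
  --byo-ca-cert /etc/olcne/certificates/ca.cert \
  --byo-ca-key /etc/olcne/certificates/ca.key \
  --yes
```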

Node Start-Platform

Starts the Platform API Server service on the operator node and the Platform Agent service on Kubernetes nodes.

Syntax

olcnectl node start-platform
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-y|--yes}]

Where:

{-h|--help}

Lists information about the command and the available options.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

{-y|--yes}

Sets whether to assume the answer to a confirmation prompt is affirmative (yes).

Examples

Example 4-55 Starting Platform services

This example starts the Platform services on the nodes listed.

olcnectl node start-platform \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--nodes control01.example.com,worker01.example.com,worker02.example.com

Provision

Sets up the nodes and installs the Oracle Cloud Native Environment platform (Platform API Server and Platform Agents), including creating and installing the Kubernetes module.

This command configures all nodes, creates and distributes certificates and keys, installs the Platform API Server and Platform Agents, starts the required system services, and creates and installs an instance of the Kubernetes module. This provides a quick installation of Oracle Cloud Native Environment with a Kubernetes cluster.

When you run this command, a prompt is displayed that lists the changes to be made to the hosts and asks for confirmation. To avoid this prompt, use the --yes option.

More complex deployments can be made by using a configuration file with the --config-file option.

Syntax

olcnectl provision 
{-a|--api-server} api-server-address
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[--config-file config-file-path]
[--container-registry registry]
[{-d|--debug}]
{-E|--environment-name} environment_name 
[{-h|--help}]
[--http-proxy proxy-server]
[--https-proxy proxy-server]
[--load-balancer load-balancer]
{-m|--master-nodes} nodes
{-N|--name} name
[--no-proxy no_proxy]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[--restrict-service-externalip-cidrs allowed_cidrs]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[--virtual-ip IP_address]
{-w|--worker-nodes} nodes
[{-y|--yes}]

Where:

{-a|--api-server} api-server-address

The hostname or IP address of the Platform API Server host.

--byo-ca-cert certificate-path

The path to an existing public CA Certificate.

--byo-ca-key key-path

The path to an existing private key.

--cert-dir certificate-directory

The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name

The Certificate Common Name suffix. The default is example.com.

--cert-request-country country

The two-letter country code where your company is located, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US.

--cert-request-locality locality

The name of the city where your company is located. The default is Redwood City.

--cert-request-organization organization

The name of your company. The default is OLCNE.

--cert-request-organization-unit organization-unit

The name of the department within your company. The default is OLCNE.

--cert-request-state state

The name of the state or province where your company is located. The default is California.

--config-file config-file-path

The path and location of an Oracle Cloud Native Environment configuration file.

--container-registry registry

The container registry from which to pull the Kubernetes container images. The default is container-registry.oracle.com/olcne.

{-d|--debug}

Enable debug logging.

{-E|--environment-name} environment_name

The Oracle Cloud Native Environment name. The value of environment_name is the name to use to identify an environment.

{-h|--help}

Lists information about the command and the available options.

--http-proxy proxy-server

The location of the HTTP proxy server if required.

--https-proxy proxy-server

The location of the HTTPS proxy server if required.

--load-balancer load-balancer

The location of the external load balancer if required.

{-m|--master-nodes} nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com,control3.example.com.

{-N|--name} name

The module name. The value of name is the name to use to identify a module in an environment.

--no-proxy no_proxy

The list of hosts to exclude from the proxy server settings.

{-n|--nodes} nodes

A comma separated list of the hostnames or IP addresses of nodes.

Sets the node(s) on which to perform an action. Any nodes that are not the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are executed locally without using the remote command.

--one-cert

Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option is not set.

{-R|--remote-command} remote-command

Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:

ssh -i ~/.ssh/myidfile -l myuser

The default remote command is ssh.

--restrict-service-externalip-cidrs allowed_cidrs

A comma separated list of one or more CIDR blocks. Only IP addresses from the specified CIDR blocks are allowed. For example, 192.0.2.0/24,198.51.100.0/24.

--selinux {permissive|enforcing}

Sets whether SELinux should be set to enforcing or permissive. The default is permissive.

{-i|--ssh-identity-file} file_location

The location of the SSH identity file. If no value is specified, the operating system defaults are used.

{-l|--ssh-login-name} username

The username to log in using SSH. The default is opc.

--timeout minutes

The number of minutes to set for command timeouts. The default is 40 minutes.

--virtual-ip IP_address

The virtual IP address to use for the internal load balancer.

{-w|--worker-nodes} nodes

A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example, worker1.example.com,worker2.example.com.

{-y|--yes}

Sets whether to assume the answer to a confirmation prompt is affirmative (yes).

Examples

Example 4-56 Quick install

To perform a quick install:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster

Example 4-57 Quick install with SSH login information

To perform a quick install using SSH login information and accepting all prompts:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--yes

Example 4-58 Quick HA install with an external load balancer

To perform a quick HA install using an external load balancer and accepting all prompts:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--load-balancer lb.example.com:6443 \
--yes
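The --virtual-ip option has no example of its own. A sketch of a quick HA install that uses the internal load balancer instead of an external one; the virtual IP address 192.0.2.100 is an illustrative reserved address, not a default:

```shell
# Quick HA install using the internal load balancer with a virtual IP
# (192.0.2.100 is an illustrative address; use one reserved for the cluster).
olcnectl provision \
  --api-server operator.example.com \
  --master-nodes control1.example.com,control2.example.com,control3.example.com \
  --worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
  --environment-name myenvironment \
  --name mycluster \
  --virtual-ip 192.0.2.100 \
  --yes
```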

Example 4-59 Quick install using a proxy server

To perform a quick install using SSH login information and a proxy server, and accepting all prompts:

olcnectl provision  \
--api-server operator.example.com \
--master-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--http-proxy "http://www-proxy.example.com:80" \
--https-proxy "https://www-proxy.example.com:80" \
--no-proxy ".example.com" \
--yes

Example 4-60 Quick install with externalIPs service

To perform a quick install that includes the externalIPs Kubernetes service, specify the CIDR block(s) that should be allowed access to the service. This deploys the externalip-validation-webhook-service Kubernetes service:

olcnectl provision \
--api-server operator.example.com \
--master-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24 

Example 4-61 Quick install using a configuration file

To perform a quick install using a configuration file and SSH login information:

olcnectl provision \
--config-file myenvironment.yaml \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--yes

Template

Generates a simple configuration file template. The template file is named config-file-template.yaml and is created in the current directory.

Syntax

olcnectl template 
[{-h|--help}]

Where:

{-h|--help}

Lists information about the command and the available options.

Examples

Example 4-62 Creating a sample configuration template

To create a sample configuration template:

olcnectl template
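The generated template pairs with the provision command's --config-file option. A sketch of the round trip; the editor step is a placeholder for whatever changes the deployment needs:

```shell
# Generate config-file-template.yaml in the current directory.
olcnectl template

# Edit the template to match the deployment (placeholder step).
vi config-file-template.yaml

# Provision from the edited configuration file, accepting all prompts.
olcnectl provision \
  --config-file config-file-template.yaml \
  --yes
```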