Prepare the Release 2 Configuration

Set up the configuration for an Oracle CNE Release 2 cluster.

Creating a Cluster Configuration File

Create a cluster configuration file to match the configuration of the Release 1 cluster. Ensure you include any custom configuration identified in OS Customizations. The options you set must match the Release 1 cluster, for example, the cluster name must be the same.

Create a cluster configuration file. The minimum configuration required is:

provider: byo
name: cluster_name
kubernetesVersion: kube_version
loadBalancer: ip_address
providers:
  byo:
    networkInterface: nic_name

For information on what can be included in a cluster configuration file, see Oracle Cloud Native Environment: Kubernetes Clusters.

For example:

provider: byo
name: mycluster
kubernetesVersion: 1.29
loadBalancer: 192.0.2.100
providers:
  byo:
    networkInterface: enp1s0

Creating an OSTree Image

Before you begin, identify the version of Kubernetes running in the Oracle CNE Release 1 cluster.
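
If you're not sure of the version, you can check it with kubectl while connected to the Release 1 cluster; the VERSION column of the output shows the Kubernetes version running on each node:

kubectl get nodes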

  1. Create an OSTree image.

    Important:

    If you're using a custom OSTree image created using Oracle Container Host for Kubernetes Image Builder, you don't need to perform this step.

    Create an OSTree image for the BYO provider that includes the version of Kubernetes in the Oracle CNE Release 1 cluster.

    Use the ocne image create command to create an OSTree image. The syntax is:

    ocne image create 
    {-a|--arch} arch
    [{-t|--type} provider]
    [{-v|--version} version]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    For example, for Kubernetes Release 1.29 on 64-bit x86 servers:

    ocne image create --type ostree --arch amd64 --version 1.29

    This command might take some time to complete. The OSTree image is saved to the $HOME/.ocne/images/ directory.
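
    You can optionally confirm the archive exists before uploading it. The file name shown here assumes the Kubernetes version and architecture used in the earlier example:

    ls -l $HOME/.ocne/images/ock-1.29-amd64-ostree.tar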

  2. Upload the OSTree image to a container registry.

    Use the ocne image upload command to upload the image to a container registry. The syntax is:

    ocne image upload 
    {-a|--arch} arch
    [{-b|--bucket} name]
    [{-c|--compartment} name]
    [--config path]
    [{-d|--destination} path]
    {-f|--file} path
    {-i|--image-name} name 
    {-t|--type} provider
    {-v|--version} version

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    For example:

    ocne image upload --type ostree --file $HOME/.ocne/images/ock-1.29-amd64-ostree.tar --destination docker://myregistry.example.com/ock-ostree:latest --arch amd64

    A sign-in prompt is displayed if credentials aren't set for the target container registry. The image is then uploaded to the container registry.

    Tip:

    If you don't have a container registry, or you prefer to run this locally, you can load the OSTree archive image into a local container runtime. For example, to load an OSTree archive file to Podman on the localhost, you might use:

    podman load < $HOME/.ocne/images/ock-1.29-amd64-ostree.tar
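
    After the load completes, you can list the local images to confirm the OSTree image is available to Podman:

    podman images
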
  3. Serve the OSTree image.

    Make the OSTree image available as a container. Use any container runtime you like, including a Kubernetes cluster.

    For example, to serve the container image from a container registry using Podman, you might use:

    podman run -d --name ock-content-server -p 8080:80 myregistry.example.com/ock-ostree:latest

    To serve the container image from a local instance of Podman, you could use:

    podman run -d --name ock-content-server -p 8080:80 localhost/ock-ostree:latest
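
    In either case, you can verify the content server is responding by fetching the OSTree repository configuration. This check assumes the server exposes the repository under the ostree path that's used later in the Kickstart files:

    curl http://localhost:8080/ostree/config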

Creating Ignition Files

In Release 2, Ignition files are needed to join a node to a Kubernetes cluster. The settings in an Ignition file differ for control plane and worker nodes, so create an Ignition file for each of the cluster node types. You include these Ignition files in the Kickstart file that's used to boot nodes when upgrading the host OS to the Release 2 OS.

  1. Set up the location of Ignition files.

    Decide how you want to make the Kubernetes cluster Ignition files available.

    An Ignition file must be available to all hosts during their first boot. Ignition files can be served using any of the platforms listed in the upstream Ignition documentation, for example, a Network File System (NFS) share, or a web server.

  2. Set the location of the kubeconfig file.

    Set the KUBECONFIG environment variable to the location of the kubeconfig file for the Release 1 cluster. For example:

    export KUBECONFIG=~/.kube/kubeconfig.mycluster
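
    To confirm the environment variable points at the Release 1 cluster, you might check the cluster endpoint before continuing:

    kubectl cluster-info
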
  3. Generate the Ignition configuration for control plane nodes.

    Use the ocne cluster join command to generate the Ignition information that joins a control plane node to the cluster. Use the syntax:

    ocne cluster join 
    [{-c|--config} path] 
    [{-d|--destination} path]
    [{-N|--node} name]
    [{-P|--provider} provider]
    [{-r|--role-control-plane}]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    The important options are:

    ocne cluster join --kubeconfig path --role-control-plane --config path > ignition_file

    Important:

    The --kubeconfig option is required, even if the KUBECONFIG environment variable is set.

    For example:

    ocne cluster join --kubeconfig ~/.kube/kubeconfig.mycluster --role-control-plane --config myconfig.yaml > ignition-control-plane.ign

    The output includes important information used later in the upgrade.

    Important:

    You don't need to run the command shown in the output. Instead, take note of the certificate-key and token in this output.

  4. Generate the Ignition configuration for worker nodes.

    Use the ocne cluster join command to generate the Ignition information that joins a worker node to the cluster. The important options are:

    ocne cluster join --kubeconfig path --config path > ignition_file

    Important:

    The --kubeconfig option is required, even if the KUBECONFIG environment variable is set.

    For example:

    ocne cluster join --kubeconfig ~/.kube/kubeconfig.mycluster --config myconfig.yaml > ignition-worker.ign

    The output includes important information used later in the upgrade.

    Important:

    You don't need to run the command shown in the output. Instead, take note of the token in this output.

  5. Expose the Ignition files.

    Make the Ignition files available at the location you chose for hosting these files. For example, copy them to a web server.

    Tip:

    To make the Ignition files available locally, using an NGINX web server container running on Podman, you might use:

    podman run -d --name ol-content-server -p 8081:80 -v 'directory':/usr/share/nginx/html/ock --privileged container-registry.oracle.com/olcne/nginx:1.17.7

    Where directory is the location on the localhost that contains the Ignition files. This directory might also be used to serve Kickstart files. You can validate that the Ignition file is being served using:

    curl http://IPaddress:8081/ock/filename.ign

    Where IPaddress is the IP address of the localhost, and filename is the name of the Ignition file.

Creating an Automated Oracle Linux Installation

Create two Kickstart files: one to start control plane nodes, and one to start worker nodes. The two files are likely to be almost identical, but each must reference the Ignition file for the appropriate Kubernetes node type.

Note:

To retain existing host IP addresses, ensure you create one Kickstart file for each node and include the IP address configuration. Adjust the steps in this upgrade as appropriate.

  1. Prepare the automated Oracle Linux installation.

    A Kickstart file defines an automated installation of Oracle Linux. Kickstart files must be made available during the installation of the Release 2 OS. Decide on the method to perform an automated install of Oracle Linux on the hosts using Kickstart files. For example, you might want to use a network drive, a web server, or a USB drive.

    Tip:

    It might be helpful to include the Kickstart files in the same location as the Ignition files, for example, using NFS, or a web server.

    You only need to provision the kernel and initrd (initial ramdisk) that match the boot kernel. We recommend using an Oracle Linux UEK boot ISO file as the boot media because it contains the required kernel and initrd in a smaller download than the full installation ISO. Download Oracle Linux ISO files from the Oracle Linux yum server.

    Prepare the Oracle Linux boot media using the method you select.

    For more information about the automated installation options for Oracle Linux, see Oracle Linux 9: Installing Oracle Linux.
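
    For example, if you plan to boot the hosts over the network, you might extract the kernel and initrd from the boot ISO. The ISO file name and the /var/lib/tftpboot target shown here are placeholder values; the images/pxeboot path assumes the standard Oracle Linux ISO layout:

    sudo mount -o loop,ro OracleLinux-boot-uek.iso /mnt
    sudo cp /mnt/images/pxeboot/vmlinuz /var/lib/tftpboot/
    sudo cp /mnt/images/pxeboot/initrd.img /var/lib/tftpboot/
    sudo umount /mnt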

  2. Create two Kickstart files.

    Create a Kickstart file for control plane nodes, and a separate one for worker nodes. The Kickstart files must include the OSTree image location, and the location of the Ignition file for the node type.

    Ensure you add the location of the Ignition file using:

    bootloader --append "rw ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.config.url=http://hostname/filename.ign ignition.firstboot=1"

    Where hostname is the hostname or IP address of the Ignition file server, and filename is the Ignition file.

    And the location of the OSTree image in the container registry using:

    ostreesetup --nogpg --osname ock --url registry --ref ock

    Replace registry with the URL to the OSTree image in the container registry. For example, http://myregistry.example.com/ostree.

    For example, the relevant Kickstart file entries might look similar to:

    ...
    services --enabled=ostree-remount
    bootloader --append "rw ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.config.url=http://myhost.example.com/ignition.ign ignition.firstboot=1"
    ostreesetup --nogpg --osname ock --url http://myregistry.example.com/ostree --ref ock
    %post
    %end

    This example creates a minimal Kickstart file:

    1. Set the variables. SERVER_IP captures the IP address of the localhost, which serves the OSTree image, and the Kickstart and Ignition files:
      export SERVER_IP=$(ip route | grep 'default.*src' | cut -d' ' -f9)
      export OSTREE_PORT=8080
      export OSTREE_REF=ock
      export OSTREE_PATH=ostree
      export KS_PORT=8081
    2. Create the Kickstart file:

      The following command generates a Kickstart file using the variables set in the previous step.

      envsubst > kickstart.cfg << EOF
      logging
                      
      keyboard us
      lang en_US.UTF-8
      timezone UTC
      text
      reboot
                      
      selinux --enforcing
      firewall --use-system-defaults
      network --bootproto=dhcp
                      
      zerombr
      clearpart --all --initlabel
      part /boot --fstype=xfs --label=boot --size=2048
      part /boot/efi --fstype=efi --label=efi --size=1024
      part / --fstype=xfs --label=root --grow
                      
      services --enabled=ostree-remount
                      
      bootloader --append "rw ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.config.url=http://$SERVER_IP:$KS_PORT/ock/ignition.ign ignition.firstboot=1"
                      
      ostreesetup --nogpg --osname ock --url http://$SERVER_IP:$OSTREE_PORT/$OSTREE_PATH --ref $OSTREE_REF
                      
      %post
                      
      %end
      EOF

      The following example Kickstart file includes static IP address configuration. A separate file is required for each host if using this option.

      envsubst > kickstart.cfg << EOF
      logging
                      
      keyboard us
      lang en_US.UTF-8
      timezone UTC
      text
      reboot
                      
      selinux --enforcing
      firewall --use-system-defaults
      network --bootproto=static --device=enp1s0 --gateway=192.0.2.1 --ip=192.0.2.50 --netmask=255.255.255.0 --onboot=yes --hostname=ocne-control-plane-1
                      
      zerombr
      clearpart --all --initlabel
      part /boot --fstype=xfs --label=boot --size=2048
      part /boot/efi --fstype=efi --label=efi --size=1024
      part / --fstype=xfs --label=root --grow
                      
      services --enabled=ostree-remount
                      
      bootloader --append "rw ip=192.0.2.50::192.0.2.1:255.255.255.0::enp1s0:none rd.neednet=1 ignition.platform.id=metal ignition.config.url=http://$SERVER_IP:$KS_PORT/ock/ignition.ign ignition.firstboot=1"
                      
      ostreesetup --nogpg --osname ock --url http://$SERVER_IP:$OSTREE_PORT/$OSTREE_PATH --ref $OSTREE_REF
                      
      %post
                      
      %end
      EOF
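
      You can optionally check a generated Kickstart file for syntax errors before serving it. This assumes the pykickstart package, which provides the ksvalidator tool, is installed:

      ksvalidator kickstart.cfg
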
  3. Expose the Kickstart files.

    Make the Kickstart files available at the location you chose for hosting these files. For example, copy them to a web server.

    Tip:

    You can also make the Kickstart files available locally using Podman. If you have an NGINX container running on Podman locally to serve Ignition files, and the Kickstart files are in the same directory, they're already being served. If not, start a container:

    podman run -d --name ol-content-server -p 8081:80 -v 'directory':/usr/share/nginx/html/ock --privileged container-registry.oracle.com/olcne/nginx:1.17.7

    Where directory is the location on the localhost that contains the Kickstart files. You can validate that the Kickstart file is being served using:

    curl http://IPaddress:8081/ock/filename.cfg

    Where IPaddress is the IP address of the localhost, and filename is the name of the Kickstart file.