Prepare the Release 2 Configuration

Set up the configuration for an Oracle CNE Release 2 cluster.

Creating Cluster Configuration Files

Create a cluster configuration file to match the configuration of the Release 1 cluster. Ensure you include any custom configuration identified in OS Customizations. The options you set must match the Release 1 cluster; for example, the cluster name must be the same.

Create a cluster configuration file for each VM. This file contains the hostname and IP address of the node, so each VM requires its own file.

The minimum configuration required is:

provider: byo
name: cluster_name
kubernetesVersion: kube_version
loadBalancer: ip_address
providers:
  byo:
    networkInterface: nic_name
extraIgnitionInline: |
  variant: fcos
  version: 1.5.0
  storage:
    files:
      - path: /etc/hostname
        mode: 0755
        contents:
          inline: hostname
      - path: /etc/sysconfig/network-scripts/ifcfg-nic_name
        mode: 0755
        contents:
          inline: |
            TYPE=Ethernet
            BOOTPROTO=none
            NAME=nic_name
            DEVICE=nic_name
            ONBOOT=yes
            IPADDR=IP_address
            PREFIX=24
            GATEWAY=gateway

For information on what can be included in a cluster configuration file, see Oracle Cloud Native Environment: Kubernetes Clusters.

For example:

provider: byo
name: mycluster
kubernetesVersion: 1.29
loadBalancer: 192.0.2.100
providers:
  byo:
    networkInterface: enp1s0
extraIgnitionInline: |
  variant: fcos
  version: 1.5.0
  storage:
    files:
      - path: /etc/hostname
        mode: 0755
        contents:
          inline: ocne-control-plane-1
      - path: /etc/sysconfig/network-scripts/ifcfg-enp1s0
        mode: 0755
        contents:
          inline: |
            TYPE=Ethernet
            BOOTPROTO=none
            NAME=enp1s0
            DEVICE=enp1s0
            ONBOOT=yes
            IPADDR=192.0.2.50
            PREFIX=24
            GATEWAY=192.0.2.1

Creating an OCK Image

Before you begin, identify the version of Kubernetes running in the Oracle CNE Release 1 cluster.

  1. Create an OCK image.

    Important:

    If you're using a custom OCK image created using Oracle Container Host for Kubernetes Image Builder, you don't need to perform this step.

    Create an OCK image that includes the version of Kubernetes in the Oracle CNE Release 1 cluster.

    Use the ocne image create command to create an OCK image. The syntax is:

    ocne image create 
    {-a|--arch} arch
    [{-t|--type} provider]
    [{-v|--version} version]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    For example, for Kubernetes Release 1.29 on 64-bit x86 servers:

    ocne image create --type olvm --arch amd64 --version 1.29

    This command might take some time to complete. The OCK image is saved to the $HOME/.ocne/images/ directory. A quick way to confirm the image was created is shown after these steps.

  2. Upload the OCK image to Oracle Linux Virtualization Manager.

    Use the Oracle Linux Virtualization Manager console to upload the OCK image as a disk.

  3. Clone the OCK disk.

    Use the Oracle Linux Virtualization Manager console to clone the OCK disk once for each VM to be upgraded, so that one disk is available per VM.

    Important:

    Each VM requires a new OCK disk; an OCK disk can't be reused. If a disk has already been used to boot another VM, the Ignition information isn't read, because it's only read on the first boot.

Creating Ignition Files

In Release 2, Ignition information is needed to join a node to a Kubernetes cluster, so an Ignition file must be generated for each VM. The settings in an Ignition file differ for control plane and worker nodes, so ensure you use the correct syntax for the node type. The content of the Ignition file is included in the configuration disk that's used when booting a node during the host upgrade to the Release 2 OS.

Ensure you use the appropriate Oracle CNE cluster configuration file for the VM when using the ocne cluster join command to generate the Ignition file.

Repeat these steps for each VM.

  1. If the node is a control plane node:

    Use the ocne cluster join command to generate the Ignition information that joins a control plane node to the cluster. Use the syntax:

    ocne cluster join 
    [{-c|--config} path] 
    [{-d|--destination} path]
    [{-N|--node} name]
    [{-P|--provider} provider]
    [{-r|--role-control-plane}]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    The important options are:

    ocne cluster join --kubeconfig path --role-control-plane --config path > ignition_file

    Important:

    The --kubeconfig option is required, even if the kubeconfig file is set as an environment variable.

    For example:

    ocne cluster join --kubeconfig ~/.kube/kubeconfig.mycluster --role-control-plane --config myconfig.yaml > ignition-control-plane.ign

    The output includes important information used later in the upgrade.

    Important:

    You don't need to run the command shown in the output. Instead, take note of the certificate-key and token in this output.

  2. If the node is a worker node:

    Use the ocne cluster join command to generate the Ignition information that joins a worker node to the cluster. The important options are:

    ocne cluster join --kubeconfig path --config path > ignition_file

    Important:

    The --kubeconfig option is required, even if the kubeconfig file is set as an environment variable.

    For example:

    ocne cluster join --kubeconfig ~/.kube/kubeconfig.mycluster --config myconfig.yaml > ignition-worker.ign

    The output includes important information used later in the upgrade.

    Important:

    You don't need to run the command shown in the output. Instead, take note of the token in this output.

Creating Configuration Disks

Create a configuration disk, labeled CONFIG-2, for each Oracle Linux Virtualization Manager VM. The disk contains the Ignition configuration for the node, generated using the ocne cluster join command.

Repeat these steps for each VM, changing the QCOW2 disk name as required. A consolidated example of these steps is shown after the list.

  1. Load the nbd module.

    If this module isn't already loaded, load it using:

    sudo modprobe nbd
  2. Set the target node name as an environment variable.
    export FILENAME=config2-target_node.qcow2

    Replace target_node with the name of the target Kubernetes node.

  3. Create a configuration disk.
    qemu-img create -f qcow2 $FILENAME 256M
  4. Configure the disk.
    sudo qemu-nbd --connect /dev/nbd0 $FILENAME
    sudo parted -s /dev/nbd0 mklabel gpt
    sudo parted -s /dev/nbd0 mkpart CONFIG-2 xfs 0 100%
    sudo mkfs.xfs /dev/nbd0p1
    sudo xfs_admin -L CONFIG-2 /dev/nbd0p1
  5. Mount the disk.
    sudo mount /dev/nbd0p1 /mnt
  6. Make a directory for the Ignition information on the disk.
    sudo mkdir -p /mnt/openstack/latest
  7. Add the Ignition configuration.

    Using sudo, create a file named user_data and add to it the Ignition information generated in Creating Ignition Files for the node. The full path of the file is:

    /mnt/openstack/latest/user_data

  8. Unmount the disk.

    Unmount and disconnect from the disk.

    sudo umount /mnt
    sudo qemu-nbd --disconnect /dev/nbd0
  9. Upload the disk.

    Upload the disk using the Oracle Linux Virtualization Manager console.