Enabling Cgroups v2 on Worker Nodes Using Custom Images
Find out how to enable cgroups v2 on worker nodes that run Oracle Linux 8 (OL8) in clusters created with Kubernetes Engine (OKE), using custom images.
Control Groups (cgroups) is a Linux kernel feature that provides a mechanism for managing and controlling resource allocation for processes or groups of processes. The cgroups feature enables system administrators and developers to allocate and limit various system resources (such as CPU, memory, I/O, network bandwidth) to specific processes or sets of processes. Cgroups offers a powerful and flexible way to manage resource usage, ensuring that processes receive the necessary resources while preventing them from consuming excessive amounts and impacting the performance of other processes or the system as a whole. By creating and organizing processes into control groups, administrators can enforce resource constraints, prioritize tasks, and maintain system stability.
Oracle Linux provides two types of control groups:
- Control groups version 1 (cgroups v1): These groups provide a per-resource controller hierarchy. Each resource, such as CPU, memory, I/O, and so on, has its own control group hierarchy. A disadvantage of cgroups v1 is the difficulty of coordinating resource use among groups that might belong to different process hierarchies.
- Control groups version 2 (cgroups v2): These groups provide a single control group hierarchy against which all resource controllers are mounted. In this hierarchy, you can coordinate resource use across different resource controllers.
For more information about control groups and Oracle Linux, see Managing Resources Using Control Groups in the Oracle Linux documentation.
Both cgroups v1 and cgroups v2 are present in Oracle Linux. However, in the OKE images and platform images provided by Oracle and currently supported by Kubernetes Engine, cgroups v1 is enabled by default. Therefore, when you specify an OL8 OKE image or platform image to use for worker nodes, the Linux kernel of the compute instances hosting the nodes has cgroups v1 enabled by default.
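If you want to check which version is currently in use on a given instance (an optional check, not part of the procedure that follows), examine the filesystem type mounted at the cgroup root, by entering:
stat -fc %T /sys/fs/cgroup/
The command prints cgroup2fs when cgroups v2 is mounted, and tmpfs when the instance is using cgroups v1.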
However, you can enable cgroups v2 in the Linux kernel of instances hosting worker nodes. At a high level, the process to enable cgroups v2 is as follows:
- Step 1: Create a compute instance running the required OL8 image.
- Step 2: Enable cgroups v2 on the compute instance.
- Step 3: Create a custom image based on the compute instance where cgroups v2 is enabled.
- Step 4: Add worker nodes running OL8 with cgroups v2 enabled to a cluster. The way in which you add cgroups v2-enabled nodes to a cluster depends on whether you want to add the nodes as managed nodes or as self-managed nodes. For managed nodes, you define a managed node pool. For self-managed nodes, you add compute instances as worker nodes.
Step 1: Create a compute instance running the required OL8 image
In this step, you use the Compute service to create a compute instance that is running the OL8 release you want on worker nodes in the Kubernetes cluster.
- Decide which OL8 release (and if you're going to select an OKE image, which Kubernetes version) you want on worker nodes.
Oracle provides a number of different OL8 OKE images and platform images.
- Follow the instructions in Creating an Instance in the Compute service documentation to create a new compute instance, and select a suitable image (either by selecting a platform image or by specifying the OCID of an OKE image).
This is the compute instance that you will use as the basis of a new custom image.
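If you intend to reference an OKE image by OCID, one convenient way to list the OKE images (and their OCIDs) available for worker nodes is the oci ce node-pool-options get command; for example, you might enter:
oci ce node-pool-options get --node-pool-option-id all
The response includes a sources list with the display name and image OCID of each available OKE image. (This example is provided for convenience only; see the CLI reference for the full set of parameters.)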
Step 2: Enable cgroups v2 on the compute instance
In this step, you enable cgroups v2 on the compute instance you created in the previous step. The instructions here are intended as a convenient summary of Enabling cgroups v2 in the OL8 documentation.
- In a terminal window, connect to the compute instance and configure all kernel boot entries to mount cgroups v2 by default, by entering:
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
- Reboot the instance, by entering:
sudo reboot
- Confirm that cgroups v2 is now mounted, by entering:
sudo mount -l | grep cgroup
- Optionally, check the contents of the /sys/fs/cgroup directory (the root control group), by entering:
ls -l /sys/fs/cgroup/
For cgroups v2, the files in the directory should have prefixes at the start of their file names (such as cgroup.*, cpu.*, memory.*).
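For reference, on an instance where cgroups v2 is enabled, the mount command in the earlier step typically returns a single unified mount similar to the following (mount options can vary between releases):
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
As an additional optional check, you can confirm that the running kernel was booted with the new argument, by entering:
grep systemd.unified_cgroup_hierarchy /proc/cmdline
The output should include systemd.unified_cgroup_hierarchy=1.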
Step 3: Create a custom image based on the compute instance where cgroups v2 is enabled
In this step, you use the Compute service to create a custom image from the compute instance that you have enabled for cgroups v2 in the previous step.
- Shut down the instance that you have enabled for cgroups v2, by entering:
sudo shutdown -h now
- Follow the instructions in Managing Custom Images in the Compute service documentation, to create a custom image based on the compute instance.
- Make a note of the OCID of the custom image you have created.
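If you prefer to create the custom image using the CLI rather than the Console, a minimal sketch is shown below (the compartment OCID, instance OCID, and display name are placeholders for your own values):
oci compute image create \
--compartment-id ocid1.compartment.oc1..aaaa______example \
--instance-id ocid1.instance.oc1.phx.aaaa______example \
--display-name ol8-oke-cgroupsv2
The OCID of the new custom image is returned in the response (the id field), and is the value to note for the next step.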
Step 4: Add worker nodes running OL8 with cgroups v2 enabled to a cluster
In this step, you use the custom image you created in the previous step to add worker nodes running OL8 with cgroups v2 enabled to a Kubernetes cluster.
Note that there are different instructions to follow, depending on whether you want to enable cgroups v2 on managed nodes, or on self-managed nodes. For managed nodes, you define a managed node pool. For self-managed nodes, you add compute instances as worker nodes.
Note that you have to use the CLI to create managed nodes based on custom images.
Adding managed nodes running OL8 with cgroups v2 enabled
To add managed nodes running OL8 with cgroups v2 enabled to an existing cluster:
- Open a command prompt and use the oci ce node-pool create command to create a new node pool.
- As well as the mandatory parameters required by the command, include the --node-image-id parameter, and specify the OCID of the custom image that you created in Step 3: Create a custom image based on the compute instance where cgroups v2 is enabled.
For example, you might enter the following command:
oci ce node-pool create \
--cluster-id ocid1.cluster.oc1.iad.aaaa______m4w \
--name my-nodepool \
--node-image-id ocid1.image.oc1.iad.aaaa______zpq \
--compartment-id ocid1.tenancy.oc1..aaa______q4a \
--kubernetes-version v1.29.1 \
--node-shape VM.Standard2.1 \
--placement-configs "[{\"availabilityDomain\":\"PKGK:US-ASHBURN-AD-1\", \"subnetId\":\"ocid1.subnet.oc1.iad.aaaa______kfa\"}]" \
--size 3 \
--region us-ashburn-1
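After the command completes, you can monitor the new node pool until it becomes active, for example by entering the following command (substituting the node pool OCID returned by the create command):
oci ce node-pool get --node-pool-id ocid1.nodepool.oc1.iad.aaaa______example
When the worker nodes are active, they are running the custom image, and therefore boot with cgroups v2 enabled.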
Adding self-managed nodes running OL8 with cgroups v2 enabled
Before you create a self-managed node:
- Confirm that the cluster to which you want to add the self-managed node is configured appropriately for self-managed nodes. See Cluster Requirements.
- Confirm that a dynamic group and an IAM policy already exist to allow the compute instance hosting the self-managed node to join an enhanced cluster created with Kubernetes Engine. See Creating a Dynamic Group and a Policy for Self-Managed Nodes.
- Create a cloud-init script containing the Kubernetes API private endpoint and base64-encoded CA certificate of the enhanced cluster to which you want to add the self- managed node. See Creating Cloud-init Scripts for Self-managed Nodes.
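The cloud-init script referenced above follows the format described in Creating Cloud-init Scripts for Self-managed Nodes; as a reminder, it is along the following lines, where the endpoint and certificate values shown are placeholders for your cluster's own values:
#!/usr/bin/env bash
bash /etc/oke/oke-install.sh \
--apiserver-endpoint "<kubernetes-api-private-endpoint>" \
--kubelet-ca-cert "<base64-encoded-ca-certificate>"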
Using the Console
- Create a new compute instance to host the self-managed node:
- Open the navigation menu and select Compute. Under Compute, select Instances.
- Follow the instructions in the Compute service documentation to create a new compute instance. Note that appropriate policies must exist to allow the new compute instance to join the enhanced cluster. See Creating a Dynamic Group and a Policy for Self-Managed Nodes.
- In the Image and Shape section, click Change image.
- Click My images, select the Image OCID option, and then enter the OCID of the custom image that you created in Step 3: Create a custom image based on the compute instance where cgroups v2 is enabled.
- Click Show advanced options, and on the Management tab, select the Paste cloud-init script option.
- Copy and paste the cloud-init script for self-managed nodes that you created earlier, into the Cloud-init script field.
- Click Create to create the compute instance to host the self-managed node.
When the compute instance is created, it is added as a self-managed node to the cluster with the Kubernetes API endpoint that you specified in the cloud-init script.
- (Optional) Verify that the self-managed node has been added to the Kubernetes cluster, and that labels have been added to the node and set as expected, by following the instructions in Creating Self-Managed Nodes.
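For example, to list the nodes in the cluster and review their labels, you might enter:
kubectl get nodes --show-labels
The newly added self-managed node should appear in the list, with the labels set as described in Creating Self-Managed Nodes.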
Using the CLI
- Open a command prompt and enter the oci compute instance launch command and required parameters to create a self-managed node.
- As well as the mandatory parameters required by the command:
- Include the --image-id parameter, and specify the OCID of the custom image that you created in Step 3: Create a custom image based on the compute instance where cgroups v2 is enabled.
- Include the --user-data-file parameter, and specify the cloud-init script for self-managed nodes that you created earlier.
For example, you might enter the following command:
oci compute instance launch \
--availability-domain zkJl:PHX-AD-1 \
--compartment-id ocid1.compartment.oc1..aaaaaaa______neoq \
--shape VM.Standard2.2 \
--subnet-id ocid1.subnet.oc1.phx.aaaaaaa______hzia \
--user-data-file my-selfmgd-cgroupsv2-cloud-init.yaml \
--image-id ocid1.image.oc1.phx.aaaaaaa______slcr
When the compute instance is created, it is added as a self-managed node to the cluster with the Kubernetes API endpoint that you specified in the cloud-init script.