Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Run KubeVirt on Oracle Cloud Native Environment
Introduction
KubeVirt is a virtualization technology for creating and managing virtual machines in Kubernetes. Administrators create these virtual machines using the kubectl command and Kubernetes custom resource definitions (CRDs). As with any workload within Kubernetes, a virtual machine requires persistent storage to maintain its state. Hence our need for Rook and Ceph.
Rook is a cloud-native storage orchestrator platform that enables Ceph storage for our Kubernetes cluster. Rook deploys as a Kubernetes operator inside a Kubernetes cluster and automates the tasks to provision and de-provision Ceph-backed persistent storage using the Kubernetes Container Storage Interface (CSI).
While Ceph allows the creation of block and object storage, it also provides shared file system storage. This type uses a CephFilesystem (CephFS) to mount a shared POSIX (Portable Operating System Interface) compliant folder into one or more pods. This storage type is similar to NFS (Network File System) shared storage or CIFS (Common Internet File System) shared folders.
This tutorial guides users on deploying KubeVirt with Ceph storage managed by Rook on Oracle Cloud Native Environment.
Objectives
At the end of this tutorial, you should be able to do the following:
- Install the Rook operator
- Configure Ceph storage
- Install KubeVirt
- Create and Deploy a VM
Prerequisites
- Minimum of a 5-node Oracle Cloud Native Environment cluster:
  - Operator node
  - Kubernetes control plane node
  - 3 Kubernetes worker nodes
- Each system should have Oracle Linux installed and configured with:
  - An Oracle user account (used during the installation) with sudo access
  - Key-based SSH, also known as password-less SSH, between the hosts
  - Installation of Oracle Cloud Native Environment
- Each worker node contains an attached, unformatted block volume
- An available container registry to store virtual machine container images
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
- Open a terminal on the Luna Desktop.
- Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
- Change into the working directory.
cd linux-virt-labs/ocne
- Install the required collections.
ansible-galaxy collection install -r requirements.yml
- Update the Oracle Cloud Native Environment configuration.
cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ocne-operator"
    type: "operator"
  2:
    instance_name: "ocne-control-01"
    type: "controlplane"
  3:
    instance_name: "ocne-worker-01"
    type: "worker"
  4:
    instance_name: "ocne-worker-02"
    type: "worker"
  5:
    instance_name: "ocne-worker-03"
    type: "worker"
EOF
- Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e use_oci_ccm=true -e add_ceph_block_storage=true -e add_ceph_deployments=true -e use_ocir=true -e "@instances.yml"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Verify the Lab Environment
- Open a terminal and connect via ssh to the ocne-control-01 node.
ssh oracle@<ip_address_of_node>
- Set the terminal encoding to UTF-8.
On the Terminal menu, click Terminal, Set Encoding, Unicode, UTF-8.
- Get a list of Kubernetes nodes.
kubectl get nodes -o wide
- Verify the additional block volume exists on the worker nodes.
ssh ocne-worker-01 lsblk -f /dev/sdb
In the free lab environment, the block volume attaches as sdb, and the FSTYPE column appears empty, confirming no file system exists on the disk. Repeat for ocne-worker-02 and ocne-worker-03.
Deploy the Rook Operator
The Rook operator is responsible for deploying, configuring, provisioning, scaling, upgrading, and monitoring Ceph storage within the Kubernetes cluster.
Install the Module
- Create the Rook module.
ssh ocne-operator "olcnectl module create --environment-name myenvironment --module rook --name myrook --rook-kubernetes-module mycluster"
- Install the Rook module.
ssh ocne-operator "olcnectl module install --environment-name myenvironment --name myrook"
Verify the Module
- Switch to the existing terminal session for the devops node.
- Verify the Rook operator is running.
kubectl -n rook get pod
-n is the short option for the --namespace option.
Example Output:
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-69bc6598bb-bqvll   1/1     Running   0          2m41s
Create the Ceph Cluster
A Ceph cluster is a distributed storage system providing file, block, and object storage at scale for our Kubernetes cluster.
- View the cluster CRD.
less cluster.yaml
Oracle Cloud Native Environment defaults to placing the Rook operator in the rook namespace and pulls the Ceph image from the Oracle Container Registry.
The cluster CRD defines three monitor daemons (mon) for the Ceph distributed file system to allow for a quorum. These monitor daemons get distributed evenly across the three worker nodes because allowMultiplePerNode is set to false.
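For orientation, the key fields of cluster.yaml resemble this trimmed sketch. The rook-ceph name, dataDirHostPath, and mon settings match the output and description in this tutorial; the image tag and the storage selection fields are illustrative assumptions:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook
spec:
  cephVersion:
    image: container-registry.oracle.com/olcne/ceph:v17.2.5   # illustrative tag, pulled from the Oracle Container Registry
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                      # three monitor daemons for quorum
    allowMultiplePerNode: false   # spread the mons across the three worker nodes
  storage:
    useAllNodes: true             # assumption: consume the attached, unformatted block volume on each worker
    useAllDevices: true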
- Apply the Ceph cluster configuration.
kubectl apply -f cluster.yaml
Example Output:
cephcluster.ceph.rook.io/rook-ceph created
- Verify the cluster is running.
watch kubectl -n rook get pod
Example Output:
NAME                                                       READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-fn69v                                     2/2     Running     0          4m51s
csi-cephfsplugin-p9xw2                                     2/2     Running     0          4m51s
csi-cephfsplugin-provisioner-864d9fd857-65tnz              5/5     Running     0          4m51s
csi-cephfsplugin-provisioner-864d9fd857-mgzct              5/5     Running     0          4m51s
csi-cephfsplugin-xzw9k                                     2/2     Running     0          4m51s
csi-rbdplugin-2nk8n                                        2/2     Running     0          4m51s
csi-rbdplugin-f2nkd                                        2/2     Running     0          4m51s
csi-rbdplugin-ffqkr                                        2/2     Running     0          4m51s
csi-rbdplugin-provisioner-6966cf469c-fjf8h                 5/5     Running     0          4m51s
csi-rbdplugin-provisioner-6966cf469c-zkjsk                 5/5     Running     0          4m51s
rook-ceph-crashcollector-ocne-worker-01-84b886c998-v8774   1/1     Running     0          2m49s
rook-ceph-crashcollector-ocne-worker-02-699dc4b447-77jwb   1/1     Running     0          2m19s
rook-ceph-crashcollector-ocne-worker-03-668dcbc7c6-v6hrs   1/1     Running     0          2m40s
rook-ceph-mgr-a-794c487d99-z65lq                           1/1     Running     0          2m51s
rook-ceph-mon-a-76b99bd5f5-zxk8s                           1/1     Running     0          4m19s
rook-ceph-mon-b-5766869646-vlj4h                           1/1     Running     0          3m24s
rook-ceph-mon-c-669fc577bc-xc6tp                           1/1     Running     0          3m10s
rook-ceph-operator-69bc6598bb-bqvll                        1/1     Running     0          22m
rook-ceph-osd-0-67ffc8c8dd-brtnp                           1/1     Running     0          2m20s
rook-ceph-osd-1-7bdb876b78-t5lw8                           1/1     Running     0          2m20s
rook-ceph-osd-2-8df6d884-c94zl                             1/1     Running     0          2m19s
rook-ceph-osd-prepare-ocne-worker-01-jx749                 0/1     Completed   0          2m29s
rook-ceph-osd-prepare-ocne-worker-02-mzrg2                 0/1     Completed   0          2m29s
rook-ceph-osd-prepare-ocne-worker-03-m7jz7                 0/1     Completed   0          2m29s
Wait for the cluster creation to complete and look like the sample output. This action can take 5-10 minutes or longer in some cases. The STATUS for each item shows as Running or Completed.
- Exit the watch command using Ctrl-C.
- Confirm deployment of the Ceph cluster.
kubectl -n rook get cephcluster
Example Output:
NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
rook-ceph   /var/lib/rook     3          3m49s   Ready   Cluster created successfully   HEALTH_OK              e14b4ffc-3491-49a5-82b3-fee488fb3838
Check the State of the Ceph Cluster
The Rook toolbox is a container built with utilities to help debug and test Rook.
- View the toolbox CRD.
less toolbox.yaml
The toolbox CRD defines a single replica, or instance, of the Ceph container to deploy.
- Apply the tools Pod Deployment.
kubectl apply -f toolbox.yaml
Example Output:
deployment.apps/rook-ceph-tools created
- Verify the tools Pod successfully deploys.
kubectl -n rook rollout status deploy/rook-ceph-tools
Example Output:
deployment "rook-ceph-tools" successfully rolled out
- View the status of the Ceph cluster.
kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
Example Output:
  cluster:
    id:     8a12ac76-0e2e-48cc-b0cf-1498535a1c3c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 4m)
    mgr: a(active, since 4m)
    osd: 3 osds: 3 up (since 3m), 3 in (since 4m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   65 MiB used, 150 GiB / 150 GiB avail
    pgs:     1 active+clean
The output shows that the Ceph cluster reached quorum and is active and healthy once the deployment completes.
Create the Ceph Filesystem Storage
- View the Filesystem CRD.
less filesystem.yaml
The CRD creates the metadata pool and a single data pool, each with a replication of three. For more information, see creating shared filesystems in the upstream documentation.
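As a rough sketch, the CephFilesystem resource this file defines looks like the following; the filesystem name myfs is illustrative, while the pool layout follows the description above:

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs              # illustrative name
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3             # metadata pool replicated three times
  dataPools:
    - name: replicated
      replicated:
        size: 3           # the single data pool, also replicated three times
  metadataServer:
    activeCount: 1        # one active mds daemon
    activeStandby: true   # plus the hot standby seen in ceph status below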
- Apply the Ceph Filesystem configuration.
kubectl apply -f filesystem.yaml
- Confirm the Filesystem Pod is running.
kubectl -n rook get pod -l app=rook-ceph-mds
The mds pods monitor the file system namespace and show a STATUS of Running when done configuring the file system.
- Check the status of the Filesystem and the existence of the mds service.
kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
Example Output:
  cluster:
    id:     c83b0a5a-30d4-42fd-a28c-fc68a605a23d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 39m)
    mgr: a(active, since 38m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 38m), 3 in (since 38m)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 65 pgs
    objects: 24 objects, 579 KiB
    usage:   68 MiB used, 150 GiB / 150 GiB avail
    pgs:     65 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
Notice the mds entry shows one daemon up and another in hot standby.
- View the StorageClass CRD.
less storageclass.yaml
Note that we set the provisioner prefix to match the Rook operator namespace of rook.
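A minimal sketch of such a StorageClass, assuming the class name rook-cephfs and the filesystem name myfs from the earlier sketch (both illustrative); the secret names follow the Rook CephFS CSI defaults:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs                     # illustrative name
provisioner: rook.cephfs.csi.ceph.com   # prefix matches the operator namespace of rook
parameters:
  clusterID: rook                       # namespace where the Ceph cluster runs
  fsName: myfs                          # CephFilesystem name from filesystem.yaml
  pool: myfs-replicated                 # data pool, named <fsName>-<dataPool>
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook
reclaimPolicy: Delete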
- Provision the Storage.
kubectl apply -f storageclass.yaml
Once we create the storage, it’s ready for Kubernetes deployments to consume.
Deploy KubeVirt
Install the Module
- Create the KubeVirt module.
ssh ocne-operator "olcnectl module create --environment-name myenvironment --module kubevirt --name mykubevirt --kubevirt-kubernetes-module mycluster"
- Install the KubeVirt module.
ssh ocne-operator "olcnectl module install --environment-name myenvironment --name mykubevirt"
Verify the Module
- Verify the KubeVirt deployments are running in the kubevirt namespace.
watch kubectl get deployments -n kubevirt
Example Output:
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
virt-api          2/2     2            2           5m16s
virt-controller   2/2     2            2           4m50s
virt-operator     2/2     2            2           5m44s
Wait for the KubeVirt deployments to complete and look like the sample output.
- Exit the watch command using Ctrl-C.
- Install the virtctl command-line tool.
This utility allows access to the virtual machine’s serial and graphical consoles, as well as convenient access to these features:
  - Starting and stopping the virtual machine
  - Live migrations
  - Uploading virtual machine disk images
sudo dnf install -y virtctl
Build a Virtual Machine Container Image
KubeVirt can pull a containerized image from a container registry when deploying virtual machine instances. These containerdisks should be based on scratch and have the qcow2 disk placed into the /disk directory of the container, readable by the qemu user, which has a UID of 107. The scratch image is the smallest image for containerization and doesn’t contain any files or folders.
- Download the Oracle Linux cloud image in QCOW format.
curl -JLO https://yum.oracle.com/templates/OracleLinux/OL9/u4/x86_64/OL9U4_x86_64-kvm-b234.qcow2
- Create a Containerfile to build a Podman image from the QCOW image.
cat << EOF > Containerfile
FROM scratch
ADD --chown=107:107 OL9U4_x86_64-kvm-b234.qcow2 /disk/
EOF
- Build the image with Podman.
podman build . -t oraclelinux-cloud:9.4-terminal
Where:
  - oraclelinux-cloud is the image name
  - 9.4-terminal is the image tag, where 9.4 is the release version and terminal indicates the image is CLI only
- Verify the image exists on the local server.
podman images
Example Output:
[oracle@devops-node ~]$ podman images
REPOSITORY                    TAG            IMAGE ID       CREATED              SIZE
localhost/oraclelinux-cloud   9.4-terminal   0d96b825b3d4   About a minute ago   561 MB
Gather the Oracle Container Registry Repository Credentials
The tables in this section provide example values we’ll use in subsequent steps in this lab. The fra example is the region key for the Germany Central (Frankfurt) region. If your region is US East (Ashburn), the region key is iad. Refer to the Regions and Availability Domains documentation for a complete table listing available region keys.
Registry Data | Lab placeholder | Notes
---|---|---
REGISTRY_TYPE | private | Displayed in the repository info panel as “Access”
REPOSITORY_NAME | demo/oraclelinux-cloud | Displayed in the “Repositories and images” list-of-values
OCIR_INSTANCE | fra.ocir.io | Use <region>.ocir.io
In the free lab environment, we configure the repository as private and with the name demo/oraclelinux-cloud.
See Pushing Images Using the Docker CLI if needed. This procedure describes the login process required to push images to the Container Registry using the CLI.
- Gather your login details.
You’ll need a username and authentication token to access the container registry. The free lab environment provides these details on the Luna Lab tab of the Luna Lab page. The table shows examples of these values.
Credential | Lab placeholder
---|---
OCIR_USERNAME | luna.user@14ad03fa-49d8-4e1b-b934-bb043f9db4b9
LUNA_TOKEN | 7Q9jSeNf7gMA:q>pKPh;
Create environment variables similar to those below using the gathered credentials.
export OCIR_USERNAME="<luna_ephemeral_account_username>"
export LUNA_TOKEN="<luna_oci_auth_token>"
- Gather your Namespace and OCIR instance.
The output of the deployment playbook lists the namespace. The table shows an example of this value.
Credential | Lab placeholder
---|---
OCIR_NAMESPACE | frn7gzeg0xzn
OCIR_INSTANCE | fra.ocir.io
Create environment variables similar to those below using the gathered items.
export OCIR_NAMESPACE="<luna_container_registry_namespace>"
- Create environment variables that we’ll use in the podman login command.
export USER="$OCIR_NAMESPACE/$OCIR_USERNAME"
export TOKEN="$LUNA_TOKEN"
- Log in to the container registry.
podman login -u $USER -p $TOKEN fra.ocir.io --verbose
The --verbose flag shows where podman creates the auth file for this login. We’ll use this information later in the lab.
Push the Virtual Machine Image
In this example, Oracle Container Registry stores the final repository URIs as:
docker://OCIR_INSTANCE/OCIR_NAMESPACE/REPOSITORY_NAME/IMAGE:TAG
Podman can push local images to remote registries without tagging the image beforehand.
- Push the local oraclelinux-cloud:9.4-terminal image.
podman push oraclelinux-cloud:9.4-terminal docker://fra.ocir.io/$OCIR_NAMESPACE/demo/oraclelinux-cloud:9.4-terminal
Example Output:
Getting image source signatures
Copying blob ff65b0a12df1 done
Copying config 5891207960 done
Writing manifest to image destination
Storing signatures
Create a Kubernetes Secret Based on the Registry Credentials
Per the Kubernetes upstream documentation, a Secret is an object that contains a small amount of sensitive data, such as a password, a token, or a key. This Secret holds the credentials required to pull the container image from the registry.
Important: The Secret obscures the data using base64 encoding and does not encrypt it. Therefore, anyone with API access or the ability to create a Pod in a namespace can access and decode the credentials.
See Information security for Secrets in the upstream documentation for more details.
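For reference, the command in the next step produces a Secret roughly equivalent to this manifest; the .dockerconfigjson value shown is a placeholder for the base64-encoded registry credentials, not real data:

apiVersion: v1
kind: Secret
metadata:
  name: ocirsecret
type: kubernetes.io/dockerconfigjson
data:
  # base64 encoding of: {"auths":{"fra.ocir.io":{"username":"...","password":"...","email":"...","auth":"..."}}}
  .dockerconfigjson: <base64-encoded-docker-config>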
- Create the OCIR credentials Secret.
kubectl create secret docker-registry ocirsecret --docker-server=fra.ocir.io --docker-username=$USER --docker-password=$TOKEN --docker-email=jdoe@example.com
See Pulling Images from Registry during Deployment in the Oracle Cloud Infrastructure documentation for more details.
- Inspect the Secret.
kubectl get secret ocirsecret --output=yaml
Create a Virtual Machine with Persistent Storage
KubeVirt allows associating a PersistentVolumeClaim with a VM disk in either filesystem or block mode. In the free lab environment, we’ll use filesystem mode. KubeVirt requires placing a disk named disk.img in the root of the PersistentVolumeClaim’s filesystem, owned by the user ID 107. If we do not create this in advance, KubeVirt makes it at deployment time. See KubeVirt’s upstream persistentVolumeClaim documentation for more details.
- View the PersistentVolumeClaim CRD file.
less pvc.yaml
The PVC CRD defines a read-write-many volume of 1 GiB from our Ceph Filesystem storage.
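A minimal sketch of what pvc.yaml defines, assuming the claim name vm-pvc and the StorageClass name rook-cephfs used in the sketch earlier (both illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-pvc                    # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # the read-write-many mode described above
  resources:
    requests:
      storage: 1Gi                # the 1 GiB volume
  storageClassName: rook-cephfs   # assumed StorageClass name from storageclass.yaml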
- Apply the PersistentVolumeClaim configuration.
kubectl apply -f pvc.yaml
- View the VirtualMachine CRD file.
less vm.yaml
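The lab supplies vm.yaml, but a VirtualMachine that boots from our pushed containerdisk, attaches the PVC, and injects the cloud-init Secret looks roughly like this sketch. The VM name ol9-nocloud, the OCIR_NAMESPACE placeholder, and the Secret names match later steps; the disk, bus, and memory details are assumptions based on the KubeVirt API:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ol9-nocloud
spec:
  runStrategy: Always               # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk   # boot disk from the container image
              disk:
                bus: virtio
            - name: pvcdisk         # appears inside the VM as /dev/vdb
              disk:
                bus: virtio
            - name: cloudinitdisk   # cloud-init user data
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi             # illustrative sizing
      volumes:
        - name: containerdisk
          containerDisk:
            image: fra.ocir.io/OCIR_NAMESPACE/demo/oraclelinux-cloud:9.4-terminal
            imagePullSecret: ocirsecret   # the registry Secret created earlier
        - name: pvcdisk
          persistentVolumeClaim:
            claimName: vm-pvc             # illustrative claim name from pvc.yaml
        - name: cloudinitdisk
          cloudInitNoCloud:
            secretRef:
              name: vmi-userdata-secret   # Secret with the userdata key, created below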
- Replace the placeholder OCIR_NAMESPACE in the file with the free lab’s OCIR_NAMESPACE.
sed -i 's/OCIR_NAMESPACE/'"$OCIR_NAMESPACE"'/g' vm.yaml
- Generate the cloud-config’s user data.
cat << EOF > cloud-config-script
#cloud-config
system_info:
  default_user:
    name: opc
    ssh_authorized_keys:
      - $(cat /home/oracle/.ssh/id_rsa.pub)
users:
  - default
  - name: oracle
    lock_password: true
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - $(cat /home/oracle/.ssh/id_rsa.pub)
packages:
  - git
EOF
- Create a Secret containing the cloud-init user data.
Storing the user data in a Secret allows for easy configuration sharing across multiple virtual machines. The Secret requires using a key with the name userdata.
kubectl create secret generic vmi-userdata-secret --from-file=userdata=cloud-config-script
- Deploy the VirtualMachine.
kubectl apply -f vm.yaml
- Check on the VirtualMachine creation.
kubectl get vm
Repeat the command until you see the STATUS change to Running.
Verify the Virtual Machine Creation and the Persistent Volume Storage
- SSH into the VM.
virtctl ssh oracle@ol9-nocloud
- Get a list of block devices within the VM.
lsblk
The 1 GiB PVC appears as the /dev/vdb device.
- Format and mount the PVC disk.
echo ';' | sudo sfdisk /dev/vdb
sudo mkfs.xfs /dev/vdb1
sudo mkdir /u01
sudo mount /dev/vdb1 /u01
- Create a file and confirm it exists on the persistent disk.
sudo touch /u01/SUCCESS
sudo ls -l /u01/
- Disconnect from the VM.
exit
- Delete the VM and remove its public key fingerprint from the known_hosts file.
kubectl delete vm ol9-nocloud
ssh-keygen -R vmi/ol9-nocloud.default -f ~/.ssh/kubevirt_known_hosts
Using virtctl creates a default kubevirt_known_hosts file separate from the known_hosts file ssh generates. The ssh-keygen command’s -R option removes the public key fingerprint associated with the VM hostname, while the -f option points to the custom known_hosts file.
- Confirm the removal of the VM.
kubectl get vm
The output shows there are no resources found.
- Recreate the VM.
kubectl apply -f vm.yaml
Run kubectl get vm and wait for the STATUS to report as Running.
- Mount the block device and confirm the data on the PVC persists.
virtctl ssh oracle@ol9-nocloud -c "sudo mkdir /u01; sudo mount /dev/vdb1 /u01; sudo ls -al /u01"
The output shows the SUCCESS file, confirming that the data persists on the disk image stored on the Ceph Filesystem-based PVC.
Summary
That completes the demonstration of creating a VM with KubeVirt that leverages Ceph Filesystem storage provisioned using Oracle Cloud Native Environment’s Rook module.
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Run KubeVirt on Oracle Cloud Native Environment
F87178-01
June 2024