Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Deploy Oracle Cloud Native Environment User Interface
Introduction
This tutorial introduces you to the new user interface features of Oracle Cloud Native Environment. The UI builds on the upstream Headlamp project, which provides a fully functional Kubernetes UI.
Objectives
In this tutorial, you will learn:
- How to configure the Oracle Cloud Native Environment Application Catalog (Technical Preview)
- How to install and access the Oracle Cloud Native Environment UI (Technical Preview)
Prerequisites
- Minimum of a 3-node Oracle Cloud Native Environment cluster:
  - Operator node
  - Kubernetes control plane node
  - Kubernetes worker node
- Each system should have Oracle Linux installed and configured with:
  - An Oracle user account (used during the installation) with sudo access
  - Key-based SSH, also known as password-less SSH, between the hosts
  - Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
- Open a terminal on the Luna Desktop.

- Clone the linux-virt-labs GitHub project.

  git clone https://github.com/oracle-devrel/linux-virt-labs.git

- Change into the working directory.

  cd linux-virt-labs/ocne

- Install the required collections.

  ansible-galaxy collection install -r requirements.yml
- Update the Oracle Cloud Native Environment repository versions.

  cat << EOF | tee repos.yml > /dev/null
  ol8_enable_repo: "ol8_olcne19"
  ol8_disable_repo: "ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_olcne18"
  ol9_enable_repo: "ol9_olcne19"
  ol9_disable_repo: "ol9_olcne17 ol9_olcne18"
  EOF
- Deploy the lab environment.

  ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@repos.yml"

  The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

  Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
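As a quick sanity check before the deploy, you can confirm that the repos.yml file defines all four repository variables. A minimal sketch (it recreates the file with the same values as the step above):

```shell
# Recreate the variables file from the earlier step and confirm that
# all four repository variables are defined before running the playbook.
cat << 'EOF' > repos.yml
ol8_enable_repo: "ol8_olcne19"
ol8_disable_repo: "ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_olcne18"
ol9_enable_repo: "ol9_olcne19"
ol9_disable_repo: "ol9_olcne17 ol9_olcne18"
EOF

# Count the variable definitions; there should be exactly four.
count=$(grep -c '_repo:' repos.yml)
echo "found $count repository variables"   # prints: found 4 repository variables
```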
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
- Open a terminal and connect via SSH to the ocne-operator node.

  ssh oracle@<ip_address_of_node>

- Set up the kubectl command on the operator node.

  mkdir -p $HOME/.kube
  ssh ocne-control-01 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=$HOME/.kube/config
  echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
- List the nodes in the cluster.

  kubectl get nodes

  The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
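If you script cluster checks, the `kubectl get nodes` output is easy to parse. A minimal sketch, run against sample output (node names and versions below are illustrative, not taken from your cluster):

```shell
# Sketch: count the nodes reporting Ready from `kubectl get nodes`-style
# output. The sample text stands in for real cluster output.
sample_output='NAME              STATUS   ROLES           AGE   VERSION
ocne-control-01   Ready    control-plane   10m   v1.28.3+3.el8
ocne-worker-01    Ready    <none>          9m    v1.28.3+3.el8
ocne-worker-02    Ready    <none>          9m    v1.28.3+3.el8'

# Skip the header row and count rows whose STATUS column is Ready.
printf '%s\n' "$sample_output" |
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print "Ready nodes: " n }'
```

Against a live cluster, you would pipe `kubectl get nodes` into the same awk filter instead of the sample text.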
Start a Local Instance of the Application Catalog
This installation procedure is a bootstrap process that uses the Application Catalog to install the UI and itself within Oracle Cloud Native Environment.
- Run the Application Catalog container image.

  podman run -p 8080:80 --rm --detach --name ocne_catalog container-registry.oracle.com/olcne_developer/ocne-catalog:v1.0.0

- Confirm the container is running.

  podman ps -a

  The output shows the container running the nginx command while exposing container port 80/tcp on local machine port 8080.

- Open the firewall to allow access to the Application Catalog instance.

  sudo firewall-cmd --add-port=8080/tcp --permanent
  sudo firewall-cmd --reload

- Initialize the Helm Application Chart Repository.

  ssh ocne-control-01 helm repo add ocne http://ocne-operator:8080/charts

  This command runs helm on the ocne-control-01 node and references the Application Catalog container running on the ocne-operator node.
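If you want to verify the port mapping in a script, the host port in the PORTS column of `podman ps` can be extracted with a small helper. A sketch (host_port is our own illustrative function, and the mapping string mirrors the output format shown above):

```shell
# Hypothetical helper: pull the host port out of a PORTS entry such as
# "0.0.0.0:8080->80/tcp" from `podman ps` output.
host_port() {
  printf '%s\n' "$1" | sed -n 's/.*:\([0-9][0-9]*\)->.*/\1/p'
}

host_port "0.0.0.0:8080->80/tcp"   # prints: 8080
```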
Install the UI and Application Catalog
- Generate Certificates for the UI.

  mkdir ocne-ui-certs
  olcnectl certificates generate --cert-dir ocne-ui-certs -n headlamp.ocne-system.svc.cluster.local

- Deploy a Secret.

  kubectl create ns ocne-system
  kubectl create secret -n ocne-system tls headlamp-tls --cert=./ocne-ui-certs/headlamp.ocne-system.svc.cluster.local/node.cert --key=./ocne-ui-certs/headlamp.ocne-system.svc.cluster.local/node.key
- Install the UI Service.

  ssh ocne-control-01 helm install ocne-ui ocne/ui -n ocne-system --set "image.registry=container-registry.oracle.com"

  Example Output:

  NAME: ocne-ui
  LAST DEPLOYED: Tue Jun 11 23:30:20 2024
  NAMESPACE: ocne-system
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  1. Get the application URL by running these commands:
    export POD_NAME=$(kubectl get pods --namespace ocne-system -l "app.kubernetes.io/name=ui,app.kubernetes.io/instance=ocne-ui" -o jsonpath="{.items[0].metadata.name}")
    export CONTAINER_PORT=$(kubectl get pod --namespace ocne-system $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
    echo "Visit http://127.0.0.1:8080 to use your application"
    kubectl --namespace ocne-system port-forward $POD_NAME 8080:$CONTAINER_PORT
  2. Get the token using kubectl create token ocne-ui --namespace ocne-system
- Install the Application Catalog Service.

  ssh ocne-control-01 helm install app-catalog ocne/app-catalog -n ocne-system --set "image.repository=container-registry.oracle.com/olcne_developer/ocne-catalog"

  Example Output:

  NAME: app-catalog
  LAST DEPLOYED: Tue Jun 11 23:31:44 2024
  NAMESPACE: ocne-system
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  1. Get the application URL by running these commands:
    export NODE_PORT=$(kubectl get --namespace ocne-system -o jsonpath="{.spec.ports[0].nodePort}" services app-catalog)
    export NODE_IP=$(kubectl get nodes --namespace ocne-system -o jsonpath="{.items[0].status.addresses[0].address}")
    echo http://$NODE_IP:$NODE_PORT
- Port Forward the UI Service.

  The UI deploys as a ClusterIP service, which allows minimal clusters to host it. Accessing it with this configuration requires forwarding the service port to your host.

  kubectl -n ocne-system port-forward --address 0.0.0.0 service/ocne-ui 8443:443 > /dev/null 2>&1 &

  This command opens port 8443 on the ocne-operator node, where you run kubectl. The --address 0.0.0.0 option makes the forward listen on all of the node's interfaces rather than only on localhost. The > /dev/null 2>&1 & redirects all output to /dev/null and runs the port forwarding in the background. You can use the jobs command to find this background job because it started in the current shell. If you open a different shell to the ocne-operator node, you can find it using ps -eF | grep port-forward. To stop the port forward, use kill -9 <PID>, where <PID> is the process ID returned by the ps or jobs command.

- Create an Access Token.

  The UI uses token-based authentication, which you'll create in this step and later paste into the login page.

  kubectl create token ocne-ui -n ocne-system
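The access token is a JWT whose payload segment is base64url-encoded JSON. If you want to inspect claims such as the expiry (exp) before pasting the token, a small sketch (decode_jwt_payload is our own helper, not part of kubectl, and the token below is made up for illustration):

```shell
# Hypothetical helper: decode the payload (middle segment) of a JWT,
# such as the token printed by `kubectl create token`.
decode_jwt_payload() {
  local payload
  # Take the second dot-separated segment and map base64url to base64.
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url strips padding; restore it to a multiple of 4 characters.
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Example with a made-up token whose payload is {"exp":123}
decode_jwt_payload "header.eyJleHAiOjEyM30.signature"   # prints: {"exp":123}
```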
Accessing the UI
- Open a new terminal window and configure an SSH tunnel to the ocne-operator node.

  The ocne-operator node contains the port forward to the UI service, as that is the node where we ran the kubectl command.

  ssh -L 8444:localhost:8443 oracle@<ip_address_of_node>

- Open a web browser and enter the URL.

  https://localhost:8444

  Note: Approve the security warning based on the browser used. For Chrome, click the Advanced button and then the Proceed to localhost (unsafe) link.
Log into the UI
- Switch to the previous terminal window containing the generated access token.

- Copy the access token.

- Switch to the browser containing the UI Authentication page.

- Paste the access token into the ID token field and click AUTHENTICATE.

- After login, the UI displays.
Remove the Local Instance of the Application Catalog
With the installation complete, the local instance of the Application Catalog is no longer needed.
- Switch to the terminal window containing the generated access token.

- Stop the container.

  podman stop ocne_catalog

  When the container exits, it automatically removes itself due to the --rm option passed during creation.

- Show a list of local images.

  podman images

- Remove the image.

  podman rmi <IMAGE_ID>

  Where <IMAGE_ID> is the image ID copied from the previous command.
Summary
After removing the local Application Catalog, you've completed the installation of the Oracle Cloud Native Environment UI. Now, you can explore the UI and its features.
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Deploy Oracle Cloud Native Environment User Interface
G10221-01
June 2024