Learn About Provisioning an ARM Kubernetes Cluster on OCI and Deploying an Nginx Website

You can provision an ARM Kubernetes cluster on OCI and then deploy an Nginx website either manually or automatically by using a Terraform script. Once deployed, testing the website is a simple matter of launching it in your local browser.

Manually Provision an OKE Cluster with ARM Pool

You can easily provision a Kubernetes cluster and create a set of heterogeneous pools on OCI and then deploy an Nginx website from the OCI console. This approach is helpful if you are not familiar with infrastructure automation and want to use the console instead.

From the OCI console, do the following:
  1. First, provision the OKE cluster:
    1. Under the Developer Services section, select Kubernetes Clusters (OKE).

      The Create Cluster page appears.

    2. Select Quick Create to provision the required Kubernetes cluster infrastructure rapidly and securely.

      The Quick Create page appears.

    3. Select the ARM shape (VM.Standard.A1.Flex) for your worker nodes. This shape lets you configure both the number of OCPUs and the amount of allocated memory, so you can fine-tune resources (and, commensurately, price) to fit your specific requirements.

      Note:

      If you are building this cluster for a high availability (HA) website, keep the resources to a minimum. You can scale up the number of nodes later, if needed.
    4. Once the cluster is provisioned, you can add pools of specialty machines that you can, in turn, bind to specific images in your deployment file. For example, you can set up a pool of GPU-backed nodes for ML tasks conducted by the website, such as content recommendation or a facial recognition backend.

      To add node pools, update the node pool configuration on the Cluster details page:

      1. Go to the Resources menu.
      2. Click Node Pools and then Add New Pool.
      3. Modify the pool as necessary by adding/changing the node shapes.
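      The console steps above have a CLI equivalent. The following is a hedged sketch using the OCI CLI's oci ce node-pool create command; the OCIDs, pool name, and GPU shape are placeholders, and placement options (availability domain, subnet) are omitted for brevity:

```shell
# Sketch: add a specialty (here, GPU-backed) node pool to an existing cluster.
# All OCID values and the shape name are placeholders -- substitute your own.
oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1..example \
  --compartment-id ocid1.compartment.oc1..example \
  --name gpu-pool \
  --node-shape VM.GPU.A10.1 \
  --size 1
# Placement options (availability domain and subnet) are typically also
# required in practice; see the OCI CLI reference for the exact parameters.
```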
  2. Now, use a deployment file to access the cluster and start deploying the Nginx environment:
    1. Navigate to your cluster and click Access Cluster.

      The quick access screen appears. Follow steps 1 and 2 (as denoted on the screen) to continue. If you use Cloud Shell access, you don't need to set up and configure the kubectl environment on your own machine; this process launches a web terminal emulator and configures it for your environment. As noted under step 2 on the screen, pasting the OCI CLI command provided there configures kubectl to access your brand new cluster.

    2. Next, verify that the nodes have been provisioned; enter:

      Note:

      For convenience with all commands in this playbook, you can click Copy in the example and paste the command directly at your prompt.
      kubectl get nodes -o wide
      You should see the two nodes, with a kernel architecture (aarch64) that indicates they are running on ARM; for example:
      NAME          STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                 CONTAINER-RUNTIME
      10.0.10.129   Ready    node    3m57s   v1.19.7   10.0.10.129   <none>        Oracle Linux Server 7.9   5.4.17-2102.el7uek.aarch64     docker://19.3.11
      10.0.10.153   Ready    node    3m17s   v1.19.7   10.0.10.153   <none>        Oracle Linux Server 7.9   5.4.17-2102.el7uek.aarch64     docker://19.3.11
      If you need to scale up the number of nodes, on the Node Pool screen, click Scale and, in Number of Nodes, enter the required number of nodes. OCI will spin up and enroll them for you.
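      The same scale operation can be done from the CLI. This is a hedged sketch using the OCI CLI's oci ce node-pool update command; the node pool OCID is a placeholder:

```shell
# Sketch: scale an existing node pool to three nodes (placeholder OCID).
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..example \
  --size 3
```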
  3. Finally, deploy the Nginx website and expose it through a load balancer:
    1. Create an Nginx deployment with two replicas, with the container listening on port 80; enter:
       kubectl apply -f https://k8s.io/examples/application/deployment.yaml 
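      For reference, at the time of writing the manifest at that URL defines roughly the following (if the upstream file changes, the URL is authoritative, not this sketch). It matches the verification output in the next step: two replicas of nginx:1.14.2 on port 80.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```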
    2. Verify the deployment; enter:
      kubectl describe deployment nginx-deployment
      You should see something like this output:
      ----
      Name:                   nginx-deployment
      Namespace:              default
      CreationTimestamp:      Tue, 03 Jan 2023 17:41:27 +0000
      Labels:                 <none>
      Annotations:            deployment.kubernetes.io/revision: 1
      Selector:               app=nginx
      Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
      StrategyType:           RollingUpdate
      MinReadySeconds:        0
      RollingUpdateStrategy:  25% max unavailable, 25% max surge
      Pod Template:
        Labels:  app=nginx
        Containers:
         nginx:
          Image:        nginx:1.14.2
          Port:         80/TCP
          Host Port:    0/TCP
          Environment:  <none>
          Mounts:       <none>
        Volumes:        <none>
      Conditions:
        Type           Status  Reason
        ----           ------  ------
        Available      True    MinimumReplicasAvailable
        Progressing    True    NewReplicaSetAvailable
      OldReplicaSets:  <none>
      NewReplicaSet:   nginx-deployment-6595874d85 (2/2 replicas created)
      Events:
        Type    Reason             Age   From                   Message
        ----    ------             ----  ----                   -------
        Normal  ScalingReplicaSet  15s   deployment-controller  Scaled up replica set nginx-deployment-6595874d85 to 2
      
      ----
      
    3. Expose the service to the internet through a load balancer; enter:
      kubectl expose deployment nginx-deployment --type=LoadBalancer --name=my-service 
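      The expose command is shorthand for creating a Service object. A sketch of the roughly equivalent manifest (the name follows the command, and the selector follows the deployment's app=nginx label) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```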
    4. Verify that the load balancer has a public IP; enter:
      kubectl get services
      OCI will spin up the load balancer and assign the public IP. After a few moments, you should see something like this output:
      NAME        TYPE          CLUSTER-IP     EXTERNAL-IP    PORT(S)              AGE
      kubernetes  ClusterIP     10.96.0.1      <none>         443/TCP, 12250/TCP   16m
      my-service  LoadBalancer  10.96.224.64   138.2.218.135  80:30343/TCP         3m1s

Provision and Deploy an OKE Cluster with an Ampere A1 ARM Pool

You can automatically provision the OKE cluster with an ARM pool and then deploy an Nginx website by using a Terraform script provided on GitHub. The script provisions a cluster with two nodes (in the same availability domain), each with two OCPUs; the configuration step shows how to modify the number of nodes and OCPUs. The script then provisions Nginx (two pods) and exposes it behind a load balancer.
To provision the cluster and deploy a website, use this procedure:
  1. First, you need to configure the cluster.
    1. Ensure you have an OCI tenancy with an OCI compartment.
    2. Log into the OCI Console, open a command shell, and run these commands:
      git clone https://github.com/badr42/OKE_A1
      cd OKE_A1
      export TF_VAR_tenancy_ocid='tenancy-ocid'
      export TF_VAR_compartment_ocid='compartment-ocid'
      export TF_VAR_region='home-region'
      <optional>
      ### Select Availability Domain, zero based, if not set it defaults to 0, 
      ### this allows you to select an AD that has available A1 chips
      export TF_VAR_AD_number='0'
      
      ### Select number of nodes
      export TF_VAR_node_count='2'
      
      ### Set OCPU count per node
      export TF_VAR_ocpu_count='2'

    Use your specific values for tenancy-ocid, compartment-ocid, and home-region.
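    For context, Terraform maps environment variables of the form TF_VAR_name to the input variable named name, so the script presumably declares variables along these lines (a sketch with assumed defaults matching the documented behavior, not the script's literal source):

```hcl
variable "tenancy_ocid" {}
variable "compartment_ocid" {}
variable "region" {}

# Optional inputs; defaults assumed from the documented behavior.
variable "AD_number"  { default = 0 }
variable "node_count" { default = 2 }
variable "ocpu_count" { default = 2 }
```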

  2. Build the cluster; enter:
    terraform init
    terraform plan
    terraform apply

    After running terraform apply, allow about 25 minutes for the cluster and services to launch.

  3. Once the service launches, retrieve the public IP of the Nginx load balancer; enter:
    kubectl --kubeconfig kubeconfig get service
    You should see something like this system output:
    NAME        TYPE          CLUSTER-IP     EXTERNAL-IP     PORT(S)              AGE
    kubernetes  ClusterIP     10.96.0.1      <none>          443/TCP, 12250/TCP   16m
    nginx-lb    LoadBalancer  10.96.224.64   138.2.218.135   80:30343/TCP         3m1s
  4. When you are finished with the deployment, terminate the environment; enter:
    terraform destroy

Test the Deployment

To test your deployment, open a web browser and enter the load balancer's public IP address in the address bar. If the deployment succeeded, you should see the Nginx server welcome page.
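Equivalently, you can test from a shell with curl. The IP below is the example external IP from the kubectl get services output; replace it with your own:

```shell
# Fetch the Nginx welcome page (substitute your load balancer's public IP).
curl http://138.2.218.135
```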