3.3 Key Components Used By an OHS Deployment
An Oracle HTTP Server (OHS) deployment uses Kubernetes components such as pods and Kubernetes services.
Container Image
A container image is an unchangeable, static file that includes executable code. When deployed into Kubernetes, it is the container image that is used to create a pod. The image contains the system libraries, system tools, and Oracle binaries required to run in Kubernetes. The image shares the OS kernel of its host machine.
A container image is compiled from file system layers built onto a parent or base image. These layers encourage the reuse of various components. So, there is no need to create everything from scratch for every project.
A pod is based on a container image. This container image is read-only. Each pod has its own instance of a container image.
A container image contains all the software and libraries required to run the product. It does not require the entire operating system. Many container images do not include standard operating utilities such as the vi editor or ping.
When you upgrade a pod, you are actually instructing the pod to use a different container image. For example, if the Oracle HTTP Server pod is currently running a container image based on an earlier bundle patch, then to upgrade the pod to the July 2025 bundle patch, you tell the pod to use the July 2025 image and restart the pod. Further information on upgrading can be found in Patching and Upgrading.
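The mechanics of this are a change to the image reference in the resource that manages the pod. The following is a minimal sketch only, assuming OHS is managed by a Kubernetes Deployment named ohs-domain in the ohsns namespace; the resource type, names, registry path, and image tag are illustrative and will differ in your deployment.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ohs-domain                  # illustrative name
    namespace: ohsns
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ohs
    template:
      metadata:
        labels:
          app: ohs
      spec:
        containers:
        - name: ohs
          # To move to a newer bundle patch (for example, the July 2025 image),
          # change this tag and re-apply the manifest; Kubernetes then restarts
          # the pod with the new image.
          image: container-registry.example.com/oracle/ohs:sample-tag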
Pods
A pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific logical host that contains one or more application containers which are relatively tightly coupled.
In an Oracle HTTP Server (OHS) deployment, each OHS runs in a different pod.
A pod is not deleted from the apiserver automatically when its worker node fails or becomes unreachable. Pods on an unreachable node enter a Terminating or Unknown state and are removed only when one of the following occurs:
- You or the Node Controller deletes the node object.
- The kubelet on the unresponsive node starts responding, terminates the pod, and removes the entry from the apiserver.
- You force delete the pod.
Oracle recommends the best practice of using the first or the second approach. If a node is confirmed to be dead (for example: permanently disconnected from the network, powered down, and so on), delete the node object. If the node suffers from a network partition, try to resolve the issue or wait for the partition to heal. When the partition heals, the kubelet completes the deletion of the pod and frees up its name in the apiserver.
Typically, the system completes the deletion once the pod is no longer running on a node, or an administrator has deleted the node. You can override this by force deleting the pod.
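For reference, a force deletion can be issued with kubectl, as in the following sketch; the pod name is a placeholder and ohsns is the namespace used later in this guide.

  # Force delete a pod that remains in a Terminating or Unknown state on an
  # unreachable node (the pod name below is a placeholder).
  kubectl delete pod ohs-domain-1 -n ohsns --grace-period=0 --force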
Pod Scheduling
By default, Kubernetes will schedule a pod to run on any worker node that has sufficient capacity to run that pod. In some situations, it is desirable that scheduling occurs on a subset of the worker nodes available. This type of scheduling can be achieved by using Kubernetes labels.
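As a minimal sketch of label-based scheduling (the node name, label, and pod specification below are examples only, not values from this guide), you could label the chosen worker nodes and then add a nodeSelector to the pod specification:

  # Label a worker node so that it can be targeted for scheduling, for example:
  #   kubectl label node worker1.example.com name=ohs
  apiVersion: v1
  kind: Pod
  metadata:
    name: ohs-example
    namespace: ohsns
  spec:
    nodeSelector:
      name: ohs                       # only nodes carrying this label are considered
    containers:
    - name: ohs
      image: container-registry.example.com/oracle/ohs:sample-tag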
Kubernetes Services
Kubernetes services expose the processes running in pods, regardless of how many pods are running. For example, Oracle HTTP Server instances, each running in a different pod, have a service associated with them. This service routes your requests to the individual pods in the cluster.
Kubernetes services can be internal or external to the cluster. Internal services are of type ClusterIP and external services are of type NodePort.
Some deployments use a proxy in front of the service. This proxy is typically provided by an 'Ingress' load balancer such as NGINX. Ingress provides a level of abstraction over the underlying Kubernetes services.
In this guide, Oracle HTTP Server (OHS) is exposed as type NodePort and Ingress is not used to access OHS.
Kubernetes NodePort services use a limited port range (by default, 30000 to 32767). Therefore, when a Kubernetes service is created, there is a port mapping. For instance, if an OHS pod is using port 7777, then a Kubernetes NodePort service may use 30777 as its port, mapping port 30777 to port 7777 internally. Note that when you use individual NodePort services, the corresponding Kubernetes service port is reserved on every worker node in the cluster.
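For illustration, a NodePort service exposing an OHS pod that listens on port 7777 through node port 30777 might look like the following sketch; the service name and selector are assumptions for this example.

  apiVersion: v1
  kind: Service
  metadata:
    name: ohs-nodeport
    namespace: ohsns
  spec:
    type: NodePort
    selector:
      app: ohs                        # must match the labels on the OHS pod
    ports:
    - port: 7777                      # port the service exposes inside the cluster
      targetPort: 7777                # port the OHS container listens on
      nodePort: 30777                 # port reserved on every worker node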
Kubernetes and Ingress services are known to every worker node, regardless of the worker node on which the containers are running. Therefore, a load balancer is often placed in front of the worker nodes to simplify routing and worker node scalability.
If OHS communicates with a WebLogic Server using mod_wl_ohs, then it interacts with those services using the format worker_node_hostname:service_port. This format is applicable whether you are using individual NodePort services or a consolidated Ingress NodePort service.
If OHS communicates with multiple WebLogic worker nodes, then you should include multiple worker nodes in your calls to remove single points of failure. In this guide, OHS makes direct proxy calls using WebLogicCluster directives. More information on this can be found in Supported Architectures for Oracle HTTP Server.
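As a hedged example of such a directive (the location, hostnames, and node port are placeholders, not values from this guide), the mod_wl_ohs configuration might contain:

  # Illustrative mod_wl_ohs configuration fragment.
  <Location /app>
    WLSRequest ON
    # List more than one worker node to avoid a single point of failure. If
    # WebLogic Server runs in an independent Kubernetes cluster behind an
    # Ingress controller, point these entries at the Ingress controller's
    # port instead.
    WebLogicCluster worker1.example.com:30701,worker2.example.com:30701
  </Location>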
Ingress Controller
Whilst this guide uses NodePort for Oracle HTTP Server (OHS) access, if OHS communicates with WebLogic Server on an independent Kubernetes cluster, then the WebLogicCluster directive should point to the port of the Ingress controller used by WebLogic Server.
Ingress is a proxy server that sits inside the Kubernetes cluster, unlike NodePort services, which reserve a port per service on every worker node in the cluster. With an Ingress service, you can use a single port for all HTTP and HTTPS traffic.
An Ingress service works in a similar way to Oracle HTTP Server: it has the concept of virtual hosts and can terminate SSL, if required.
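As an illustrative sketch only (this guide does not use Ingress to access OHS), an Ingress resource with a virtual host and SSL termination might look like the following; the host, secret, namespace, and backend service names are assumptions.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: wls-ingress
    namespace: wlsns
  spec:
    ingressClassName: nginx
    tls:
    - hosts:
      - wls.example.com
      secretName: wls-tls-cert        # certificate used to terminate SSL at the Ingress
    rules:
    - host: wls.example.com           # virtual host, similar to an OHS VirtualHost
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: wls-cluster-service
              port:
                number: 8001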
More information on this can be found in Supported Architectures for Oracle HTTP Server.
Domain Name System
Every service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client pod's DNS search list includes the pod's own namespace and the cluster's default domain.
- Services: A or AAAA record, with the name format my-svc.namespace.svc.cluster-example.com
- Pods: A or AAAA record, with the name format podname.namespace.pod.cluster-example.com
Kubernetes uses a built-in DNS server called CoreDNS for internal name resolution. However, resolution of hostnames outside the cluster (for example, loadbalancer.example.com) may not be possible inside the Kubernetes cluster. If you encounter this issue, you can use one of the following options:
- Option 1 - Add a secondary DNS server to CoreDNS for the company domain.
- Option 2 - Add individual host entries to CoreDNS for the external hosts, for example, loadbalancer.example.com.
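A hedged sketch of how these options might appear in the CoreDNS ConfigMap in the kube-system namespace follows; the company domain example.com, the upstream DNS server 10.0.0.53, and the address for loadbalancer.example.com are placeholders, and your existing Corefile will contain additional plugins that should be preserved.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: coredns
    namespace: kube-system
  data:
    Corefile: |
      .:53 {
          errors
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
          }
          # Option 2: individual host entries for external hosts.
          hosts {
              10.0.0.10 loadbalancer.example.com
              fallthrough
          }
          forward . /etc/resolv.conf
          cache 30
      }
      # Option 1: delegate the company domain to a secondary DNS server.
      example.com:53 {
          errors
          cache 30
          forward . 10.0.0.53
      }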
Namespaces
Namespaces enable you to organize clusters into virtual sub-clusters which are helpful when different teams or projects share a Kubernetes cluster. You can add any number of namespaces within a cluster, each logically separated from others but with the ability to communicate with each other.
In this guide, the OHS deployment uses the namespace ohsns.
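For completeness, the namespace can be created with a manifest like the following, which is equivalent to running kubectl create namespace ohsns.

  apiVersion: v1
  kind: Namespace
  metadata:
    name: ohsns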