9 Preparing for an On-Premises Enterprise Deployment

An on-premises enterprise deployment involves preparing the hardware load balancer and ports, file system, operating system, and Kubernetes cluster.

This chapter includes the following topics:

Preparing the Load Balancer and Firewalls for an Enterprise Deployment

Preparing for an on-premises enterprise deployment also includes configuring a hardware or software load balancer and ports that you have to open on the firewalls used in the topology.

Configuring Virtual Hosts on the Hardware Load Balancer

The hardware load balancer is configured to recognize and route requests to several virtual servers and associated ports for different types of network traffic and monitoring.

The following topics explain how to configure the hardware load balancer, provide a summary of the virtual servers that are required, and provide additional instructions for these virtual servers:

Overview of the Hardware Load Balancer Configuration

As shown in the topology diagrams, you must configure the hardware load balancer to recognize and route requests to several virtual servers and associated ports for different types of network traffic and monitoring.

In the context of a load-balancing device, a virtual server is a construct that allows multiple physical servers to appear as one for load-balancing purposes. It is typically represented by an IP address and a service, and it is used to distribute incoming client requests to the servers in the server pool.

The virtual servers should be configured to direct traffic to the appropriate host computers and ports for the various services that are available in the enterprise deployment.

In addition, you should configure the load balancer to monitor the host computers and ports for availability so that the traffic to a particular server is stopped as soon as possible when a service is down. This ensures that incoming traffic on a given virtual host is not directed to an unavailable service in the other tiers.

Note that after you configure the load balancer, you can later configure the web server instances in the web tier to recognize a set of virtual hosts that use the same names as the virtual servers that you defined for the load balancer. For each request coming from the hardware load balancer, the web server can then route the request appropriately, based on the server name included in the header of the request. See Configuring Oracle HTTP Server for Administration and Oracle Web Services Manager.

If you want to configure a load balancer to direct traffic to your worker nodes, you should configure the load balancer as a network load balancer, which directs all traffic sent to it to the target nodes regardless of the source port.

Typical Procedure for Configuring the Hardware Load Balancer

The following procedure outlines the typical steps for configuring a hardware load balancer for an enterprise deployment.

Note that the actual procedures for configuring a specific load balancer will differ, depending on the specific type of load balancer. There may also be some differences depending on the type of protocol that is being load balanced. For example, TCP virtual servers and HTTP virtual servers use different types of monitors for their pools. Refer to the vendor-supplied documentation for actual steps.

  1. Create a pool of servers. This pool contains a list of servers and the ports that are included in the load-balancing definition.

    For example, for load balancing between the web hosts, create a pool of servers that would direct requests to hosts WEBHOST1 and WEBHOST2 on port 7777.

  2. Create rules to determine whether a given host and service is available, and assign these rules to the pool of servers created in Step 1.

  3. Create the required virtual servers on the load balancer for the addresses and ports that receive requests for the applications.

    For a complete list of the virtual servers required for the enterprise deployment, see Summary of the Load Balancer Virtual Servers Required for an Enterprise Deployment.

    When you define each virtual server on the load balancer, consider the following:

    1. If your load balancer supports it, specify whether the virtual server is available internally, externally, or both. Ensure that internal addresses are only resolvable from inside the network.

    2. Configure SSL Termination, if applicable, for the virtual server.

    3. Assign the pool of servers created in Step 1 to the virtual server.

Load Balancer Health Monitoring

The load balancer must be configured to check that the services in the Load Balancer Pool are available. Failure to do so will result in requests being sent to hosts where the service is not running.

The following table shows examples of how to determine whether a service is available:

Table 9-1 Examples Showing How to Determine Whether a Service is Available

Service | Monitor Type | Monitor Mechanism
OUD     | ldap         | ldapbind to cn=oudadmin
OHS     | http         | check for GET /\r\n
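
The exact monitor configuration depends on your load balancer. As a reference, the following commands approximate these checks from any host that can reach the back-end servers. This is a sketch only: it assumes the OpenLDAP client tools and curl are installed, and the bind password placeholder, host names, and ports must be replaced with the values used in your environment.

# Approximate the OUD ldap monitor: attempt a bind as cn=oudadmin (password is a placeholder).
ldapsearch -x -H ldap://<OUD_HOST>:31389 -D "cn=oudadmin" -w <PASSWORD> -b "" -s base "(objectclass=*)"

# Approximate the OHS http monitor: request / and report the HTTP response code.
curl -s -o /dev/null -w '%{http_code}\n' http://WEBHOST1.example.com:7777/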

Summary of the Load Balancer Virtual Servers Required for an Enterprise Deployment

This topic lists the virtual servers that must be configured for an enterprise deployment.

The following table provides a list of the virtual servers that you must define on the hardware load balancer for the Oracle Identity and Access Management enterprise topology:

Table 9-2 Virtual Servers to be Defined on the Hardware Load Balancer for the Oracle Identity and Access Management Enterprise Topology

Virtual Host | Server Pool | Protocol | SSL Termination? | Other Required Configuration / Comments
login.example.com:443 | WEBHOST1.example.com:7777, WEBHOST2.example.com:7777 | HTTPS | Yes | Identity Management requires that the following be added to the HTTP header: IS_SSL: ssl and WL-Proxy-SSL: true.
prov.example.com:443 | WEBHOST1.example.com:7777, WEBHOST2.example.com:7777 | HTTPS | Yes | Identity Management requires that the following be added to the HTTP header: IS_SSL: ssl and WL-Proxy-SSL: true.
iadadmin.example.com:80 | WEBHOST1.example.com:7777, WEBHOST2.example.com:7777 | HTTP | |
igdadmin.example.com:80 | WEBHOST1.example.com:7777, WEBHOST2.example.com:7777 | HTTP | |
igdinternal.example.com:7777 | WEBHOST1.example.com:7777, WEBHOST2.example.com:7777 | HTTP | |
oiri.example.com | Kubernetes Worker Hosts:30777 | HTTP | No | Required only when deploying OIRI in the standalone mode with ingress enabled.
oaaadmin.example.com | Kubernetes Worker Hosts:30636 | HTTPS | No | Required only when deploying OAA in the standalone mode with ingress enabled.
oaa.example.com | Kubernetes Worker Hosts:307636 | HTTPS | No | Required only when deploying OAA in the standalone mode with ingress enabled.

Note:

  • Port 80 is the load balancer port used for HTTP requests.

  • Port 443 is the load balancer port used for HTTPS requests.

  • Port 7777 is the load balancer port used for internal callback requests.
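
After you define the virtual servers, you can perform a basic check that each one accepts connections and routes to the web tier. The following curl invocations are illustrative only: the host names must resolve from the machine where you run them, and the response codes you see depend on what is deployed behind the load balancer.

# HTTPS virtual servers with SSL termination at the load balancer (-k skips certificate validation for the test).
curl -k -I https://login.example.com/
curl -k -I https://prov.example.com/

# HTTP virtual servers.
curl -I http://iadadmin.example.com/
curl -I http://igdadmin.example.com/
curl -I http://igdinternal.example.com:7777/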

Configuring Firewalls and Ports for an Enterprise Deployment

As an administrator, it is important that you become familiar with the port numbers that are used by various Oracle Fusion Middleware products and services. This ensures that the same port number is not used by two services on the same host, and that the proper ports are open on the firewalls in the enterprise topology.

The following tables list the ports that you must open on the firewalls in the topology:

Firewall notation:

  • FW0 refers to the outermost firewall.
  • FW1 refers to the firewall between the web tier and the application tier.
  • FW2 refers to the firewall between the application tier and the data tier.

Table 9-3 Firewall Ports Common to All Fusion Middleware Enterprise Deployments

Type | Firewall | Port and Port Range | Protocol / Application | Inbound / Outbound | Other Considerations and Timeout Guidelines
Browser request | FW0 | 80 | HTTP / Load Balancer | Inbound | Timeout depends on the size and type of HTML content.
Browser request | FW0 | 443 | HTTPS / Load Balancer | Inbound | Timeout depends on the size and type of HTML content.
Browser request | FW1 | 80 | HTTP / Load Balancer | Outbound (for intranet clients) | Timeout depends on the size and type of HTML content.
Browser request | FW1 | 443 | HTTPS / Load Balancer | Outbound (for intranet clients) | Timeout depends on the size and type of HTML content.
Callbacks and outbound invocations | FW1 | 80 | HTTP / Load Balancer | Outbound | Timeout depends on the size and type of HTML content.
Callbacks and outbound invocations | FW1 | 443 | HTTPS / Load Balancer | Outbound | Timeout depends on the size and type of HTML content.
Load balancer to Oracle HTTP Server | n/a | 7777 | HTTP | n/a | n/a
Session replication within a WebLogic Server cluster | n/a | n/a | n/a | n/a | By default, this communication uses the same port as the server's listen address.
Database access | FW2 | 1521 | SQL*Net | Both | Timeout depends on database content and on the type of process model used for SOA.
Coherence for deployment | n/a | 9991 | n/a | n/a | n/a
Oracle Notification Server (ONS) | FW2 | 6200 | ONS | Both | Required for Gridlink. An ONS server runs on each database server.
Elasticsearch | FW1 | 31920 | HTTP | Outbound | Used for sending the log files to Elasticsearch (Optional).
Elasticsearch | FW12 | 31920 | HTTP | Inbound | Used for sending the log files to Elasticsearch (Optional).

Table 9-4 Firewall Ports Specific to the Oracle Identity and Access Management Enterprise Deployment

Type | Firewall | Port and Port Range | Protocol / Application | Inbound / Outbound | Other Considerations and Timeout Guidelines
Webtier access to Oracle WebLogic Administration Server (IAMAccessDomain) | FW1 | 30701 | HTTP / Oracle HTTP Server and Administration Server | Inbound | N/A
Webtier access to Oracle WebLogic Administration Server (IAMGovernanceDomain) | FW1 | 30711 | HTTP / Oracle HTTP Server and Administration Server | Inbound | N/A
Enterprise Manager Agent - web tier to Enterprise Manager | FW1 | 5160 | HTTP / Enterprise Manager Agent and Enterprise Manager | Both | N/A
Oracle HTTP Server to Ingress | FW1 | 30777 | HTTP | Inbound | N/A
Oracle HTTP Server to oam_server | FW1 | 30410 | HTTP / Oracle HTTP Server to WebLogic Server | Inbound | Timeout depends on the mod_weblogic parameters used.
Oracle HTTP Server to oim_server | FW1 | 30140 | HTTP / Oracle HTTP Server to WebLogic Server | Inbound | Timeout depends on the mod_weblogic parameters used.
Oracle HTTP Server to soa_server | FW1 | 30801 | HTTP / Oracle HTTP Server to WebLogic Server | Both | Timeout depends on the mod_weblogic parameters used.
Oracle HTTP Server to oam_policy_mgr | FW1 | 30510 | HTTP / Oracle HTTP Server to WebLogic Server | Both | Timeout depends on the mod_weblogic parameters used.
Oracle HTTP Server to OIRI UI | FW1 | 30305 | HTTP / Oracle HTTP Server to OIRI UI microservice | Both | N/A
Oracle HTTP Server to OIRI | FW1 | 30306 | HTTP / Oracle HTTP Server to OIRI microservice | Both | N/A
Oracle Coherence port | FW1 | 8000–8088 | TCMP | Both | N/A
OUD port | FW1 | 31389 | LDAP | Inbound | Ideally, these connections should be configured not to time out. Note: Required only if you need to access OUD from outside the Kubernetes cluster.
OUD SSL port | FW1 | 31636 | LDAPS | Inbound | Ideally, these connections should be configured not to time out. Note: Required only if you need to access OUD from outside the Kubernetes cluster.
Kubernetes cluster to database listener | FW2 | 1521 | SQL*Net | Both | Timeout depends on all database content and on the type of process model used for Oracle Identity and Access Management.
Oracle Notification Server (ONS) | FW2 | 6200 | ONS | Both | Required for Gridlink. An ONS server runs on each database server.
Oracle HTTP Server to OAA Admin | FW1 | 32721 | HTTP / Oracle HTTP Server to OAA Admin UI microservice | Both | N/A
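
Once the firewall rules are in place, you can confirm that the required ports are reachable from the relevant tier. The following loop is a minimal sketch that uses the bash /dev/tcp facility, so it needs no extra tools; the host and port pairs are placeholders loosely based on the tables above and must be adjusted for your topology.

# Check a few representative host:port combinations through the firewalls.
for target in WEBHOST1.example.com:7777 dbhost.example.com:1521 k8worker1.example.com:30777; do
  host=${target%:*}; port=${target#*:}
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${target} reachable"
  else
    echo "${target} NOT reachable"
  fi
done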

Preparing a Kubernetes Cluster for the Enterprise Deployment

If you want to use an on-premises Kubernetes cluster, you have to create the cluster yourself. Many flavors of Kubernetes are available, but Oracle recommends the use of Oracle Cloud Native Environment.

While the instructions in this guide should work with any Kubernetes deployment, they have been validated on an Oracle Cloud Native Environment cluster.

Host Requirements for the Kubernetes Cluster

A Kubernetes cluster consists of two types of hosts:
  • Control plane hosts: The control plane hosts are responsible for managing the Kubernetes cluster. Although you can use a single host for the control plane, Oracle strongly recommends that you create a highly available control plane consisting of a minimum of three control plane hosts, ideally five.
  • Worker hosts: The worker hosts run the Kubernetes containers. Each worker node must have sufficient capacity to run your deployment, and you need a minimum of two worker nodes to ensure high availability. The exact number of worker nodes depends on the application components you plan to deploy and the capacity of each node: the more applications you deploy, the more worker nodes, or the larger the nodes, you need. After the cluster is created, you can confirm the registered nodes with the check shown after this list.
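
A quick way to confirm that the control plane and worker hosts have registered with the cluster and are healthy is to list the nodes; for example:

# List all cluster nodes with their roles, status, and versions.
kubectl get nodes -o wide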

Deployment Options

You can install Kubernetes on any of the following server types:

  • Bare-metal server
  • Virtual Machine instance running on on-premises hardware or in the cloud
  • Cloud based bare-metal instance
  • Cloud based Infrastructure virtual instance
  • Oracle Private Cloud Appliance virtual instance
  • Oracle Private Cloud at Customer virtual instance

Hardware Requirements

The following are the minimum hardware requirements for your deployment (based on an Oracle Cloud Native Environment deployment):

  • Kubernetes Control Plane Node Hardware - A minimum Kubernetes control plane node configuration is:
    • 4 CPU cores (Intel VT-capable CPU)
    • 16 GB RAM
    • 1 GB Ethernet NIC
    • XFS file system (the default file system for Oracle Linux)
    • 40 GB hard disk space in the /var directory
  • Kubernetes Worker Node Hardware - A minimum Kubernetes worker node configuration is:
    • 1 CPU core (Intel VT-capable CPU)
    • 8 GB RAM
    • 1 GB Ethernet NIC
    • XFS file system (the default file system for Oracle Linux)
    • 15 GB hard disk space in the /var directory
  • Operator Node Hardware - A minimum operator node configuration is:
    • 1 CPU core (Intel VT-capable CPU)
    • 8 GB RAM
    • 1 GB Ethernet NIC
    • 15 GB hard disk space in the /var directory

Note:

These are the minimum requirements to run a Kubernetes cluster. Your application will most likely need far more resources than these minimum values.
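
To confirm that a host meets these minimums before you add it to the cluster, you can run a quick check such as the following sketch. The thresholds shown in the comments are the worker-node minimums listed above; adjust them for control plane or operator nodes and for your actual sizing.

# Report CPU count, memory, and free space in /var against the worker-node minimums.
echo "CPU cores : $(nproc)   (minimum 1)"
free -g | awk '/^Mem:/ {print "RAM (GB)  : " $2 "   (minimum 8)"}'
df -h /var | awk 'NR==2 {print "/var free : " $4 "  (minimum 15 GB)"}'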

Creating a Kubernetes Cluster

The instructions for deploying the Kubernetes cluster are outside the scope of this guide. Each Kubernetes deployment will have its own deployment mechanisms. To install Oracle Cloud Native Environment, see Installing Oracle Cloud Native Environment.

Enabling the Firewall Rule for Oracle Cloud Native Environment

If you are deploying a Kubernetes cluster using Oracle Cloud Native Environment, you should ensure that firewall masquerading is enabled on each Kubernetes worker node. To enable masquerading, use the following command on each worker node:

sudo firewall-cmd --add-masquerade --permanent
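
After adding the rule, reload the firewall so the permanent rule takes effect and confirm that masquerading is active; for example:

# Reload the firewall and verify the masquerade setting.
sudo firewall-cmd --reload
sudo firewall-cmd --query-masquerade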

Preparing Storage for an Enterprise Deployment

Before starting an enterprise deployment, it is important to understand the storage requirements. You should obtain and configure the storage.

For instructions to configure storage, see Storage Requirements for an Enterprise Deployment.

Summary of the Shared Storage Volumes in an Enterprise Deployment

It is important to understand the shared volumes and their purpose in a typical Oracle Fusion Middleware enterprise deployment.

To understand the storage requirements for an Oracle Identity and Access Management deployment, see Storage Requirements for an Enterprise Deployment.

For information about recommendations for storage, see Storage Requirements for an Enterprise Deployment.

Summary of Local Storage Used in an Enterprise Deployment

To understand the local storage requirements for an enterprise deployment, see Summary of Local Storage Used in an Enterprise Deployment.

Local Storage Requirements

Each worker node needs to have sufficient storage to hold not only the containers but also the container images.

Preparing the Kubernetes Host Computers for an Enterprise Deployment

Preparing the host computers largely depends on the flavor of Kubernetes you are deploying. See the appropriate Kubernetes installation instructions.

In addition, you should perform the following actions:

Verifying the Minimum Hardware Requirements for Each Host

After you procure the required hardware for the enterprise deployment, it is important to ensure that each host computer meets the minimum system requirements of the Kubernetes cluster. See Hardware Requirements.

Ensure that you have sufficient local disk storage and shared storage configured as described in Storage Requirements for an Enterprise Deployment.

Allow sufficient swap and temporary space; specifically:

  • Swap Space – Most Kubernetes deployments recommend disabling swap. See the Kubernetes documentation for your version specific recommendations.

  • Temporary Space – There must be a minimum of 500 MB of free space in the /tmp directory.
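
You can confirm both settings quickly on each host; the following commands are a simple check (swapon prints nothing when swap is disabled):

# Verify that swap is disabled and that /tmp has sufficient free space.
swapon --show
df -h /tmp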

Verifying Linux Operating System Requirements

You can review the typical Linux operating system settings for an enterprise deployment.

To ensure the host computers meet the minimum operating system requirements of your Kubernetes cluster, ensure that you have installed a certified operating system and that you have applied all the necessary patches for the operating system.

In addition, review the following sections for typical Linux operating system settings for an enterprise deployment.

Setting Linux Kernel Parameters

The kernel of the Kubernetes worker hosts must be large enough to support the deployment of all of the containers you want to run on it. The values shown in Table 9-5 are the absolute minimum values to deploy Oracle Identity and Access Management. Oracle recommends that you tune these values to optimize the performance of the system. See your operating system documentation for more information about tuning kernel parameters.

The values in the following table are the current Linux recommendations. For the latest recommendations for Linux and other operating systems, see Oracle Fusion Middleware System Requirements and Specifications.

Table 9-5 UNIX Kernel Parameters

Parameter     | Value
kernel.sem    | 256 32000 100 142
kernel.shmmax | 4294967295

To set these parameters:

  1. Sign in as root and add or amend the entries in the /etc/sysctl.conf file.
  2. Save the file.
  3. Activate the changes by entering the following command:
    /sbin/sysctl -p
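
For example, using the minimum values from Table 9-5, the entries added to /etc/sysctl.conf and the verification afterward would look similar to the following sketch; tune the values for your workload.

# Append the minimum kernel parameters to /etc/sysctl.conf as the root user.
echo "kernel.sem = 256 32000 100 142" >> /etc/sysctl.conf
echo "kernel.shmmax = 4294967295" >> /etc/sysctl.conf

# Activate and confirm the new values.
/sbin/sysctl -p
/sbin/sysctl kernel.sem kernel.shmmax
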
Setting the Open File Limit and Number of Processes Settings on UNIX Systems

On UNIX operating systems, the Open File Limit is an important system setting, which can affect the overall performance of the software running on the host computer.

For guidance on setting the Open File Limit for an Oracle Fusion Middleware enterprise deployment, see Host Computer Hardware Requirements.

Note:

The following examples are for Linux operating systems. Consult your operating system documentation to determine the commands to be used on your system.

For more information, see the following sections.

Viewing the Number of Currently Open Files

You can see how many files are open with the following command:

/usr/sbin/lsof | wc -l

To check your open file limits, use the following commands.

C shell:

limit descriptors

Bash:

ulimit -n
Setting the Operating System Open File and Processes Limits

To change the Open File Limit values on Oracle Enterprise Linux 6 or greater:

  1. Sign in as root user and edit the following file:

    /etc/security/limits.d/*-nproc.conf

    For example:

    /etc/security/limits.d/20-nproc.conf

    Note:

    The number can vary from host to host.
  2. Add the following lines to the file. (The values shown here are for example only):
    * soft  nofile  4096
    * hard  nofile  65536
    * soft  nproc   2047
    * hard  nproc   16384
    

    The nofile values represent the open file limit; the nproc values represent the limit on the number of processes.

  3. Save the changes, and close the file.
  4. Log out and log back in to the host computer for the new limits to take effect.
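
After you log back in, you can confirm that the new limits are in effect; for example:

# Display the soft and hard open-file limits, and the soft and hard process limits.
ulimit -Sn; ulimit -Hn
ulimit -Su; ulimit -Hu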

Creating and Mounting the Directories for an Enterprise Deployment

Kubernetes uses storage differently from traditional UNIX deployments. In a Kubernetes deployment, container data is stored in a persistent volume. While a persistent volume can be a local file system, Oracle recommends that you create the persistent volume (PV) on shared storage that is available to each Kubernetes node. This makes it possible for a Kubernetes container to be started on any of the Kubernetes worker nodes.

  • This guide uses NFS persistent volumes. With an NFS persistent volume, you create a share on the shared storage, and each Kubernetes container then mounts that share. This ensures that the robustness of enterprise storage is used for critical data (a sketch of creating such a persistent volume with kubectl appears after this list). For details, see Storage Requirements for an Enterprise Deployment.

    An alternative approach is to mount the NFS volumes on each worker node. The downside of this approach is that every Kubernetes worker node where a container can start must have the file system mounted; in a large cluster, this significantly increases the management overhead. However, it does allow you to use redundant NFS volumes to protect against possible corruption, although you are then responsible for creating your own processes to keep the redundant NFS mounts in sync. Mounting NFS directly in the pods removes the dependency on the worker nodes: a pod can run on any cluster node and mount the NFS share as needed, which simplifies management, but you rely on the built-in redundancy of your NFS storage rather than creating the redundancy manually.

  • This guide assumes that the Oracle Web tier software is installed on a local disk or a privately attached shared storage.

    The web tier installation is typically performed on storage local to the WEBHOST nodes. When you use shared storage, you can install the Oracle Web tier binaries (and create the Oracle HTTP Server instances) on a shared disk. However, if you do so, then the shared disk must be separate from the shared disk used for the application tier, and you must consider the appropriate security restrictions for access to the storage device across tiers.

    As with the application tier servers (OAMHOST1 and OAMHOST2), use the same directory path on both computers.

    For example:

    /u02/private/oracle/products/web
  • During the deployment, Oracle recommends that you also mount the shares used by the containers on the deployment host. This makes it easier to copy files to the persistent volumes even when containers have not started, and it simplifies cleaning up failed installations and debugging if needed; many container images do not contain utilities such as Vim for viewing files.
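
The following is a minimal sketch of an NFS-backed persistent volume created with kubectl. The PV name, capacity, storage class, NFS server address, and export path are placeholders and must match the shares you create on your storage and the values expected by your deployment.

# Create an NFS persistent volume (all names, sizes, and paths below are examples).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oigpv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: oig-storage-class
  nfs:
    server: <NFS_SERVER_IP>
    path: /export/IAMPVS/oigpv
EOF
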
Mounting File Systems on Hosts

For OHS hosts, place the entries in /etc/fstab with the following mount options:

Sample OHS /etc/fstab entry:


<IP>:/export/IAMBINARIES/webbinaries1 /u02/private/oracle/products nfs auto,rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768
<IP>:/export/IAMCONFIG/webconfig1  /u02/private/oracle/config nfs auto,rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768

Before you can use the file system with the containers, ensure that you can write to the file system. Mount the file system to the bastion node and write to it. If you are unable to write, use the chmod command to enable writing to the file system.

For example:

sudo mkdir -p /u02/private/oracle/products /u02/private/oracle/config
sudo mount -a
sudo chmod -R 777 /u02/private/oracle
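
A simple way to confirm that the mounts are writable is to create and remove a test file on each one; for example:

# Verify write access on both mounted file systems.
touch /u02/private/oracle/products/.writetest /u02/private/oracle/config/.writetest
rm -f /u02/private/oracle/products/.writetest /u02/private/oracle/config/.writetest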

Table 9-6 Summary of Hosts and the File Systems to be Mounted

Mount Host           | File Systems          | Comments
webhost1             | webbinaries1          | Mounted as /u02/private/oracle/products.
webhost2             | webbinaries2          | Mounted as /u02/private/oracle/products.
webhost1             | webconfig1            | Mounted as /u02/private/oracle/config.
webhost2             | webconfig2            | Mounted as /u02/private/oracle/config.
All Kubernetes nodes | images, nfs_volumes*  | Used as a staging directory to temporarily store container images. Mounted as /images.
bastion node         | oudconfigpv           | Mounted as /nfs_volumes/oudconfigpv.
bastion node         | oudpv                 | Mounted as /nfs_volumes/oudpv.
bastion node         | oudsmpv               | Mounted as /nfs_volumes/oudsmpv.
bastion node         | oigpv                 | Mounted as /nfs_volumes/oigpv.
bastion node         | oampv                 | Mounted as /nfs_volumes/oampv.
bastion node         | oiripv                | Mounted as /p_volumes/oiripv.
bastion node         | dingpv                | Mounted as /p_volume/idingpv.
bastion node         | docker_repo*          | Mounted as /docker_repo.

On the bastion node, you can optionally mount all PVs. This option lets you delete deployments during the configuration phase, if necessary. Remove these mounts after the system is up and running.

Note:

* Alternatively, for these file systems, you can use block volumes.

Enabling Unicode Support

Oracle recommends that you enable Unicode support in your operating system so that it can process characters in Unicode.

Your operating system configuration can influence the behavior of characters supported by Oracle Fusion Middleware products.

On UNIX operating systems, Oracle highly recommends that you enable Unicode support by setting the LANG and LC_ALL environment variables to a locale with the UTF-8 character set. This enables the operating system to process any character in Unicode. Oracle Identity and Access Management technologies, for example, are based on Unicode.

If the operating system is configured to use a non-UTF-8 encoding, Oracle Identity and Access Management components may function in an unexpected way. For example, a non-ASCII file name might make the file inaccessible and cause an error. Oracle does not support problems caused by operating system constraints.

Setting the DNS Settings

After you have created the Kubernetes cluster, you have to ensure that the pods are capable of resolving the application URLs and hosts used in your deployment. Kubernetes uses 'coreDNS' to resolve host names. Out-of-the-box, this DNS service is used for internal Kubernetes pod name resolution. However, it can be extended to include your custom entries too.

There are two ways of including custom entries:
  • By adding the individual host entries to coreDNS
  • By adding the corporate DNS server to coreDNS for the application domain
Adding Individual Host Entries to CoreDNS
To add individual host entries to CoreDNS:
  1. Edit the coreDNS configmap using the command:
    kubectl edit configmap/coredns -n kube-system
  2. Add a hosts section to the file including one entry for each of the hosts you want to define. For example:
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health {
               lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               fallthrough in-addr.arpa ip6.arpa
               ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
            hosts custom.hosts example.com {
                 1.1.1.1 login.example.com
                 1.1.1.2 prov.example.com
                 1.1.1.3 iadadmin.example.com
                 1.1.1.4 igdadmin.example.com
                 1.1.1.5 igdinternal.example.com
                 fallthrough
               }
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: "2021-08-13T13:01:56Z"
      name: coredns
      namespace: kube-system
      resourceVersion: "11617043"
      uid: 2facd555-692d-4dfd-80be-5f9e608b0d71
    
  3. Save the file.
  4. Restart coreDNS using the command:
    kubectl rollout restart -n kube-system deploy coredns 
    To ensure that the coreDNS pods restart without any issue, use the following command:
    kubectl get pods -n kube-system
    If any errors occur, use the following command to view them:
    kubectl logs -n kube-system coredns-<ID>

    Correct the errors by editing the configmap again.

Adding the Corporate DNS Server to CoreDNS for the Application Domain
To ensure that CoreDNS forwards all entries to the corporate DNS server for the application domain:
  1. Edit the coreDNS configmap using the command:
    kubectl edit configmap/coredns -n kube-system
  2. Add a zone block for your application domain that forwards requests to your corporate DNS server. For example:
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health {
               lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               fallthrough in-addr.arpa ip6.arpa
               ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
        example.com:53 {
            errors
            cache 30
            forward . CORPORATE_DNS_IP_ADDRESS
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: "2021-08-13T13:01:56Z"
      name: coredns
      namespace: kube-system
      resourceVersion: "11587286"
      uid: 2facd555-692d-4dfd-80be-5f9e608b0d71
  3. Save the file.
  4. Restart coreDNS using the command:
    kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns
    Ensure that the coredns pods restart without any issue, using the command:
    kubectl get pods -n kube-system
    If any errors occur, use the following command to view them:
    kubectl logs -n kube-system coredns-<ID>

    Correct the errors by editing the configmap again.

Validating the DNS Resolution

Most containers do not include built-in networking tools that enable you to check whether the configuration changes you made are correct. The easiest way to validate the changes is to use a lightweight container image that includes basic network tools, for example, Alpine.

Alpine provides a slimmed-down shell environment. You can start an Alpine container by using a command such as the following:
kubectl run -i --tty --rm debug --image=docker.io/library/alpine:latest --restart=Never -- sh
This command provides access to nslookup, which you can use to check host resolution. For example:
nslookup login.example.com

Configuring a Host to Use an NTP (time) Server

All hosts in the deployment must have the same time. The best way to achieve this is to use an NTP server. To configure a host to use an NTP server:

  1. Determine the name of the NTP server(s) you wish to use. For security reasons, ensure that these are inside your organization.
  2. Log in to the host as the root user.
  3. Edit the file /etc/ntp.conf to include a list of the time servers. After editing, the file appears as follows:
    server ntphost1.example.com
    server ntphost2.example.com
    
  4. Run the following commands to synchronize the system clock to the NTP servers:
    /usr/sbin/ntpdate ntphost1.example.com
    /usr/sbin/ntpdate ntphost2.example.com
    
    
  5. Start the NTP client using the following command:
    service ntpd start
    
  6. Validate that the time is set correctly using the date command.
  7. To make sure that the server always uses the NTP server to synchronize the time, set the NTP client to start on reboot by using the following command:
    chkconfig ntpd on
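
You can confirm that the client is synchronizing with your NTP servers by querying its peers; for example:

# List the NTP peers and their synchronization status.
/usr/sbin/ntpq -p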

Preparing the File System for an Enterprise Deployment

Preparing the file system for an enterprise deployment involves understanding the requirements for local and shared storage, as well as the terminology that is used to reference important directories and file locations during the installation and configuration of the enterprise topology.

Overview of Preparing the File System

It is important to set up your storage in a way that makes the enterprise deployment easy to understand, configure, and manage.

To fully understand the storage requirements for your enterprise deployment, see Storage Requirements for an Enterprise Deployment.

In addition to mounting these file systems inside the containers, you have to mount the following volumes to the administration/deployment host. Your administration host is where you will deploy the software.

Table 9-7 Volume and Mount Point for the Administration Host

Shared Volume Name | Mount Point
oudconfigpv | /nfs_volumes/oudconfigpv
oudpv | /nfs_volumes/oudpv
oudsmpv | /nfs_volumes/oudsmpv
oigpv | /nfs_volumes/oigpv
oampv | /nfs_volumes/oampv
oiripv | /idmpvs/oiripv
dingpv | /idmpvs/dingpv
oaacredpv | /nfs_volume/oaacredpv
oaaconfigpv | /nfs_volume/oaaconfigpv
oaalogpv | /nfs_volume/oaalogpv
oaavaultpv | /nfs_volume/oaavaultpv (required only when using a file-based vault)
docker_repo* | /docker_repo

Preparing a Disaster Recovery Environment

A disaster recovery environment is a replica of your primary environment located in a different region from the primary region. This environment is a standby environment that you switch over to in the event of the failure of your primary environment.

The standby environment will be a separate cluster, ideally in a different data center. If the cluster is dedicated to the application, the second cluster should be a mirror of the primary cluster with the same number and specifications of worker nodes. If your cluster is a multi-purpose cluster that is used by different applications, ensure sufficient spare capacity in the standby site to run the full application workload of the primary cluster.

Each Kubernetes cluster must run the same operating system version and Kubernetes major release.

Your network must be configured so that:

  • The primary and standby database networks communicate with each other to facilitate the creation of a Data Guard database.
  • The primary and standby file system networks communicate with each other to facilitate the replication of the file system data. If you run the rsync process inside the cluster to achieve the replication, then the primary Kubernetes worker network must be able to communicate with the Kubernetes worker network on the standby site.
  • A global load balancer will be used to direct the traffic between the primary and standby sites. This load balancer is often independent of the site-specific load balancers used for on-site communication.
  • The SSL certificates used in the load balancers must be the same in each load balancer. Clients should not be aware when the load balancer switches sites.