3 Deploying and Validating the Solution Components
This chapter provides information about deploying and validating the solution components.
Assumptions for Deploying the Solution
- The Solution Specialist can operate and run commands on, and support, the underlying cloud native competencies and technologies.
- The Solution Specialist can access, operate, and run commands across operating systems, such as Linux and Unix.
- The Solution Specialist is familiar with the different resources required for operations, maintenance, and escalations of the service.
- The cloud native environment infrastructure requirements of all applications in the solution are fulfilled before performing the procedures.
Deploying and validating the solution involves the following high-level tasks:
- Preparing the cluster (setting up the solution infrastructure). See Preparing the Cluster for more information.
- Deploying the individual applications that are part of the solution. See Deploying the Solution Components for more information.
- Validating the solution components. See Validating the Solution Components for more information.
Preparing the Cluster
This section provides information about deploying a comprehensive cloud infrastructure for the Digital Business Experience solution that involves setting up various foundational and service-specific components, enabling secure, scalable, and efficient operations. It also outlines the steps and objectives for deploying key infrastructure elements, including a Virtual Cloud Network (VCN), a Public Bastion for secure access, a Kubernetes Cluster using Oracle Kubernetes Engine (OKE), and an Oracle Base Database Service. Each component serves a distinct purpose and together they form a robust cloud environment suitable for a range of enterprise applications.
Deploy the following key infrastructure elements in the sequence provided below:
- VCN: Establishes a secure, isolated, and customizable virtual network, which serves as the foundation for deploying various cloud resources. The VCN provides full control over the network architecture, including IP address ranges, subnets, route tables, and security lists. See Deploying the Virtual Cloud Network for more information.
- Public Bastion: Allows secure access to resources within the VCN without exposing those resources to the public internet. The public bastion acts as a gateway for administrators and authorized users to manage resources in private subnets, enhancing the security posture of the environment. See Deploying the Public Bastion for more information.
- OKE: Provides a scalable platform for deploying, managing, and automating the operations of containerized applications, ensuring high availability and fault tolerance. See Deploying an Oracle Kubernetes Engine Environment for more information.
- Oracle Base Database Service: Provides reliable, high-performance data management, with configurations optimized for the specific needs of the application workloads. See Deploying the Oracle Base Database Service for more information.
- Kubernetes cluster using Oracle Cloud Native Environment (OCNE): The Kubernetes cluster is set up with the OCNE Command Line Interface (CLI) using the libvirt provider. It is designed for running cloud native applications at scale, providing a standardized environment for managing microservices, containers, and workloads across various cloud providers. Its benefits include improved agility, scalability, and cost efficiency by automating infrastructure management and simplifying the deployment and orchestration of containerized applications. See Deploying a Kubernetes Cluster using Oracle Cloud Native Environment for more information.
- Cloud Native Computing Foundation (CNCF) environment: CNCF is an open source software foundation that promotes the adoption of cloud native computing. It is a subsidiary of the Linux Foundation that aims to establish a vendor-agnostic community of developers, end users, and IT technology and service providers collaborating on open source projects. CNCF hosts and supports projects, such as Kubernetes, Prometheus, and Envoy, which are essential components of many modern cloud native architectures. See Deploying a Cloud Native Computing Foundation Environment for more information.
The following sections provide detailed instructions for deploying the infrastructure elements mentioned above.
Deploying the Virtual Cloud Network
This section provides detailed instructions about the following:
- Creating the Virtual Cloud Network (VCN): Helps you define the IP address space and create the VCN.
- Creating Subnets: Helps you set up public and private subnets within the VCN, ensuring they are logically separated for different types of resources.
- Configuring Route Tables and Security Lists: Helps you set up route tables for network traffic management and security lists for controlling ingress and egress traffic.
Creating a VCN
To create a VCN within Oracle Cloud Infrastructure (OCI):
- Log in to Oracle Cloud Infrastructure using your credentials.
The OCI home page opens.
- From the Navigation pane, select Networking, and then click the Virtual cloud networks link.
The Create a Virtual Cloud Network page opens. Provide the following details:
- In the Name text box, enter the desired VCN name. For example, dbe-vcn.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- In the CIDR Block field, enter the CIDR block in the a.b.c.d/n format. For example, 10.0.0.0/16. Ensure that the CIDR block is large enough to contain all the subnets that you plan to create in the VCN.
- Click Create VCN. The VCN is created.
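The console steps above can also be scripted with the OCI CLI. The following is a minimal sketch, not part of the official procedure: the compartment OCID is a placeholder, and the flag names should be confirmed against `oci network vcn create --help` for your CLI version.

```shell
# Hedged sketch: create the example VCN with the OCI CLI.
# COMPARTMENT_OCID is a placeholder; verify flag names with
# `oci network vcn create --help` before running.
COMPARTMENT_OCID="ocid1.compartment.oc1..exampleuniqueID"

CMD="oci network vcn create \
  --compartment-id $COMPARTMENT_OCID \
  --display-name dbe-vcn \
  --cidr-block 10.0.0.0/16"

# Print the command for review instead of executing it blindly.
echo "$CMD"
```

Printing the command first lets you review the values before running it in an environment where the CLI is configured.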
Creating a Service Gateway, NAT Gateway, and Internet Gateway in the VCN
To create a Service Gateway:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, and then select your VCN. On the Resources pane, select Service Gateways, and then click the Create Service Gateway button.
- In the Create Service Gateway page, provide the following details:
- In the Name text box, enter the service gateway name. For example, serviceGW.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- In the Services text box, enter the required services. For example, All IAD Services in Oracle Services Network.
- Click Create Service Gateway.
To create a NAT Gateway:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, and then select your VCN. On the Resources pane, select NAT Gateways, and then click the Create NAT Gateway button.
- In the Create NAT Gateway page, provide the following details:
- In the Name text box, enter the NAT gateway name. For example, NATGW.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- Select the Ephemeral Public IP Address option.
- Click Create NAT Gateway.
To create an Internet Gateway:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, and then select your VCN. On the Resources pane, select Internet Gateways, and then click the Create Internet Gateway button.
- In the Create Internet Gateway page, provide the following details:
- In the Name text box, enter the internet gateway name. For example, InternetGW.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- Click Create Internet Gateway.
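As with the VCN, the gateways can be created from the CLI. The sketch below covers the internet and NAT gateways with placeholder OCIDs; the service gateway is similar but additionally needs a `--services` JSON argument, so confirm all flags against the CLI help for your version.

```shell
# Hedged sketch: create the internet and NAT gateways from the CLI.
# The OCIDs are placeholders; verify flags with
# `oci network internet-gateway create --help` and
# `oci network nat-gateway create --help`.
COMPARTMENT_OCID="ocid1.compartment.oc1..exampleuniqueID"
VCN_OCID="ocid1.vcn.oc1..exampleuniqueID"

IGW_CMD="oci network internet-gateway create \
  --compartment-id $COMPARTMENT_OCID --vcn-id $VCN_OCID \
  --display-name InternetGW --is-enabled true"

NAT_CMD="oci network nat-gateway create \
  --compartment-id $COMPARTMENT_OCID --vcn-id $VCN_OCID \
  --display-name NATGW"

# Print the commands for review before running them.
printf '%s\n' "$IGW_CMD" "$NAT_CMD"
```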
Creating Public and Private Route Tables in the VCN
To create a public route table:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, and then select your VCN.
- On the Resources pane, select Route Tables, and then click Create Route Table.
- In the Create Route Table page, provide the following details:
- In the Name text box, enter the public route table name. For example, rt_publicsubnet.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- Click Create Route Table.
- To attach an internet gateway to the created public route table:
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select your public route table, and then click the Add Route Rules button.
The Add Route Rules page opens.
- From the Target Type drop-down list, select the Internet Gateway option.
The Destination CIDR Block and Target Internet Gateway in Compartment fields are auto-populated.
- Click Add Route Rules.
To create a private route table:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select Route Tables, and then click the Create Route Table button.
- In the Create Route Table page, provide the following details:
- In the Name text box, enter the private route table name. For example, rt_privatesubnet.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- Click Create Route Table.
- To attach a NAT gateway to the created private route table:
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select your private route table, and then click the Add Route Rules button.
The Add Route Rules page opens.
- From the Target Type drop-down list, select the NAT Gateway option.
The Destination CIDR Block and Target NAT Gateway in Compartment fields are auto-populated.
- Click Add Route Rules.
- To attach a service gateway to the created private route table:
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select your private route table, and then click the Add Route Rules button.
The Add Route Rules page opens.
- From the Target Type drop-down list, select the Service Gateway option.
The Destination Service and Target Service Gateway in Compartment fields are auto-populated.
- Click Add Route Rules.
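A route table and its rules can also be created in one CLI call by passing the rules as JSON. The sketch below builds the private route table's default route through the NAT gateway; the OCIDs are placeholders, and the JSON keys mirror the OCI RouteRule schema (destination, destinationType, networkEntityId), which you should check against the current API reference.

```shell
# Hedged sketch: private route table with a default route via the NAT gateway.
# The OCIDs are placeholders; JSON keys follow the OCI RouteRule schema.
NAT_GW_OCID="ocid1.natgateway.oc1..exampleuniqueID"

cat > route_rules.json <<EOF
[
  {
    "destination": "0.0.0.0/0",
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "$NAT_GW_OCID"
  }
]
EOF

# Print the command for review before running it.
echo "oci network route-table create --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid> --display-name rt_privatesubnet \
  --route-rules file://route_rules.json"
```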
Creating Public, Private, and LB Subnets in the VCN
To create a public subnet:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select Subnets, and then click the Create Subnet button.
The Create Subnet page opens.
- In the Name text box, enter the public subnet name. For example, Public subnet.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- From the Subnet Type field, select Regional (Recommended).
- In the IPv4 CIDR Block text box, enter the CIDR block in the a.b.c.d/n format. For example, 10.0.0.0/28.
- From the Route Table Compartment drop-down list, select the created public route table.
- From the Subnet Access field, select Public Subnet.
- Under the DNS Resolution field, select the Use DNS hostnames in this Subnet check box.
The DNS Label and DNS Domain Name fields are auto-populated.
- Click Create Subnet.
To create a private subnet:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select Subnets, and then click the Create Subnet button.
The Create Subnet page opens.
- In the Name text box, enter the private subnet name. For example, Private subnet.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- From the Subnet Type field, select Regional (Recommended).
- In the IPv4 CIDR Block text box, enter the CIDR block in the a.b.c.d/n format. For example, 10.0.1.0/24. Ensure that the subnet does not overlap with the public subnet.
- From the Route Table Compartment drop-down list, select the created private route table.
- From the Subnet Access field, select Private Subnet.
- Under the DNS Resolution field, select the Use DNS hostnames in this Subnet check box.
The DNS Label and DNS Domain Name fields are auto-populated.
- Click Create Subnet.
To create a private LB subnet:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, then from the Resources pane, select Subnets, and then click the Create Subnet button.
The Create Subnet page opens.
- In the Name text box, enter the private subnet name. For example, Private LB subnet.
- In the Create In Compartment text box, enter the compartment name. For example, demo.
- From the Subnet Type field, select Regional (Recommended).
- In the IPv4 CIDR Block text box, enter the CIDR block in the a.b.c.d/n format. For example, 10.0.2.0/28.
- From the Route Table Compartment drop-down list, select the created private route table.
- From the Subnet Access field, select Private Subnet.
- Under the DNS Resolution field, select the Use DNS hostnames in this Subnet check box.
The DNS Label and DNS Domain Name fields are auto-populated.
- Click Create Subnet.
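Because every subnet must fall inside the VCN CIDR block and subnets must not overlap one another, it is worth checking the address plan before creating anything. The following pure-bash sketch validates an example plan of the kind used in this chapter; the CIDR values are illustrative, so substitute your own.

```shell
#!/bin/bash
# Sanity-check a subnet plan: each subnet must be inside the VCN CIDR,
# and the subnets must not overlap. Pure bash, no external tools.

ip_to_int() {                       # dotted quad -> 32-bit integer
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_range() {                      # "a.b.c.d/n" -> "first last" as integers
  local ip=${1%/*} bits=${1#*/} base mask first last
  base=$(ip_to_int "$ip")
  mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  first=$(( base & mask ))
  last=$(( first | (~mask & 0xFFFFFFFF) ))
  echo "$first $last"
}

inside() {                          # is $1 entirely inside $2 ?
  local f1 l1 f2 l2
  read -r f1 l1 <<< "$(cidr_range "$1")"
  read -r f2 l2 <<< "$(cidr_range "$2")"
  (( f1 >= f2 && l1 <= l2 )) && echo yes || echo no
}

overlap() {                         # do $1 and $2 share any address ?
  local f1 l1 f2 l2
  read -r f1 l1 <<< "$(cidr_range "$1")"
  read -r f2 l2 <<< "$(cidr_range "$2")"
  (( f1 <= l2 && f2 <= l1 )) && echo yes || echo no
}

# Example plan (placeholders; use your own values).
VCN="10.0.0.0/16"
PUB="10.0.0.0/28"; PRIV="10.0.1.0/24"; LB="10.0.2.0/28"

for s in "$PUB" "$PRIV" "$LB"; do
  echo "$s inside $VCN: $(inside "$s" "$VCN")"
done
echo "public/private overlap: $(overlap "$PUB" "$PRIV")"
```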
Modifying the Security List in the VCN
You can add, modify, or terminate the ingress and egress rules of your VCN.
To modify the Security List of your VCN:
- Log in to OCI.
- From the Navigation pane, select Networking, then select Virtual cloud networks, then select your VCN, and then from the Resources pane, select Security List Details.
The Default Security List page of your VCN opens.
- To add an Ingress rule:
- From the Resources pane, select Ingress Rules.
- Click Add Ingress Rules.
- Provide the required details and then click Save.
- To add an Egress rule:
- From the Resources pane, select Egress Rules.
- Click Add Egress Rules.
- Provide the required details and then click Save.
- To modify the security list details, select any one rule from the ingress or egress rules list, and then click Edit.
- Modify the required details, and then click Save.
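Ingress and egress rules can also be managed with the OCI CLI by passing a JSON rule list. The sketch below builds a single ingress rule allowing SSH (TCP port 22) from anywhere; the security list OCID is a placeholder, and the JSON keys follow the OCI IngressSecurityRule schema, which should be verified against the current API reference before use.

```shell
# Hedged sketch: one ingress rule (SSH from anywhere) as JSON for the CLI.
# Keys follow the OCI IngressSecurityRule schema; protocol "6" is TCP.
cat > ingress_rules.json <<'EOF'
[
  {
    "protocol": "6",
    "source": "0.0.0.0/0",
    "tcpOptions": { "destinationPortRange": { "min": 22, "max": 22 } }
  }
]
EOF

# Print the update command for review (the OCID is a placeholder).
echo "oci network security-list update \
  --security-list-id <security-list-ocid> \
  --ingress-security-rules file://ingress_rules.json"
```

Note that `update` replaces the whole rule list, so include the existing rules you want to keep in the JSON file.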
Deploying the Public Bastion
This section provides detailed instructions about deploying the public bastion and verifying it.
To deploy the public bastion:
- Log in to OCI.
- From the Navigation pane, select Compute, then select Instances, and then click Create instance.
The Create compute instance page opens. Provide the following details:
- Name. For example, Public-bastion.
- Image and Shape
- VCN
- Boot volume
- SSH public keys
Note:
You must create your own public keys.
- Click Create. The public bastion is created and a public IP is generated.
Note:
Ensure that you are connected to the OCNA VPN before starting the following procedure.
To verify the public bastion:
- Open Git Bash on your local machine.
- Run the following command:
ssh -i /path/to/private-key opc@<bastion-public-ip>
Replace /path/to/private-key with the path to your SSH private key.
Replace <bastion-public-ip> with the bastion's public IP address.
If the correct values are provided in the above command, you get access to the public bastion.
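Rather than typing a fresh ssh command for every hop, the bastion can be captured once in an SSH client configuration. The fragment below is a sketch with placeholder addresses, host pattern, and key path; once appended to ~/.ssh/config, hosts on the private subnet are reached in a single step through the bastion using ProxyJump.

```shell
# Sketch: SSH config fragment routing private-subnet hosts through the bastion.
# The IP address, host pattern, and key path are placeholders.
cat > ssh_config_fragment <<'EOF'
Host dbe-bastion
    # Bastion public IP (placeholder)
    HostName 203.0.113.10
    User opc
    IdentityFile ~/.ssh/id_rsa

Host 10.0.1.*
    # Private-subnet hosts (placeholder pattern)
    User opc
    IdentityFile ~/.ssh/id_rsa
    ProxyJump dbe-bastion
EOF
echo "Append ssh_config_fragment to ~/.ssh/config, then run: ssh 10.0.1.5"
```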
Deploying the Oracle Base Database Service
This section provides detailed instructions about provisioning the Oracle Base Database Service, integrating it with the VCN, configuring automated backups, and monitoring for the database to ensure data integrity and performance.
To deploy the Oracle Base Database Service:
- Log in to OCI.
- From the Navigation pane, select Oracle Database, then select Oracle Base Database, then select DB Systems, and then click the Create DB System button.
A Create DB System page opens.
- Click DB system information, and provide the following details:
- From the Select a compartment drop-down list, select the appropriate compartment.
- In the Name your DB System text box, enter the desired name. For example, dbe-database.
- From the Select an availability domain list, select an appropriate domain.
- In the Configure shape pane, select the appropriate shape.
- From the Configure storage pane, select the appropriate storage management software type.
- In the Configure the DB System field, enter the host name.
- Click Save.
- Click Database information, and configure the administrator credentials as follows:
- In the Database name text box, enter the desired name. For example, DB0822.
- Click the Change database image button in the Database image field, browse the appropriate image, and upload the image.
- In the Create administrator credentials pane, provide the following details:
- In the Password text box, enter the desired password.
Note:
The Username field is auto-populated and read-only.
- In the Confirm Password text box, enter the password again.
- Select the Use the administrator password for the TDE wallet check box.
- In the Password text box, enter the desired password.
- From the Configure database backups pane, select the Enable automatic backups check box.
- Click Create DB System. The Oracle Base Database is created.
The Database system information page opens.
- From the Resources pane, click Nodes, and save the Private IP address.
Verifying and Accessing the Database
To verify and access the Oracle Base Database:
- Open the Command Line Interface (CLI).
- Log in to public bastion using SSH. See Deploying the Public Bastion for more information.
- Run the following command inside the public bastion:
$ ssh opc@Private_IP_Address
Replace Private_IP_Address with the Oracle Database private IP address generated after it was created.
- Run the following commands to verify the access to Oracle Database:
$ sudo su - oracle
$ sqlplus / as sysdba
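Put together, the verification session looks like the transcript below. This is a sketch only: the private IP placeholder stays as-is, and the sample query against v$database (a standard dynamic view readable from any SYSDBA session) is illustrative.

```shell
# Sketch of the full verification session; commands are shown, not executed.
SESSION=$(cat <<'EOF'
ssh opc@<Private_IP_Address>   # from inside the bastion
sudo su - oracle               # switch to the oracle OS user
sqlplus / as sysdba            # OS-authenticated SYSDBA session
SQL> SELECT name, open_mode FROM v$database;
SQL> EXIT
EOF
)
printf '%s\n' "$SESSION"
```

Note that the switch to the oracle OS user and the sqlplus login are two separate commands, run in that order.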
Creating a Pluggable Database within the Oracle Base Database
To create a pluggable database (PDB):
- Log in to OCI.
- From the Navigation pane, select Oracle Database, then select Oracle Base Database Service, then from the Compartment drop-down list, select your compartment, and then select your Oracle Database System.
- From the Resources pane, click Databases, and then select the existing database from the table.
- From the Resources pane, click Pluggable Databases, and then click the Create pluggable database button.
- Enter the values for the following fields:
- PDB name. Also select the Unlock the PDB admin account check box.
- PDB admin password
Note:
You must always refer to the OCI vault for the password.
- Confirm PDB admin password
- TDE wallet password of database
- Click Create pluggable database.
Deploying an Oracle Kubernetes Engine Environment
This section provides detailed instructions about creating a Kubernetes cluster using Oracle Kubernetes Engine (OKE), configuring the node pools, setting up the Kubernetes access, and verifying the access.
To deploy the Kubernetes cluster using OKE:
- Log in to OCI.
- From the Navigation pane, select Developer Services, and then select Kubernetes Cluster.
- Click the Create cluster button.
A Create cluster (custom) page opens. Provide the following details:
- In the Name text box, enter the desired cluster name. For example, dbe-cluster.
- In the Compartment text box, enter the name of the compartment. For example, demo.
- In the Kubernetes version text box, enter the Kubernetes version. For example, v1.30.1.
- From the Network type field, select your VCN.
- In the VCN in Compartment text box, enter your VCN name. For example, dbe-vcn.
- From the Kubernetes service LB subnets in compartment field, select the LB subnet created in your VCN.
- From the Kubernetes API endpoint subnet in compartment field, select the private subnet created in your VCN.
The node pool details will be auto-populated in the Node pools section.
- Click the Review link from the left pane to review the details.
- Click Create cluster.
The Kubernetes cluster is created.
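The cluster creation can also be scripted. The following is a hedged sketch using the OCI CLI's Container Engine commands: all OCIDs are placeholders, and the flag names should be confirmed against `oci ce cluster create --help` for your CLI version (node pools are created separately).

```shell
# Hedged sketch: create the OKE cluster from the CLI.
# All OCIDs are placeholders; confirm flags with `oci ce cluster create --help`.
CMD="oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --name dbe-cluster \
  --vcn-id ocid1.vcn.oc1..exampleuniqueID \
  --kubernetes-version v1.30.1"

# Print the command for review before running it.
echo "$CMD"
```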
Verifying and Accessing the Cluster
Before verifying access to the cluster, complete the following prerequisites:
- Download OCI CLI version 2.24.0 or later. See Installing the CLI for more information.
- Configure CLI on your local machine. See Configuring the CLI for more information.
- Navigate to the cluster details in OCI, click Access Cluster, and then select the Local Access option.
To verify the access to the cluster:
- Open OCI CLI on your local machine.
- Log in to the public bastion using SSH.
- Navigate to the cluster details in OCI, then click Access Cluster, and then copy the commands from the Access Your Cluster page.
- Run the commands in OCI CLI in the sequence mentioned in the Access Your Cluster page to download and configure the kubectl file.
- Run the following commands in OCI CLI to verify the cluster access:
$ kubectl get ns
$ kubectl get nodes
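The two verification commands above can be wrapped in a small readiness check. The helper below is a sketch: count_ready parses `kubectl get nodes` output and counts only nodes whose STATUS column is exactly "Ready", so a NotReady node does not inflate the count.

```shell
# Sketch: count nodes reported as Ready by `kubectl get nodes`.
# Skips the header row; matches the STATUS column exactly.
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Typical use on the cluster (not executed here):
#   kubectl get ns
#   kubectl get nodes | count_ready
```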
Deploying a Kubernetes Cluster using Oracle Cloud Native Environment
This section provides detailed instructions about deploying a Kubernetes cluster using Oracle Cloud Native Environment (OCNE).
To deploy a Kubernetes cluster using OCNE:
- Log in to OCI.
- From the Navigation pane, select Compute, and then select Instances.
- Click Create Instance.
The Create Instance page opens.
- From the Compartment drop-down list, select the appropriate compartment for your instance. For example, operations-staging.
- In the Instance Name field, enter the display name of your instance.
- Under the Image and Shape field, provide the following details:
- In the Image field, browse and upload an appropriate image for your instance. For example, Oracle Linux, Ubuntu, or a custom image.
- In the Shape field, select an appropriate shape. For example, VM.Standard.E2.1.Micro.
- Under the Configure Networking field, provide the following details:
- From the VCN drop-down list, select an existing VCN.
Note:
You can also create a new VCN by clicking the Create VCN button.
- From the Subnet drop-down list, select the subnet within the VCN selected in the above step.
- Select either Public IP address or Private IP address option for the instance.
- Click Create Instance. The instance is created.
- After the instance is created, see Oracle Cloud Native Environment Quick Start Guide and follow the instructions in the sequence mentioned in this guide to complete the deployment of the Kubernetes cluster using OCNE.
Deploying a Cloud Native Computing Foundation Environment
This section provides detailed instructions about deploying a Cloud Native Computing Foundation (CNCF) environment, which involves creating a control plane and creating a worker node.
Prerequisites for Deploying a CNCF Environment
- Ensure that the system has at least four CPUs and 64 GB of memory.
- You must be on an Oracle Linux 8 operating system.
- You must disable the swap option on all nodes.
- You must have a good internet connection to download the packages.
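The prerequisites above can be checked with a short preflight script before running either setup script. This is a sketch: it reads standard Linux interfaces (nproc and /proc/meminfo), only reports its findings, and leaves remediation to you.

```shell
# Sketch: preflight check for the CNCF prerequisites (CPUs, memory, swap).
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ { print $2 }' /proc/meminfo)

echo "CPUs: $cpus (need >= 4)"
echo "Memory: ${mem_gb} GB (need >= 64)"
if [ "$swap_kb" -eq 0 ]; then
  echo "Swap: disabled (ok)"
else
  echo "Swap: enabled (disable it on all nodes)"
fi
```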
Creating a Control Plane
To create a control plane:
- Log in to the virtual machine (VM) control plane and run the following script on Oracle Linux 8 to configure a Kubernetes control plane:
#!/bin/bash
################# Adapted for Oracle Linux 8 ################
# This script configures a Kubernetes control plane on Oracle Linux 8.
# Set Kubernetes and crictl versions
export VER="v1.31.1"
export K8S_VER="1.31"
export K8S_PKG="v1.31"
# Check if the script has been run before
FILE=/k8scp_run
if [ -f "$FILE" ]; then
    echo "WARNING!"
    echo "$FILE exists. Script has already been run on control plane."
    echo
    exit 1
else
    echo "$FILE does not exist. Running script."
fi
# Prevent script from running twice
sudo touch /k8scp_run
# Update the system
sudo dnf update -y
# Install necessary software
sudo dnf install -y curl vim git wget gnupg2 socat \
    yum-utils device-mapper-persistent-data lvm2
# Add the Kubernetes repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/${K8S_PKG}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/${K8S_PKG}/rpm/repodata/repomd.xml.key
#exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install Kubernetes components and lock their versions
sudo dnf install -y kubelet kubeadm kubectl
sudo dnf versionlock add kubelet kubeadm kubectl
# Ensure kubelet is running
sudo systemctl enable --now kubelet
# Disable swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
# Load necessary kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Update networking settings
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Install containerd
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
# Install and configure crictl
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/${VER}/crictl-${VER}-linux-amd64.tar.gz
tar zxvf crictl-${VER}-linux-amd64.tar.gz
sudo mv crictl /usr/local/bin
sudo crictl config --set \
    runtime-endpoint=unix:///run/containerd/containerd.sock \
    --set image-endpoint=unix:///run/containerd/containerd.sock
# Initialize the Kubernetes cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 | sudo tee /var/log/kubeinit.log
# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install the Cilium CLI and the Cilium CNI
export CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
export CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
cilium install --set ipam.mode=cluster-pool --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 --set ipam.operator.clusterPoolIPv4MaskSize=24
# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl mask --now firewalld
# Install Helm
wget https://get.helm.sh/helm-v3.16.2-linux-amd64.tar.gz
tar -xf helm-v3.16.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
# Output the state of the cluster
kubectl get node
# Grow the root filesystem to use the available storage
sudo /usr/libexec/oci-growfs -y
echo "Setup complete. Proceed to the next step."
- Check that the status of the control plane node is Ready. The following is a sample command and its output:
[opc@vanillak8sre-cp ~]$ kubectl get nodes
NAME              STATUS   ROLES           AGE   VERSION
vanillak8sre-cp   Ready    control-plane   78m   v1.31.4
- Run the following command on the control plane to get the kubeadm join command for the worker nodes:
kubeadm token create --print-join-command
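One way (a sketch, not the only way) to move the join command between nodes is to save it to a file on the control plane, copy it to each worker, and sanity-check the API server endpoint before running it. The join_endpoint helper below simply extracts the host:port field from a saved join command; the file path is a placeholder.

```shell
# Sketch: extract the API server host:port from a kubeadm join command,
# as a sanity check before running it on a worker.
join_endpoint() {
  awk '{ print $3 }' <<< "$1"
}

# On the control plane (sketch; /tmp/join.sh is a placeholder path):
#   kubeadm token create --print-join-command | tee /tmp/join.sh
# On each worker, after copying the file over:
#   sudo bash /tmp/join.sh
```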
Creating a Worker Node
To create a worker node:
- Log in to the VM worker node and run the following script on Oracle Linux 8 to set up a Kubernetes worker node:
Note:
You can also add multiple worker nodes based on your environment's requirements.
#!/bin/bash
################# Adapted for Oracle Linux 8 ################
# This script sets up a Kubernetes worker node on Oracle Linux 8.
export VER="v1.31.1"
export K8S_VER="1.31"
export K8S_PKG="v1.31"
# Check if the script has been run before. Exit if it has.
FILE=/k8scp_run
if [ -f "$FILE" ]; then
    echo "WARNING!"
    echo "$FILE exists. Script has already been run."
    echo "Do not run on the control plane. Run on a worker node."
    echo
    exit 1
else
    echo "$FILE does not exist. Running script."
fi
# Prevent script from being run multiple times
sudo touch /k8scp_run
# Update the system
sudo dnf update -y
# Install necessary software
sudo dnf install -y curl vim git wget gnupg2 socat \
    yum-utils device-mapper-persistent-data lvm2
# Add the Kubernetes repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/${K8S_PKG}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/${K8S_PKG}/rpm/repodata/repomd.xml.key
#exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install Kubernetes components and lock their versions
sudo dnf install -y kubelet kubeadm kubectl
sudo dnf versionlock add kubelet kubeadm kubectl
# Ensure kubelet is running
sudo systemctl enable --now kubelet
# Disable swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
# Load necessary kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Update networking settings
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Install containerd
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
# Install and configure crictl
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/${VER}/crictl-${VER}-linux-amd64.tar.gz
tar zxvf crictl-${VER}-linux-amd64.tar.gz
sudo mv crictl /usr/local/bin
sudo crictl config --set \
    runtime-endpoint=unix:///run/containerd/containerd.sock \
    --set image-endpoint=unix:///run/containerd/containerd.sock
# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl mask --now firewalld
# Grow the root filesystem to use the available storage
sudo /usr/libexec/oci-growfs -y
# Instructions for joining the worker node to the cluster
sleep 3
echo
echo
echo '***************************'
echo
echo "Continue to the next step"
echo
echo "Use sudo and copy the kubeadm join command from"
echo "the control plane node."
echo
echo '***************************'
echo
echo
- Run the kubeadm join command on the worker node and check the node status. The following is a sample kubeadm join command and the resulting node status:
[opc@abc8sre-cp ~]$ kubeadm join 10.0.5.248:6443 --token ukztj1.sgd165r61xknz2qs --discovery-token-ca-cert-hash sha256:7ab8dff2197aaa6c0701c35132006ab0934e95e8e259611b17d4710285dbfbc9
[opc@abc8sre-cp ~]$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
abc8sre-cp    Ready    control-plane   78m   v1.31.4
abc8sre-wn1   Ready    <none>          67m   v1.31.4
Deploying the Solution Components
This section provides detailed instructions for deploying Digital Business Experience solution components.
To deploy the Digital Business Experience solution:
- Request an environment for the solution. Contact Oracle Support for assistance.
- Prepare the cluster for the solution. See Preparing the Cluster for information about deploying the key cloud infrastructure elements.
- Deploy Launch Cloud Service and CX Industries Framework. See Oracle Communications Launch Implementation Guide for detailed instructions about deploying Launch and CX Industries Framework applications.
- Deploy BRM. See Oracle Communications Billing and Revenue Management Cloud Native Deployment Guide for detailed instructions about deploying BRM.
- Deploy PDC. See Oracle Communications Billing and Revenue Management PDC Installation Guide for detailed instructions about deploying PDC.
- Deploy ECE. See Oracle Communications Billing and Revenue Management ECE Installation Guide for detailed instructions about deploying ECE.
- Deploy OCOMC. See Oracle Communications Offline Mediation Controller Cloud Native Installation and Administration Guide for detailed instructions about deploying OCOMC.
- Deploy OAP. See Installing and Configuring Oracle Analytics Server for detailed instructions about deploying OAP.
- Deploy SCD. See Oracle Communications Service Catalog and Design Studio Installation Guide for detailed instructions about deploying SCD.
- Deploy OSM. See Oracle Communications Order and Service Management Cloud Native Deployment Guide for detailed instructions about deploying OSM.
- Deploy Order to Activate (O2A). See Order and Service Management Cartridges for Application Integration Architecture Cloud Native Deployment Guide for detailed instructions about deploying O2A.
- Deploy Siebel CRM. See Developing and Deploying Siebel CRM for detailed instructions about deploying Siebel CRM.
- Deploy ODI. See Oracle Fusion Middleware Installing and Configuring Oracle Data Integrator Guide for detailed instructions about deploying ODI.
- Deploy AIA. See Oracle Communications Application Integration Architecture Cloud Native Deployment Guide for detailed instructions about deploying AIA.
Validating the Solution Components
This section provides information about the validations required for all solution components. These validations are necessary for the smooth and unrestricted functioning of the solution.
To validate the deployment of the solution:
- Verify the password expiration of the cloud native applications: See Verifying the Password Expiration for the detailed procedure for verifying the password expiration of the cloud native applications.
- Validate the public certificates: See Validating the Public Certificates for the detailed procedure for validating the various public certificates.
- Validate the Launch and CXIF deployment: See Validating the Connection in Oracle Communications Launch Cloud Service Integration Guide and follow the procedure for validating the Launch and CXIF connection.
- Validate the SCD deployment: See Validating the Solution Designer Instance in Oracle Communications Service Catalog and Design Solution Designer Installation Guide and follow the procedure for validating the SCD deployment.
- Validate the AIA deployment: See Validating the AIA Cloud Native Deployment in Oracle Communications Application Integration Architecture Cloud Native Deployment Guide and follow the procedure for validating the AIA deployment.
Note:
If any of the above validations fails, stop the validation process and contact Oracle Support.
Verifying the Password Expiration
- Run the following command to access the SQL*Plus client:
sqlplus / as sysdba
The sample output is as follows:
[opc@vanillak8s-bastion .ssh]$ ssh -i vanillaK8sdb_private.key vanillak8sdb.sub05131336501.ops.oraclevcn.com
[opc@vanillak8sdb ~]$ sudo su - oracle
[oracle@vanillak8sdb]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 11 11:46:04 2024
Version 19.24.0.0.0

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE High Perf Release 19.0.0.0.0 - Production
Version 19.24.0.0.0

SQL>
- Run the following command to access the PDB:
alter session set container=BRMCN15;
- Run the following command to check the password expiry of the DB users:
select username, account_status, EXPIRY_DATE from dba_users;
- Run the following commands to set the users in the DEFAULT profile to no password expiration:
SQL> alter profile DEFAULT limit PASSWORD_REUSE_TIME unlimited;
SQL> alter profile DEFAULT limit PASSWORD_LIFE_TIME unlimited;
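The expiry check above can also be scripted outside SQL*Plus. The following Python sketch is a hypothetical helper: the function name, column order, and sample rows are illustrative, and the DD-MON-YY date format is an assumption about how EXPIRY_DATE is rendered in your session:

```python
from datetime import datetime, timedelta

def expiring_accounts(rows, within_days=30, today=None):
    """rows: (username, account_status, expiry_date as 'DD-MON-YY' or None).
    Return usernames whose passwords expire within `within_days` days."""
    today = today or datetime.now()
    cutoff = today + timedelta(days=within_days)
    flagged = []
    for username, status, expiry in rows:
        if expiry is None:  # PASSWORD_LIFE_TIME unlimited
            continue
        if datetime.strptime(expiry, "%d-%b-%y") <= cutoff:
            flagged.append(username)
    return flagged

# Illustrative rows, not taken from a real system:
rows = [
    ("PIN",    "OPEN", "15-JAN-25"),
    ("SYSTEM", "OPEN", None),
]
print(expiring_accounts(rows, within_days=30, today=datetime(2025, 1, 1)))
# → ['PIN']
```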
Validating the Public Certificates
This section provides information about validating various public certificates for the Digital Business Experience environment.
Validating the CXIF Certificate
To validate the CXIF certificate, run the following command:
$ openssl s_client -showcerts -connect rododsiebel.jetpen.com:31000
The following is a sample output:
$ openssl s_client -showcerts -connect rododsiebel.jetpen.com:31000
CONNECTED(00000003)
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R11
verify return:1
depth=0 CN = rododsiebel.jetpen.com
verify return:1
---
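The captured s_client output can also be checked programmatically: each depth=N line in the chain should be followed by verify return:1. The following Python sketch is a hypothetical helper over text like the sample above, not part of any Oracle tooling:

```python
def chain_verified(s_client_output: str) -> bool:
    """True if each certificate depth in `openssl s_client` output verified OK."""
    lines = s_client_output.splitlines()
    depths = [i for i, line in enumerate(lines) if line.startswith("depth=")]
    return bool(depths) and all(
        i + 1 < len(lines) and lines[i + 1].strip() == "verify return:1"
        for i in depths
    )

# Sample text from the output above:
sample = """depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R11
verify return:1
depth=0 CN = rododsiebel.jetpen.com
verify return:1"""
print(chain_verified(sample))  # → True
```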
Validating the Siebel and PDC Certificates
Before validating the Siebel and PDC certificates, you must identify the Siebel and PDCRSM API endpoint URLs. For example, the Siebel API endpoint URL for Variant 1 is: https://rododsiebel.jetpen.com:32401
To validate the Siebel and PDC certificates, run the following command on a jump host with openssl:
$ echo -n Q | openssl s_client -connect rododsiebel.jetpen.com:32401 | openssl x509 -noout -dates
The following is a sample output:
[mimatysk@test-overlay-t83z-hc ~]$ echo -n Q | openssl s_client -connect rododsiebel.jetpen.com:32401 | openssl x509 -noout -dates
---
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R11
verify return:1
depth=0 CN = rododsiebel.jetpen.com
verify return:1
DONE
notBefore=Oct 24 21:39:58 2024 GMT
notAfter=Jan 22 21:39:57 2025 GMT
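The notBefore and notAfter lines printed by openssl x509 -noout -dates can be parsed to confirm that the certificate is currently valid. The following Python sketch is a hypothetical helper that assumes the date format shown in the sample output above:

```python
from datetime import datetime, timezone

def cert_validity(dates_output, now=None):
    """Parse openssl '-dates' output; return (is_valid_now, not_after)."""
    now = now or datetime.now(timezone.utc)
    fields = {}
    for line in dates_output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            # Dates look like "Oct 24 21:39:58 2024 GMT" in the sample output.
            parsed = datetime.strptime(value, "%b %d %H:%M:%S %Y %Z")
            fields[key] = parsed.replace(tzinfo=timezone.utc)
    is_valid = fields["notBefore"] <= now <= fields["notAfter"]
    return is_valid, fields["notAfter"]

# Sample dates from the output above:
sample = "notBefore=Oct 24 21:39:58 2024 GMT\nnotAfter=Jan 22 21:39:57 2025 GMT"
valid, not_after = cert_validity(sample, now=datetime(2024, 12, 11, tzinfo=timezone.utc))
print(valid, not_after.date())  # → True 2025-01-22
```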