27 Scaling Procedures for an Enterprise Deployment

The scaling procedures for an enterprise deployment include scale out and scale in. During a scale-out operation, you add Managed Servers to new nodes. You can remove these Managed Servers by performing a scale-in operation.

This chapter includes the following topics:

  • Scaling Out the Topology
  • Scaling In the Topology

Scaling Out the Topology

When you scale out the topology, you add new Managed Servers to new nodes.

This section describes the procedures to scale out the Identity Management topology.

Scaling Out Oracle Unified Directory

This section lists the prerequisites for scaling out Oracle Unified Directory, explains the procedure, and describes the steps to verify the scale-out process.

Prerequisites for Scaling Out

Before you perform a scale out of the OUD topology, ensure that you meet the following requirements:

  • As the starting point, you have at least one OUD server instance running. This is the primary instance.
  • A Kubernetes worker node is available with sufficient capacity to host the new pod.
Scaling Out by Adding a New Replicated Instance
To scale out the OUD deployment, perform the following steps:
  1. Modifying the override_oud.yaml file.

    Creating a new OUD instance involves modifying the server overrides file. See Creating a Server Overrides File.

    If you do not have a replOUD section, then you need to add this section to the file as described in Creating a Server Overrides File.

    To increase the number of replicas, increase the value of the replicaCount parameter. Set this value to the total number of replicas you require, not counting the primary instance. For example, if you require a total of four OUD instances, set the value to 3; the fourth instance is the primary. A sample snippet is shown after these steps.

  2. Using Helm to increase the number of running instances.
    After you update the override_oud.yaml file, use the following commands:
    cd WORKDIR/fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm
    helm upgrade --namespace <NAMESPACE> --values WORKDIR/override_oud.yaml <OUD_PREFIX> oud-ds-rs
    For example:
    cd /workdir/OUD/fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm
    helm upgrade --namespace oudns --values /workdir/oud/override_oud.yaml edg oud-ds-rs

    Sample output:

    Release "edg" has been upgraded. Happy Helming!
    NAME: edg
    LAST DEPLOYED: Thu Apr 8 06:35:30 2021
    NAMESPACE: oudns
    STATUS: deployed
    REVISION: 2
    NOTES:
    
    Copyright (c) 2020, Oracle and/or its affiliates.
    
     Licensed under the Universal Permissive License v 1.0 as shown at
     https://oss.oracle.com/licenses/upl
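
The following fragment illustrates the change described in step 1. It is a minimal sketch: replicaCount is raised to 3 so that a total of four OUD instances run (the primary plus three replicas). The exact placement of replicaCount within override_oud.yaml depends on how the file was created in Creating a Server Overrides File; the rest of the file is unchanged.

    # Illustrative fragment of override_oud.yaml for scale out
    replicaCount: 3    # Total replicas, excluding the primary instance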
Verifying the Scale Out

After scaling out and starting the server, proceed with the following verifications:

  1. Check the Kubernetes cluster to see that the required number of servers are running, by using the command:
    kubectl -n <namespace> get all -o wide
    For example:
    kubectl -n oudns get all -o wide
  2. Verify the log files to ensure that the new instance is created correctly.
    kubectl logs -n <OUDNS> pod/<OUD_PREFIX>-oud-ds-rs-2
    For example:
    kubectl logs -n oudns pod/edg-oud-ds-rs-2

Scaling Out a WebLogic Domain

This section describes the procedures to scale out a WebLogic domain such as Oracle Access Manager.

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with Managed Servers already running.

  • A Kubernetes worker node is available with sufficient capacity to host the new pod.
  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapters, and so on.

  • You are not currently running the maximum number of servers that you defined when you created the domain.
Scaling Out a Domain
Scaling out the domain requires increasing the value of the replicas parameter in the cluster resource. The simplest way to modify the parameter is to patch the cluster resource by using the following command:
kubectl patch cluster -n <NAMESPACE> <DOMAIN_NAME>-<CLUSTER_NAME> --type=merge -p '{"spec":{"replicas":<NO OF REPLICAS>}}'
For example:
kubectl patch cluster -n oamns accessdomain-oam-cluster --type=merge -p '{"spec":{"replicas":3}}'
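Optionally, to confirm that the new replica count has been applied, you can query the cluster resource. This is an illustrative check that uses standard kubectl syntax; the resource name matches the example above:
kubectl get cluster -n oamns accessdomain-oam-cluster -o jsonpath='{.spec.replicas}'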
Scaling Out the Cluster Using the Sample Script
Oracle provides a number of scripts to make domain life cycle operations simple. These scripts are included in the sample files you download from GitHub and are located in:
fmw-kubernetes/<PRODUCT>/kubernetes/domain-lifecycle
You can scale out the cluster using the following command:
./scaleCluster.sh -d <DOMAIN_NAME> -n <NAMESPACE> -c <CLUSTER_NAME> -r <REPLICAS>
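For example, to scale the OAM cluster out to three Managed Servers (the domain, namespace, and cluster names below follow the examples in this chapter and are illustrative; substitute the values for your own deployment):
./scaleCluster.sh -d accessdomain -n oamns -c oam_cluster -r 3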
Verifying the Scale Out
After scaling out and starting the server, proceed with the following verifications:
  1. Check the Kubernetes cluster to see that the required number of servers are running, by using the command:
    kubectl -n <namespace> get all -o wide
    For example:
    kubectl -n oamns get all -o wide

    Or

    kubectl -n oigns get all -o wide
  2. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://igdinternal.example.com:7777/soa-infra
      A curl-based check is shown after these verification steps.
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs for the web applications deployed to the cluster to check whether sessions are created in the new server for the cluster that you are scaling out.

  3. Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  4. Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.
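
As an additional check for the load balancer routing in step 2, you can request the application URL with curl from a host that can reach the load balancer. The command below uses standard curl options against the URL shown earlier; depending on your configuration, the returned status code may be a redirect or an authentication challenge rather than 200, but any valid HTTP response confirms that the request reached a server:
curl -k -s -o /dev/null -w "%{http_code}\n" https://igdinternal.example.com:7777/soa-infra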

Scaling In the Topology

When you scale in the topology, you remove Managed Servers or instances, or both, from your running system.

Scaling In Oracle Unified Directory

This section lists the prerequisites for scaling in Oracle Unified Directory, explains the procedure, and describes the steps to verify the scale-in process.

Prerequisites for Scaling In

Before you perform a scale in of the OUD topology, ensure that you have at least two OUD server instances running.

Scaling In by Removing a Replicated Instance
To scale in the OUD deployment, perform the following steps:
  1. Modifying the override_oud.yaml file.

    Removing an OUD instance involves modifying the server overrides file. See Creating a Server Overrides File.

    To decrease the number of replicas, reduce the value of the replicaCount parameter. Set this value to the total number of replicas you require, not counting the primary instance. For example, if you currently have a total of four OUD instances and require three, set the value to 2 (two replicas plus the primary instance). A sample snippet is shown after these steps.

  2. Using Helm to reduce the number of running instances.
    After you update the override_oud.yaml file, use the following commands:
    cd WORKDIR/fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm
    helm upgrade --namespace <NAMESPACE> --values WORKDIR/override_oud.yaml <OUD_PREFIX> oud-ds-rs
    For example:
    cd /workdir/OUD/fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm
    helm upgrade --namespace oudns --values /workdir/oud/override_oud.yaml edg oud-ds-rs

    Sample output:

    Release "edg" has been upgraded. Happy Helming!
    NAME: edg
    LAST DEPLOYED: Thu Apr 8 06:35:30 2021
    NAMESPACE: oudns
    STATUS: deployed
    REVISION: 2
    NOTES:
    
    Copyright (c) 2020, Oracle and/or its affiliates.
    
     Licensed under the Universal Permissive License v 1.0 as shown at
     https://oss.oracle.com/licenses/upl
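
As in the scale-out case, the following fragment is a minimal illustration of the change described in step 1. Here replicaCount is lowered to 2 so that three OUD instances remain (the primary plus two replicas); the exact placement of replicaCount within override_oud.yaml depends on how the file was created in Creating a Server Overrides File.

    # Illustrative fragment of override_oud.yaml for scale in
    replicaCount: 2    # Total replicas, excluding the primary instance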
Verifying the Scale In

After scaling in, proceed with the following verifications:

  1. Check the Kubernetes cluster to see that the required number of servers are running, by using the command:
    kubectl -n <namespace> get all -o wide
    For example:
    kubectl -n oudns get all -o wide
  2. Verify the log files to ensure that the remaining instances are running correctly.
    kubectl logs -n <OUDNS> pod/<OUD_PREFIX>-oud-ds-rs-2
    For example:
    kubectl logs -n oudns pod/edg-oud-ds-rs-2

Scaling In a WebLogic Domain

This section lists the prerequisites for scaling in a WebLogic domain such as Oracle Access Manager and Oracle Identity Governance, and explains the procedure to scale in the domain.

Prerequisites for Scaling In

Before you perform a scale in of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with Managed Servers already running.
  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapters, and so on.
Scaling In a Domain

Scaling in the domain involves instructing the WebLogic Kubernetes Operator to stop the extra pods (Managed Servers). You can use one of the following ways to scale in the domain:

Modifying the domain.yaml/domain_oim_soa.yaml file
  1. Locate the entry for the cluster that you want to scale in. Decrease the value of the replicas parameter to the number of servers that you want to remain running. For example: replicas: 1
  2. Save the file and apply the changes using the following command:
    kubectl apply -f domain.yaml
Modifying the domain directly
  1. Use the following command:
    kubectl edit domain <domain_name> -n <namespace>
    For example:
    kubectl edit domain accessdomain -n oamns
  2. Locate the entry for the cluster that you want to scale in. Decrease the value of the replicas parameter to the number of servers that you want to remain running. For example: replicas: 1
  3. Save the file. The operator automatically stops the excess servers so that only the required number remain running.
Scaling In the Cluster Using the Sample Script
Oracle provides a number of scripts to make domain life cycle operations simple. These scripts are included in the sample files you download from GitHub and are located in:
fmw-kubernetes/<PRODUCT>/kubernetes/domain-lifecycle
You can scale in the cluster using the following command:
./scaleCluster.sh -d <DOMAIN_NAME> -n <NAMESPACE> -c <CLUSTER_NAME> -r <REPLICAS>
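For example, to scale the OIG cluster in to a single Managed Server (the domain, namespace, and cluster names below follow the examples in this chapter and are illustrative; substitute the values for your own deployment):
./scaleCluster.sh -d governancedomain -n oigns -c oim_cluster -r 1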
Verifying the Scale In

After scaling in, proceed with the following verifications:

  1. Check the Kubernetes cluster to see that the required number of servers are running, by using the command:
    kubectl -n <namespace> get all -o wide
    For example:
    kubectl -n oamns get all -o wide

    Or

    kubectl -n oigns get all -o wide
  2. Verify the log files to ensure that the remaining servers are running correctly.
    kubectl logs -n <NAMESPACE> pod/<DOMAIN_NAME>-<SERVER_NAME>

    For example:

    kubectl logs -n oamns pod/accessdomain-oam-server1

    Or

    kubectl logs -n oigns pod/governancedomain-oim-server1