4 Setting Up Prerequisite Software

Learn about prerequisite tasks to perform before installing the Oracle Communications Solution Test Automation Platform (STAP) deployment toolkit, such as installing and configuring third-party software.

STAP Prerequisite Tasks

As part of preparing your environment for STAP, you choose, install, and set up various components and services in ways that are best suited for your environment.

The high-level prerequisite tasks for STAP are:

  1. Ensure you have downloaded the correct versions of the third-party tools. See "Common Software Compatibility" in STAP Compatibility Matrix for information about the compatible versions.

  2. Install Kubernetes and create a cluster. See "Creating a Kubernetes Cluster".

  3. Install Podman and a container runtime supported by Kubernetes. See "Installing Podman".
  4. Install Helm. See "Installing Helm".

Prepare your environment by installing, configuring, and tuning these technologies for performance, networking, security, and high availability.

Creating a Kubernetes Cluster

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery. When you deploy Kubernetes, you get a cluster of machines called nodes. A reliable cluster has multiple worker nodes spread across separate physical components, and a highly reliable cluster also has multiple primary nodes spread across separate physical components.

Set up a Kubernetes cluster for your STAP deployment, securing access to the cluster and its objects with the help of service accounts and proper authentication and authorization modules.

For more information about Kubernetes, see the Kubernetes documentation:

https://kubernetes.io/docs/concepts/
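
For example, after creating the cluster you can confirm that the control plane is reachable and that the nodes are in the Ready state. This is a minimal verification sketch; node names and counts depend on your environment:

    # Confirm that the control plane is reachable
    kubectl cluster-info

    # List the nodes and verify that each one reports a Ready status
    kubectl get nodes -o wide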

Installing Podman

You use the Podman platform to containerize STAP. Install Podman to use the prebuilt images provided with the STAP cloud native deployment package.

To install Podman, refer to the Podman documentation:

https://podman.io/
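
For example, after installing Podman you can verify the installation and load the prebuilt STAP images. This is a minimal sketch; the archive name stap-images.tar is a placeholder for the image archive in your STAP cloud native deployment package:

    # Verify the Podman installation
    podman --version

    # Load the prebuilt images from the deployment package (archive name is a placeholder)
    podman load -i stap-images.tar

    # Confirm that the images are now available locally
    podman images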

You can use Podman or any container runtime that supports the Open Container Initiative, provided that it supports the Kubernetes version specified in the Compatibility Matrix.

Installing Helm

Helm is a package manager that helps you install and maintain software on a Kubernetes system. In Helm, a package is called a chart, which consists of YAML files and templates rendered into Kubernetes manifest files. The STAP deployment package includes Helm charts that help create Kubernetes objects with a single command.

To install Helm, see the information and downloads on the Helm website:

https://github.com/helm/helm/releases

For the list of supported Helm versions, see Compatibility Matrix.

Helm uses a configuration file to allow the helm command to access the Kubernetes cluster. By default, this file is $HOME/.kube/config, but you can specify another location by setting the $KUBECONFIG environment variable. Helm inherits the cluster access permissions defined in this file. If role-based access control is configured, you must grant Helm users sufficient cluster permissions.
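
For example, the following sketch points the helm command at a specific kubeconfig file and verifies cluster access. The path shown is the default; replace it with the kubeconfig for your cluster if it is stored elsewhere:

    # Point Helm and kubectl at a specific kubeconfig file (defaults to $HOME/.kube/config)
    export KUBECONFIG=$HOME/.kube/config

    # Verify the Helm installation and confirm that Helm can reach the cluster
    helm version
    helm list --all-namespaces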

Creating a STAP Application in Oracle Identity Cloud Service

Note:

Create this application only if you use OAuth authentication.

When you create a confidential OAuth application in Oracle Identity Cloud Service (IDCS), it provides you with a client ID and client secret. Your client will need the client ID and client secret to request OAuth access tokens for accessing STAP.

To create a confidential OAuth application in IDCS:

  1. Log in to the IDCS Admin Console.
  2. Create a new application by selecting Add and choosing Confidential Application.
  3. Under application URL, enter https://STAP-UI:PORT/ (ensure the trailing slash is included)
    where:
    • STAP-UI is the UI host
    • PORT is the UI port
  4. In the Resource server configuration section, do the following:
    1. Select the Configure this application as a resource server now option.
    2. Set the access token expiration time.
    3. Enable Allow refresh token and provide the refresh token expiration time.
    4. Under Primary Audience, enter https://STAP-UI:PORT/ (include the trailing slash)
    5. Add a scope named stap.
  5. In the Client configuration section, select the Configure this application as a client now option.
  6. In the Authorization section, do the following:
    1. In the Allowed Grant Types field, select the Resource owner, Client credentials, Authorization code, and Refresh token options.
    2. In the Allowed Operations field, select the Introspect and On behalf of options.
    3. In the Authorized Resources field, select All.
  7. Set Redirect URL to https://STAP-UI:PORT/oidc/redirect.
  8. Set Post Logout Redirect URL to https://STAP-UI:PORT/.
  9. Add a scope using the application name. This creates the https://STAP-UI:PORT/stap scope.
  10. In the Application Added pop-up window, make note of the client ID and client secret. You will provide these to the person who needs to generate the OAuth access token (see the token request sketch after this procedure).
  11. Click Activate and then click Activate Application to confirm the activation.
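
A client can then use these credentials to request an access token from the IDCS token endpoint, for example with the client credentials grant. This is a minimal sketch; the tenant host (IDCS-TENANT), CLIENT_ID, CLIENT_SECRET, and the STAP-UI and PORT values are placeholders that you replace with your own values:

    # Request an OAuth access token with the client credentials grant (all values are placeholders)
    curl -u "CLIENT_ID:CLIENT_SECRET" \
      -H "Content-Type: application/x-www-form-urlencoded" \
      -d "grant_type=client_credentials&scope=https://STAP-UI:PORT/stap" \
      https://IDCS-TENANT.identity.oraclecloud.com/oauth2/v1/token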

For more information on Oracle IDCS, see the Oracle Identity Cloud Service documentation.

Setting Up Persistent Volume

STAP uses two Persistent Volumes: one stores data for its microservices, and the other contains the published results of a scenario.

Setting Up Persistent Volume for STAP Microservices

To set up a Persistent Volume for the STAP microservices, you define it for just one microservice, because TES and TDS share the same Persistent Volume. It is recommended that you set up the Persistent Volume for TES.

Before setting up the Persistent Volume for the TES microservice, ensure that you have a Persistent Volume available with a suitable access mode. It is recommended that you use ReadWriteOnce (RWO) for a single writer and ReadWriteMany (RWX) for multiple replicas.
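
For example, you can list the Persistent Volumes in the cluster and check their access modes and status before you begin. This is a minimal verification sketch:

    # Check the ACCESS MODES and STATUS columns for an available Persistent Volume
    kubectl get pv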

  1. Create the /data/config directory. Ensure that the following directories exist within /data/config:
    • /data/config/attributeConfig: Copy the attributeData.properties file and data faker plugin files here.
    • /data/config/adapters: Copy any adapter configuration files here. For example, the configuration file for PDF report generation.
    • /data/config/context: Copy the global.ctx file here, with key-value pairs, if any.
    • /data/config/actions: Copy the UI action files and page property files here.
    • /data/config/plugins: Contains browser driver binaries and configurations. Copy browser drivers and configurations here, and run chmod +x on the executables.
  2. Configure the paths for the following Helm values (see the sample command after this procedure):

    Helm Value          Path
    attributeData.home  /data/config/attributeConfig
    adapters.home       /data/config/adapters
    globalContext.home  /data/config/context
    uiActions.home      /data/config/actions
    plugins.home        /data/config/plugins
  3. Mount the Persistent Volume named tdaas-persistent-storage in the /data/config directory with readOnly set to false so TES can write as needed.
  4. Configure the pod’s security context (for example, runAsUser, runAsGroup, or fsGroup), or adjust ownership in an initContainer, to ensure that TES has write access.

    Note:

    If you get a Permission Denied error, adjust the fsGroup or the ownership of the plugins directory.
  5. Verify that your cluster’s security policies, such as SELinux, Security Context Constraints, or Pod Security, allow writes to the Persistent Volume.
  6. To verify the Persistent Volume, run the following command in the TES pod:
    ls -l /data/config /data/config/context/global.ctx

    If the TES logs confirm that configurations are loaded without any errors about missing files, the Persistent Volume is set up successfully.

    If your TES logs report errors about missing files, confirm that the mount path for the /data/config directory and its Helm values match.
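
As a sketch of steps 2 through 6, the following commands pass the documented directory paths as Helm values when deploying the STAP charts and then rerun the verification inside the TES pod. The release name, chart reference, namespace, and pod name are examples only; use the names from your own STAP deployment:

    # Set the configuration paths from step 2 (release name, chart, and namespace are examples)
    helm upgrade --install stap ./stap-chart --namespace stap \
      --set attributeData.home=/data/config/attributeConfig \
      --set adapters.home=/data/config/adapters \
      --set globalContext.home=/data/config/context \
      --set uiActions.home=/data/config/actions \
      --set plugins.home=/data/config/plugins

    # Verify the mounted configuration from inside the TES pod (pod name is an example)
    kubectl exec -n stap tes-0 -- ls -l /data/config /data/config/context/global.ctx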

Setting Up Persistent Volume to Publish Data Files

  1. Ensure that both TES and the host mount the PV at /data/config so that TES can use the files. You can check the host mount by running df -h and verifying that the PV is mounted on /data/config.
  2. Create the /data/config/dataFiles directory to store published data files.
  3. Set the PV location in the publish-automation.properties file by setting these values:
    • tdaas.connection: tdaasEnvironment
    • persistence-volume.environment: persistence-volume-environment
    • persistence-volume.location: /data/config/dataFiles
  4. Fill in the host connection details in persistence-volume-environment.properties so the publisher can write to the PV path:
    • name: persistence-volume
    • type: SSH
    • hostname: hostIp
    • port: 22
    • authorization: YES
    • authorization.type: basic
    • username: username
    • password: password
  5. To test that you can connect and write, run an SSH command to the host and create a test file in /data/config/dataFiles (see the sketch after this procedure). Once successful, you can delete the test file.
  6. Publish your scenarios. Publishing copies each scenario’s data folder into the path that you set in persistence-volume.location on the PV.
  7. To verify the Persistent Volume, run the following command in the TES pod:
    ls -l /data/config/dataFiles

    If the TES logs confirm that configurations are loaded without any errors about missing files, the Persistent Volume is set up successfully.

    If your TES logs report errors about missing files, confirm that the mount path for the /data/config directory and its Helm values match.
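
As a sketch of the connection test in step 5, you can run the following command, replacing username and hostIp with the values from persistence-volume-environment.properties:

    # Create, list, and remove a test file in the publish directory over SSH (values are placeholders)
    ssh username@hostIp "touch /data/config/dataFiles/publish-test.txt && ls -l /data/config/dataFiles && rm /data/config/dataFiles/publish-test.txt"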