2 About the Unified Inventory and Topology Toolkit
This chapter describes the components required for Unified Inventory and Topology.
Unified Inventory and Topology Toolkit
From Oracle Software Delivery Cloud, download the following:
- Oracle Communications Unified Inventory Management Common Toolkit
- Oracle Communications Unified Inventory Management Cloud Native Image Builder
- Oracle Communications Unified Inventory Management ATA Image Builder
- Oracle Communications Unified Inventory Management SmartSearch Image
- Oracle Communications Unified Inventory Management Authorization Image Builder
Perform the following tasks:
- Copy the downloaded archives into the workspace directory and unzip the archives.
- Export the unzipped path to the WORKSPACEDIR environment variable.
- On Oracle Linux, where Kubernetes is hosted, download and extract the tar archives on each host that has connectivity to the Kubernetes cluster.
- Alternatively, on OKE, for an environment where Kubernetes is running, extract the contents of the tar archives on each OKE client host. The OKE client host is the bastion host that is set up to communicate with the OKE cluster.
$ mkdir workspace
$ export WORKSPACEDIR=$(pwd)/workspace
#Untar UIM Builder
$ tar -xf $WORKSPACEDIR/uim-image-builder.tar.gz --directory $WORKSPACEDIR
#Untar ATA Builder
$ tar -xf $WORKSPACEDIR/ata-builder.tar.gz --directory $WORKSPACEDIR
#Untar Authorization Builder
$ tar -xf $WORKSPACEDIR/authorization-builder.tar.gz --directory $WORKSPACEDIR
#Untar Common Toolkit
$ tar -xf $WORKSPACEDIR/common-cntk.tar.gz --directory $WORKSPACEDIR
$ export COMMON_CNTK=$WORKSPACEDIR/common-cntk
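As an optional sanity check (not part of the original procedure), you can confirm that the archives were extracted and that the COMMON_CNTK path is set:
#Optional check: list the extracted toolkits and confirm the Common CNTK path
$ ls $WORKSPACEDIR
$ echo $COMMON_CNTK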
Assembling the Specifications
To assemble the specifications:
- Create a directory (either on your local machine or in the version control system where the deployment pipelines are available) to maintain the specification files needed to deploy the services. Export the directory to the SPEC_PATH environment variable.
- Export the environment variables as follows:
export COMMON_CNTK=$WORKSPACEDIR/common-cntk
export SPEC_PATH=<path to directory created in step 1>
export STRIMZI_NS=<namespace for strimzi deployment> ex. strimzi
export PROJECT=<k8s namespace where you planned to deploy the services> ex. sr
export INSTANCE=<the instance name to be used for your services> ex. quick
- Run the following command to assemble the required specification files:
$COMMON_CNTK/scripts/assemble-specifications.sh -p $PROJECT -i $INSTANCE -s $SPEC_PATH
If the script runs successfully without any errors, verify the configuration files by checking the contents of the $SPEC_PATH directory.
Note:
All scripts in the Common cloud native toolkit are designed to read the required configuration files only from the path specified by $SPEC_PATH.
- Copy other specification files as required (see the example after this list):
- Persistent volumes and persistent volume claims files from $COMMON_CNTK/samples/nfs
- Role and role bindings from $COMMON_CNTK/samples/rbac
- Credential files from $COMMON_CNTK/samples/credentials
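The following is a minimal sketch of this procedure using the sample values mentioned above (sr and quick). The destination paths for the copied sample files are assumptions; adjust them to match your environment and pipeline layout.
#Assemble the specification files using the example project and instance names
export PROJECT=sr
export INSTANCE=quick
$COMMON_CNTK/scripts/assemble-specifications.sh -p $PROJECT -i $INSTANCE -s $SPEC_PATH
ls $SPEC_PATH/$PROJECT/$INSTANCE
#Copy the sample files you need into your specification area (destination paths are assumptions)
cp $COMMON_CNTK/samples/nfs/*.yaml $SPEC_PATH/$PROJECT/$INSTANCE/
cp $COMMON_CNTK/samples/rbac/*.yaml $SPEC_PATH/$PROJECT/$INSTANCE/
cp -r $COMMON_CNTK/samples/credentials $SPEC_PATH/$PROJECT/$INSTANCE/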
About the Specifications File
After you assemble the specifications, the script generates a directory structure with the following files and directories:
- $SPEC_PATH/$PROJECT/$INSTANCE: This directory contains all specification files (applications-base.yaml and app-<serviceName>.yaml) for the services, along with the shapes and config directories.
- $SPEC_PATH/$PROJECT/$INSTANCE/shapes: This directory contains the resource configuration files for each service. The immediate subdirectories represent shape names, and each of these contains <serviceName>.yaml files that define the hardware resource specifications. New directories can also be added here, by following the same pattern, with shape-specific YAML files for all services. By default, shape configurations are provided for the devsmall, dev, prodsmall, prod, and prodlarge shapes.
- $SPEC_PATH/$PROJECT/$INSTANCE/config: This directory contains the application and logging configuration files for all the services. To change or provide any application-level configuration, edit the files in this directory.
- $SPEC_PATH/$PROJECT/$INSTANCE/config/common: This directory contains the configuration files that are common across all services.
- $SPEC_PATH/$PROJECT/opensearch: This directory contains the files required for the OpenSearch deployment configuration.
- $SPEC_PATH/$STRIMZI_NS: This directory contains the files required for the Strimzi operator deployment to override the default configuration.
Note:
All scripts in the Common cloud native toolkit are designed to read the required configuration files only from the path specified by $SPEC_PATH.
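For reference, a listing along the following lines shows the structure described above. The output is illustrative only; the exact entries depend on the services you assemble and the project, instance, and Strimzi namespace names you chose.
#Illustrative layout of the assembled specification area
$ find $SPEC_PATH -type d
$SPEC_PATH/<project>
$SPEC_PATH/<project>/<instance>
$SPEC_PATH/<project>/<instance>/shapes
$SPEC_PATH/<project>/<instance>/config
$SPEC_PATH/<project>/<instance>/config/common
$SPEC_PATH/<project>/opensearch
$SPEC_PATH/<strimzi-namespace>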
Details within the Specification Files
You can find the following details in the corresponding specification files:
- applications-base.yaml: This file contains the common deployment configuration for all services, such as the selection and configuration of ingressController, loadbalancer port, storageVolume, gcLogs, and so on. The contents of this file are applicable to all services.
- app-<serviceName>.yaml: This file contains the specific deployment configuration for each service, such as image details, affinity configurations, and so on. The contents of this file are applicable only to the corresponding service.
- <shapeName>/<serviceName>.yaml: This file contains the hardware resource configurations such as CPU, memory, replica counts, heap memory configuration, and so on.
- database.yaml: This file contains the configurations used for performing operations on the database schemas of the services, such as the database image, storageVolume, tablespace details, and so on.
Customizing the Shapes
The predefined shapes devsmall, dev, prodsmall, prod, and prodlarge are available at $SPEC_PATH/$PROJECT/$INSTANCE/shapes. Each directory contains the <serviceName>.yaml files that define the hardware resources for that service.
The devsmall shape defines the smallest size and does not define resource requests and limits. Therefore, pods do not fail to schedule when only a limited amount of CPU or memory is available; however, because this shape sets no upper limit, the pods can grow to a larger size at runtime.
Use the predefined shapes wherever possible. If required, you can create your own shape and use it with all services.
To create the new shape (for example: customShape):
- Create a new directory customShape at $SPEC_PATH/$PROJECT/$INSTANCE/shapes as follows:
cd $SPEC_PATH/$PROJECT/$INSTANCE/shapes/
mkdir customShape
- Copy the files from the predefined shape dev as follows:
cp dev/*.yaml customShape/
- Edit and change the configurations of the <serviceName>.yaml files in the customShape directory as per your requirement.
- Edit $SPEC_PATH/$PROJECT/$INSTANCE/applications-base.yaml and provide customShape as the shape value:
shape: customShape
When you create or upgrade any service, it is configured as per the values defined in the customShape files. A consolidated sketch of these steps follows this list.
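The following is a minimal, end-to-end sketch of the steps above. The uim.yaml file name is used only as an example of a per-service shape file; the grep at the end simply confirms the shape value that you set.
#Create the custom shape from the dev shape and point the instance at it
cd $SPEC_PATH/$PROJECT/$INSTANCE/shapes
mkdir customShape
cp dev/*.yaml customShape/
#Adjust CPU, memory, replica counts, and heap settings in the copied files, for example:
vi customShape/uim.yaml
#Set shape: customShape in applications-base.yaml, then confirm the value
grep -n "shape:" $SPEC_PATH/$PROJECT/$INSTANCE/applications-base.yaml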
Image Builders
The following image builders are required to build the corresponding services for an end-to-end integrated environment:
- UIM Image Builder: This includes the uim-image-builder.tar.gz archive, which is required to build the UIM and UIM DB Installer images. See "Creating the UIM Cloud Native Images" in UIM Cloud Native Deployment Guide for more information.
- Authorization Builder: This includes the authorization-builder.tar.gz archive, which is required to build the Authorization images. For more information, see "Creating Authorization Images".
- ATA Builder: This includes the ata-builder.tar.gz archive, which is required to build the ATA API, ATA UI, ATA PGX, ATA Consumer, and ATA DB Installer images. See "Prerequisites for Creating ATA Images" for more information.
All builder toolkits include manifest files and scripts to build the images.
About the Manifest File
A manifest file is located at $WORKSPACEDIR/<service-builder>/bin/<service>_manifest.yaml. The manifest file describes the input that goes into the service images and is consumed by the image build process. The default configuration in the latest manifest file provides all necessary components for creating the service images easily. A service can be ATA, Authorization, SmartSearch, OpenSearch, or UIM.
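For example, to inspect the manifest for a given builder before building (the placeholders follow the path pattern above):
#List the builder's bin directory and view its manifest before building
ls $WORKSPACEDIR/<service-builder>/bin/
cat $WORKSPACEDIR/<service-builder>/bin/<service>_manifest.yaml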
You can also customize the manifest file. This enables you to:
- Specify any Linux image as the base, as long as it is binary-compatible with Oracle Linux.
- Upgrade the Oracle Enterprise Linux version to a newer version to uptake a quarterly CPU.
- Upgrade the JDK version to a newer JDK version to uptake a quarterly CPU.
- Choose a different userid and groupid for oracle:oracle user:group that the image specifies. The default is 1000:1000.
Note:
The schemaVersion and date parameters are maintained by Oracle. Do not modify these parameters. Version numbers provided here are only examples. The manifest file specifies the actual versions that Oracle recommends.
There are various sections in the manifest file such as:
- Service Base Image: The Service Base image is a necessary building block of the final service container images. However, it is not required by the service to create or manage any service instances.
- Linux parameter: The Linux parameter specifies the base Linux image to be used as the base Docker or Podman image. The version is the two-digit version from /etc/redhat-release:
linux:
  vendor: Oracle
  version: 9-slim
  image: <container>/os/oraclelinux:9-slim
The vendor and the version details are used for validating while an image is being built and for querying at run-time.
Note:
To troubleshoot issues, Oracle support requires you to provide these details in the manifest file used to build the image.
- The userGroup parameter that specifies the default userId and groupId:
userGroup:
  username: <username>
  userid: <userID>
  groupname: <groupname>
  groupid: <groupID>
- The jdk parameter that specifies the JDK vendor, version, and the staging path:
jdk:
  vendor: Oracle
  version: <jdk_version>
  path: $CN_BUILDER_STAGING/downloads/java/jdk-<jdk_version>_linux-x64_bin.tar.gz
- The Tomcat parameter specifies the Tomcat version and its staging path.
Note:
This is applicable only for the ATA service.
tomcat:
  version: <tomcat_version>
  path: $CN_BUILDER_STAGING/downloads/tomcat/apache-tomcat-<tomcat_version>.tar.gz
- A serviceImage parameter, where tag is the tag name of the service image:
serviceImage:
  tag: latest
Deployment Toolkits
The Common Cloud Native toolkit is required to deploy the services for an end-to-end integrated environment. It includes the common-cntk.tar.gz file that is required to deploy Authorization, ATA, SmartSearch, OpenSearch, UIM, and Message Bus services in the cloud native environment.
See "Creating a Basic UIM Cloud Native Instance" in UIM Cloud Native Deployment Guide, for more information.
Common Cloud Native Toolkit
The Common cloud native toolkit (Common CNTK) includes:
- Helm charts to manage the ATA, Authorization, SmartSearch, OpenSearch, UIM, and Message Bus services.
- Scripts to manage secrets for the services.
- Scripts to manage schemas for the services.
- Scripts to create, update, and delete the ATA, Authorization, SmartSearch, OpenSearch, UIM, and Message Bus services.
- Sample PV and PVC YAML files to create persistent volumes.
- Sample charts to install Nginx.
- Scripts to register and unregister the namespaces with the Strimzi and WebLogic operators.
- The applications-base.yaml file that contains the common configuration for all services, such as ingress controller, TLS, authentication, storageVolume, and so on.
- The application-specific files app-uim.yaml, app-ata.yaml, app-messaging-bus.yaml, app-authorization.yaml, and app-smartsearch.yaml that contain the corresponding configuration options for individual services, such as image details, java options, and support overriding the common configurations defined in applications-base.yaml.
- The database.yaml file that contains the configurations that are used while creating, deleting, or upgrading schemas for services, such as database installer images, tablespace info, and so on.
- The hardware resource configuration files (shape directory) for the development and production environments.
- The config directory that contains the logging and application configurations for the services.
- The strimzi-operator-override-values.yaml file that enables you to override the configuration for deploying the Strimzi operator, which is used for the Message Bus service.
The applications-base.yaml and database.yaml files have common values that are applicable for all services in Common CNTK.
The values applicable to individual services are present in the app-<service-name>.yaml files.
For customized configurations to override the default values, update the values under the specific files at $SPEC_PATH/$PROJECT/$INSTANCE.
While running the scripts, you must provide the project and instance values, where project indicates the namespace of the Kubernetes environment where the service is deployed and instance is the identifier of the corresponding service instance, if multiple instances are created within the same namespace.
Note:
As multiple instances of Message Bus cannot exist in the same namespace, only one instance is created for all services within the same namespace.
While creating a basic instance for all these services, the project name is considered as sr and the instance name is considered as quick.
Note:
- The project and instance names must not contain any special characters.
- There are common values specified in the applications-base.yaml and database.yaml files for the services. To override a common value, specify that value in the service-specific file. If the value in the service-specific file is empty, the common value is used (see the example after this list).
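As an illustration of this fallback behavior, you can compare the common file with a service-specific file. The storageVolume key is taken from the configuration items listed earlier in this chapter and app-uim.yaml is used only as an example; the exact keys and files in your environment may differ.
#Compare a common value with a service-specific override; an empty or absent value
#in the service-specific file means the value from applications-base.yaml applies
grep -n "storageVolume" $SPEC_PATH/$PROJECT/$INSTANCE/applications-base.yaml
grep -n "storageVolume" $SPEC_PATH/$PROJECT/$INSTANCE/app-uim.yaml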
Deploying the Services
You must deploy and configure all services in the following sequence:
- (Optional) Select your Identity provider for Authentication.
Note:
You can choose any Generic Identity Provider that supports the SAML 2.0 and OAuth 2.0 protocols for authentication. Samples for the Oracle IDCS Identity Provider are packaged with ATA.
- (Optional) Create an OAuth 2.0 client, which will be configured with the following services.
- Deploy Authorization service.
- Deploy Message Bus.
- Deploy OpenSearch.
- Deploy SmartSearch.
- Deploy UIM (traditional or cloud native).
- Configure Traditional UIM with Message Bus and ATA, and restart UIM. See "Setting System Properties" in UIM System Administrator’s Guide, for more information.
- Deploy ATA.
Note:
Ensure that each individual service is deployed successfully and verified in the order mentioned above, as there are dependencies between these services. For a production instance, to ensure high availability, set up the Message Bus with at least 3 replicas for the Kafka cluster, as shown in the check below.
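A quick way to confirm the replica count is shown below. This is a sketch only, assuming the Message Bus Kafka cluster is managed by the Strimzi operator as described earlier in this chapter; the namespace and cluster name placeholders are assumptions for your environment.
#Check the Kafka cluster resource and its broker pods in the Message Bus namespace
kubectl get kafka -n <project-namespace>
kubectl get pods -n <project-namespace> -l strimzi.io/cluster=<kafka-cluster-name>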
Setting Up Prometheus and Grafana
Install the Prometheus Operator and Grafana using the kube-prometheus-stack helm charts available at Prometheus Community GitHub: https://github.com/prometheus-community/helm-charts.
The prometheus-community/kube-prometheus-stack is a Helm chart
maintained by the Prometheus Community. The kube-prometheus-stack
gives you a production-ready Kubernetes monitoring stack with minimal effort, instead of
installing Prometheus, Grafana, and Alertmanager separately and integrating them
manually.
The following are sample commands provided for your reference. For more details, see the
community GitHub on installing the kube-prometheus-stack chart.
Install kube-prometheus-stack
#Add the Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
#Install the chart with kube-prometheus-stack release name
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n <prometheus-operator-namespace>
#Verify the pods
kubectl get pods -n <prometheus-operator-namespace>
Configuring Metrics for Services
The UIM and associated services (such as Kafka Message Bus, ATA, and SIA) are validated with the Prometheus Operator PodMonitor and ServiceMonitor custom resources, which enable the automatic configuration of metrics scrape jobs in Prometheus. After the Prometheus operator is successfully deployed, configure the metrics scrape jobs. The common CNTK includes sample Helm charts that automate this configuration using PodMonitor and ServiceMonitor resources.
The Helm chart is available at $COMMON_CNTK/samples/charts/prometheus-monitor-resources. See the accompanying README.md file for instructions on automating the configuration of metrics scrape jobs.
Overview:
- Verify that all prerequisites are met.
- Update the values.yaml file to match your target environment.
- Access to the UIM metrics URL requires credentials stored in a Kubernetes secret. Validate that the secret has been created (see the check after this list).
- Install, upgrade, or uninstall the Helm release to automate the configuration of scrape jobs.
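For example, to confirm the secret exists before installing the chart (the secret name and namespace are assumptions; use the values referenced in your values.yaml):
#Verify that the Kubernetes secret holding the UIM metrics credentials exists
kubectl get secret <uim-metrics-credentials-secret> -n <application-namespace>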
Install prometheus-monitor-resource chart
#Install the chart for configuring scrape jobs.
helm install <application-namespace>-<application-instance>-prometheus-monitor-resource \
$COMMON_CNTK/samples/charts/prometheus-monitor-resources \
-n <prometheus-operator-namespace>
#Verify the deployment of PodMonitors
kubectl get podmonitors -n <prometheus-operator-namespace>
#Verify the deployment of ServiceMonitors
kubectl get servicemonitors -n <prometheus-operator-namespace>
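Because the overview above also mentions upgrading or uninstalling the Helm release, the following standard Helm commands can be used with the same release name as the install example. This is a sketch for reference, not an Oracle-provided procedure; see the chart's README.md for the supported steps.
#Upgrade the release after changing values.yaml
helm upgrade <application-namespace>-<application-instance>-prometheus-monitor-resource \
  $COMMON_CNTK/samples/charts/prometheus-monitor-resources \
  -n <prometheus-operator-namespace>
#Uninstall the release to remove the PodMonitor and ServiceMonitor resources
helm uninstall <application-namespace>-<application-instance>-prometheus-monitor-resource \
  -n <prometheus-operator-namespace>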

