2 Planning and Validating Your Cloud Environment

In preparation for Oracle Communications Service and Catalog Design - Solution Designer cloud native deployment, you must set up and validate prerequisite software. This chapter provides information about planning, setting up, and validating the environment for Solution Designer cloud native deployment.

Required Components for Solution Designer Deployment

In order to run, manage, and monitor the Solution Designer deployment, the following components and capabilities are required. These must be configured in the cloud environment:

  • Kubernetes Cluster
  • Oracle Multitenant Container Database (CDB)
  • Container Image Management
  • Helm
  • Load Balancer
  • Domain Name System (DNS)
  • Persistent Volumes
  • Authentication
  • Secrets Management
  • Object Store

For details about the required versions of these components, see Service Catalog and Design Compatibility Matrix.

To utilize the full flexibility, reliability, and value of the deployment, you must also set up the following:

  • Continuous Integration (CI) pipelines for custom images
  • Continuous Delivery (CD) pipelines for creating, scaling, updating, and deleting instances of the cloud native deployment

Planning Your Cloud Native Environment

This section provides information about planning and setting up the Solution Designer cloud native environment. As part of preparing your environment for Solution Designer cloud native, you choose, install, and set up various components and services in ways that are best suited for your environment. The following sections provide information about each of those required components and services, the available options that you can choose from, and the way you must set them up.

Setting Up Your Kubernetes Cluster

For Solution Designer, Kubernetes worker nodes must be capable of running Linux 8.x pods with software compiled for Intel 64-bit cores. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure, and a highly reliable cluster must also have multiple master nodes spread over separate physical infrastructure.

Solution Designer requires Kubernetes.

To check the Kubernetes version, run the following command:
kubectl version

Typically, Kubernetes nodes are not used directly to run or monitor Kubernetes workloads. You must reserve worker node resources for the execution of the Kubernetes workload. However, multiple users (manual and automated) of the cluster require a point from which to access the cluster and operate on it. This can be achieved by using kubectl commands (either directly on the command line or in shell scripts, or through Helm) or the Kubernetes APIs. For this purpose, set aside a separate host or set of hosts. Operational and administrative access to the Kubernetes cluster can be restricted to these hosts, and specific users can be given named accounts on these hosts to reduce cluster exposure and promote traceability of actions.
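A quick way to validate operational access from such a host (this sketch assumes kubectl has already been configured with the cluster's kubeconfig) is:

```shell
$ kubectl cluster-info
$ kubectl get nodes -o wide
```

Both commands must succeed and report the expected cluster endpoint and worker nodes before you proceed with the deployment.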

Typically, the Continuous Delivery pipeline automation deploys directly on a set of such operations hosts (as in the case of Jenkins) or leverages runners deployed on such operations hosts (as in the case of GitLab CI). These hosts must run Linux, with all interactive-use packages installed to support tools such as Bash, Wget, cURL, Hostname, Sed, AWK, cut, and grep. An example of this is the Oracle Linux 8.7 image (Oracle-Linux-8.7) on Oracle Cloud Infrastructure.

In addition, you need the appropriate tools to connect to your overall environment, including the Kubernetes cluster. For instance, for a Container Engine for Kubernetes (OKE) cluster, you must install and configure the Oracle Cloud Infrastructure Command Line Interface.

Additional integrations may include authentication software such as Keycloak so that users can log in to this host, appropriate NFS mounts for home directories, security lists and firewall configuration for access to the overall environment, and so on.

Synchronizing Time Across Servers

It is important that you synchronize the date and time across all machines that are involved in testing, including client test drivers and Kubernetes worker nodes. Oracle recommends that you do this using Network Time Protocol (NTP) rather than manual synchronization, and strongly recommends it for production environments. Synchronization is important in inter-component communications and in capturing accurate run-time statistics.
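For example, on Oracle Linux 8 hosts that use chrony as the NTP implementation, synchronization status can be checked as follows (the exact commands depend on your NTP setup):

```shell
$ chronyc tracking
$ timedatectl | grep -i "synchronized"
```

Repeat the check on each worker node and client test driver to confirm that clock offsets are within acceptable bounds.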

Provisioning Oracle Multitenant Container Database (CDB)

Solution Designer deployment architecture is best supported by the multitenant architecture that enables an Oracle database to function as a multitenant container database (CDB).

Within a container database, a container is either a pluggable database (PDB), an Autonomous Database (ADB), or the root container. The root container is a collection of schemas, schema objects, and non-schema objects to which all PDBs or ADBs belong. A PDB container contains the Solution Designer schema. Solution Designer requires access to PDBs in an Oracle 19c Multitenant database. For more information about the benefits of the Oracle Multitenant architecture for database consolidation, see Oracle Database Concepts Guide.

Provisioning an Empty PDB
To create an empty PDB:
  1. Run the following SQL commands using the sysdba account for the CDB:

    CREATE PLUGGABLE DATABASE _replace_this_text_with_db_service_name_ ADMIN USER _replace_this_text_with_admin_name_ IDENTIFIED BY "database_administrator_password" DEFAULT TABLESPACE "USERS" DATAFILE '+DATA' SIZE 5M REUSE AUTOEXTEND ON;
    ALTER PLUGGABLE DATABASE _replace_this_text_with_db_service_name_ OPEN INSTANCES = ALL;
    ALTER PLUGGABLE DATABASE _replace_this_text_with_db_service_name_ SAVE STATE INSTANCES = ALL;
    ALTER SESSION SET CONTAINER = _replace_this_text_with_db_service_name_;
    GRANT CREATE ANY CONTEXT TO SYS WITH ADMIN OPTION;
    GRANT CREATE ANY CONTEXT TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE ANY VIEW TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE SNAPSHOT TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE SYNONYM TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE TABLE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE USER TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE VIEW TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE MATERIALIZED VIEW TO _replace_this_text_with_admin_name_;
    GRANT GRANT ANY PRIVILEGE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT QUERY REWRITE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT UNLIMITED TABLESPACE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT SELECT ON SYS.DBA_TABLESPACES TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT SELECT ON SYS.V_$PARAMETER TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT SELECT ON SYS.DBA_JOBS TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT "CONNECT" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "DBA" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "EXP_FULL_DATABASE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "IMP_FULL_DATABASE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "RESOURCE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT EXECUTE ON SYS.DBMS_LOCK TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT EXECUTE ON UTL_FILE TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT SYSDBA TO _replace_this_text_with_admin_name_;
    ADMINISTER KEY MANAGEMENT SET KEY USING TAG 'tag' FORCE KEYSTORE IDENTIFIED BY "_replace_this_text_with_sys_password_" WITH BACKUP USING '_replace_this_text_with_db_service_name_backup_';
  2. Log into the PDB as the sysdba account for the PDB (defined by the "_replace_this_text_with_admin_name_" parameter in the above commands) and adjust the PDB tablespace by running the following command:

    create tablespace USERS datafile '+DATA' size 1024m reuse autoextend on next 64m;

    Note:

    In the command, replace DATA with the proper disk group name from v$asm_diskgroup.

Service Catalog and Design uses the USERS tablespace by default. You can choose to name the tablespace as per your requirements. If securing the data is a requirement, the recommended approach is to use transparent data encryption (TDE) to encrypt the tablespaces used to store data.

Provisioning an ADB

To use Autonomous Database (ADB) for the Service Catalog and Design database, create an Autonomous Database with the Autonomous Transaction Processing workload type that uses database version 19c. After you create the ADB, download the database wallet.

To create ADB and download the Database wallet:

  1. Create an Autonomous Database. See "Provision Autonomous Database" for instructions about creating an Autonomous Database.

    Note:

    While creating the ADB, choose a workload type as Transaction Processing.
  2. Download the database wallet.

    To download the database wallet:
    1. In the Oracle Cloud Infrastructure console, go to the ADB page. Click Database Connections.

      From the Download client credentials (Wallet) section, download an Instance Wallet.

    2. In the Connection Strings section, note the TNS Name that ends in _tp. For example, jkddddd9sfeq1ug4_tp. This is needed during the Solution Designer provisioning.

    3. Select the Wallet Type as Instance Wallet.

    4. Click Download wallet.

      Choose a Wallet password and download the wallet. Remember the password you chose.

      Note:

      This downloads a zip file which is required while setting up the Database connection details in Service Catalog and Design.
    5. Save and unpack the zip file on the system where you will run the cloud native toolkit (CNTK).
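The unpack step can be sketched as follows; the wallet file name and target directory are examples only:

```shell
$ mkdir -p $HOME/scd-wallet
$ unzip Wallet_mydb.zip -d $HOME/scd-wallet
$ grep "_tp" $HOME/scd-wallet/tnsnames.ora   # confirm the *_tp TNS alias noted earlier
```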

About Container Image Management

A Solution Designer cloud native deployment generates container images for the microservices and its database installer. Additionally, images are downloaded for the ingress (for example, OIDC Relying Party).

Oracle highly recommends that you create a private container repository and ensure that all nodes have access to it. Images are saved in this repository, and all nodes can then pull them from it. This may require networking changes (such as routes and proxies) and authentication for logging in to the repository. Oracle recommends that you choose a repository that provides centralized storage and management of not just container images, but also other artifacts.

Failing to ensure that all nodes have access to a centralized repository means that images have to be synchronized to the hosts manually or through custom mechanisms (for example, scripts), which is error-prone as worker nodes are commissioned, decommissioned, or rebooted. When an image is not available on a particular worker node, the pods using that image are either not scheduled on that node (wasting resources) or fail on that node. If image names and tags are kept constant (such as myapp:latest), a pod may pick up a pre-existing image of the same name and tag, leading to unexpected and hard-to-debug behaviors.
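As an illustration, making a locally built image available through a private registry involves tagging and pushing it; the registry host, repository path, image name, and tag below are placeholders:

```shell
$ docker tag scd-microservice:1.0 registry.example.com/scd/scd-microservice:1.0
$ docker push registry.example.com/scd/scd-microservice:1.0
```

Worker nodes then pull the image from the registry rather than relying on local copies.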

Installing Helm

Solution Designer cloud native requires Helm, which delivers reliability, productivity, consistency, and ease of use.

In the cloud native environment, using Helm enables you to achieve the following:

  • You can apply custom domain configuration by using a single and consistent mechanism, which leads to an increase in productivity.
  • You can change the Solution Designer domain configuration with Helm using the following features:
    • Helm Lint allows pre-validation of syntax issues before changes are applied.
    • Multiple changes can be pushed to the running instance with a single upgrade command.
    • Configuration changes may map to updates across multiple Kubernetes resources (such as deployments, config maps, and so on). With Helm, you merely update the Helm release, and it is Helm's responsibility to determine which Kubernetes resources are affected.
  • Including configuration in Helm charts allows the content to be managed as code, through source control, which is a fundamental principle of modern DevOps practices.

To co-exist with older Helm versions in production environments, Solution Designer requires Helm 3.7 or later, saved as helm in the PATH.

Note:

The Helm version mentioned in the commands is used as an example only. See Service Catalog and Design Compatibility Matrix for the recommended versions.

The following text shows sample commands for installing and validating Helm:

$ cd some-tmp-dir
$ wget https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz
$ tar -zxvf helm-v3.7.0-linux-amd64.tar.gz
# Move the helm binary from the unpacked directory to its desired destination. This step requires root privileges.
$ sudo mv linux-amd64/helm /usr/local/bin/helm

# verify Helm version
$ helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}

Helm leverages kubeconfig for users running the helm command to access the Kubernetes cluster. By default, this is $HOME/.kube/config. Helm inherits the permissions set up for this access into the cluster. You must ensure that sufficient cluster permissions are granted to users running Helm.
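A minimal sketch for verifying this prerequisite (paths follow the defaults described above; adjust for your environment):

```shell
#!/bin/bash
# Helm reads cluster connection details from kubeconfig: $KUBECONFIG if set,
# otherwise $HOME/.kube/config.
KUBECONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
if [ -r "$KUBECONFIG_PATH" ]; then
  echo "kubeconfig found: $KUBECONFIG_PATH"
else
  echo "kubeconfig missing or unreadable: $KUBECONFIG_PATH"
fi
```

If the file is missing or unreadable, obtain the kubeconfig for your cluster before running any helm command.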

About Load Balancing

Each Solution Designer cloud native instance runs within a Kubernetes cluster. To access application endpoints, you must enable HTTP/S connectivity to the cluster through an appropriate mechanism. This mechanism must be able to route traffic to the appropriate Solution Designer cloud native instance in the Kubernetes cluster.

For Solution Designer cloud native, an ingress endpoint is required to expose appropriate services from the Solution Designer cluster and to direct traffic appropriately to the cluster members. An external load balancer is an optional add-on.

An external load balancer serves to provide highly reliable single-point access into the services exposed by the Kubernetes cluster. In this case, these are the NodePort services exposed by the Relying Party on behalf of the Solution Designer cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base and insulates users from changes (in terms of nodes appearing or being decommissioned) to the Kubernetes cluster. It also serves to enforce access policies. In the case of HTTPS, the load balancing mechanism must enable TLS and handle it appropriately.

Using Domain Name System (DNS)

A Kubernetes cluster can have many routable entrypoints. Common choices are:

  • External load balancer (IP and port)
  • Ingress service (master node IPs and ingress port)
  • Ingress service (worker node IPs and ingress port)

You must identify the proper entrypoint for your Kubernetes cluster.

Solution Designer cloud native requires hostnames to be mapped to routable entrypoints into the Kubernetes cluster. Regardless of the actual entrypoints (external load balancer, Kubernetes master node, or worker nodes), users who need to communicate with the Solution Designer cloud native instances require name resolution. Each Solution Designer cloud native instance must have a DNS entry and an SSL certificate to secure it.
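For example, to confirm that a hostname (hypothetical here) resolves to the intended entrypoint and serves the expected certificate:

```shell
$ nslookup solution-designer.example.com
$ curl -v https://solution-designer.example.com/ 2>&1 | grep -i "subject:"
```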

Configuring Kubernetes Persistent Volumes

Typically, runtime artifacts in Solution Designer cloud native are created within the respective pod filesystems. As a result, they are lost when the pod is deleted.

While this impermanence may be acceptable for highly transient environments, it is typically desirable to have access to these artifacts outside of the lifecycle of the Solution Designer cloud native instance. It is also highly recommended to deploy a tool chain for logs to provide a centralized view with a dashboard. To allow for artifacts to survive independent of the pod, Solution Designer cloud native allows for them to be maintained on Kubernetes Persistent Volumes.

Solution Designer cloud native does not dictate the technology that supports Persistent Volumes, but provides samples for NFS-based persistence. Additionally, for Solution Designer cloud native on an Oracle OKE cloud, you can use persistence based on File Storage Service (FSS). It is recommended to have at least 25 GB of free space.

About NFS-based Persistence

For use with Solution Designer cloud native, one or more NFS servers must be designated.

In general, ensure that the instances have dedicated NFS support, so that they do not compete for disk space or network IOPS with others.

The exported filesystems must have enough capacity to support the intended workload. Given the dynamic nature of the Solution Designer cloud native instance, it is prudent to put in place a set of operational mechanisms to:

  • Monitor disk usage and warn when the usage crosses a threshold
  • Clean out the artifacts that are no longer needed
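The monitoring mechanism above can be as simple as a cron-driven script. The following is a sketch only; the threshold value and the mount point to check are examples:

```shell
#!/bin/bash
# Warn when the filesystem containing a given path crosses a usage threshold.
# Intended to run from cron on a host that mounts the NFS export.
THRESHOLD=80

usage_pct() {
  # Report the use% of the filesystem containing $1, without the % sign.
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

check_mount() {
  local pct
  pct=$(usage_pct "$1")
  if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "WARNING: $1 is at ${pct}% capacity"
  else
    echo "OK: $1 is at ${pct}% capacity"
  fi
}

check_mount /
```

Cleaning out stale artifacts can be handled similarly, for example with a find -mtime based sweep agreed with your operations team.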

About Authentication

You must configure an identity provider to access the Solution Designer application. For authenticating users, an Identity and Access Management (IAM) system such as Keycloak can be used. These users can perform the Solution Designer application administrative tasks. Solution Designer uses OpenID Connect (OIDC) as the identity provider protocol.

When Solution Designer cloud instances use IAM, ensure that you create separate users and groups for each environment (or class of environments) in the external authenticator service. The specifications of the users and groups depend on the authenticator.

The Solution Designer cloud native toolkit provides a sample configuration that uses Keycloak. Solution Designer uses OpenID Connect for authentication and authorization. The Solution Designer client, which provides access to Solution Designer functions, is an OIDC client application. Each OIDC client application must have the following properties:
  • Client type: OpenID Connect
  • Client authentication: enabled
  • Authorization: enabled
  • Standard flow: enabled
  • Direct access grants: enabled
  • OAuth 2.0 Device Authorization Grant: enabled

Note:

The names or the properties may differ depending on the IAM used.

Scopes

The following are the Scopes to define for Solution Designer:

Table 2-1 Scopes for Solution Designer

Scope Client Application Description
/lcm The Solution Designer Client application Used for user access and any external API access such as TMF interface
/lcmOperation The Solution Designer Client application Used for internal operations between microservices

Authentication in REST APIs:

To call the REST APIs, you must create an access token and pass it in each request.

Audience

Each application requires an audience for the scopes used.

Roles

Solution Designer provides various roles for access to the user interface. The roles must be provided in the access tokens for the user and must appear in the JSON Web Token (JWT) under the Token Claim Name groups. A user must be assigned the appropriate roles. In addition to the roles assigned to a user, the user token must also include the appropriate scope.

User interface access is controlled using Role-Based Access Control (RBAC) and implemented using OAuth roles from the OIDC provider. Users are assigned appropriate roles based on their needs. The different types of roles are independent of each other. For example, a user could have access to the Initiative entity in Solution Designer, but not have access to the Initiatives application to interact with that entity.

Landing Page Roles

On the landing page, you see only those applications that you have access to, based on the roles assigned to you.

Table 2-2 Landing Page Roles

Role Description
Service Specialist The Service Specialist has access to the following applications:
  • PSR models
  • Data Elements
  • Service Specifications
  • Resource Specifications
  • Domains
Service Catalog Admin The Service Catalog Admin can work with all the functionality available in the Solution Designer application. The Service Catalog Admin has access to the following applications:
  • PSR models
  • Data Elements
  • Service Specifications
  • Resource Specifications
  • Domains
  • Initiatives
  • Workspaces

Initiative Lifecycle Roles

Initiative lifecycle management is controlled using the following roles. Users can be assigned various roles depending on the business requirements.

Table 2-3 Initiative Lifecycle Roles

Role Description
Initiative Approve Functional Testing A state transition for an initiative to progress an initiative through its life cycle.
Initiative Approve Rollout A state transition for an initiative to progress an initiative through its life cycle.
Initiative Complete Definition A state transition for an initiative to progress an initiative through its life cycle.
Initiative Complete Testing A state transition for an initiative to progress an initiative through its life cycle.
Initiative Content Provider A state transition for an initiative to progress an initiative through its life cycle.
Initiative Discard A state transition for an initiative to progress an initiative through its life cycle.
Initiative Launch A state transition for an initiative to progress an initiative through its life cycle.
Initiative Reopen A state transition for an initiative to progress an initiative through its life cycle.
Initiative Start Acceptance Testing A state transition for an initiative to progress an initiative through its life cycle.
Initiative Complete Advanced Configuration A state transition for an initiative to progress an initiative through its life cycle. The user assigned to this role can perform simulated publish and also can complete the Advanced Configuration state.

Note:

You must have the Service Catalog Admin role to be able to use the Initiative Lifecycle roles. Having access to the lifecycle roles does not imply that you have access to the Initiatives application.

Domain Filter Roles

Domain filter roles can be assigned to users based on the domains that you create in Solution Designer. After you create a service or technical domain in Solution Designer, you must assign appropriate domain filter roles to the users. Users need these roles to access all the entities in the Solution Designer user interface that are associated with the specified domain. To define a domain filter role, prefix the role with sDOMAIN: for service domains and tDOMAIN: for technology domains. These domain roles can be defined as:

sDOMAIN:Service_Domain_Id
tDOMAIN:Technology_Domain_Id
DOMAIN:all

Note:

You must use the ID of the domain.

Table 2-4 Domain Filter Roles

Role Description
sDOMAIN This is a service domain role. This role gives access to all the entities such as PSR models, Service Specifications and Resource Specifications that the specified service domain is associated with. For example, sDOMAIN:BroadbandInternet gives access to all the entities associated with the BroadbandInternet domain.
tDOMAIN This is a technical domain role. This role gives access to all the entities such as PSR models, Service Specifications, and Resource Specifications that the specified technology domain is associated with. For example, tDOMAIN:Wireless5G gives access to all the entities associated with the Wireless5G technology domain.
DOMAIN:all This role acts as a superuser role and gives access to all the entities in all the domains.

Note:

Users must have at least one of the domain filter roles assigned to them to model a service using entities such as PSR models, Service Specifications, and Resource Specifications in Solution Designer.

Domain Management Role

Domain management role provides access to the Domains application in Solution Designer. When you have the domain administrator role, you can create, update, and delete service and technology domains in the Domains application.

Table 2-5 Domain Management Role

Role Description
DOMAIN:admin This is a domain administrator role. This user can manage service and technology domains in Solution Designer. They can create, update, and delete domains.

About Relying Party

A relying party is an application that relies upon the user's credentials, such as user ID and password, to grant access to an application. The Relying Party outsources its user authentication function to an identity provider. The Solution Designer CNTK provides a sample Apache Relying Party.

The Relying Party must handle the following authorization flows:

Table 2-6 Relying Party Authorization Flows and URL Prefixes

User Interface flow: Accessing the Solution Designer application using a web browser. Protected by OpenID Connect. URL prefixes:
  • /scd/
  • /apps/
API flow: Accessing the TMF REST APIs used for integration. Protected by OAuth2. URL prefixes:
  • /scd/tmf-api/

About Object Store

For use with Solution Designer cloud native, an S3 object store provider must be designated. Examples of S3 object store providers are MinIO, Oracle OCI Object Storage, and so on. Only one S3 provider is supported. Set up a bucket to store build dependencies for the UIM participant. Create and record an access key and secret.
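For instance, with any S3-compatible provider, the bucket can be created and the credentials verified using the AWS CLI pointed at the provider's endpoint; the endpoint URL and bucket name below are placeholders:

```shell
$ aws --endpoint-url https://objectstore.example.com s3 mb s3://scd-build-dependencies
$ aws --endpoint-url https://objectstore.example.com s3 ls
```

Record the access key and secret used here for later configuration.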

Given the dynamic nature of the Solution Designer cloud native instance, you must have a set of operational mechanisms to:

  • Monitor disk usage and warn when the usage crosses a threshold.
  • Clean out the artifacts that are no longer needed.

Management of Secrets

Solution Designer cloud native leverages Kubernetes Secrets to store sensitive information securely. This sensitive information is, at a minimum, the database credentials. Additional credentials may be stored to authenticate with the external authenticator system. Your cartridges may need to communicate with other systems, such as Oracle Communications Unified Inventory Management (UIM). The credentials for such systems too are managed as Kubernetes Secrets.

These secrets need to be secured over their lifecycle by the Kubernetes cluster administration.

Solution Designer cloud native scripts assume that a set of prerequisite secrets exists when they are invoked. As such, creation of the secrets is a prerequisite step in the pipeline. The Solution Designer cloud native toolkit provides a script to create some of the common secrets it needs, but this script is interactive and therefore not suitable for Continuous Delivery (CD) automation pipelines. The script serves to provide a basic mechanism to add secrets and illustrates the names and structure of the secrets that Solution Designer cloud native requires. You can create the secrets manually or by using the script for each instance.
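For CD pipelines, equivalent secrets can be created non-interactively with kubectl. The namespace, secret name, and keys below are illustrative only; use the names that the toolkit script establishes for your instance:

```shell
$ kubectl -n sd-instance create secret generic database-credentials \
    --from-literal=username=scd_admin \
    --from-literal=password="$DB_PASSWORD"
$ kubectl -n sd-instance get secrets
```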

Using Kubernetes Monitoring Toolchain

A multi-node Kubernetes cluster with multiple users requires a capable set of tools to monitor and manage the cluster. There are tools that provide data, rich visualizations, and other capabilities such as alerts. Solution Designer cloud native does not require any particular system to be used, but recommends using such monitoring, visualization, and alerting capabilities.

For Solution Designer, the key aspects of monitoring are:

  • Worker capacity in CPU and memory. Monitoring the free capacity leads to predictable Solution Designer instance creation and scale-up.
  • Worker node disk pressure
  • Worker node network pressure
  • Health of the core Kubernetes services

The namespaces and pods that Solution Designer cloud native uses provide a cross-instance view of Solution Designer cloud native.
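For example, with the Kubernetes Metrics Server installed, worker capacity and node conditions can be spot-checked from the command line:

```shell
$ kubectl top nodes
$ kubectl describe nodes | grep -E "MemoryPressure|DiskPressure|NetworkUnavailable"
```

A full monitoring toolchain (such as Prometheus with Grafana dashboards) builds visualization and alerting on top of the same signals.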

Planning Your Container Engine for Kubernetes (OKE) Cloud Environment

This section provides information about planning your cloud environment if you want to use Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) for Solution Designer cloud native. Some of the components, services, and capabilities that are required and recommended for a cloud native environment are applicable to the Oracle OKE cloud environment as well.

  • Kubernetes and Container Images: You can choose from the version options available in OKE as long as the selected version conforms to the range described in the section about planning cloud native environment.
  • Container Image Management: Solution Designer cloud native recommends using Oracle Cloud Infrastructure Registry with OKE. Any other repository that you use must be able to serve images to the OKE environment in a quick and reliable manner.
  • Oracle Multitenant Database: It is strongly recommended to run Oracle DB outside of OKE, but within the same Oracle Cloud Infrastructure tenancy and the region as an Oracle DB service (BareMetal or VM). The database version should be 19c.
  • Helm: Install Helm on a Linux host that has access to kubectl of the cluster.
  • Persistent Volumes: Use NFS-based persistence. Solution Designer cloud native recommends the use of Oracle Cloud Infrastructure File Storage service in the OKE context.
  • Authentication and Secrets Management: These aspects are common with the cloud native environment. Choose your mechanisms to deliver these capabilities and implement them in your OKE instance.
  • Monitoring Toolchains: While the Oracle Cloud Infrastructure Console provides a view of the resources in the OKE cluster, it also enables you to use the Kubernetes Dashboard. Any additional monitoring capability must be built up.
  • CI and CD Pipelines: The considerations and actions described for CI and CD pipelines in the cloud native environment apply to the OKE environment as well.

Compute Disk Space Requirements

Given the size of the Solution Designer cloud native container images, the size of the Solution Designer containers, and the volume of the logs generated, it is recommended that the Kubernetes worker nodes have at least 40 GB of free space for the container storage and the image storage.

Work with your Cloud Infrastructure administrator to ensure worker nodes have enough disk space. Common options are to use Compute shapes with larger boot volumes or to mount an Oracle Cloud Infrastructure Block Volume to container storage.

Connectivity Requirements

Solution Designer cloud native assumes the connectivity between the OKE cluster and the Oracle CDBs is a LAN-equivalent in reliability, performance and throughput. This can be achieved by creating the Oracle CDBs within the same tenancy as the OKE cluster, and in the same Oracle Cloud Infrastructure region.

Using Load Balancer as a Service (LBaaS)

For load balancing, you have the option of using the services available in OKE. The infrastructure for OKE is provided by Oracle's IaaS offering, Oracle Cloud Infrastructure. In OKE, the master node IP address is not exposed to the tenants. The IP addresses of the worker nodes are also not guaranteed to be static. This makes DNS mapping difficult to achieve. Additionally, it is also required to balance the load between the worker nodes. In order to fulfill these requirements, you can use Load Balancer as a Service (LBaaS) of Oracle Cloud Infrastructure.

The relying party must be configured to use a service load balancer.

For additional details, see the following:

About Using Oracle Cloud Infrastructure Domain Name System (DNS) Zones

While a custom DNS service can provide the addressing needs of Solution Designer cloud native even when Solution Designer is running in OKE, you can evaluate the option of Oracle Cloud Infrastructure Domain Name System (DNS) zones capability. Configuration of DNS zones (and integration with on-premise DNS systems) is not within the scope of Solution Designer cloud native.

Using Persistent Volumes and File Storage Service (FSS)

In the OKE cluster, Solution Designer cloud native can leverage the high performance, high capacity, high reliability File Storage Service (FSS) as the backing for the persistent volumes of Solution Designer cloud native. It is recommended to have at least 25 GB of free space. There are two flavors of FSS usage in this context:
  • Allocating FSS by setting up NFS mount target
  • Native FSS

To use FSS through an NFS mount target, see instructions for allocating FSS and setting up a Mount Target in "Creating File Systems" in the Oracle Cloud Infrastructure documentation. Note down the Mount Target IP address and the storage path and use these in the Solution Designer cloud native instance specification as the NFS host and path. This approach is simple to set up and leverages the NFS storage provisioner that is typically available in all Kubernetes installations. However, the data flows through the mount target, which models an NFS server.
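A persistent volume backed by the mount target can then be declared as in this sketch; the server IP, export path, capacity, and names are placeholders to be taken from your own FSS setup:

```shell
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: scd-nfs-pv
spec:
  capacity:
    storage: 25Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10      # Mount Target IP address
    path: /scd-export      # storage path on the mount target
  persistentVolumeReclaimPolicy: Retain
EOF
```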

FSS can also be used natively, without requiring the NFS protocol. This can be achieved by leveraging the FSS storage provisioner supplied by OKE. A broad outline of how to do this is available in the blog post "Using File Storage Service with Container Engine for Kubernetes" on the Oracle Cloud Infrastructure blog.

Leveraging Oracle Cloud Infrastructure Services

For your OKE environment, you can leverage existing services and capabilities that are available with Oracle Cloud Infrastructure. The following table lists the Oracle Cloud Infrastructure services that you can leverage for your OKE cloud environment.

Table 2-7 Oracle Cloud Infrastructure Services for OKE Cloud Environment

Type of Service Service Indicates Mandatory / Recommended / Optional
Developer Service Container Clusters Mandatory
Developer Service Registry Recommended
Core Infrastructure Compute Instances Mandatory
Core Infrastructure File Storage Recommended
Core Infrastructure Block Volumes Optional
Core Infrastructure Networking Mandatory
Core Infrastructure Load Balancers Recommended
Core Infrastructure DNS Zones Optional
Database BareMetal and VM Recommended