3 Procedure for Installing OAA and OARM
Topics
- About the Management Container
- Prerequisite Configurations for Installing OAA and OARM
- Downloading Installation Files and Preparing the Management Container
- Preparing the Properties file for OAA and OARM Installation
- Running the Management Container
- Deploying OAA and OARM
- Printing Deployment Details
- Post-Installation Steps for NodePort
- Cleaning Up Installation
- Troubleshooting OAA and OARM Installation
3.1 About the Management Container
The Management Container is a container that includes all the required scripts and tools needed to install OAA and OARM on a new or existing Kubernetes cluster.
This container runs as a pod in the Kubernetes cluster. It is not part of the OAA and OARM deployment itself, but facilitates deploying OAA and OARM to the Kubernetes cluster.
The Management Container image is based on oraclelinux and includes the following software, along with the standard linux utilities such as zip, iputils, net-tools, and vim:
- kubectl
- helm
- sqlplus: instantclient_19_10
- openssl
For more information about the Management Container, see the following topics:
3.1.1 Components of the Management Container
This section provides an overview of important files and folders in the Management Container pod.
Table 3-1 Management Container Files and Folder Reference
Files and Folders | Description |
---|---|
OAA.sh | This script is used to install OAA and OARM. The installOAA.properties file must be given as an argument to the script for installing OAA, OAA-OARM, and OARM. For more information, see Preparing the Properties file for OAA and OARM Installation. |
installsettings | This folder contains the oaaoverride.yaml that can be customized to set the replicaCount for some of the services in OAA and OARM. To enable this you must set the common.deployment.overridefile property in installOAA.properties. |
helmcharts | This folder contains helm charts and values.yaml for all OAA and OARM services. |
libs | This folder contains the jar file required for customizing email and SMS providers and the OAM Authentication plugin, as well as the jars required for file based vault deployment. |
logs | This folder maps to the NFS volume <NFS_LOGS_PATH> and stores logs and status of the OAA and OARM installation. |
oaa_cli | This folder contains files that can be customized and used to install geo-location data for OARM. For more information, see Loading Geo-Location Data. |
scripts/creds | This folder maps to the NFS volume <NFS_CREDS_PATH> and contains credential files used during installation, such as k8sconfig, helmconfig, cert.p12, and trust.p12. |
scripts/settings | This folder maps to the NFS volume <NFS_CONFIG_PATH> and stores the installOAA.properties and oaaoverride.yaml configuration files required for installation. |
service/store/oaa/ | This folder maps to the NFS volume <NFS_VAULT_PATH> that is shared between the Management Container and the OAA and OARM deployment. It stores the file based vault (if not using OCI based vault). |
3.1.2 Preset Environment Variables in Management Container
The Management Container pod is configured with a predefined set of environment variables.
Table 3-2 Preset Environment Variables
Environment Variable | Description |
---|---|
HELM_CONFIG | This is set to /u01/oracle/scripts/creds/helmconfig. |
KUBECONFIG | This is set to /u01/oracle/scripts/creds/k8sconfig. |
SCRIPT_PATH | This is set to /u01/oracle/scripts, which contains the installation scripts. |
CONFIG_DIR | This is the NFS volume <NFS_CONFIG_PATH> used to store the configuration externally. It is mounted to the path /u01/oracle/scripts/settings. |
CREDS_DIR | This is the NFS volume <NFS_CREDS_PATH> used to store credentials, such as the helm config, kube config, and login private keys. It is mounted to the path /u01/oracle/scripts/creds. |
LOGS_DIR | This is the NFS volume <NFS_LOGS_PATH> used to store installation logs and status. It is mounted to the path /u01/oracle/logs. |
HELM_CHARTS_PATH | This is the path where all the helm charts related to the installation exist. |
LD_LIBRARY_PATH | This is set to the instantclient folder. The variable is required to run sqlplus and DB-related commands from the instantclient present in the container. |
LIBS_DIR | This is set to /u01/oracle/libs. It contains the jar file required for customizing email and SMS providers and the OAM Authentication plugin. It also contains jars that are required for file based vault deployment. |
JARPATH | This contains the jars required for file based vault to run properly. |
3.1.3 Mounted Volumes in the Management Container
This section provides details about the mounted volumes in the Management Container pod.
Table 3-3 Mounted Volumes in Management Container
Mount Folder | Description | Permissions to be Set |
---|---|---|
/u01/oracle/logs | Path not configurable. This is used to store installation logs and status. This maps to the NFS volume <NFS_LOGS_PATH>. | Read-Write-Execute. The NFS volume <NFS_LOGS_PATH> must have read/write/execute permissions for all. |
/u01/oracle/scripts/settings | Path not configurable. This is used to store the customized configuration file for installing OAA and OARM. This maps to the NFS volume <NFS_CONFIG_PATH>. | Read-Write-Execute. The NFS volume <NFS_CONFIG_PATH> must have read/write/execute permissions for all. |
/u01/oracle/scripts/creds | Path not configurable. This is used to store credential files such as k8sconfig, helmconfig, trust.p12, and cert.p12. This maps to the NFS volume <NFS_CREDS_PATH>. | Read-Write-Execute. The NFS volume <NFS_CREDS_PATH> must have read/write/execute permissions for all. |
/u01/oracle/service/store/oaa | Path is configurable. This is used to store the vault artifacts for file-based vault. This maps to the NFS volume <NFS_VAULT_PATH>. | Read-Write-Execute. The NFS volume <NFS_VAULT_PATH> must have read/write/execute permissions for all. |
3.2 Prerequisite Configurations for Installing OAA and OARM
Before progressing to the installation steps, ensure you have performed the following:
- Configuring a Kubernetes Cluster
- Configuring NFS Volumes
- Installing an Oracle Database
- Installing and Configuring OAM OAuth
- Setting Up Users and Groups in the OAM Identity Store
- Installing a Container Image Registry (CIR)
- Configuring CoreDNS for External Hostname Resolution
- Installation Host Requirements
- Generating Server Certificates and Trusted Certificates
- Create a Kubernetes Namespace and Secret
3.2.1 Configuring a Kubernetes Cluster
OAA and OARM are designed to be deployed on a Cloud Native Environment. OAA and OARM are composed of multiple components that run as microservices on a Kubernetes cluster, managed by Helm charts. Specifically, each component (microservice) runs as a Kubernetes Pod, which is deployed to a Kubernetes Node in the cluster.
You must install a Kubernetes cluster that meets the following requirements:
- The Kubernetes cluster must have a minimum of three nodes.
- The nodes must meet the following minimum system requirements:
  - Memory: 64 GB RAM
  - Disk: 150 GB
  - CPU: 8 x CPU with virtualization support (for example, Intel VT)
- An installation of Helm is required on the Kubernetes cluster. Helm is used to create and deploy the necessary resources.
- A supported container engine must be installed and running on the Kubernetes cluster.
- The Kubernetes cluster and container engine must meet the minimum version requirements outlined in Document ID 2723908.1 on My Oracle Support.
- The nodes in the Kubernetes cluster must have access to a shared volume such as a Network File System (NFS) mount. These NFS mounts are used by the Management Container pod during installation, during runtime for the File Based Vault (if not using an OCI based vault), and for other post-installation tasks such as loading geo-location data.
3.2.2 Configuring NFS Volumes
All nodes in the Kubernetes cluster require access to shared volumes on an NFS server. During the OAA/OARM installation, the Management Container pod stores configuration information, credentials, and logs in the NFS volumes. Once the installation is complete, the pods require access to a volume that contains the File Based Vault (if not using an OCI based vault) for storing and accessing runtime credentials.
The following NFS volumes must be created prior to the installation. In all cases the NFS export path must have read/write/execute permission for all. Make sure the NFS volumes are accessible to all nodes in the cluster.
Volume | Description | Path |
---|---|---|
Configuration | An NFS volume that stores the OAA configuration, such as installOAA.properties. | <NFS_CONFIG_PATH> |
Credentials | An NFS volume that stores OAA credentials, such as the Kubernetes and Helm configuration, SSH key, or PKCS12 files. | <NFS_CREDS_PATH> |
Logs | An NFS volume that stores OAA installation logs and status. | <NFS_LOGS_PATH> |
File based vault | An NFS volume that stores OAA runtime credentials. | <NFS_VAULT_PATH> |
Note:
The NFS server IP addresses and paths will be set in the installOAA.properties. See Preparing the Properties file for OAA and OARM Installation.
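The exact export configuration depends on your NFS server. As a minimal sketch, assuming the four volumes are exported from /export on a Linux NFS server to a 10.0.0.0/24 cluster network (the paths and client range are illustrative assumptions only), the /etc/exports entries and permissions could look like this:

# /etc/exports on the NFS server (example paths and client range only)
/export/oaaconfigpv 10.0.0.0/24(rw,sync,no_root_squash)
/export/oaacredspv  10.0.0.0/24(rw,sync,no_root_squash)
/export/oaalogspv   10.0.0.0/24(rw,sync,no_root_squash)
/export/oaavaultpv  10.0.0.0/24(rw,sync,no_root_squash)

# give the export paths read/write/execute permission for all, as required above
chmod -R 777 /export/oaaconfigpv /export/oaacredspv /export/oaalogspv /export/oaavaultpv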
3.2.3 Installing an Oracle Database
OAA and OARM use a database schema to store information. You must install and configure an Oracle Database either on OCI or on-premises. The database must support the partitioning feature.
OAA and OARM support Oracle Database 12c (12.2.0.1+), 18c, and 19c. For more detailed information on supported database versions, see http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html.
The Kubernetes cluster where OAA/OARM is to be installed must have network connectivity to the database.
Note:
If using a non-ASM database, you must make sure that the database has the parameter DB_CREATE_FILE_DEST set. For example:
SQL> connect SYS/<password> as SYSDBA;
Connected.
SQL> show parameter DB_CREATE_FILE_DEST;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /u01/app/oracle/oradata
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' scope=both;
where /u01/app/oracle/oradata is the path where your datafiles reside.
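Because the database must support partitioning, it can be worth confirming that the Partitioning option is enabled before continuing. This is an optional sanity check, not a required installation step:

SQL> connect SYS/<password> as SYSDBA;
SQL> SELECT value FROM v$option WHERE parameter = 'Partitioning';

VALUE
----------------------------------------------------------------
TRUE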
3.2.4 Installing and Configuring OAM OAuth
OAA and OARM need access to an Oracle Access Management (OAM) installation with OAuth enabled. The Kubernetes cluster where OAA/OARM is to be installed, must have network connectivity to the OAM installation.
Note:
You can skip the OAuth configuration in this section if the UI components are not required or need to be disabled during the installation. If skipping OAuth configuration you must set oauth.enabled=false along with associated properties in installOAA.properties. For more details, see OAM OAuth Configuration.
- Install Oracle Access Management. For details, see Installing Oracle Access Management.
- Register WebGates with OAM. For details, see Registering and Managing OAM Agents.
- Enable OAuth on the Oracle Access Management Console:
- Log in to the OAM Console
https://<OAMAdminHost>:<OAMAdminPort>/oamconsole/
- From the Welcome page, click Configuration and then click Available Services
- Click Enable Service beside OAuth and OpenIDConnect Service (or confirm that the green status check mark displays).
- Open the mod_wl_ohs.conf file located at
<OHS_HOME>/user_projects/domains/base_domain/config/fmwconfig/components/OHS/<ohs_instance_name>
and add the following:

<Location /oauth2>
  SetHandler weblogic-handler
  WebLogicHost <OAM_Managed_Server_Host>
  WebLogicPort <OAM_Managed_Server_Port>
</Location>
<Location /oam>
  SetHandler weblogic-handler
  WebLogicHost <OAM_Managed_Server_Host>
  WebLogicPort <OAM_Managed_Server_Port>
</Location>
<Location /.well-known/openid-configuration>
  SetHandler weblogic-handler
  WebLogicHost <OAM_Managed_Server_Host>
  WebLogicPort <OAM_Managed_Server_Port>
  PathTrim /.well-known
  PathPrepend /oauth2/rest
</Location>
<Location /.well-known/oidc-configuration>
  SetHandler weblogic-handler
  WebLogicHost <OAM_Managed_Server_Host>
  WebLogicPort <OAM_Managed_Server_Port>
  PathTrim /.well-known
  PathPrepend /oauth2/rest
</Location>
<Location /CustomConsent>
  SetHandler weblogic-handler
  WebLogicHost <OAM_Managed_Server_Host>
  WebLogicPort <OAM_Managed_Server_Port>
</Location>
Note:
<OAM_Managed_Server_Host> and <OAM_Managed_Server_Port> are the host name and port of the OAM managed server.
- Open the httpd.conf file located at
<OHS_HOME>/user_projects/domains/base_domain/config/fmwconfig/components/OHS/<ohs_instance_name>/
and add the following:
Note:
Specify a value for your OAuth Identity Domain in <DomainName>. The <DomainName> will be used later in the parameter oauth.domainname in the installOAA.properties.

<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteRule ^/oauth2/rest/authorize? /oauth2/rest/authorize?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/token? /oauth2/rest/token?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/token/info? /oauth2/rest/token/info?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/authz? /oauth2/rest/authz?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/userinfo? /oauth2/rest/userinfo?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/security? /oauth2/rest/security?domain=<DomainName> [PT,QSA,L]
  RewriteRule ^/oauth2/rest/userlogout? /oauth2/rest/userlogout?domain=<DomainName> [PT,QSA,L]
</IfModule>
<IfModule mod_headers.c>
  #Add Identity domain header always for OpenID requests
  RequestHeader set X-OAUTH-IDENTITY-DOMAIN-NAME "<DomainName>"
</IfModule>
- For the OHS WebGate defined in the previous steps, perform the following in the
OAM console:
- Create each of the following resources and set the Protection
Level as
Excluded
.- /oauth2/rest/**
- /oam/**
- /.well-known/openid-configuration
- /iam/access/binding/api/v10/oap/**
- /oam/services/rest/**
- /iam/admin/config/api/v1/config/**
- /oaa-admin/**
- /admin-ui/**
- /oaa/**
- /policy/**
- /oaa-policy/**
- /oaa-email-factor/**
- /oaa-sms-factor/**
- /oaa-totp-factor/**
- /oaa-yotp-factor/**
- /fido/**
- /oaa-kba/**
- /oaa-push-factor/**
- /risk-analyzer/**
- /risk-cc/**
- /consolehelp/**
- /otpfp/**
- Create each of the following resources and set the
Protection Level as
Protected
and set the Authentication Policy and Authorization Policy asProtected Resource Policy
- /oauth2/rest/approval (this is for
POST
operation) - /oam/pages/consent.jsp (this is for
GET
operation)
For more information, see Adding and Managing Policy Resource Definitions
- Configure the OHS as reverse proxy in OAM. To do this:
- Log in to the OAM Console
https://<OAMAdminHost>:<OAMAdminPort>/oamconsole/
- From the Welcome page, click Configuration and in the Settings tile, click View > Access Manager.
- Under Load Balancing specify the OHS Host and OHS Port.
3.2.5 Setting Up Users and Groups in the OAM Identity Store
You must create the following groups in the LDAP directory that is used as the OAM Identity Store:
- OAA-Admin-Role, which is used to authenticate users who have permission to access the OAA Administration Console UI.
- OAA-App-User, which contains users who have permission to access the OAA User Preferences UI.
Users must be added to the OAA-App-User group, otherwise they will not be able to log in to the OAA User Preferences UI through OAM OAuth. Similarly, for the administrator to be able to access the OAA Administration console, they must be a member of the OAA-Admin-Role group.
Note:
A user cannot be a member of both the OAA-Admin-Role and OAA-App-User groups. Therefore, it is recommended that you have a dedicated administrator user name.

Creating Users and Groups
- Create an LDIF file
oaa_admin.ldif
with the following contents:
Note:
The following example is for an OAM enabled directory.

dn: cn=oaaadmin,cn=Users,dc=example,dc=com
changetype: add
objectClass: orclUserV2
objectClass: oblixorgperson
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: oblixPersonPwdPolicy
objectClass: orclAppIDUser
objectClass: orclUser
objectClass: orclIDXPerson
objectClass: top
objectClass: OIMPersonPwdPolicy
givenName: oaaadmin
uid: oaaadmin
orclIsEnabled: ENABLED
sn: oaaadmin
userPassword: <Password>
mail: oamadmin@example.com
orclSAMAccountName: oaaadmin
cn: oaaadmin
obpasswordchangeflag: false
ds-pwp-password-policy-dn: cn=FAPolicy,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=example,dc=com

dn: cn=OAA-Admin-Role,cn=Groups,dc=example,dc=com
changetype: add
objectClass: top
objectClass: groupofuniquenames
uniqueMember: cn=oaaadmin,cn=Users,dc=example,dc=com

dn: cn=OAA-App-User,cn=Groups,dc=example,dc=com
changetype: add
objectClass: top
objectClass: groupofuniquenames
- Load the LDIF file into the directory. The following example assumes you are
using Oracle Unified
Directory:
cd INSTANCE_DIR/OUD/bin
ldapmodify -h <OUD_HOSTNAME> -p 1389 -D "cn=Directory Manager" -w <password> -f oaa_admin.ldif
Adding Existing Users to the OAA User Group
Perform the following steps to add any existing users to the OAA-App-User group created above:
- Run the following commands in the LDAP instance. These commands create an
LDIF file that adds all your existing users to the
OAA-App-User
group:
echo "dn:cn=OAA-App-User,cn=Groups,dc=example,dc=com" > update_group.ldif
echo "changetype: modify" >> update_group.ldif
echo "add: uniqueMember" >> update_group.ldif
ldapsearch -h <OUD_HOSTNAME> -p 1389 -D "cn=Directory Manager" -w <password> -b cn=Users,dc=example,dc=com "cn=*" dn | grep -v oaaadmin | grep -v "dn: cn=Users,dc=example,dc=com" | grep cn | awk ' { print "uniqueMember: "$2 } ' >> update_group.ldif
- Edit the
update_group.ldif
and remove any users you don't want to add to the group. - Load the LDIF file into the
directory:
ldapmodify -h <OUD_HOSTNAME> -p 1389 -D "cn=Directory Manager" -w <password> -f update_group.ldif
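To confirm the groups and their membership were loaded correctly, you can query the directory. The example below again assumes Oracle Unified Directory and the example base DN used above:

ldapsearch -h <OUD_HOSTNAME> -p 1389 -D "cn=Directory Manager" -w <password> \
  -b "cn=Groups,dc=example,dc=com" "(cn=OAA-*)" uniqueMember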
3.2.6 Installing a Container Image Registry (CIR)
During the Management Container installation, OAA and OARM container images are pushed to a Container Image Registry (CIR). When OAA and OARM are deployed, the deployment pulls the images from the same Container Image Registry. You must therefore install a Container Image Registry as a prerequisite. The Container Image Registry must be accessible from all nodes in the Kubernetes cluster where OAA/OARM is to be deployed. The following container images are pushed to the CIR:
- oaa-admin
- oaa-factor-email
- oaa-factor-fido
- oaa-factor-kba
- oaa-factor-push
- oaa-factor-sms
- oaa-factor-totp
- oaa-factor-yotp
- oaa-factor-custom
- oaa-mgmt
- oaa-policy
- oaa-spui
- oaa-svc
- risk-cc
- risk-engine
If you do not have a CIR, you can download Docker Registry from: https://hub.docker.com/_/registry/.
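If you only need a registry for testing, a minimal sketch of running the Docker Registry image with podman is shown below. The port, container name, and the use of an unsecured registry are illustrative assumptions only; a production environment should use a secured, TLS-enabled registry:

podman run -d --name local-registry -p 5000:5000 docker.io/library/registry:2
# images pushed by the installation would then be referenced as <registry-host>:5000/<image>:<tag>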
3.2.7 Configuring CoreDNS for External Hostname Resolution
In order for the Kubernetes cluster to resolve the required hostnames for the installation, you must configure CoreDNS in your cluster. In the CoreDNS configuration, do one of the following:
- Either add the hostname.domain and IP addresses of any Proxy Servers, the Kubernetes nodes, the OAM OAuth server, the Oracle Database, and your Container Image Registry; or
- Add the Domain Name Servers (DNS) that can resolve the hostname.domain and IP addresses of any Proxy Servers, the Kubernetes nodes, the OAM OAuth server, the Oracle Database, and your Container Image Registry.
Adding individual hostnames and IP addresses or DNS to CoreDNS
- Run the following command to edit the coredns configmap:
kubectl edit configmap/coredns -n kube-system
This will take you into an edit session similar to vi.
- If you prefer to add each individual hostname and IP address, add a hosts section to the file including one entry for each of the hosts you wish to define. For example:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
        hosts custom.hosts example.com {
           1.1.1.1 oam.example.com
           1.1.1.2 db.example.com
           1.1.1.3 container-registry.example.com
           1.1.1.4 masternode.example.com
           1.1.1.5 worker1.example.com
           1.1.1.6 worker2.example.com
           fallthrough
        }
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-11-09T14:08:31Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "25242052"
  uid: 21e623cf-e393-425a-81dc-68b1b06542b4

Alternatively, if you prefer to add the Domain Name Server (DNS) then add a section for the DNS:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
    example.com:53 {
        errors
        cache 30
        forward . <DNS_IPADDRESS>
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-11-09T14:08:31Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "25242052"
  uid: 21e623cf-e393-425a-81dc-68b1b06542b4
- Save the file (:wq!).
- Restart CoreDNS:
- Run the following command to restart coredns:
kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns
- Ensure the coredns pods restart without any problems by running the following command:
kubectl get pods -n kube-system
If any errors are shown, use the following command to view the logs, then correct the problem by editing the coredns configmap again:
kubectl logs -n kube-system coredns-<ID>
Validating DNS Resolution
- Run the following command to run an alpine container:
kubectl run -i --tty --rm debug --image=docker.io/library/alpine:latest --restart=Never -- sh
This will take you inside a shell in the container.
- Inside the container you can then run
nslookup
against the Database, OAM OAuth Server, Container Image Registry, and so on. For example:
nslookup oam.example.com
3.2.8 Installation Host Requirements
The Management Container installation can take place from any node that has access to deploy to the Kubernetes cluster. This section lists the specific requirements for the node where the installation of the Management Container will take place.
The installation host must meet the following requirements:
- Linux x86_64.
- A minimum of 2 CPUs and 16 GB RAM.
- At least 40 GB of free space in the root partition "/".
- The node must have access to deploy to the Kubernetes cluster where the Management Container and OAA/OARM will be installed. The kubectl version requirements are the same as per Configuring a Kubernetes Cluster.
- Podman 3.3.0 or later. (If podman is not an option, Docker 19.03 or later can be used).
- Helm 3.5 or later.
- Openssl.
- If your environment requires proxies to access the internet, you must set the relevant proxies in order to connect to the Oracle Container Registry. For example:
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
export HTTPS_PROXY=http://proxy.example.com:80
export HTTP_PROXY=http://proxy.example.com:80
You must also make sure that no_proxy is set and includes the nodes referenced in the output under server in kubectl config view. For example, if kubectl config view shows:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://masternode.example.com:6443
  name: kubernetes
contexts:
etc...

then set the following:

export NO_PROXY=masternode.example.com:$NO_PROXY
export no_proxy=masternode.example.com:$no_proxy
- The node must have access to your Container Image Registry as per Installing a Container Image Registry (CIR).
- In order for the installation to pull supporting images, the Administrator performing the install must have login credentials for Oracle Container Registry. You will be prompted for these credentials during the installation of the Management Container.
- Make sure you can login to Oracle Container Registry from the installation host:
podman login container-registry.oracle.com
Note:
If you are not using podman and are using Docker, then run docker login container-registry.oracle.com.
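A quick way to confirm the host meets the tooling requirements listed above is to check the installed versions. These commands are informational only:

kubectl version --client
helm version
podman --version    # or: docker --version
openssl version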
3.2.9 Generating Server Certificates and Trusted Certificates
OAA and OARM use SSL for communication. You must generate server certificate and trusted certificate keystores (PKCS12) prior to installation.
Using a third party CA for generating certificates
- On the node where the Management container installation will be run from, create a directory and navigate to that folder, for example:
mkdir <workdir>/oaa_ssl
export WORKDIR=<workdir>
cd $WORKDIR/oaa_ssl
- Generate a 4096 bit private key (
oaa.key
) for the server certificate:
openssl genrsa -out oaa.key 4096
- Create a Certificate Signing Request (
oaa.csr
):
openssl req -new -key oaa.key -out oaa.csr
When prompted, enter details to create your Certificate Signing Request (CSR). For example:
You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]:US State or Province Name (full name) []:California Locality Name (eg, city) [Default City]:Redwood City Organization Name (eg, company) [Default Company Ltd]:Example Company Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:oaa.example.com Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:
- Send the CSR (
oaa.csr
) to the third party CA. - Once you receive the certificate from the CA, rename the file to
oaa.pem
and copy it to the$WORKDIR/oaa_ssl
directory.Note:
The certificateoaa.pem
needs to be in PEM format. If not in PEM format convert it to PEM using openssl. For example, to convert from DER format to PEM:openssl x509 -inform der -in oaa.der -out oaa.pem
- Copy the Trusted Root CA certificate (
rootca.pem
), and any other CA certificates in the chain (rootca1.pem
,rootca2.pem
, etc) that signed theoaa.pem
to the$WORKDIR/oaa_ssl
directory. As per above, the CA certificates must be in PEM format, so convert if necessary. - If your CA has multiple certificates in a chain, create a
bundle.pem
that contains all the CA certificates:cat rootca.pem rootca1.pem rootca2.pem >>bundle.pem
- Create a Trusted Certificate PKCS12 file (
trust.p12
) from the files as follows:
openssl pkcs12 -export -out trust.p12 -nokeys -in bundle.pem
When prompted, enter and verify the Export Password.
Note:
If your CA does not have a certificate chain, replace bundle.pem with rootca.pem.
- Create a Server Certificate PKCS12 file (
cert.p12
) as follows:
openssl pkcs12 -export -out cert.p12 -inkey oaa.key -in oaa.pem -chain -CAfile bundle.pem
When prompted, enter and verify the Export Password.
Note:
If your CA does not have a certificate chain, replace bundle.pem with rootca.pem.
Note:
The path to cert.p12 and trust.p12 will be used later in the parameters common.local.sslcert and common.local.trustcert in the installOAA.properties.
Generate your own CA and certificates for testing purposes
- Create a Trusted Certificate PKCS12 file (
trust.p12
) as follows:- On the node where the Management container installation will be run from, create a directory and navigate to that folder, for example:
mkdir <workdir>/oaa_ssl
export WORKDIR=<workdir>
cd $WORKDIR/oaa_ssl
- Generate a 4096-bit private key for the root Certificate Authority (CA):
openssl genrsa -out ca.key 4096
- Create a self-signed root CA certificate (
ca.crt
):
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
When prompted, enter the details to create your CA. For example:
You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]:US State or Province Name (full name) []:California Locality Name (eg, city) [Default City]:Redwood City Organization Name (eg, company) [Default Company Ltd]:Example Company Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:OAA Certificate Authority Email Address []:
- Generate a PKCS12 file for the CA certificate:
openssl pkcs12 -export -out trust.p12 -nokeys -in ca.crt
When prompted, enter and verify the Export Password.
- Create a Server Certificate PKCS12 file (
cert.p12
) as follows:- Generate a 4096 bit private key (
oaa.key
) for the server certificate:
openssl genrsa -out oaa.key 4096
- Create a Certificate Signing Request (
cert.csr
):
openssl req -new -key oaa.key -out cert.csr
When prompted, enter details to create your Certificate Signing Request (CSR). For example:
You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]:US State or Province Name (full name) []:California Locality Name (eg, city) [Default City]:Redwood City Organization Name (eg, company) [Default Company Ltd]:Example Company Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:oaa.example.com Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:
- Generate a certificate from the CSR using the CA created earlier:
openssl x509 -req -days 1826 -in cert.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out oaa.crt
- Generate a PKCS12 file (
cert.p12
) from the private key and server certificate:
openssl pkcs12 -export -out cert.p12 -inkey oaa.key -in oaa.crt -chain -CAfile ca.crt
When prompted, enter and verify the Export Password.
Note:
The path to cert.p12 and trust.p12 will be used later in the parameters common.local.sslcert and common.local.trustcert in the installOAA.properties.
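Whichever method you used, you can sanity-check the resulting PKCS12 files before installation. openssl prompts for the Export Password set above; this verification step is optional:

openssl pkcs12 -info -in trust.p12 -nokeys
openssl pkcs12 -info -in cert.p12 -nokeys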
3.2.10 Create a Kubernetes Namespace and Secret
Create a Kubernetes namespace and secret for the OAA and OARM deployment.
- Create a Kubernetes namespace for the OAA and OARM deployment:
kubectl create namespace <namespace>
For example:
kubectl create namespace oaans
Note:
The namespace given will be used later in the parameter common.kube.namespace=oaans in the installOAA.properties.
- Create a Kubernetes secret for your Container Image Registry (CIR) in the OAA
namespace. This is required so the management container pod can push images to
your CIR and so the OAA/OARM deployment can pull images from your
CIR.
kubectl create secret docker-registry dockersecret --docker-server=<CONTAINER_REGISTRY> \
  --docker-username='<USER_NAME>' \
  --docker-password='<PASSWORD>' \
  --docker-email='<EMAIL_ADDRESS>' \
  --namespace=<namespace>
For example:
kubectl create secret docker-registry dockersecret --docker-server=container-registry.example.com \
  --docker-username="user@example.com" \
  --docker-password=<PASSWORD> \
  --docker-email=user@example.com \
  --namespace=oaans
Note:
The secret name dockersecret will be used later in the parameter install.global.imagePullSecrets\[0\].name in the installOAA.properties.
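Before continuing, you can verify that the namespace and secret exist. The names oaans and dockersecret match the examples above:

kubectl get namespace oaans
kubectl get secret dockersecret -n oaans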
Next Steps: Preparing the Properties file for OAA and OARM Installation
3.3 Downloading Installation Files and Preparing the Management Container
This section provides steps for downloading installation files and preparing the Management Container for OAA and OARM.
The Management Container installation can take place from any node that has access to deploy to the Kubernetes cluster. During installation the Management Container pod will deploy to one of the nodes in the Kubernetes cluster.
- Download the latest OAA installation Image
<OAA_Image>.zip
from My Oracle Support by referring to the document ID 2723908.1. - On the node where the Management container installation will run from, create a
$WORKDIR/oaaimages
directory and copy the<OAA_Image>.zip
to it:mkdir -p $WORKDIR/oaaimages cd $WORKDIR/oaaimages cp <download_location>/<OAA_Image>.zip . unzip <OAA_Image>.zip
This will give you a
<tar_file_name>.tar
file. - Navigate to the
$WORKDIR/oaaimages/oaa-install
directory and copy the install template file toinstallOAA.properties
cd $WORKDIR/oaaimages/oaa-install
cp installOAA.properties.template installOAA.properties
- Prepare this
installOAA.properties
as per Preparing the Properties file for OAA and OARM Installation
3.4 Preparing the Properties file for OAA and OARM Installation
You can customize the OAA, OAA-OARM and OARM installation by setting properties in the installOAA.properties
file. The installOAA.properties
is used by the Management Container installation script and is copied to <NFS_CONFIG_PATH>
during the installation of the Management Container pod. The installOAA.properties
file is later passed as an argument to the OAA.sh
script when deploying OAA and/or OARM. See Deploying OAA and OARM.
The following sections provide descriptions of the customizations allowed in the installOAA.properties.
3.4.1 Common Deployment Configuration
This section provides details about the common deployment configuration properties that can be set in the installOAA.properties
.
Table 3-4 Common Deployment Configuration
Properties | Mandatory/Optional | Installation Type | Description |
---|---|---|---|
common.dryrun |
Optional | OAA, OAA-OARM, and OARM | If enabled and set to true, the helm installation
will only display generated values and will not actually perform the
OAA/OARM installation on the Kubernetes cluster.
This
is equivalent to |
common.deployment.name |
Mandatory | OAA, OAA-OARM, and OARM | Name of the OAA installation. It is unique per
kubernetes cluster and namespace when the helm install command is
run.
The value given must be in lowercase. |
common.deployment.overridefile |
Optional | OAA, OAA-OARM, and OARM | Override file for chart parameters override. The helm charts are present in helmcharts directory inside the management container. All the parameters defined in values.yaml can be overridden by this file, if enabled. The format of this file should be YAML only. A sample oaaoverride.yaml file is present in the ~/installsettings directory inside the management container.
|
common.kube.context |
Optional | OAA, OAA-OARM, and OARM | Name of the Kubernetes context to be used.
If the context is not provided, the default Kubernetes context is used. |
common.kube.namespace |
Optional | OAA, OAA-OARM, and OARM | The namespace where you want to create the OAA deployment. This should be the namespace created in Create a Kubernetes Namespace and Secret. If the parameter is not set it will deploy to the default namespace. |
common.deployment.sslcert |
Mandatory | OAA, OAA-OARM, and OARM | The server certificate PKCS12 file to be used in the
OAA installation. The file name, for example
cert.p12 , is the same file name as the one
generated in Generating Server Certificates and Trusted Certificates. The PATH should not change as this is the internal path mapped
inside the container.
The file is seeded into the vault and downloaded by all OAA microservices |
common.deployment.trustcert |
Mandatory | OAA, OAA-OARM, and OARM | The trusted certificate PKCS12 file to be used in
the OAA installation. The file name, for example
trust.p12 , is the same file name as the one
generated in Generating Server Certificates and Trusted Certificates. The PATH should not change as this is the internal path mapped
inside the container.
The file is seeded into the vault and downloaded by all OAA microservices |
common.deployment.importtruststore |
Mandatory | OAA, OAA-OARM, and OARM | If this is enabled then the trusted certificate is imported in the JRE truststore. |
common.deployment.keystorepassphrase |
Mandatory | OAA, OAA-OARM, and OARM | Passphrase for the certificate PKCS12 file. This is the passphrase used when
creating the keystore in Generating Server Certificates and Trusted Certificates.
If you do not specify the value here, you are prompted for the value during installation. |
common.deployment.truststorepassphrase |
Mandatory | OAA, OAA-OARM, and OARM | Passphrase for the trusted certificate PKCS12 file. This is the passphrase used
when creating the trusted keystore in Generating Server Certificates and Trusted Certificates If you do not specify the value here you are prompted for the value during installation. |
common.deployment.generate.secret |
Mandatory | OAA, OAA-OARM, and OARM | If set to true, the installation generates three symmetric keys and adds them to the cert.p12 referenced by the parameter common.deployment.sslcert .
The encryption keys generated are:
If you create these keys yourself then the value must be set to
false . To create the keys, run the following command: for example:
|
common.deployment.mode |
Mandatory | OAA, OAA-OARM, and OARM | The following values can be set in installOAA.properties
|
common.migration.configkey |
Optional | OAA, OAA-OARM, and OARM | Base64 encoded config key from the transitioning system. If enabled, the value is placed in the vault and used for transitioning of legacy data. Use this only if you transition from Oracle Adaptive Access Manager 11gR2PS3. |
common.migration.dbkey |
Optional | OAA, OAA-OARM, and OARM | Base64 encoded Database key from the transitioning system. If enabled, the value is placed in the vault and used for transitioning of database data. Use this only if you transition from Oracle Adaptive Access Manager 11gR2PS3. |
common.oim.integration |
Optional | OAA and OAA-OARM | To integrate with OIM, set the property to true. This also enables the forgot password functionality. Use this only if you transition from Oracle Adaptive Access Manager 11gR2PS3. |
common.deployment.push.apnsjksfile |
Optional | OAA and OAA-OARM | File used when enabling push factor for the Apple Push Notification Service. You need to set this only if you have already configured the JKS file prior to install. Else, you can configure this post installation. The JKS file should be copied to the <NFS_VAULT_PATH>/ChallengeOMAPUSH/apns/ directory. The value should be set to /u01/oracle/service/store/oaa/ChallengeOMAPUSH/apns/APNSCertificate.jks . For more details, see Configuring Oracle Mobile Authenticator Push Notification for iOS.
|
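As an illustration, a minimal set of the common deployment properties in installOAA.properties might look like the following. The deployment name, namespace, and passphrases are placeholder assumptions; the remaining properties described in the table above keep their template values unless you need to change them:

# example values only - adjust for your environment
common.deployment.name=oaainstall
common.kube.namespace=oaans
common.deployment.keystorepassphrase=<passphrase used when creating cert.p12>
common.deployment.truststorepassphrase=<passphrase used when creating trust.p12>
common.deployment.generate.secret=true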
3.4.2 Database Configuration
This section provides details about the database configuration properties that can be set in the installOAA.properties
.
Table 3-5 Database Configuration
Properties | Mandatory/Optional | Description |
---|---|---|
database.createschema |
Mandatory |
Enables creation of the schema during installation. If this is set to |
database.host |
Mandatory | Specify the database hostname or IP address. |
database.port |
Mandatory | Specify the database port. |
database.sysuser |
Mandatory | Specify the sysdba user of the
database.
|
database.syspassword |
Mandatory | Specify the sys password.
If you do not specify the value here, you are prompted for value during installation. |
database.schema |
Mandatory | Specify the name of the database schema to be used for installation. |
database.tablespace |
Mandatory | Specify the tablespace name to be used for the installation. |
database.schemapassword |
Mandatory | Specify the schema password.
If you do not specify the value here, you are prompted for value during installation. |
database.svc |
Mandatory | Specify the database service name. |
database.name |
Mandatory | Specify the database name. This can be the same as
database service name.
This parameter is not required if using a RAC database. |
Note:
If using a secure connection to an Oracle Database via SSL, then additional configuration steps are required. These steps must be performed after the Management Container is started, and before: Deploying OAA and OARM:- Obtain the Oracle Wallet for the Database:
- For a standard Oracle database refer to your Database specific documentation for details on how to find the Oracle Database Wallet.
- For an Oracle Autonomous Database on Shared Exadata Infrastructure (ATP-S) database follow: Download Client Credentials.
- Create a
db_wallet
directory in the<NFS_CONFIG_PATH>
used by the OAA deployment. Copy the wallet file(s) to the<NFS_CONFIG_PATH>/db_wallet
directory. - Enter a bash shell for the OAA management pod:
For example:kubectl exec -n <namespace> -ti <oaamgmt-pod> -- /bin/bash
kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv9 -- /bin/bash
- Inside the container set the
TNS_ADMIN
environment variable:
Theexport TNS_ADMIN=<NFS_CONFIG_PATH>/db_wallet
db_wallet
directory must have the correct read and write access privileges to be accessible from inside the container. - Deploy OAA as per Deploying OAA and OARM.
3.4.3 OAM OAuth Configuration
This section provides details about the OAM OAuth configuration properties that can be set in the installOAA.properties
.
Ensure you have followed the prerequisite steps for configuring OAM for OAuth. For details, see Installing and Configuring OAM OAuth .
Table 3-6 OAM OAuth Configuration
Properties | Mandatory/Optional | Description |
---|---|---|
oauth.enabled |
Mandatory |
OAuth is required if you want to use the OAA Administration User Interface (UI) and OAA User Preferences UI. If access to the UI's is required, you must set this to If you do not want access to the UI's set this to
false . If you set oauth.enabled=false you must also set the following properties to false , otherwise the installation fails:
If
oauth.enabled=false you must also set these parameters to false under Optional Configuration:
|
oauth.createdomain |
Optional | Creates the OAuth domain.
The OAuth domain is required to create OAuth resource and client. |
oauth.createresource |
Optional | Creates the OAuth resource.
The OAuth resource is required to create the OAuth client. |
oauth.createclient |
Optional | Creates the OAuth client.
The OAuth client is required if |
oauth.domainname |
Mandatory if |
Specify the OAuth domain name. This must be same as the <DomainName> provided in Installing and Configuring OAM OAuth.
|
oauth.identityprovider |
Mandatory if oauth.createdomain is
set to true |
Specify the identity provider for the OAM OAuth Domain. This is the name of the User Identity Store used in OAM. |
oauth.clientname |
Mandatory if |
Specify the OAuth client name that will be created during the installation. |
oauth.clientgrants |
Mandatory if oauth.createclient is
set to true |
Specify the client grants for the OAuth client. OAuth client must have CLIENT_CREDENTIALS , which is used during validation stage to check OAuth status. Values must be:
"PASSWORD","CLIENT_CREDENTIALS","JWT_BEARER","REFRESH_TOKEN","AUTHORIZATION_CODE","IMPLICIT". |
oauth.clienttype |
Mandatory if oauth.createclient is
set to true |
Specify the OAuth Client Type. OAM OAuth supports
the following client types:
PUBLIC_CLIENT, CONFIDENTIAL_CLIENT, MOBILE_CLIENT. As OAuth is used for the OAA Administration and User Preference consoles, PUBLIC_CLIENT should be used. |
oauth.clientpassword |
Mandatory if
oauth.enabled=true |
Specify the password that will be used for the OAuth
client. The client password must conform to regex
^[a-zA-Z0-9.\-\/+=@_ ]*$ with a maximum length
of 500.
|
oauth.resourcename |
Mandatory if
oauth.enabled=true |
Specify the OAuth resource name to be created during installation. Also used for validation of the OAuth setup. |
oauth.resourcescope |
Mandatory if
oauth.enabled=true |
Specify the OAuth resource scope to be created during installation. Also used for validation of the OAuth setup. |
oauth.redirecturl |
Mandatory if oauth.createclient is
set to true |
Specify the client redirect URL. Post authentication redirecturl is required. This is used for validating configuration of OAuth services in OAM by generating an access token. |
oauth.applicationid |
Mandatory if oauth.createclient is
set to true |
Application ID of OAA protected by oauth. The value can be any valid string. It is required to setup runtime integration between OAM and OAA post OAA installation. See Integrating OAA with OAM. |
oauth.adminurl |
Mandatory if
oauth.enabled=true |
Specify the OAuth Administration URL. This is the URL of the OAM Administration Server, for example http://oam.example.com:7001.
|
oauth.basicauthzheader |
Mandatory if
oauth.enabled=true |
Base64 encoded authorization header for the OAM
Adminstration Server. The value can be found by executing:
echo -n weblogic:<password> |
base64 .
|
oauth.identityuri |
Mandatory if
oauth.enabled=true |
URL of the identity server used to retrieve OIDC
metadata using /.well-known/openid-configuration
endpoint. This is the front-end URL of the OAM Managed server
providing runtime support for OAuth Services. For example :
http://ohs.example.com:7777 .
|
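The following sketch shows how these OAuth properties might be combined for a deployment fronted by an OHS at http://ohs.example.com:7777 and an OAM Administration Server at http://oam.example.com:7001. The domain, identity store, client, resource, and application names are placeholder assumptions; <DomainName> must match the value used in Installing and Configuring OAM OAuth:

oauth.enabled=true
oauth.createdomain=true
oauth.createresource=true
oauth.createclient=true
oauth.domainname=<DomainName>
oauth.identityprovider=<Name of the User Identity Store in OAM>
oauth.clientname=OAAClient
oauth.clientgrants="PASSWORD","CLIENT_CREDENTIALS","JWT_BEARER","REFRESH_TOKEN","AUTHORIZATION_CODE","IMPLICIT"
oauth.clienttype=PUBLIC_CLIENT
oauth.clientpassword=<client password>
oauth.resourcename=OAAResource
oauth.resourcescope=<resource scope>
oauth.redirecturl=http://ohs.example.com:7777
oauth.applicationid=OAAApp
oauth.adminurl=http://oam.example.com:7001
oauth.basicauthzheader=<output of: echo -n weblogic:<password> | base64>
oauth.identityuri=http://ohs.example.com:7777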
3.4.4 Vault configuration
This section provides details about the vault configuration properties that can be set in the installOAA.properties
.
If you are using OCI vault, you can ignore the properties to be set for file-based vault.
Table 3-7 Vault Configuration
Properties | Description |
---|---|
vault.deploy.name |
Name to be used in the vault for this deployment. If the name is already present in the vault it will be reused. |
vault.create.deploy |
If the value is set to true , vault
creation is performed. However, if a vault with the name provided in
vault.deploy.name already exists then vault
creation is skipped.
|
vault.provider |
Specify if the vault is OCI or file based.
Specify one of the following values:
|
The following properties are mandatory for OCI-based vault configurations if you have set vault.provider=oci. For more information about creating an OCI vault, see Managing Vaults. The OCI vault must exist before setting the parameters below.
|
|
vault.oci.uasoperator |
Specify the Base64 encoded private key of the user with read and write permission on OCI vault. |
vault.oci.tenancyId |
Specify the Base64 encoded OCI ID of the tenancy id. |
vault.oci.userId |
Specify the Base64 encoded OCID of the user with read and write permission on OCI vault. |
vault.oci.fpId |
Specify the Base64 encoded finger print of the user with read and write permission on OCI vault. |
vault.oci.compartmentId |
Specify the Base64 encoded OCID of the compartment where the vault exists in OCI. |
vault.oci.vaultId |
Specify the Base64 encoded OCID of the vault on OCI. |
vault.oci.keyId |
Specify the Base64 encoded OCID of the master secret key in OCI vault used to encrypt the secrets in the vault. |
The following properties are mandatory for file-based vault configurations if you have set vault.provider=fks .
|
|
vault.fks.server |
Specify the NFS server host name or IP address for the <NFS_VAULT_PATH> .
For more details, see Configuring NFS Volumes. |
vault.fks.path |
Specify the <NFS_VAULT_PATH> which will store the file based vault.
For more details, see Configuring NFS Volumes. |
vault.fks.key |
Specify a Base64 encoded password for the file based
vault. To find the Base64 encoded version of the password use:
echo -n <password> | base64.
|
vault.fks.mountpath |
The mount path in the management container and for
installed services where the vault exists. The value of this
property must be the same as the value passed through the helm
chart. Do not change this value:
/u01/oracle/service/store/oaa .
|
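For example, a file-based vault stored on an NFS server at 10.0.0.10 exporting /export/oaavaultpv could be configured as shown below; the server address, export path, and key are assumptions for illustration, while the mount path is the fixed value described above:

vault.create.deploy=true
vault.deploy.name=oaavault
vault.provider=fks
vault.fks.server=10.0.0.10
vault.fks.path=/export/oaavaultpv
vault.fks.key=<Base64 encoded vault password>
vault.fks.mountpath=/u01/oracle/service/store/oaa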
3.4.5 Helm Chart Configuration
This section provides details about the helm chart configuration properties that can be set in the installOAA.properties
.
These properties are passed as input to the helm chart during Installation.
Table 3-8 Helm Chart Configuration
Properties | Mandatory/Optional | Description |
---|---|---|
install.global.repo |
Mandatory |
Specify the Container Image Registry where the OAA container images exists. For more details, see Installing a Container Image Registry (CIR) |
install.riskdb.service.type |
Mandatory | You must set the value of this property always to ExternalName , as the database is external to the OAA installation.
|
install.global.imagePullSecrets\[0\].name |
Mandatory | Specify the Kubernetes secret reference that needs to be used while pulling the container images from the protected Container Image Registry.
Note: This must be set to the Kubernetes secret that you set earlier e.gdockersecret . For more details, see Create a Kubernetes Namespace and Secret.
|
install.global.image.tag |
Mandatory | Update the global image tag to the image tag in your Container Image Registry.
Note: If you copied theinstallOAA.properties.template to installOAA.properties this tag will be already set.
|
install.global.oauth.logouturl |
Optional | Specify the logout URL for OAuth protected resource.
This is the front-end URL of the OAM Managed server. For example :
http://ohs.example.com:7777/oam/server/logout .
Required only when oauth.enabled is set to
true .
|
install.global.uasapikey |
Mandatory | Specify the REST API key to be used for protecting REST endpoints in the OAA microservice. |
install.global.policyapikey |
Mandatory | Specify the REST API key to be used for protecting REST endpoints in the OAA policy microservice. |
install.global.factorsapikey |
Mandatory | Specify the REST API key to be used for protecting REST endpoints in the OAA factor microservice. |
install.global.riskapikey |
Optional | Specify the REST API key to be used for protecting
REST endpoints in the OAA risk microservice.
This parameter is mandatory if performing an OAA-OARM installation or OARM only installation. |
In case of OCI vault, the following configurations can be overridden if provided for read-only users during helm installation. If the values are not provided in the following properties then the values are picked from Vault Configuration. | ||
install.global.vault.mapId |
Optional | For a pre-existing vault you can provide the Base64 mapId. If the property is set then it validates against the deploy information in the vault. |
install.global.vault.oci.uasoperator |
Optional | Specify the Base64 encoded private key of the user with the read-only permission on the vault. |
install.global.vault.oci.tenancyId |
Optional | Specify the Base64 encoded tenancy id from OCI. |
install.global.vault.oci.userId |
Optional | Specify the Base64 encoded user id from OCI. |
install.global.vault.oci.fpId |
Optional | Specify the Base64 encoded finger print id of the user from the OCI. |
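As a sketch, the mandatory helm chart properties might be set as follows for a private registry at container-registry.example.com; the repository path, API keys, and secret name are placeholder assumptions (the secret must match the one created in Create a Kubernetes Namespace and Secret):

install.global.repo=container-registry.example.com/oaa
install.riskdb.service.type=ExternalName
install.global.imagePullSecrets\[0\].name=dockersecret
# install.global.image.tag is pre-set when installOAA.properties is copied from the template
install.global.uasapikey=<api key>
install.global.policyapikey=<api key>
install.global.factorsapikey=<api key>
install.global.riskapikey=<api key>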
3.4.6 Optional Configuration
This section provides details about the optional configuration properties that can be set in the installOAA.properties
.
Properties | Mandatory/Optional | Description |
---|---|---|
install.global.ingress.enabled |
Optional | This property is used to indicate if ingress is to be enabled for the deployment. If the value is set to true, the ingress resource in the Kubernetes cluster for the deployment will be generated. If a pure NodePort based deployment is required, the value should be set to false. |
install.global.ingress.runtime.host |
Optional | You can specify the Host name to be used for ingress definition for the runtime host. If the value for the property is missing, ingress definition is created using '*' host.
The runtime host is used for accessing runtime services including all factors, oaa, spui and risk. |
install.global.ingress.admin.host |
Optional | You can specify the Host name to be used for ingress definition for the admin host. If the value for the property is missing, ingress definition is created using '*' host.
The admin host is used for accessing admin, policy and risk-cc services. |
install.global.dbhost
|
Optional | These properties are related to the database. If the property is not specified here, the values provided in the Database Configuration are used. |
install.global.oauth.oidcidentityuri
|
Optional | The following properties are related to OAuth. If they are not specified here, the values provided in the OAuth Configuration are used. |
install.global.serviceurl |
Optional | If load balancer/ingress url is present, then configure the url here. All UI services will be behind this load balancer/ingress. In case ingress installation is set to true, the appropriate service url will be fetched after ingress installation and will be used as service url. If install.global.serviceurl is provided, the service url from this property will have higher priority and override the original value.
|
install.oaa-admin-ui.serviceurl |
Optional | Service URL of oaa admin, if different from install.global.serviceurl .
|
install.spui.enabled=false
|
Optional | If oauth.enabled=false the OAA Admin console (oaa-admin-ui ), User Preferences console (spui ) , and FIDO (fido ) and KBA (oaa-kba ) factors cannot be used. If oauth.enabled=false you must uncomment these properties.
When |
install.totp.enabled=false
|
Authentication factor services are enabled by default. To disable them uncomment the lines.
When |
|
install.service.type=NodePort
|
Optional | Default service type for services is NodePort.
When deployment mode is Risk the following service are not deployed : fido, push, yotp, email ,sms, totp and kba. If |
For details on installing using ingress, see: Installing OAA and OARM Using NGINX Ingress
3.4.7 Ingress Configuration
This section provides details about the Ingress configuration properties that can be set in the installOAA.properties
.
Table 3-9 Ingress Configuration
Properties | Mandatory/Optional | Description |
---|---|---|
ingress.install |
Mandatory |
Set value to Set to If this is set to |
ingress.namespace |
Mandatory if ingress.install=true | The Kubernetes namespace which will be used to install ingress. The install will create this namespace in Kubernetes. For example, ingress-nginx .
|
ingress.admissions.name=ingress-nginx-controller-admission |
Optional if ingress.install=true |
The name of the Admissions controller. The Admissions controller can be installed separately.If Ingress admissions name is not present, the |
ingress.class.name=ingress-nginx-class |
Mandatory if ingress.install=true | Ingress class name that needs to be used for the installation. It must not be an existing class name. |
ingress.service.type |
Mandatory if ingress.install=true |
Set the value to Set the value to |
ingress.install.releaseNameOverride=base |
Optional if ingress.install=true | Anything starting with ingress.install can be additionally supplied to set the ingress chart value.
|
For details on installing using ingress, see: Installing OAA and OARM Using NGINX Ingress
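For reference, a typical set of values when letting the installation deploy NGINX ingress might look like the following; the namespace and class name are placeholder assumptions, and the service type depends on whether your cluster provides an external load balancer:

ingress.install=true
ingress.namespace=ingress-nginx
ingress.class.name=ingress-nginx-class
ingress.service.type=LoadBalancer
# assumption: use NodePort instead of LoadBalancer if no external load balancer is available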
3.4.8 Management Container Configuration
This section provides details about the Management Container configuration properties that can be set in the installOAA.properties
.
Table 3-10 Management Configuration
Properties | Mandatory/Optional | Description |
---|---|---|
install.mount.config.path |
Mandatory |
Set the value of |
install.mount.config.server |
Mandatory | The IP address of the NFS server for the <NFS_CONFIG_PATH> .
|
install.mount.creds.path |
Mandatory |
Set the value of |
install.mount.creds.server |
Mandatory | The IP address of the NFS server for the <NFS_CREDS_PATH> .
|
install.mount.logs.path |
Mandatory |
Set the value of |
install.mount.logs.server |
Mandatory | The IP address of the NFS server for the <NFS_LOGS_PATH> .
|
install.mgmt.release.name |
Optional | Name of the OAA management container installation used when the helm install command is run. If not set you will be prompted for the name during the installation.
The value given must be in lowercase. |
install.kube.creds |
Optional | Set the value to the local PATH where kubeconfig resides. If not set the management container will use $KUBECONFIG or ~/.kube/config for Kubernetes credentials.
|
common.local.sslcert |
Mandatory | Set the value to the local PATH where the server certificate PKCS12 file (cert.p12 ) resides.
|
common.local.trustcert |
Mandatory | Set the value to the local PATH where the trusted certificate PKCS12 file (trust.p12 ) resides.
|
For details on NFS mounts, see: Configuring NFS Volumes
For details on the PKCS12 files, see: Generating Server Certificates and Trusted Certificates
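Putting the mount and certificate properties together, a sketch for an NFS server at 10.0.0.10 with exports under /export might look like the following; the server address, export paths, and local certificate directory are assumptions for illustration:

install.mount.config.path=/export/oaaconfigpv
install.mount.config.server=10.0.0.10
install.mount.creds.path=/export/oaacredspv
install.mount.creds.server=10.0.0.10
install.mount.logs.path=/export/oaalogspv
install.mount.logs.server=10.0.0.10
install.mgmt.release.name=oaamgmt
common.local.sslcert=<workdir>/oaa_ssl/cert.p12
common.local.trustcert=<workdir>/oaa_ssl/trust.p12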
3.5 Running the Management Container
Run the installManagementContainer.sh
script to create the Management Container.
- On the node where you downloaded the installation files, navigate to the
$WORKDIR/oaaimages/oaa-install
directory:cd $WORKDIR/oaaimages/oaa-install
- Run the installManagementContainer.sh script with arguments. For example:
./installManagementContainer.sh -t ./<oaa-image>.tar
A full list of arguments for installManagementContainer.sh is shown in the table below:
Command line argument | Mandatory | Description |
---|---|---|
-t | No | Path to the OAA image tar file. If not provided, the pull, tag, and push to the Container Image Registry will not be performed. If pull, tag, and push is required, the path to the image <oaa-image.tar> must be provided, for example: -t ./oaa-latest.tar. The install script will first attempt to use podman to perform this task, and if not found will use Docker if available. If neither podman nor Docker are available the script will exit. |
-c | No | Path to the OAA management helm chart. If not provided the script will use ./charts/oaa-mgmt as the path. |
-d | No | Perform a helm dry-run of the installation. |
-f | No | Path to installOAA.properties. If not provided ./installOAA.properties will be used. |
-v | No | Runs the script in verbose mode. |
-p | No | Set http/https proxies in the OAA management container's environment. By default the proxies are not set. If specified, the script will use its environment to find the proxy configuration to use. |
-e | No | Add entries to the OAA management container's /etc/hosts. By default entries are not added. If specified, the script will prompt for the information. |
-n | No | Do not prompt. By default the script prompts for the information it needs to install the OAA management chart and before proceeding from one stage of the install to the next. If this option is set the script will not prompt for missing information or between stages; if required information is missing it will exit in error instead. |
-u | No | Perform an update instead of an install. By default the script determines whether to perform an install or an upgrade by looking for a previously installed helm chart. |
As the install progresses you will be prompted to answer various questions and perform certain tasks. The table below outlines some of the questions or tasks you may be asked to answer or perform:
Output | Action |
---|---|
Use 'podman login' to login into your private registry if you have not done so previously. Login successful? [Y/N]: (Note: if using Docker the above will show docker login.) | If your private Container Image Registry (CIR) where you store images requires a login, use podman login or docker login to log into the CIR and enter your credentials when prompted: podman login <container-registry.example.com> or: docker login <container-registry.example.com> |
Would you like to specify a kube context (otherwise 'kubernetes-admin@kubernetes' will be used)? [Y/N]: | If you have multiple kube contexts in your cluster you can choose which context to use. If you select "Y" you must type the context you wish to use. If you wish to use the default context chosen, or only have one context in your kube config, choose "N". |
Note:
The table above does not include an exhaustive list of all the prompts you will see during the install as most are self explanatory. Once the Management Container installation is complete you will see output similar to the following:
NAME: oaamgmt
LAST DEPLOYED: <DATE>
STATUS: deployed
REVISION: 1
TEST SUITE: None
Waiting 15 secs for OAA mgmt deployment to start...
Executing 'kubectl get pods oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv -n oaans'...
NAME READY STATUS RESTARTS AGE
oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv 0/1 ContainerCreating 0 15s
Waiting 15 secs for OAA mgmt deployment to run...
Executing 'kubectl get pods oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv -n oaans'...
NAME READY STATUS RESTARTS AGE
oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv 0/1 ContainerCreating 0 30s
Waiting 15 secs for OAA mgmt deployment to run...
Executing 'kubectl get pods oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv -n oaans'...
NAME READY STATUS RESTARTS AGE
oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv 0/1 ContainerCreating 0 46s
Waiting 15 secs for OAA mgmt deployment to run...
Copying OAA properties file to oaans/oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv:/u01/oracle/scripts/settings
Generating kube config for OAA mgmt pod 'oaans/oaamgmt-oaa-mgmt'.
Using service account 'oaans/oaamgmt-oaa-mgmt'.
Using token name 'oaamgmt-oaa-mgmt-token-5m88n'.
Using cluster URL 'https://<URL>'.
Cluster "oaa-cluster" set.
User "oaa-service-account" set.
Context "kubernetes-admin@kubernetes" created.
Switched to context "kubernetes-admin@kubernetes".
Copying OAA kube config files to oaans/oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv:/u01/oracle/scripts/creds...
Using helm config '/home/opc/.config/helm/repositories.yaml'.
Copying helm config to oaans/oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv:/u01/oracle/scripts/creds/helmconfig
Copying certificates to oaans/oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv:/u01/oracle/scripts/creds
Use command 'kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv -- /bin/bash' to get a shell to the OAA mgmt pod.
From pod shell, use command 'kubectl get pods' to verify communication with the cluster.
Continue OAA installation from the OAA mgmt pod.
OAA management installation complete.
- As per the output, connect to the OAA management pod, for example:
kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv9 -- /bin/bash
This will take you into a bash shell inside the OAA management pod:
[oracle@oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv /]$
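For reference, a typical invocation of installManagementContainer.sh (step 2 above) that combines several of the arguments from the table might look like the following sketch; the tar file name is illustrative:
cd $WORKDIR/oaaimages/oaa-install
./installManagementContainer.sh -t ./oaa-latest.tar -f ./installOAA.properties -p -e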
Next Steps: Deploying OAA and OARM
3.6 Deploying OAA and OARM
Deploying OAA, OAA-OARM, or OARM
- Enter a bash shell for the OAA management pod if not already inside one:
kubectl exec -n <namespace> -ti <oaamgmt-pod> -- /bin/bash
For example:
kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv9 -- /bin/bash
- Inside the OAA management pod bash shell, deploy OAA, OAA-OARM, or OARM by running the
OAA.sh
script:
cd ~
./OAA.sh -f installOAA.properties
Note:
This will use theinstallOAA.properties
in the<NFS_CONFIG_PATH>
.
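While OAA.sh runs, you can optionally watch the pods being created from another shell on the installation host; a minimal sketch, assuming the oaans namespace used in the examples in this chapter:
kubectl get pods -n oaans
# watch for changes until all pods are Running; press Ctrl+C to stop
kubectl get pods -n oaans -w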
Next Steps: If the install is successful you will see output similar to Printing Deployment Details.
3.7 Printing Deployment Details
Note:
In cases where you install the ingress controller that ships with OAA/OARM, the host and port are set to the worker node where the controller gets installed. In cases where you are using your own ingress controller, assuming a basic setup, the host and port are set to the value of the property install.global.serviceurl
in installOAA.properties
.
In cases where ingress is disabled, the host and NodePort of the worker nodes are printed.
############################OAA Deployment Details: START###################################
OAAService=https://oaainstall-host/oaa/runtime
AdminUrl=https://oaainstall-host/oaa-admin
PolicyUrl=https://oaainstall-host/oaa-policy
SpuiUrl=https://oaainstall-host/oaa/rui
Email=https://oaainstall-host/oaa-email-factor
Push=https://oaainstall-host/oaa-push-factor
Fido=https://oaainstall-host/fido
SMS=https://oaainstall-host/oaa-sms-factor
TOTP=https://oaainstall-host/oaa-totp-factor
YOTP=https://oaainstall-host/oaa-yotp-factor
KBA=https://oaainstall-host/oaa-kba
RELEASENAME=oaainstall
# Key below is Base64 encoded API key
oaaapikey=YXBpa2V5dG9iZXNldGR1cmluZ2luc3RhbGxhdGlvbgo=
# Key below is Base64 encoded Policy API key
oaapolicyapikey=cG9sYXBpa2V5dG9iZXNldGR1cmluZ2luc3RhbGxhdGlvbgo=
# Key below is Base64 encoded Factor API key
oaafactorapikey=ZmFjdG9yYXBpa2V5dG9iZXNldGR1cmluZ2luc3RhbGxhdGlvbgo=
############################Deployment Details: END###################################
############################Risk Deployment Details: START###################################
AdminUrl=https://oaainstall-host/oaa-admin
PolicyUrl=https://oaainstall-host/oaa-policy
RISK=https://oaainstall-host/risk-analyzer
RISKCC=https://oaainstall-host/risk-cc
RELEASENAME=riskinstall
# Key below is Base64 encoded Policy API key
oaapolicyapikey=cG9sYXBpa2V5dG9iZXNldGR1cmluZ2luc3RhbGxhdGlvbgo=
# Key below is Base64 encoded Factor API key
riskfactorapikey=cmlza2ZhY3RvcmFwaWtleXRvYmVzZXRkdXJpbmdpbnN0YWxsYXRpb24K
############################Deployment Details: END###################################
- Enter a bash shell for the OAA management pod if not already inside one:
kubectl exec -n <namespace> -ti <oaamgmt-pod> -- /bin/bash
For example:
kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7d7597c694-vn4ds -- /bin/bash
- Run
printOAADetails.sh
to print the deployment details:
cd ~/scripts
./printOAADetails.sh -f settings/installOAA.properties
Note:
This will use theinstallOAA.properties
in the<NFS_CONFIG_PATH>
.
Based on the information printed for the deployment, the consoles can be accessed as follows:
Console | Print Details Reference * | URL | Username | Password |
---|---|---|---|---|
OAA/OARM Administration | AdminUrl | https://<hostname.domain>:<port>/oaa-admin | oaadmin | <password> set in OAM OAuth identity store. |
OAA User Preferences | SpuiUrl | https://<hostname.domain>:<port>/oaa/rui | Username from OAM OAuth identity store. | <password> set in OAM OAuth identity store. |
* Throughout this documentation the value in the Print Details Reference column is used to denote the URL to use. For example: "Launch a browser and access the <AdminURL>
", refers to accessing the corresponding URL https://<hostname.domain>:<port>/oaa-admin
shown.
Based on the information printed for the deployment, the REST API endpoint information is as follows:
REST API | Print Details Reference ** | URL | Username ** | Password ** |
---|---|---|---|---|
OAA/OARM Admin | PolicyUrl | https://<hostname.domain>:<port>/oaa-policy | <RELEASENAME>-oaa-policy | <Base64Decoded(oaapolicyapikey)> |
OAA Runtime | OAAService | https://<hostname.domain>:<port>/oaa/runtime | <RELEASENAME>-oaa | <Base64Decoded(oaaapikey)> |
Risk | RISK | https://<hostname.domain>:<port>/risk-analyzer | <RELEASENAME>-risk | <Base64Decoded(riskfactorapikey)> |
Risk Customer Care | RISKCC | https://<hostname.domain>:<port>/risk-cc | <RELEASENAME>-risk-cc | <Base64Decoded(riskfactorapikey)> |
KBA | KBA | https://<hostname.domain>:<port>/oaa-kba | <RELEASENAME>_OAA_KBA | <Base64Decoded(oaafactorapikey)> |
** Throughout this documentation, when REST API examples are given, the value in the Print Details Reference column is used to denote the URL to use, and the values in the Username and Password columns represent the username and password to use.
curl --location -g --request POST '<OAAService>/preferences/v1' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic <Base64Encoded(<username>:<password>)>'
etc...
In the example above, <OAAService> refers to accessing the corresponding URL https://<hostname.domain>:<port>/oaa/runtime, <username> refers to <RELEASENAME>-oaa, and <password> refers to <Base64Decoded(oaaapikey)>.
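For example, a sketch of turning the printed values into a Basic Authorization header on the command line; the key shown is the illustrative value from the output above (not a real credential), and <RELEASENAME> and <decoded_oaaapikey> are placeholders for your own values:
# Decode the Base64 encoded API key printed in the deployment details
echo 'YXBpa2V5dG9iZXNldGR1cmluZ2luc3RhbGxhdGlvbgo=' | base64 --decode
# Build the Authorization value from <RELEASENAME>-oaa and the decoded key, then call the runtime API
curl --location -g --request POST '<OAAService>/preferences/v1' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Basic $(echo -n '<RELEASENAME>-oaa:<decoded_oaaapikey>' | base64)"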
3.8 Post Installation Steps
3.8.1 Post Installation Steps for all installations
Follow these post-installation steps for all installation types: OAA only, OAA-OARM, and OARM only.
Import the policy snapshot by performing the following steps:
- Obtain the latest snapshot policy file from My Oracle
Support by referring to the document ID
2723908.1.
Note:
If the latest snapshot file is not available, use the/u01/oracle/scripts/oarm-12.2.1.4.1-base-snapshot.zip
file inside the management container. - Copy the latest snapshot file to the
<NFS_CONFIG_PATH>
. - Edit the
<NFS_CONFIG_PATH>/installOAA.properties
and add the following parameters to the bottom of the Common Deployment configuration:
common.deployment.import.snapshot=true
common.deployment.import.snapshot.file=/u01/oracle/scripts/settings/<latest_snapshot>.zip
Note:
If not using the latest snapshot file set:
common.deployment.import.snapshot.file=/u01/oracle/scripts/settings/oarm-12.2.1.4.1-base-snapshot.zip
- Enter a bash shell for the OAA management pod if not already inside one:
kubectl exec -n <namespace> -ti <oaamgmt-pod> -- /bin/bash
For example:
kubectl exec -n oaans -ti oaamgmt-oaa-mgmt-7dfccb7cb7-lj6sv9 -- /bin/bash
- Run the following command inside the bash shell to import the policy snapshot:
cd ~/scripts
./importPolicySnapshot.sh -f settings/installOAA.properties
Note:
This will use theinstallOAA.properties
in the<NFS_CONFIG_PATH>
.The snapshot will import and if successful you will see the message
Successfully applied snapshot: 10001
or similar. Exit the bash shell.
- Edit the
<NFS_CONFIG_PATH>/installOAA.properties
and change the following parameter at the bottom of the Common Deployment configuration:
common.deployment.import.snapshot=false
Note:
This is an important step to avoid overwriting and erasing policies during a future upgrade.
3.8.2 Post Installation Steps for OAA-OARM and OARM installs
Follow these post installation steps for OAA-OARM and OARM only installations.
Set oaa.browser.cookie.domain and oaa.risk.integration.postauth.cp
Note:
The steps below are only applicable to OAA-OARM installations. The property oaa.browser.cookie.domain
must be set to the OAA host domain
in order to collect the device cookie. For example, if the OAA is accessible on
https://oaa.example.com
, then set the value to
oaa.example.com
.
The property oaa.risk.integration.postauth.cp
must be set to
postauth
to invoke risk rules for usecases such as Risky IP,
Geo-velocity, and Geo-location.
- Set the properties as follows:
Use the
<PolicyUrl>/policy/config/property/v1
REST API to set the properties. For example:
curl --location -g --request PUT '<PolicyUrl>/policy/config/property/v1' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic <Base64Encoded(<username>:<password>)>' \
--data '[
  {
    "name": "oaa.browser.cookie.domain",
    "value": "<host.domain>"
  },
  {
    "name": "oaa.risk.integration.postauth.cp",
    "value": "postauth"
  }
]'
Note:
In this case remove /oaa-policy from the <PolicyUrl>, for example use https://<host>:<port>/policy/config/property/v1 not https://<host>:<port>/oaa-policy/policy/config/property/v1
For details about finding the
PolicyUrl
and authenticating, see OAA Admin API. For details about the Configuration Properties REST Endpoint, see Configuration Properties REST Endpoints.
3.8.3 Post-Installation Steps for NodePort
For all installation types, if you are using NodePort as opposed to ingress, you must update the OAuth client with the relevant redirect URLs. Perform the following steps:
- Find the URLs for the
spui
,oaa-admin-ui
, andfido
pods as described in Printing Deployment Details, for example:
AdminUrl=https://worker1.example.com:32701/oaa-admin
SpuiUrl=https://worker1.example.com:32721/oaa/rui
Fido=https://worker1.example.com:30414/fido
Note:
For OARM only installation, you only need to find the URL for the oaa-admin-ui
pod. - Encode the OAM administrator user and its password by using the
command:
echo -n <username>:<password> | base64
For example:
echo -n weblogic:<password> | base64
This value should be used for <ENCODED_OAMADMIN>
in the examples below. - For OAA and OAA-OARM installation, update the OAuth Client using
REST APIs as
follows:
curl --location --request PUT 'http://<OAuth_Host>:<OAuth_Port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/client?name=OAAClient' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic <ENCODED_OAMADMIN>' \
--data '{ "id": "OAAClient", "clientType": "PUBLIC_CLIENT", "idDomain": "OAADomain", "name": "OAAClient", "redirectURIs": [ { "url": "https://worker1.example.com:32701/oaa/rui", "isHttps": true }, { "url": "https://worker1.example.com:32701/oaa/rui/oidc/redirect", "isHttps": true }, { "url": "https://worker1.example.com:32721/oaa-admin", "isHttps": true }, { "url": "https://worker1.example.com:32721/oaa-admin/oidc/redirect", "isHttps": true }, { "url": "https://worker1.example.com:30414/fido", "isHttps": true }, { "url": "https://worker1.example.com:30414/fido/oidc/redirect", "isHttps": true } ] }'
Note: For details about the REST API, see REST API for OAuth in Oracle Access Manager
For OARM only installation, update the OAuth Client as follows:
curl --location --request PUT 'http://<OAuth_Host>:<OAuth_Port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/client?name=OAAClient' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic <ENCODED_OAMADMIN>' \
--data '{ "id": "OAAClient", "clientType": "PUBLIC_CLIENT", "idDomain": "OAADomain", "name": "OAAClient", "redirectURIs": [ { "url": "https://worker1.example.com:32721/oaa-admin", "isHttps": true }, { "url": "https://worker1.example.com:32721/oaa-admin/oidc/redirect", "isHttps": true } ] }'
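Optionally, you can confirm the redirect URLs were applied by reading the client back; this sketch reuses the GET endpoint shown later in this chapter for listing clients in the OAADomain identity domain:
curl --location --request GET 'http://<OAuth_Host>:<OAuth_Port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/client?identityDomainName=OAADomain' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic <ENCODED_OAMADMIN>'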
3.9 Troubleshooting OAA and OARM Installation
This section provides troubleshooting tips for installing OAA and OARM.
Podman issues during OAA Management Container installation
- Podman fails to load the OAA images in the tar file due to image or file format errors. For example:
Storing signatures
Getting image source signatures
Copying blob 01092b6ac97d skipped: already exists
Copying blob dba9a6800748 skipped: already exists
Copying blob bae273a35c58 skipped: already exists
Copying blob 7f4b55b885b0 skipped: already exists
Copying blob 93e8a0807a49 skipped: already exists
Copying blob fa5885774604 skipped: already exists
Copying blob 3b8528487f10 skipped: already exists
Copying blob 3a1c2e3e35f4 [==========================>-----------] 213.8MiB / 298.1MiB
Copying blob 6d31843e131e [=================================>----] 210.5MiB / 236.5MiB
Copying blob f35b9630ef38 [===========>--------------------------] 213.8MiB / 672.2MiB
Copying blob ef894c2768e3 done
Copying blob 846fd069f886 [==========>---------------------------] 197.7MiB / 672.2MiB
Copying blob 257c48b76c82 done
Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
This may happen because of lack of free space in the root partition of the installation host (podman stores temporary files under
/var/tmp
), or because the podman version is not 3.3.0 or later. If this error occurs, remove all files under/var/tmp
before retrying the installation once the issues have been addressed. - Podman fails to load the OAA images in the tar file due to permissions issues. For example:
Using image release files ./releaseimages.txt and ./nonreleaseimages.txt...
tee: ./oaainstall-tmp/run.log: Permission denied
Using install settings from ./installOAA.properties.
tee: ./oaainstall-tmp/run.log: Permission denied
Checking kubectl client version...
WARNING: version difference between client (1.23) and server (1.21) exceeds the supported minor version skew of +/-1
tee: ./oaainstall-tmp/run.log: Permission denied
kubectl version required major:1 minor:18, version detected major:1 minor:23
tee: ./oaainstall-tmp/run.log: Permission denied
This may happen if you extract the zip file as one user and run
installManagementContainer.sh
as a different user who doesn't have permissions. In this situation remove the $WORKDIR/oaaimages/oaa-install/oaainstall-tmp
directory and retry the install with the same user who extracted the zip file. - Podman failed to load the OAA images in a previous installation attempt and now it will not perform the pull/tag/push of all required images. In this situation remove the
$WORKDIR/oaaimages/oaa-install/oaainstall-tmp
directory and retry.
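For the free-space and podman version causes described in the first issue above, a quick pre-check before retrying might look like this sketch:
df -h /var/tmp       # confirm there is enough free space for the temporary image files
podman --version     # confirm podman is 3.3.0 or later
rm -rf /var/tmp/*    # clears podman temporary files; only do this if nothing else on the host needs them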
OAA Management chart installation failure
Executing 'helm install ... oaamgmt charts/oaa-mgmt'.
Continue? [Y/N]:
y
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "volumMounts" in io.k8s.api.core.v1.Container
If you see an error similar to the above, it is likely that the manifest files for the OAA management chart got corrupted. Copy installOAA.properties
, cert.p12
, and trust.p12
to a safe location, remove the install directory $WORKDIR/oaaimages/oaa-install
, extract the <OAA_Image>.zip
and restart the installation.
Installation script times out waiting for OAA Management Container pod to start
NAME READY STATUS RESTARTS AGE
oaamgmt-oaa-mgmt-74c9ff789d-wq82h 0/1 ContainerCreating 0 2m3s
Waiting 15 secs for OAA mgmt deployment to run...
Executing 'kubectl get pods oaamgmt-oaa-mgmt-74c9ff789d-wq82h -n oaans'...
NAME READY STATUS RESTARTS AGE
oaamgmt-oaa-mgmt-74c9ff789d-wq82h 0/1 ContainerCreating 0 2m18s
Waiting 15 secs for OAA mgmt deployment to run...
...
OAA mgmt pod is not running after 450 secs, cannot proceed with install.
Critical error, exiting. Check ./oaainstall-tmp/run.log for additional information.
If this happens, run the following commands to get additional information:
$ kubectl get pods -n oaans
$ kubectl describe pod oaamgmt-<pod> -n oaans
- In case of NFS errors, verify that the NFS volume information in
installOAA.properties
is correct. In this situation kubectl describe
will show the following:
Output: mount.nfs: mounting <ipaddress>:/scratch/oaa/scripts-creds failed, reason given by server: No such file or directory
Warning FailedMount 15s kubelet, <ipaddress> Unable to attach or mount volumes: unmounted volumes=[oaamgmt-oaa-mgmt-configpv oaamgmt-oaa-mgmt-credpv oaamgmt-oaa-mgmt-logpv], unattached volumes=[oaamgmt-oaa-mgmt-configpv oaamgmt-oaa-mgmt-credpv oaamgmt-oaa-mgmt-logpv oaamgmt-oaa-mgmt-vaultpv default-token-rsh62]: timed out waiting for the condition
- In case of image pull errors verify that the image pull secret (
dockersecret
) was created correctly, and that the properties install.global.repo, install.global.image.tag, and install.global.imagePullSecrets[0].name in installOAA.properties
are correct. In this situationkubectl describe pod
will show the following:Warning Failed 21s (x3 over 61s) kubelet, <ipaddress> Error: ErrImagePull Normal BackOff 7s (x3 over 60s) kubelet, <ipaddress> Back-off pulling image "container-registry.example.com/oracle/shared/oaa-mgmt:<tag>" Warning Failed 7s (x3 over 60s) kubelet, <ipaddress> Error: ImagePullBackOff
- In case of timeouts with no apparent error it may be possible that the cluster took too long to download the OAA management image. In this case the management pod will eventually start but the installation will abort. If this happens, delete the OAA management helm release using
helm delete oaamgmt -n oaans
and rerun the installation script.
General failures during OAA.sh
If OAA.sh deployment fails at any stage during the install, you can generally fix the issue and rerun OAA.sh
. The install performs a number of checks against the Database, OAuth, and Vault. If re-running the OAA.sh
fails at these checks because the Database schema, OAuth configuration, or Vault already exists, then set these properties in the installOAA.properties
before trying the OAA.sh
again:
- If Database schema is already present:
database.createschema=false
- If OAuth configuration is already present:
oauth.createdomain=false
oauth.createresource=false
oauth.createclient=false
- If Vault configuration is present:
vault.create.deploy=false
OAuth creation fails during OAA.sh
During the installation, the OAuth domain, client, and resource server are created. If they fail, check if the parameters for OAuth are correct. See Installing and Configuring OAM OAuth.
OAuth check fails during OAA.sh
This occurs if the httpd.conf
and mod_wl_ohs.conf
files are not updated. To update the values, see Installing and Configuring OAM OAuth.
During OAA.sh, installation fails because of pods in ContainerCreating status
Run the kubectl logs command against the pod, for example:
kubectl logs oaainstall-email-6fd7c9b9dd-lr5lm
You can also run the describe pod command. For example:
kubectl describe pod oaainstall-email-6fd7c9b9dd-lr5lm
During OAA.sh, pods fail to start and show CrashLoopBackOff
Run the kubectl logs <pod>
command against the pods showing the error. The following may be one of the reasons for the error:
Pods were not able to connect to http://www.example.oracle.com:7791/.well-known/openid-configuration
because the PathTrim
and PathPrepend
in the mod_wl_ohs.conf
for that entry were not updated. See Installing and Configuring OAM OAuth.
OAA.sh installation timed out but pods show as running
If the OAA installation timed out but the OAA pods show no errors and eventually end up in running state, it is possible that the cluster took too long to download the OAA images. In this case the OAA pods will eventually start but the installation will not complete. If this happens, clean up the installation and rerun the installation script.
kubectl reports "Unable to connect to the server: net/http: TLS handshake timeout"
- Proxies are defined in the environment and the no_proxy environment variable does not include the cluster nodes. To resolve the issue, the cluster node IPs or hostnames must be added to the no_proxy
environment variable. - The kube config file
~/.kube/config
or/etc/kubernetes/admin.conf
is not valid.
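For the proxy cause above, a minimal sketch of excluding the cluster nodes before retrying kubectl; the host name and IP addresses are placeholders:
export no_proxy=$no_proxy,worker1.example.com,10.0.0.10,10.0.0.11
kubectl get nodes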
Unable to delete the OAA domain from OAuth during cleanup
- Encode the OAM administrator user and its password by using the
command:
echo -n <username>:<password> | base64
For example:
echo -n weblogic:<password> | base64
This value should be used for <ENCODED_OAMADMIN>
in the example below. - Run the
following:
$ curl --location --request DELETE 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/oauthidentitydomain?name=OAADomain' \ --header 'Authorization: Basic <ENCODED_OAMADMIN>' OAuth Identity Domain is not empty. Kindly remove (resource/client) entities from identity domain $ curl --location --request GET 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/client?identityDomainName=OAADomain' --header 'Content-Type: application/json' --header 'Authorization: Basic <ENCODED_OAMADMIN>' $ curl --location --request GET 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/application?identityDomainName=OAADomain' --header 'Content-Type: application/json' --header 'Authorization: Basic <ENCODED_OAMADMIN>'
3.10 Cleaning Up Installation
Perform the following steps to clean up an OAA or OAA-OARM installation completely.
- From the installation host, connect to the management container and delete the file
based vault and the logs from their respective NFS
mounts:
kubectl exec -n <namespace> -ti oaamgmt-oaa-mgmt-7d7597c694-tzzdz -- /bin/bash
$ rm -rf /u01/oracle/logs/*
$ rm -rf /u01/oracle/service/store/oaa/.*
$ exit
- Run the following to find the helm charts installed:
helm ls -n <namespace>
For example:
helm ls -n oaans
The output will look similar to the following:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
oaainstall oaans 1 <date> deployed oaa-1.0.0-<tag> 0.1.0
oaamgmt oaans 1 <date> deployed oaa-mgmt-1.0.0-<tag> 0.1.0
Delete the OAA charts:
helm delete oaainstall -n oaans
helm delete oaamgmt -n oaans
- Perform the following steps to delete coherence-operator:
helm delete coherence-operator -n coherence
kubectl get sts
kubectl get coherence.coherence.oracle.com
kubectl delete mutatingwebhookconfigurations coherence-operator-mutating-webhook-configuration
- Outside the container, run:
kubectl get pods -n oaans
kubectl get pods -n coherence
If any pods remain then run:
kubectl delete pod <pod_name> -n <namespace>
- Delete the OAuth client and resources:
- Encode the OAM administrator user and its password by using the
command:
echo -n <username>:<password> | base64
For example:
echo -n weblogic:<password> | base64
This value should be used for <ENCODED_OAMADMIN>
in the examples below. - Delete the OAuth Client. For
example:
curl --location --request DELETE 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/client?name=OAAClient&identityDomainName=OAADomain' \ --header 'Authorization: Basic <ENCODED_OAMADMIN>'
- Delete the OAuth Resource Server. For
example:
curl --location --request DELETE 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/application?name=OAAResource&identityDomainName=OAADomain' \ --header 'Authorization: Basic <ENCODED_OAMADMIN>'
- Delete the OAuth Domain. For
example:
curl --location --request DELETE 'http://<OAuth_Host>:<OAuth_port>/oam/services/rest/ssa/api/v1/oauthpolicyadmin/oauthidentitydomain?name=OAADomain' \ --header 'Authorization: Basic <ENCODED_OAMADMIN>'
- Drop the database schemas as follows:
sqlplus sys/<password> as SYSDBA
alter session set "_oracle_script"=TRUE;   ** Required for PDB's only **
drop user <OAA_RCU_PREFIX>_OAA cascade;
delete from SCHEMA_VERSION_REGISTRY where comp_name='Oracle Advanced Authentication' and OWNER=UPPER('<OAA_RCU_PREFIX>_OAA');
commit;
set pages 0
set feedback off
spool /tmp/drop_directories.sql
select 'drop directory '||directory_name||';' from all_directories where directory_name like 'EXPORT%'
/
spool off
@/tmp/drop_directories
- In order to repeat the pull/tag/push of the OAA images, remove the directory
$WORKDIR/oaaimages/oaa-install/oaainstall-tmp
before rerunning theinstallManagementContainer.sh
script.