Deploying Siebel CRM on OCI
After you have performed all the prerequisite tasks, you can use SCM to deploy Siebel CRM on OCI. To do this, you prepare a suitable payload and then execute it on SCM.
This topic contains the following information:
- Overview of Siebel CRM Deployment Steps using Siebel Cloud Manager
- Notes on BYO-FSS (File System Service)
- Using Security Adapters for Siebel CRM
- Terminating SSL/TLS at the Load Balancer (FrontEnd SSL) using SCM
- Auto-enablement of Siebel Migration Application
Related Topics
Customizing Configurations Prior to Greenfield Deployment
Making Incremental Changes to Your Siebel CRM Deployment on OCI
Overview of Siebel CRM Deployment Steps using Siebel Cloud Manager
Executing the payload (see Parameters in Payload Content for details) triggers the various stages that SCM runs to deploy Siebel CRM. The main divergence in the execution flow depends on whether the "Use existing resources" option was selected during Siebel Cloud Manager stack creation.
The term "BYO" stands for "Bring Your Own" and indicates that the user supplies existing resources. For example, BYOD stands for "Bring Your Own Database".
Siebel applications can be deployed using SCM in different ways based on the type of infrastructure information provided in the payload:
- User brings all resources (fully BYOR): When "Use existing resources" is chosen while provisioning SCM, all resources, such as an existing mount target, file system, OKE cluster, and database, must be provided by the user as part of the payload information that the SCM instance uses to create Siebel CRM deployments.
- SCM creates resources:
- All infra resources created by SCM: When "Use existing resources" is not chosen while provisioning SCM, all of the infrastructure required for a Siebel CRM deployment (database, OKE cluster, mount target, file system, and so on) is created and configured by SCM.
- All infra resources except the database created by SCM: When "Use existing resources" is not chosen while provisioning SCM, the user can still provide information about an existing database in the payload. All other infrastructure resources (mount target, file system, OKE cluster, and so on) are created by SCM for the Siebel CRM deployment. The user must ensure that the database is reachable from both the SCM instance and the OKE cluster.
After SCM installation is complete, the user can invoke a payload with the necessary information to start the Siebel CRM deployment process. The user can then monitor the completed deployment stages using the REST API calls described in Checking the Status of a Requested Environment.
Notes on Authorization Information
The auth_info section provided in the payload is mandatory in the "Use existing resources" case, because SCM does not modify anything in the database when the BYOD option is chosen (that is, when the "Use existing resources" option is chosen during SCM stack creation).
For an SCM-provisioned environment (that is, when the "Use existing resources" option was not chosen during SCM stack creation), auth_info is not mandatory and is populated with default values if not provided.
The required details in auth_info are:
- admin_user_name and admin_user_password: The Siebel administrator username and password, required for configuring the Siebel CRM topology.
- default_user_password: The default password used to log in the rest of the users when user information is exported and made available in the lifted artifacts.
- table_owner_user and table_owner_password: The name and password of the schema that owns all Siebel CRM tables; the password is required to execute the postinstalldb process during update deployments.
- anonymous_user_password: The password used for connecting the anonymous user to the database, provided in the Siebel CRM configuration.
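As a sketch, a fully specified auth_info section might look like the following. All values are placeholders, and the exact surrounding payload structure is described in Parameters in Payload Content:

```json
{
  "auth_info": {
    "admin_user_name": "SADMIN",
    "admin_user_password": "<admin-password>",
    "default_user_password": "<default-user-password>",
    "table_owner_user": "SIEBEL",
    "table_owner_password": "<table-owner-password>",
    "anonymous_user_password": "<anonymous-user-password>"
  }
}
```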
Notes on BYO-VCN (Virtual Cloud Network)
The BYO-VCN feature allows you to use your own VCN in OCI. This provides significant flexibility in setting up networking components such as the VCN, subnets, Internet Gateway, Service Gateway, NAT Gateway, and so on to launch the SCM instance and subsequently the Siebel CRM environment. This helps you ensure compliance with your specific network topology and security requirements.
An existing VCN can be used for SCM provisioning and/or during Siebel CRM environment provisioning.
During SCM provisioning:
- When the "Use existing VCN" option is chosen, SCM will not create a VCN. This option allows:
- Selection of subnets for the SCM instance, mount target, and resources (OKE, file system, database) used for Siebel CRM environment provisioning.
- Optionally, using an existing database (see the Notes on BYOD section), which can be in the same or a different user-specified VCN from the one where other resources such as OKE and the file system are created.
- When the "Use existing VCN" option is not selected:
- SCM will create a VCN. The SCM instance, mount target, and other resources (OKE, file system, database) for the Siebel CRM deployment will be created in that VCN (that is, the SCM stack and Siebel CRM will be in the same VCN).
- Optionally, an existing VCN can still be used only for Siebel CRM environment provisioning (that is, the SCM instance and mount target can be in VCNs different from the one where the Siebel environment is created).
- One can still use an existing database (see the Notes on BYOD section), which can be in the same or a different user-specified VCN from the one where other resources (OKE, file system) for the Siebel CRM deployment are created.
Effectively, regardless of whether the "Use existing VCN" option is chosen, there is flexibility in using existing VCNs for the Siebel CRM deployment resources (OKE, file system) and the database.
You must ensure that the following OCI policy requirements for permissions/access are met:
- ATP Database: Allow dynamic group {namespace}-instance-principal-group to manage autonomous-database in compartment id {siebel-compartment-id}.
For more information, refer to Policy Details for Autonomous Database on Serverless.
- DBCS Database: Allow dynamic group {namespace}-instance-principal-group to manage database-family in compartment id {siebel-compartment-id}.
For more information, refer to Policy Details for Base Database Service.
- OKE: Allow dynamic group {namespace}-instance-principal-group to manage cluster-family in compartment id {siebel-compartment-id}.
For more information, refer to Policy Configuration for Cluster Creation and Deployment.
- File system: Allow dynamic group {namespace}-instance-principal-group to manage file-family in compartment id {siebel-compartment-id}.
For more information, refer to Create, manage, and delete file systems.
Notes on BYO-FSS (File System Service)
The BYO file system option allows users to use an existing file system and mount target (exports for the file system) during Siebel CRM environment provisioning. This lets users work with existing Siebel file systems on NFS shares from a Siebel CRM deployment running in Kubernetes through SCM, without moving any file system content, or use an existing NFS share as the target for shifting a lifted Siebel CRM file system (when the user does not depend on the OCI File Storage service).
Scenarios of shifting the file system:
- When the shift_siebel_fs key is set to false, a valid Siebel CRM file system is expected in the NFS share provided in the payload. The following directories are expected to be present in each Siebel CRM file system; no validation other than verifying this directory structure is done, so the user should ensure that the right NFS shares are used:
- att
- atttmp
- cms
- eim
- Marketing
- red
- ssp
- If no value is set for the shift_siebel_fs key, it defaults to true and the file systems are shifted (for the lift and shift use case, the lift bucket should contain the file system artifacts). The provided file system is expected to have a directory structure such as:
MOUNT_TARGET_IP:/EXPORT_PATH, for example 10.0.0.1:/siebfs0
The siebfs0 path is expected to contain a valid Siebel CRM file system matching the directory structure given above. Users can also provide multiple mount targets for different file systems. The parameters involved are mounttarget_exports, siebfs_mt_export_paths, and zookeeper_mt_export_path. For more information, see Parameters in Payload Content.
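As an illustration, the file-system-related keys above might be combined in the payload as follows. The IP address, export paths, and the flat nesting shown here are placeholders; see Parameters in Payload Content for the exact structure and value formats:

```json
{
  "shift_siebel_fs": false,
  "mounttarget_exports": ["10.0.0.1:/siebfs0", "10.0.0.1:/zookeeper"],
  "siebfs_mt_export_paths": ["/siebfs0"],
  "zookeeper_mt_export_path": "/zookeeper"
}
```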
Notes on BYO Kubernetes
Bring your own Kubernetes refers to using your own Kubernetes cluster for Siebel CRM provisioning rather than having SCM create a managed OKE cluster.
Choosing your own Kubernetes can provide significant benefits in terms of customization, cost management, security, performance, avoiding vendor lock-in, innovation, and skill development. However, it also requires higher levels of expertise and operational overhead compared to having SCM create a managed OKE cluster. Organizations should weigh these factors carefully against their specific needs and capabilities before deciding to bring their own Kubernetes.
- Customization and Control: With bring your own Kubernetes, you have full control over your Kubernetes cluster, including the control plane and worker nodes. This allows for more granular customization and optimization tailored to specific requirements.
- Cost Management: BYOK can be more cost-effective in certain scenarios, especially if you have an existing infrastructure setup with all network configurations in place or need to run a large number of clusters. Managed OKE often comes with additional costs for the convenience and support it provides.
You can optimize resource allocation and scaling policies to match workload needs, potentially reducing unnecessary expenses.
- Security and Compliance: For organizations with strict data sovereignty requirements, managing your own Kubernetes clusters ensures that data remains within your control and complies with your company's regulations.
You can implement custom security measures in your cluster, such as network policies, access controls, and encryption standards that meet your organization's specific compliance and security needs.
- Performance and Reliability: You can design and implement your own high availability and disaster recovery strategies tailored to your infrastructure and business requirements.
- Avoiding vendor lock-in: BYO Kubernetes lets you maintain the flexibility to move your workloads across different cloud providers or on-premises environments without being tied to a specific vendor's managed Kubernetes service.
If the "Use existing resources" option is selected at SCM deployment (meaning that SCM will not create a cluster but will use the one provided by the user), or the user wants to provide their own cluster during Siebel CRM environment provisioning through a REST API POST invocation, one of the following Kubernetes cluster options can be used:
- BYO OKE - Bring your own Oracle Kubernetes Engine (OKE) option allows you to use an existing OKE Cluster for Siebel CRM deployment.
- BYO OCNE - The Bring your own Oracle Cloud Native Environment (OCNE) option lets you leverage your own OCNE cluster for Siebel CRM deployment.
- BYO Other - Bring your own Other option enables the use of any other Kubernetes cluster which adheres to CNCF standards for Siebel CRM deployment.
These rules must be satisfied for a user-provided Kubernetes cluster; otherwise, the execution workflow fails during the resource state validation stage:
- The user-provided Kubernetes cluster should not contain namespaces such as <env_name> before deployment, as these namespaces will be used during Siebel CRM environment provisioning.
- At least one node should be in the Active state.
- The Kubernetes cluster should be accessible from the SCM instance with the required policies, and VCN peering, if necessary, should be configured before deployment.
Notes on OKE (Oracle Container Engine for Kubernetes)
To use your own OKE cluster during Siebel CRM environment provisioning through a REST API POST invocation, the payload parameter kubernetes_type should be BYO_OKE, and either oke_cluster_id and oke_endpoint together, or oke_kubeconfig_path alone, is required as input under the kubernetes > byo_oke section.
For more information, see Parameters in Payload Content.
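As a sketch, a BYO_OKE selection in the payload might look like the following. The OCID and endpoint values are placeholders, and the exact nesting of kubernetes_type relative to the kubernetes section is defined in Parameters in Payload Content:

```json
{
  "kubernetes_type": "BYO_OKE",
  "kubernetes": {
    "byo_oke": {
      "oke_cluster_id": "ocid1.cluster.oc1.phx.<unique-id>",
      "oke_endpoint": "https://10.0.0.10:6443"
    }
  }
}
```

Alternatively, oke_kubeconfig_path alone can be supplied under byo_oke; the byo_ocne and byo_other variants follow the same pattern with kubeconfig_path.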
You can provision multiple Siebel CRM environments in the same OKE cluster when the "Use Existing Resource" option is selected. If the cluster already has a Flux toolkit installation, do one of the following:
- Either uninstall it using the command: flux uninstall all --namespace=<namespace>
- Or upgrade the existing Flux setup with the flag --watch-all-namespaces=false to restrict its scope to watching the namespace where the toolkit is installed.
Notes on OCNE (Oracle Cloud Native Environment)
OCNE is an integrated suite of open-source software tools and platforms designed to facilitate the development, deployment, and management of cloud-native applications.
OCNE is built around Kubernetes, the leading orchestration platform, and includes additional tools and components that enhance Kubernetes capabilities, making it easier for organizations to adopt cloud-native technologies in a secure, scalable, and reliable manner.
To use your own OCNE cluster during Siebel CRM environment provisioning through a REST API POST invocation, the payload parameter kubernetes_type should be BYO_OCNE, and kubeconfig_path alone is required as input under the kubernetes > byo_ocne section. For more information, see Parameters in Payload Content.
Notes on Other Kubernetes Cluster
To use any other Kubernetes cluster during Siebel CRM environment provisioning through a REST API POST invocation, the payload parameter kubernetes_type should be BYO_OTHER, and kubeconfig_path alone is required as input under the kubernetes > byo_other section.
For more information, see Parameters in Payload Content.
Notes on BYOD (Bring Your Own Database)
SCM can deploy Siebel CRM with Oracle Database only.
Selecting the "Use existing resources" option during SCM stack creation (refer to Downloading and Installing Siebel Cloud Manager) allows the use of an existing Oracle database (in addition to the ability to use an existing VCN, OKE, mount target, and so on) for a Siebel CRM application deployment in OCI. Note that selecting the "Use existing resources" parameter means that all resources (for example, VCN, OKE, mount target, database) must be provided by the user and will not be created by SCM. Similarly, if an SCM instance is provisioned without choosing "Use existing resources", BYOD is still supported: you can use your own database, and the rest of the resources (OKE, mount target, and so on) will be provisioned by SCM. The parameters required for using an existing database for a Siebel CRM deployment in OCI using SCM are described in Parameters in Payload Content. Before using the database for deployment, you must establish connectivity between the existing Siebel CRM database and (a) the SCM instance and (b) the pods in the Kubernetes cluster deploying Siebel CRM in OCI. Connecting to an empty existing database (one without any data) is not supported.
Connectivity Information
Certain connectivity information, such as wallet details and the connection identifier, needs to be provided.
- wallet_path: The absolute path of the Oracle Net Services configuration files or Oracle client credentials (wallet), required for connecting to the database. The wallet files have to be copied inside the SCM container. To be valid, the wallet folder must contain at least tnsnames.ora; during environment provisioning, the wallet is validated for the presence of tnsnames.ora. TLS-enabled wallets are also supported. The provided wallet path is copied inside the environment directory for usage. You can also copy the wallet file to SCM using the File Sync Utility; for more information, see Uploading Files to the SCM Container Using File Sync Utility.
- tns_connection_name: The connection identifier provided in this field is validated for presence in the tnsnames.ora file. If it is not available, a client-side validation error (400) is raised.
The provided connection string will be used by the Siebel applications to connect with the database.
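The tns_connection_name pre-check can be mirrored locally before submitting the payload. The following is a hedged sketch, not SCM's actual validation code: the wallet created below is a throwaway example, and you would point check_tns_alias at your real wallet directory and alias instead.

```shell
# Sketch: verify that a TNS alias exists in the wallet's tnsnames.ora,
# as SCM's client-side (400) validation does. Names below are placeholders.
check_tns_alias() {
  wallet_path="$1"; tns_connection_name="$2"
  # a wallet folder without tnsnames.ora is rejected outright
  [ -f "${wallet_path}/tnsnames.ora" ] || { echo "invalid wallet: no tnsnames.ora" >&2; return 1; }
  # a TNS alias definition starts at the beginning of a line, followed by '='
  grep -qiE "^${tns_connection_name}[[:space:]]*=" "${wallet_path}/tnsnames.ora"
}

# Demo with a throwaway wallet containing one alias
demo_wallet=$(mktemp -d)
printf 'MYDB_high = (DESCRIPTION=(ADDRESS=(HOST=db.example.com)(PORT=1522)))\n' \
  > "${demo_wallet}/tnsnames.ora"

check_tns_alias "${demo_wallet}" "MYDB_high" && echo "MYDB_high: ok"
check_tns_alias "${demo_wallet}" "OTHER_DB" || echo "OTHER_DB: would fail payload validation (400)"
```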
When your existing database is present in OCI (either in the same region as the SCM VCN or in a different one), you can use private routing to avoid connections over the internet. To do so, you need to establish a connection from your SCM VCN to the VCN where the database resides. The scenarios for where the database can reside, and how to establish the connection, are:
- Present in the same region:
If the database is in the same region as the SCM VCN but in a different VCN, you need to establish local peering between the two VCNs.
For more information, refer to Local VCN Peering using Local Peering Gateways.
- Present in a different region:
If your database is in a different region than your SCM VCN, you need to establish a remote peering connection between the two VCNs.
For more information, refer to Remote VCN Peering using a Legacy DRG.
In both cases, you will need to add a route in the route table to allow traffic to the database and back.
Every Siebel CRM deployment requires connectivity from two subnets:
- SCM subnet:
Required to run administrative tasks, such as verifying that the database is in the right shape and that the provided credentials are valid. This is done prior to creating a Siebel CRM deployment.
- OKE nodes subnet:
Required by all Siebel CRM applications to connect to the database, from user authentication to querying tables. For the SCM subnet mentioned above, routing can be set up prior to the deployment, but the OKE nodes subnet does not exist yet at that stage. Users can therefore provide the OCID of the Dynamic Routing Gateway (DRG) (using the field drg_ocid) that needs to be attached to the OKE nodes subnet, and the destination CIDR block of the database's subnet or VCN (using the field destination_db_cidr_block) through which the traffic is routed via the DRG.
Once the above is done, the security lists controlling traffic should also allow traffic through these ranges. Traffic going out of the SCM instance subnet and the OKE nodes subnet is already taken care of by the deployment; it is allowed out. In the database subnet's security rules, similar rules have to be written to allow traffic in. Even if traffic is controlled only through a security list, ATP still requires a Network Security Group (NSG) to allow the traffic through.
For more information, refer to Private Endpoints Configuration Examples on Autonomous Database.
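For illustration, the two routing fields described above might appear in the payload as follows. Both values are placeholders; see Parameters in Payload Content for the exact structure:

```json
{
  "drg_ocid": "ocid1.drg.oc1.phx.<unique-id>",
  "destination_db_cidr_block": "10.1.0.0/24"
}
```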
Connectivity Tests
Before provisioning the environment, the database needs to be accessible from two different places:
- From SCM instance:
- Admin User/Password based access
- Table Owner User/Password based access
- Guest User access
- From Kubernetes nodes in which the Siebel CRM application lives.
Issues with these connectivity requirements are reported in the "validate-connectivity" stage, and the provisioning activities in OCI for the Siebel CRM deployment stop there. The deployment can be re-run after fixing the connectivity issues.
Workflow Continuation
There will be no database import done in case of the BYOD flow. So the "import-db-stage" will be marked as "Passed".
Debugging Methods
The individual stage logs record all connection test details. The logs for connectivity-related tests can be found in the "validate-connectivity" stage. When the tests pass, they leave a trail of events, such as:
- admin user validation in progress
- admin user validation completed
- tblo user validation in progress
- tblo user validation completed
The validations can be done manually using SQL*Plus, and after the issue has been fixed, the workflow can be re-run by submitting the payload as before. Common reasons why the connections might fail are:
- The host provided in tnsnames.ora is not reachable.
A proper connection has to be established to validate this. In the case of OCI, the VCN in which the database resides should have security rules allowing access from the SCM instance.
For any other externally hosted Oracle database, the provider's guidelines need to be followed and the SCM instance whitelisted for access.
- Invalid info in the wallet.
The data provided in the wallet has to be valid to establish the connection.
- Invalid authorization information.
The data provided in the auth_info section has to be valid in order to establish the connection.
Other scenarios that cause connectivity failures are caught, and the details are provided in the stage logs.
Checklist for Creating a BYOR Deployment
Before deploying a BYOR environment, go through the following checklist to ensure a smooth deployment. If the resources are used directly by your environment without validation, the application may fail. Here is a resource-wise list of the steps to perform or validate.
OKE
When using an existing OKE, verify the following:
- Check if the API server URL is accessible from the SCM instance. If the kube config path is provided, make sure that the API server is accessible. This can be validated by running basic kubectl commands. If the URL is not accessible, see Debugging Common URL Connectivity Issues to debug.
- If the cluster ID is provided while creating a deployment, ensure that the SCM instance's principal (user or instance principal) has the required access to download the kube config. This can be validated by running the following OCI command:
oci ce cluster create-kubeconfig --cluster-id <SOME_CLUSTER_ID> --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0 --kube-endpoint PRIVATE_ENDPOINT
- If the SCM instance uses an instance principal, verify that the following policy exists: 'Allow <subject> to manage cluster-family in compartment id <oke_compartment_ocid>'. Here <subject> can be a group, dynamic group, etc.
For more information, refer to the Policy Syntax documentation.
Once you have saved the configuration, you need to set the KUBECONFIG and OCI_CLI_AUTH environment variables.
# set the required env variables
# Possible values are api_key or instance_principal, based on your OCI principal configuration.
export OCI_CLI_AUTH=api_key
# Path to your OKE kube config
export KUBECONFIG=/path/to/your/oke/config
# fetch the cluster info
kubectl cluster-info --request-timeout 5s
# Get the nodes info
kubectl get nodes
- If it is not accessible, the instance might not have access to read/use the resource. Either contact your tenancy administrator for proper permissions or check your policies.
- If you are behind a proxy, make sure that either the API server is accessible through the proxy server or it can be bypassed. Provide it in no_proxy (contact your administrator for the appropriate choice).
- Log in to any one of the nodes and validate that the GitLab server is accessible, for example by making a request or cloning a repository.
Mount Target
Mount targets' access endpoints are always private endpoints, so inter-network/VCN validations must be done.
- These are accessed internally. If you are running behind a proxy, your proxy settings might route requests to the proxy server; if this needs to be bypassed, pass the endpoint in the no_proxy settings.
- NFS uses port 2049 by default, though a different port can be configured. Ensure that your security lists/NSGs have rules to allow the traffic. If the URL is not accessible, see Debugging Common URL Connectivity Issues to debug.
- NFS client exports are also controlled in the mount target and are configured as read or read/write. Ensure that you have read/write permissions.
- Use NFS mount commands to mount a local directory and verify that the SCM instance can communicate. You can unmount the directory after verifying. If you are unable to mount, it is likely that the mount target URL is not reachable. In this case, see Debugging Common URL Connectivity Issues to debug.
sudo mount -t nfs <mount_target_ip>:/<export_path> <local_mount_dir>
sudo umount <local_mount_dir>
- Ensure that your mount targets' endpoints are accessible from your OKE nodes. You can verify this by logging in to an OKE node and checking whether the endpoint is accessible. This is a mandatory requirement, as the applications need to access the file systems.
Database
You need to validate database endpoints, SID, Listener, and Credentials before creating an environment.
- Ensure that the endpoints are accessible from both SCM container and the OKE node.
- To check if the database endpoints are accessible from SCM, connect to the database endpoint using tools such as telnet from the SCM container. You also need to verify that the listener is valid.
- To check if the DB's endpoints are accessible from OKE, connect to any one of the OKE nodes and use commands such as telnet to check if they can be reached from there.
- If the DB endpoints are private endpoints, there is a chance that your OKE node might not be able to resolve the hostname URL. In that case, verify with the IP address of the host. If the nodes can be set up with an option to resolve the DB hostnames, the IP address will not be required.
- Verify all the credentials, such as the Siebel admin, table owner, and anonymous user credentials. You can use sqlplus, available in the SCM instance, to log in and validate the credentials.
- To connect to a database using sqlplus, set the following variables:
export ORACLE_HOME=/usr/lib/oracle/21/client64
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=/home/opc/siebel/IIG8L6/wallet
- After setting the variables, connect to the sqlplus CLI:
sqlplus username/password@connection_identifier
# username - the db user you would like to authenticate as
# password - the password of that db user
# connection_identifier - the connection string to verify; can be found in tnsnames.ora
GitLab
The GitLab instance, where the Helm charts and SCM repositories are created, should be accessible from both the SCM instance and the OKE nodes. SCM access is required to ensure that any changes in the release YAMLs and Terraform configuration are tracked. The Helm charts repo holds all the details of the charts installed and upgraded.
To verify from the SCM container, exec into the SCM container and run the following commands. If required, to verify from OKE, list all the OKE nodes, ssh into any one of them, and run the same steps.
GitLab should be accessible both via SSH (for cloning, pushing, and pulling code) and via HTTP/HTTPS (for API access to create/delete repositories).
# Check if you are able to ping to the gitlab IP/URL and access it.
ping gitlaburl.com
OR
# Hit any of the existing gitlab API with the token to verify
curl https://gitlaburl.com/api/v4/users --header 'Authorization: Basic token'
# even a 40x response also makes it clear that the URL is accessible
# Try to clone an existing repo to verify if SSH access is available
git clone git@gitlaburl.com:repo-name.git # using SSH
Debugging Common URL Connectivity Issues
If any URL is not reachable or you are not able to communicate with it, you can debug the issue using the following steps.
You can use utilities such as telnet, traceroute, ping, and curl. Install these utilities using yum/dnf. If you are behind a proxy server and not able to reach the repo, you need to configure the proxy details in /etc/yum.conf.
# ping is provided by the iputils package
sudo yum install iputils curl traceroute telnet
- To verify if an URL is accessible or not, verify the security rules/NSG rules of
the corresponding resource and the host from which you are connecting.
For more information, refer to the Network Security Groups documentation.
- A secondary barrier may also be in place on the resource side, that is, ACLs for DBs, NFS client export options for mount targets, etc. Check if your hosts are whitelisted.
For more information, refer to the Configure Access Control Lists for an Existing Autonomous Database Instance documentation.
- If you are connecting from on-premises through FastConnect coupled with a DRG, make sure you have matching rules for your DRG. This also applies if two or more VCNs are connected (even with a Local Peering Gateway (LPG)).
For more information, refer to the FastConnect with Multiple DRGs and VCNs documentation.
- Check if you are behind a proxy server and whether your proxy server allows connections to the URL. You can verify this by disabling the proxy, or by adding the URL to no_proxy, to test it.
# verify the list of values set currently
printenv | grep 'PROXY\|proxy'
# update the required var - HTTP_PROXY, HTTPS_PROXY, NO_PROXY
export HTTP_PROXY=myproxyserver.com
- Use telnet to see if you are able to reach a URL on a particular port. Some tools, such as sqlplus, hang when a connection does not happen.
telnet someurl.com 22
Connected to someurl.com
# Port 22 on someurl.com is reachable from the current host.
telnet notreachableurl.com 1522
...
# Port 1522 on notreachableurl.com is not reachable.
- Use traceroute to see where exactly the hopping stopped. That is, a packet might go out of an instance but not reach the public internet because an IG/NAT was not connected; in that case, the last hop would be within the VCN.
traceroute someurl.com
 1 hop1.com (10.0.35.153) 292.885 ms 289.622 ms 376.783 ms
 2 hop2.com (10.0.32.130) 250.955 ms 250.505 ms 289.326 ms
 3 hop3.com (10.0.29.42) 250.155 ms 250.227 ms 290.869 ms
 4 hop4.com (10.76.13.210) 271.508 ms 268.169 ms 309.374 ms
 5 hop5.com (10.76.13.209) 276.570 ms 273.716 ms 277.106 ms
 6 hop6.com (10.76.27.10) 272.482 ms 272.206 ms 269.685 ms
 7 hop7.com (10.76.27.68) 269.659 ms 269.013 ms 268.582 ms
 8 hop8.com (10.196.6.42) 272.557 ms 273.320 ms 279.004 ms
 9 * * *
10 final.destination.com (100.10.14.9) 272.173 ms !Z 271.058 ms !Z 318.078 ms !Z
# If it hangs at * * *, the packet is likely unable to proceed from there to the next hop/router.
- Ping the URL to verify whether the server is up (Internet Control Message Protocol (ICMP) has to be enabled).
ping google.com
PING google.com (172.217.14.78): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
^C
--- google.com ping statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss
# Not able to connect/ping; there is a 100% loss.
ping someworkingurl.com
PING someworkingurl.com (100.10.14.1): 56 data bytes
64 bytes from 100.10.14.1: icmp_seq=0 ttl=46 time=284.899 ms
64 bytes from 100.10.14.9: icmp_seq=1 ttl=46 time=271.194 ms
^C
--- someworkingurl.com ping statistics ---
3 packets transmitted, 2 packets received, 33.3% packet loss
round-trip min/avg/max/stddev = 271.194/278.047/284.899/6.852 ms
# Packets are received, so the host can be reached; the output also provides additional diagnostic info.
- cURL a URL to verify if it is accessible. Check the response headers for the response code to see what has gone missing, and validate what could have gone wrong. Here are some sample responses: 400 - bad request (client-side validation), 401 - bad authorization, 302 - redirect found.
curl -I https://oracle.com:443
HTTP/1.0 200 Connection established
HTTP/1.1 301 Moved Permanently
Date: Tue, 04 Apr 2023 15:07:52 GMT
Content-Type: text/html
Content-Length: 175
Connection: keep-alive
Location: https://www.oracle.com/
# Connection established to oracle.com, which means the URL is accessible.
Connectivity Information
Certain connectivity information such as wallet details and connection identifier need to be provided.
wallet_path
: The absolute path of the Oracle Net Services configuration files or Oracle client credentials (wallet) required for connecting to the database. The wallet files must be copied inside the SCM container. To be considered a valid folder, the wallet must contain at least the tnsnames.ora file; during environment provisioning, the wallet is validated for the presence of tnsnames.ora. TLS-enabled wallets are also supported. The provided wallet path is copied inside the environment directory for use. You can also copy the wallet file to SCM using the File Sync Utility; for more information, see Uploading Files to the SCM Container Using File Sync Utility.

tns_connection_name
: The connection identifier provided in this field is validated against the tnsnames.ora file. If it isn't present there, a client-side validation error (400) is raised. The provided connection string is used by the Siebel CRM applications to connect to the database.
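The tns_connection_name check described above can be sketched as follows. This is an illustrative simplification, not SCM's actual validation code; the alias-matching logic assumes the common layout where each net service name starts a line followed by `=`:

```python
import re

def tns_aliases(tnsnames_text):
    """Collect connection identifiers (net service names) defined in a
    tnsnames.ora file. An alias is a name at the start of a line,
    followed by '=' and its (DESCRIPTION ...) block."""
    aliases = []
    for line in tnsnames_text.splitlines():
        m = re.match(r"^\s*([\w.\-]+)\s*=", line)
        if m:
            aliases.append(m.group(1).upper())
    return aliases

def validate_tns_connection_name(tnsnames_text, tns_connection_name):
    """Mimic the client-side check: raise a 400-style error when the
    identifier is not present in tnsnames.ora."""
    if tns_connection_name.upper() not in tns_aliases(tnsnames_text):
        raise ValueError(
            f"400: connection identifier '{tns_connection_name}' "
            "not found in tnsnames.ora"
        )

# Hypothetical sample wallet content for illustration only.
sample = """
SIEBELDB_HIGH = (DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = db.example.com)(PORT = 1522))
  (CONNECT_DATA = (SERVICE_NAME = siebeldb_high.oraclecloud.com)))
"""
validate_tns_connection_name(sample, "siebeldb_high")  # passes silently
```

Identifier matching here is case-insensitive, mirroring the usual Oracle Net behavior for net service names.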
Connectivity Tests
Before the provisioning of the environment, the database needs to be accessible from two different places:
- From the SCM instance:
- Admin User/Password based access
- Table Owner User/Password based access
- Guest User access
- From Kubernetes nodes in which the Siebel CRM application lives.
Issues with these connectivity requirements are reported in the "validate-connectivity" stage, and the provisioning activities in OCI for the Siebel deployment stop at that point. The deployment can be rerun after fixing the connectivity issues.
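As a minimal sketch of the reachability half of these checks, the following probe verifies that a TCP connection to the database listener can be opened. This is not SCM's validate-connectivity implementation; the real stage also performs admin, table owner, and guest user logins, which this sketch omits:

```python
import socket

def is_reachable(host, port, timeout=5.0):
    """Return True when a TCP connection to host:port succeeds within
    the timeout; a quick proxy for 'the database listener is visible
    from this machine'. Run it from both the SCM instance and a
    Kubernetes worker node to cover both required access paths."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_reachable("db.example.com", 1521)` (hypothetical host) returning False from a worker node suggests missing security rules between the OKE subnet and the database subnet.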
Workflow Continuation
No database import is performed in the BYOD flow, so the "import-db-stage" will be marked as "Passed".
Debugging Methods
The individual stage logs capture all connection test output and details. The logs for connectivity-related tests can be found in the "validate-connectivity" stage. Passing tests leave a trail of events, such as:
- admin user validation in progress
- admin user validation completed
- tblo user validation in progress
- tblo user validation completed
The validations can be done manually using SQL*Plus, and after the issue has been fixed, the workflow can be rerun by submitting the payload as before. Common reasons why connections might fail are:
- The host provided in tnsnames.ora isn't reachable.
A proper connection must be established to validate this. In OCI, the VCN in which the database resides must have appropriate security rules allowing access from the Cloud Manager instance. For any other externally hosted Oracle database, follow that provider's guidelines and whitelist the Cloud Manager instance to grant it access.
- Invalid information in the wallet
The data provided in the wallet must be valid to establish the connection.
- Invalid authorization information
The data provided in the auth_info section must be valid to establish the connection.
Other scenarios that cause connectivity failure are also caught, and the details are provided in the stage logs.
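The pass/fail trail in a validate-connectivity stage log can be checked mechanically. The sketch below is illustrative only (not SCM tooling) and assumes the "… validation in progress" / "… validation completed" event strings shown above:

```python
def incomplete_validations(stage_log):
    """Return validations that were started ('... in progress') but
    never finished ('... completed') in a validate-connectivity log."""
    started, completed = set(), set()
    for line in stage_log.splitlines():
        line = line.strip()
        if line.endswith("validation in progress"):
            started.add(line[: -len(" validation in progress")])
        elif line.endswith("validation completed"):
            completed.add(line[: -len(" validation completed")])
    return sorted(started - completed)

# Hypothetical log excerpt: the tblo user check never completed.
log = """admin user validation in progress
admin user validation completed
tblo user validation in progress"""
print(incomplete_validations(log))  # ['tblo user']
```

A non-empty result points at the specific credential (admin, table owner, or guest) to retest manually with SQL*Plus before rerunning the payload.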
Using Security Adapters for Siebel CRM
This topic is part of Deploying Siebel CRM on OCI.
This section describes how to configure security adapters (security profile) provided with Siebel Business Applications.
SCM supports configuration of security adapter types DB and LDAP.
The SCM application sets authentication-related configuration parameters for Siebel Business Applications and Siebel Gateway authentication, but does not make changes to the LDAP directory. Make sure the configuration information you enter is compatible with your directory server.
When you specify LDAP as the security adapter type in the payload during environment provisioning, the setting you specify provides the value for the Enterprise Security Authentication Profile (Security Adapter Mode) parameter.
The Security Adapter Mode and Security Adapter Name (named subsystem) parameters can be set for:
- Siebel Gateway
- Siebel Enterprise Server
- All interactive Application Object Manager components
For more information, see the "Security Adapter Authentication" chapter in the Siebel Security Guide.
Use the payload parameter security_adapter_type to specify the security adapter type. For more information, see Parameters in Payload Content.
- If you pass 'DB' as the security_adapter_type, then the database details from the database payload section are used to configure the security adapter during environment provisioning.
- If you pass 'LDAP' as the security_adapter_type, then you must pass details in the ldap subsection under the siebel section.
- For a greenfield environment, or for any lift bucket lifted from an SCM version prior to CM_23.7.0, the parameters under the siebel > ldap sub-section of Payload Elements of SCM under Parameters in Payload Content are applicable.
- For a lift bucket lifted using SCM version CM_23.7.0 or above, where the source environment uses security adapter type LDAP, only the following user credential parameters are mandatory during shifting (using the REST API for deployment), since this information cannot be 'lifted'. The remaining parameters are optional and are taken from the lifted data if not passed in the payload.
- application_password
- siebel_admin_username
- siebel_admin_password
- anonymous_username
- anonymous_user_password
For a greenfield environment, the value of siebel_admin_username must be SADMIN and the value of anonymous_username must be GUESTCST, since the database contains only greenfield data.
Example payload section specific to the case when security_adapter_type
is LDAP
. For complete sample payload structure, see Parameters in Payload Content.
{
"name": "test173",
"siebel": {
....
....
"security_adapter_type": "ldap",
"ldap":
{
"ldap_host_name": <ldap_FQDN>,
"ldap_port": "389",
"application_user_dn": "cn=Directory Manager",
"application_password": "ocid1.vaultsecret.oc1.uk-london-1.iaheyoqdfpc33khmp42wec",
"base_dn": "ou=people,o=siebel.com",
"shared_db_credentials_dn": "uid=sadmin,ou=people,o=siebel.com",
"shared_db_username": "sadmn",
"shared_db_password": "ocid1.vaultsecret.oc1.uk-london-1.tkyyppq733brnkhmp42wec",
"password_attribute_type": "userPassword",
"siebel_admin_username": "sadmin",
"username_attribute_type": "uid",
"credentials_attribute_type": "mail",
"siebel_admin_password": "ocid1.vaultsecret.oc1.uk-london-1.amaaaaaa4n2rr5ia2wcc",
"anonymous_username": "GUESTCST",
"anonymous_user_password": "ocid1.vaultsecret.oc1.uk-london-1.amaaaaaa4n2rnkhmp42was",
"use_adapter_username": "true",
"siebel_username_attribute_type" : "uid",
"propagate_change": "true",
"hash_db_password": "true",
"hash_user_password": "true",
"salt_user_password": "true",
"salt_attribute_type": "title"
}
},
"infrastructure": {
....
....
Example payload section specific to the case when security_adapter_type is LDAP and enable_ssl is set to true (that is, for LDAPS). Note the change in the ldap_port value. For the complete sample payload structure, see Parameters in Payload Content.
{
"name": "test173",
"siebel": {
....
....
"security_adapter_type": "ldap",
"ldap":
{
"ldap_host_name": <ldap_FQDN>,
"ldap_port": "636",
"application_user_dn": "cn=Directory Manager",
"application_password": "ocid1.vaultsecret.oc1.uk-london-1.iaheyoqdfpc33khmp42wec",
"base_dn": "ou=people,o=siebel.com",
"shared_db_credentials_dn": "uid=sadmin,ou=people,o=siebel.com",
"shared_db_username": "sadmn",
"shared_db_password": "ocid1.vaultsecret.oc1.uk-london-1.tkyyppq733brnkhmp42wec",
"password_attribute_type": "userPassword",
"siebel_admin_username": "sadmin",
"username_attribute_type": "uid",
"credentials_attribute_type": "mail",
"siebel_admin_password": "ocid1.vaultsecret.oc1.uk-london-1.amaaaaaa4n2rr5ia2wcc",
"anonymous_username": "GUESTCST",
"anonymous_user_password": "ocid1.vaultsecret.oc1.uk-london-1.amaaaaaa4n2rnkhmp42was",
"use_adapter_username": "true",
"siebel_username_attribute_type" : "uid",
"propagate_change": "true",
"hash_db_password": "true",
"hash_user_password": "true",
"salt_user_password": "true",
"salt_attribute_type": "title",
"enable_ssl": "true",
"ldap_wallet_path": "/home/opc/siebel/ewallet.p12",
"ldap_wallet_password": "ocid1.vaultsecret.oc1.uk-london-1.aaa4noqkyyppq7lf4oamvb7f2cxx"
}
},
"infrastructure": {
....
....
Terminating SSL/TLS at the Load Balancer (FrontEnd SSL) using SCM
When Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, you can specify that you want to terminate SSL at the load balancer. This configuration is known as frontend SSL. To implement frontend SSL, you define a listener at a port such as 443 and associate an SSL certificate with the listener.
Load balancers commonly use single domain certificates. However, load balancers with listeners that include request routing configuration might require a subject alternative name (SAN) certificate (also called multi-domain certificate) or a wildcard certificate. The Load Balancing service supports each of these certificate types.
Oracle Cloud Infrastructure (OCI) accepts x.509 type certificates in PEM format only. The following is an example PEM encoded certificate:
-----BEGIN CERTIFICATE-----
<Base64_encoded_certificate>
-----END CERTIFICATE-----
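Since OCI accepts certificates in PEM format only, a quick structural pre-check of the file you plan to supply can save a failed submission. This is an illustrative sketch, not an OCI validator; it only confirms PEM framing and base64 content, not the certificate itself:

```python
import base64
import re

def looks_like_pem_certificate(text):
    """Rough structural check that text contains at least one PEM block
    of type CERTIFICATE whose body is valid base64. This does not
    verify the x.509 contents (use openssl for that)."""
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----\n(.*?)-----END CERTIFICATE-----",
        text, re.DOTALL)
    if not blocks:
        return False
    for body in blocks:
        try:
            # Strip newlines, then require strict base64.
            base64.b64decode("".join(body.split()), validate=True)
        except ValueError:
            return False
    return True
```

Running it over the file referenced by load_balancer_ssl_cert_path before provisioning catches the common "certificate not in PEM format" rejection early.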
To terminate SSL/TLS at the load balancer with a custom SSL certificate, you must supply the certificate during environment provisioning using the following payload parameters:
- load_balancer_ssl_cert_path
- load_balancer_private_key_path
- load_balancer_private_key_password
For more information, see the "Payload Elements for Siebel Cloud Manager" table in Parameters in Payload Content. If the above optional parameters are not provided during environment provisioning, SCM generates a self-signed certificate and associates it with the load balancer listener through the Nginx service.
Updating SSL/TLS Certificates for an Existing Load Balancer Post Deployment
Solution 1 - Updating certificates to existing Load Balancer from OCI Console
- Go to the OCI console and navigate to Load Balancer service.
- Go to the Load Balancer of the current environment.
- Click on Certificates on left menu and select Load Balancer Managed certificate in the Certificate resource dropdown.
- Click on Add certificate and upload SSL certificate and private key in respective fields.
- Go to Listeners from the left menu and edit the listener named 'TCP-443'.
- Select Load Balancer Managed certificate in the 'Certificate resource' dropdown.
- Select the new load balancer certificate in the 'Certificate name' dropdown.
- If private key is encrypted, first decrypt it using the
command:
openssl rsa -in <load_balancer_private_key_path> -out <decrypted_load_balancer_private_key_path>
- Create a Kubernetes tls secret for load balancer ssl certificate using the
command:
kubectl create secret tls lb-ssl-certificate --key <decrypted_load_balancer_private_key_path> --cert <load_balancer_ssl_cert_path> -n <namespace>
Note: If lb-ssl-certificate is already present, you need to delete it first using the command: 'kubectl delete secret lb-ssl-certificate -n <namespace>'
- Update the ssl certificate in Ingress definition:
- SSH into the SCM instance.
- docker exec -it cloudmanager bash.
- cd /home/opc/siebel/<env_id>/<namespace>-cloud-manager/flux-crm/infrastructure/nginx/.
- Edit the
siebel-ingress-app.yaml
file. - Update 'secretName' under 'tls' to 'lb-ssl-certificate'' if not present.
- Update the 'hosts' under 'tls' with your domain hostname.
- Follow the same steps for
siebel-ingress-smc.yaml
file. - Push your changes to remote git repository:
git add . git commit -m "<message>" git push
- The certificate will not be updated on an existing Load Balancer automatically. You have to delete the existing Load Balancer so that a new Load Balancer is created with the updated certificates.
- First, delete the ingress-nginx-controller service. To delete the existing load balancer:
kubectl delete svc <namespace>-ingress-nginx-controller -n <namespace>
- Update the ingress-nginx chart version in the <namespace>-helmcharts repository to initiate new load balancer creation.
- SSH into the SCM instance.
- docker exec -it cloudmanager bash.
- cd /home/opc/siebel/<env_id>/<namespace>-helmcharts/ingress-nginx.
- Edit Chart.yaml and increment the chart version.
- Push the changes to remote git
repository:
git add . git commit -m "<message>" git push
The flux reconciliation and new load balancer creation might take up to 10 minutes.
- To get the new Load Balancer external IP address, use the command: 'kubectl get svc <namespace>-ingress-nginx-controller -n <namespace>'.
- The IP address of the new Load Balancer should be used in Siebel Application URLs.
Auto-enablement of Siebel Migration Application
This topic is part of Deploying Siebel CRM on OCI.
The Siebel Migration application is a Web-based tool for migrating Siebel Repositories and seed data and performing related tasks, which is provided with the Siebel Application Interface (SAI) installation.
An environment deployed through the "Lift-and-Shift" mechanism using the Lift tool and SCM has the Siebel Migration application auto-enabled in the deployed Siebel CRM environment. Once the deployment is done, the Siebel Migration application endpoint is included in the URL list, with a form ending in /siebel/migration. Use the migration_package_mt_export_path parameter described in Parameters in Payload Content.
For more information about the activities that you can perform in the Siebel Management Console (SMC) post-deployment, refer to the Siebel Bookshelf.
For more information about troubleshooting, see Troubleshooting a Siebel Cloud Manager Instance or Requested Environment.
Parameters in Payload Content
This topic is part of Deploying Siebel CRM on OCI.
The following table provides information about each of the payload parameters. For an example payload and for usage guidelines, see Example Payload to Deploy Siebel CRM.
Note the following usage considerations for some of the payload parameters:
-
The
config_id
parameter is required for and used only when provisioning a greenfield environment with a configuration that you previously customized. For more information, see Customizing Configurations Prior to Greenfield Deployment. -
The
database_type
andindustry
parameters are required for and used only for greenfield deployments. -
Under database, the
db_type
parameter (not the same asdatabase_type
) is used to specify either ATP (for Oracle Autonomous Database) or DBCS_VM (for Oracle Database Cloud Service) or BYOD. Different database parameters are expected for each selection. -
The
bucket_url
parameter is used only for the deployment scenario that uses the Siebel Lift utility. This parameter is not used for greenfield deployments.
Users are advised to familiarize themselves with the various notes above before proceeding to the payload parameter descriptions below.
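The usage considerations above can be expressed as a quick pre-submission sanity check. The sketch below is illustrative only, covers just the rules stated in these notes (not SCM's full validation), and the helper name `payload_issues` is hypothetical:

```python
def payload_issues(payload, greenfield):
    """Flag payload combinations that contradict the usage notes:
    database_type/industry are greenfield-only and required there,
    bucket_url is lift-and-shift-only, and db_type must be one of
    the documented values."""
    issues = []
    siebel = payload.get("siebel", {})
    if greenfield:
        for p in ("database_type", "industry"):
            if p not in siebel:
                issues.append(f"{p} is required for greenfield deployments")
        if "bucket_url" in siebel:
            issues.append("bucket_url is not used for greenfield deployments")
    else:
        if "bucket_url" not in siebel:
            issues.append("bucket_url is required for lift and shift deployments")
    db_type = payload.get("database", {}).get("db_type")
    if db_type not in ("ATP", "DBCS_VM", "BYOD"):
        issues.append("db_type must be ATP, DBCS_VM, or BYOD")
    return issues
```

An empty result means the payload at least satisfies these particular notes; the full parameter table below still governs every other field.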
Payload Parameter | Section | Description |
---|---|---|
name |
(top level) |
(Required) A short name for identification of the environment. This name is used as a prefix in all the resources. The namespace in the Kubernetes cluster is created with this name. Choose something meaningful and short (no more than 10 to 15 alphanumeric characters), such as DevExample (perhaps using the name of your company or organization). |
config_id |
(top level) |
(Required for customization workflow) The configuration ID that is obtained as described in Customizing Configurations Prior to Greenfield Deployment. You specify this configuration ID in the payload only when you provision a greenfield environment with a configuration that you previously customized. |
database_type |
siebel |
(Required for greenfield deployments) Specifies the database type to use for a greenfield deployment. The available options are:
Note: This parameter is used only for greenfield deployments and is not
used for the deployment scenario that uses the Siebel Lift
utility.
|
industry |
siebel |
(Required for greenfield deployments) Specifies the industry-specific functionality to enable in a greenfield deployment. The available options are:
Note: This parameter is used only for greenfield deployments and is not
used for the deployment scenario that uses the Siebel Lift
utility.
|
registry_url |
siebel |
(Required) Specifies the URL of the Open Container Initiative (OCI) compliant container registry. For example, for the Oracle Cloud Infrastructure container registry in the Ashburn region, you might use iad.ocir.io. For more information, see the registry concepts information in the Oracle Cloud Infrastructure documentation (https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registryprerequisites.htm). |
registry_user |
siebel |
(Required) Specifies the user ID to connect to the container registry. This user must have container registry access to push and pull images. |
registry_password |
siebel |
(Required) Specifies the password or authentication token for this user. |
registry_prefix |
siebel |
(Optional) Specifies a prefix that's appended after the
For OCI container registry, this should be the tenancy namespace, if needed, you can add a suffix to it. As it's an optional field, it can be left blank. |
bucket_url |
siebel |
(Required for lift and shift deployments) Specifies the bucket created when you ran the Siebel Lift utility, which you are using to upload deployment artifacts. Create a pre-authenticated request URL for the bucket. The access type must permit object reads and the bucket must enable object listing. Note: This parameter is used only for the deployment scenario that uses
the Siebel Lift utility and is not used for greenfield
deployments.
To create a pre-authenticated request URL:
|
keystore |
siebel |
(Optional) This parameter allows for Custom Keystore Management |
gateway_cluster_replica_count |
siebel |
(Optional) This parameter installs and configures a gateway cluster based on the number given in gateway_cluster_replica_count. This is applicable for both Greenfield and Lift & Shift. Siebel Gateway (CGW) Cluster requires a minimum of 3 replicas and it is recommended to be an odd number. A 3 node gateway cluster will be created by default, if this parameter is not overridden. Otherwise the gateway cluster is created with the overridden value in the payload. |
security_adapter_type |
siebel |
(Optional) Specify the security adapter type. Supported values are 'DB' and 'LDAP'. Default value: DB. |
siebel_keystore_path |
siebel > keystore |
(Required for Siebel keystore : when "keystore" parameter is used) This parameter specifies the path to a custom keystore file in jks format. For more information, see Managing Custom Keystore. |
siebel_truststore_path |
siebel > keystore |
(Required for Siebel keystore : when "keystore" parameter is used) This parameter specifies the path to a custom truststore file in jks format. For more information, see Managing Custom Keystore. |
siebel_keystore_password |
siebel > keystore |
(Required for Siebel keystore : when "keystore" parameter is used) Password used for the keystore. |
siebel_truststore_password |
siebel > keystore |
(Required for Siebel keystore : when "keystore" parameter is used) Password used for the truststore. |
ldap_host_name |
siebel > ldap |
(Required) Host name of the LDAP server for LDAP authentication. You must specify the FQDN (fully qualified domain name) of the LDAP server, not just the domain name. For example, specify ldapserver.example.com, not example.com. Note that you may have to use the IP address if the server is configured to listen only on the IP address. |
ldap_port |
siebel > ldap |
(Required) Specify the port number of the LDAP server for LDAP authentication. For example, 389. |
application_user_dn |
siebel > ldap |
(Required) Specify the user name of a record in the directory with sufficient permissions to read any user's information and do any necessary administration. This user provides the initial binding of the LDAP directory with the Application Object Manager when a user requests the login page, or else anonymous browsing of the directory is required. You enter this parameter as a full distinguished name (DN), for example "uid=appuser, ou=people, o=example.com" (including quotes) for LDAP. The security adapter uses this name to bind. You must implement an application user. |
application_password |
siebel > ldap |
(Required) OCID of the secret containing the password for the user defined by the Application User Distinguished Name parameter. The secret must be stored encrypted in the vault. In an LDAP directory, the password is stored in an attribute and clear text passwords are not supported for the LDAPSecAdpt named subsystem. |
base_dn |
siebel > ldap |
(Required) Specify the base distinguished name, which is the root of the tree under which users of this Siebel application are stored in the directory. Users can be added directly or indirectly after this directory. For example, a typical entry for an LDAP server might be: BaseDN = "ou=people, o=domain_name" where:
|
credentials_attribute_type |
siebel > ldap |
(Required) Specify the attribute type that stores a database account. For example, if Credentials Attribute is set to dbaccount, then when a user with user name HKIM is authenticated, the security adapter retrieves the database account from the dbaccount attribute for HKIM. This attribute value must be of the form username=U password=P, where U and P are credentials for a database account. There can be any amount of space between the two key-value pairs but no space within each pair. The keywords username and password must be lowercase. In LDAP security adapter authentication to manage the users in the directory through the Siebel client, the value of the database account attribute for a new user is inherited from the user who creates the new user. The inheritance is independent of whether you implement a shared database account, but does not override the use of the shared database account. |
password_attribute_type |
siebel > ldap |
(Required) Specify the attribute type under which the user’s login password is stored in the directory. |
roles_attribute_type |
siebel > ldap |
(Optional) Specify the attribute type for roles stored in the directory. For example, if Roles Attribute is set to roles, then when a user with user name HKIM is authenticated, the security adapter retrieves the user’s Siebel responsibilities from the roles attribute for HKIM. |
shared_db_credentials_dn |
siebel > ldap |
(Optional) Specify the absolute path (not relative to the Base Distinguished Name) of an object in the directory that has the shared database account for the application. If not set, then the database account is looked up in the user’s DN as usual. If set, then the database account for all users is looked up in the shared credentials DN instead. The attribute type is determined by the value of the Credentials Attribute parameter. For example, if the Shared Database Account Distinguished Name parameter is set to "uid=HKIM, ou=people, o=example.com" when a user is authenticated, the security adapter retrieves the database account from the appropriate attribute in the HKIM record. This parameter’s default value is an empty string. |
shared_db_username |
siebel > ldap |
(Optional) Specify the user name to connect to the Siebel database. You must specify a valid Siebel user name and password for the Shared DB User Name and Shared DB Password parameters. Specify a value for this parameter if you store the shared database account user name as a parameter rather than as an attribute of the directory entry for the shared database account. To use this parameter, you can use an LDAP directory. |
shared_db_password |
siebel > ldap |
(Optional) OCID of the secret containing the password associated with the Shared DB User Name parameter. |
username_attribute_type |
siebel > ldap |
(Required) Specifies the attribute type under which the user’s login name is stored in the directory. For example, if User Name Attribute Type is set to uid, then when a user attempts to log in with user name HKIM, the security adapter searches for a record in which the uid attribute has the value HKIM. This attribute is the Siebel user ID, unless the Security Adapter Mapped User Name check box is selected. |
use_adapter_username |
siebel > ldap |
(Optional) If this boolean parameter is set to true, then when the user key name passed to the security adapter is not the Siebel User ID, then the security adapter retrieves the Siebel User ID for authenticated users from an attribute defined by the Siebel Username Attribute parameter. |
siebel_username_attribute_type |
siebel > ldap |
This parameter is mandatory when 'use_adapter_username' is set to 'true'. If set, this parameter is the attribute from which the security adapter retrieves an authenticated user’s Siebel User ID. If not set, then the user name passed in is assumed to be the Siebel User ID. |
siebel_admin_username |
siebel > ldap |
(Required) The username of the Siebel CRM administrative user. |
siebel_admin_password |
siebel > ldap |
(Required) OCID of the secret containing the Siebel CRM Administration User password. |
anonymous_username |
siebel > ldap |
(Required) The username of the web anonymous user. |
anonymous_user_password |
siebel > ldap |
(Required) OCID of the secret containing the anonymous user password which will be updated. |
propagate_change |
siebel > ldap |
(Optional) This is a boolean flag. Set this parameter to True to allow administration of the directory through Siebel Business Applications UI. When an administrator then adds a user or changes a password from within the Siebel application, or a user changes a password or self-registers, the change is propagated to the directory. A non-Siebel security adapter must support the SetUserInfo and ChangePassword methods to allow dynamic directory administration. |
hash_db_password |
siebel > ldap |
(Optional) This is a boolean flag. Set this parameter to True to specify password hashing for database credentials passwords. The hash algorithm will be set to "SHA1" (the default value), which is read-only for the Siebel Gateway (SGW) security profile. |
hash_user_password |
siebel > ldap |
(Optional) This is a boolean flag. Set this parameter to True to specify password hashing for user passwords. The hash algorithm will be set to "SHA1" (the default value), which is read-only for the SGW security profile. |
salt_attribute_type |
siebel > ldap |
(Optional) Specifies the attribute that stores the salt value if you have chosen to add salt values to user passwords. The default attribute is title. |
salt_user_password |
siebel > ldap |
(Optional) This is a boolean flag. Set this parameter to True to specify that salt values are to be added to user passwords before they are hashed. This parameter is ignored if the Hash User Password parameter is set to False. |
enable_ssl |
siebel > ldap |
(Optional) Specifies whether to enable SSL for connections to the LDAP server (that is, LDAP over SSL or, in short, LDAPS). |
ldap_wallet_path |
siebel > ldap |
(Required only when enable_ssl is set to true) This parameter specifies the path to the wallet file required for the LDAP over SSL connection. The wallet file (Example: ewallet.p12) won't be lifted during the lift process; you must manually copy it to the OCI SCM container location and pass the path in this payload parameter. You can also copy the wallet file to the SCM container using the File Sync Utility; for more information, see Uploading Files to the SCM Container Using File Sync Utility. Here, the wallet should be created with Oracle Wallet Manager, and the Oracle wallet must contain the CA server certificate issued by the Certificate Authority to the LDAP directory server. |
ldap_wallet_password |
siebel > ldap |
(Required when enable_ssl is set to true) OCID of the secret containing the password to open the LDAP wallet that contains a certificate for the certificate authority used by the LDAP directory server. |
gitlab_url |
infrastructure |
(Required) Specifies the URL for the GitLab instance. |
gitlab_user |
infrastructure |
(Required) Specifies a user with access to create GitLab projects in the specified GitLab instance. |
gitlab_accesstoken |
infrastructure |
(Required) Specifies an access token for this GitLab user. You can create the access token in user settings. The access token must have API scope. |
gitlab_selfsigned_cacert |
infrastructure |
(Required) Specifies the path to a self-signed certificate. Copy the certificate (from the GitLab instance, for example) to the
SCM instance at
|
siebel_lb_subnet_cidr |
infrastructure |
(Required for advanced network configuration) CIDR range for Load Balancer subnet. For more information about CIDR ranges for subnets, see Using Advanced Network Configuration. |
siebel_private_subnet_cidr |
infrastructure |
(Required for advanced network configuration) CIDR range for Kubernetes worker nodes private subnet. |
siebel_db_subnet_cidr |
infrastructure |
(Required for advanced network configuration) CIDR range for the database private subnet. |
siebel_cluster_subnet_cidr |
infrastructure |
(Required for advanced network configuration) CIDR range for OKE cluster subnet (Kubernetes API server). |
siebel_lb_subnet_ocid |
infrastructure |
(Required for using existing VCN resource) OCID of the regional subnet where the Load Balancer will be attached. Allow TCP port 443 from your client network where the users will access Siebel application. |
siebel_private_subnet_ocid |
infrastructure |
(Required for using existing VCN resource) OCID of the subnet where the OKE worker nodes will be attached. The following needs to be ensured:
|
siebel_db_subnet_ocid |
infrastructure |
(Required for using existing VCN resource) OCID of the subnet where the Database will be created. The following needs to be ensured:
|
siebel_cluster_subnet_ocid |
infrastructure |
(Required for using existing VCN resource) OCID of the subnet where the Kubernetes API endpoint will be made available. The following needs to be ensured:
|
vcn_ocid_of_db_subnet |
infrastructure |
(Required for using existing VCN resource) OCID of the VCN which will be attached to the access control list of autonomous database (ATP). This is needed for establishing connection when the database is launched in a different VCN than the worker node subnet. |
load_balancer_type |
infrastructure |
(Optional) Option to make the load balancer private or public. You can restrict visibility of the Siebel application using this payload parameter. Supported values are one of: Private, Public. Choosing the "Public" option will assign a load balancer with a public IP for public access. Choosing the "Private" option will create a load balancer with only a private IP, which can be accessed within the network only. If it is not specified, a public IP will be assigned. |
load_balancer_ssl_cert_path | infrastructure |
(Optional) Specifies the path of the ssl certificate file which contains public certificate or collection of public certificates that you can provide as an aggregated group for load balancer. The ssl certificate should be in PEM format only. If your ssl certificate submission returns an error, the most common reasons are:
|
load_balancer_private_key_path | infrastructure |
(Optional) Specifies the path of the private key file for the Load Balancer TLS/SSL certificate. The private key should be in PEM format only. If your private key submission returns an error, the most common reasons are:
|
load_balancer_private_key_password | infrastructure |
(Optional) The OCID of the secret containing the password of the Load Balancer private key. This will be used to decrypt the private key provided in the 'load_balancer_private_key_path' parameter. |
load_balancer_tls_secret_name | infrastructure |
Specifies the name of the Load Balancer tls secret name to be given during environment provisioning. Note: If you provide ingress annotations, the value of tls-secret annotation should be same as the value of this parameter. The default value for load_balancer_tls_secret_name is "lb-tls-certificate". You can provide "lb-tls-certificate" for the value of tls-secret annotation under the ingress controller annotation section if this parameter is not configured in the payload. |
shift_siebel_fs | infrastructure | (Optional) This parameter specifies whether shifting of the file system is to be executed or skipped while BYO-FS (infrastructure > mounttarget_exports) is used. The default value is True. |
mounttarget_exports | infrastructure |
(Required if the "Use existing resources" option is chosen during SCM stack creation) The mount_target_private_ip and export_path information to be used for Siebel file system. |
kubernetes_type | infrastructure > kubernetes |
Specifies the type of Kubernetes supported by SCM. Allowed values are OKE, BYO_OKE, BYO_OCNE, or BYO_OTHER. If OKE, SCM creates an OKE cluster during environment provisioning. If BYO_OKE, the user must provide OKE cluster details. If BYO_OCNE, the user must provide OCNE cluster details. If BYO_OTHER, the user can provide any other type of cluster that adheres to CNCF standards. This field becomes mandatory if the "Use existing resources" option is chosen during SCM stack creation. |
oke_node_count | infrastructure > kubernetes > oke |
(Optional) Specifies the number of nodes to be created in the cluster. In a region with multiple availability domains, node pools are distributed across all availability domains. The default is 3. For more information about node counts, see OCI documentation. |
oke_node_shape | infrastructure > kubernetes > oke |
(Optional for Flex shape type) Specifies the compute shape for the cluster node. Example shape options include:
Note: For Flex (flexible) node shape options only, the parameters under oke_node_shape_config specify values for the memory and ocpus parameters. (For non-flexible node shape options, these parameters are not editable.) For more information about compute shapes, see OCI documentation. |
memory_in_gbs | infrastructure > kubernetes > oke > oke_node_shape_config |
(Optional for Flex shape type) Specifies the amount of memory available to each node in the node pool, in gigabytes. This setting is editable only for flexible node shape options. |
ocpus | infrastructure > kubernetes > oke > oke_node_shape_config |
(Optional for Flex shape type) Specifies the number of Oracle CPUs (OCPUs) available to each node in the node pool. This setting is editable only for flexible node shape options. |
oke_cluster_id Note: You can either pass oke_cluster_id and oke_endpoint, or you can pass only oke_kubeconfig_path in the payload. |
infrastructure > kubernetes > byo_oke |
(Required when 'kubernetes_type' is BYO_OKE) The OCID of the OCI Kubernetes Cluster. Note:
For more information, see Using Vault for Managing Secrets. |
oke_endpoint Note: You can either pass oke_cluster_id and oke_endpoint, or you can pass only oke_kubeconfig_path in the payload. |
infrastructure > kubernetes > byo_oke |
(Required when 'kubernetes_type' is BYO_OKE) Specifies the endpoint used to generate the kubeconfig and access the cluster. The available options are
Depending on the input, either the private or the public endpoint is used to access the cluster. |
oke_kubeconfig_path | infrastructure > kubernetes > byo_oke |
(Required when 'kubernetes_type' is BYO_OKE) Specifies the path of the kubeconfig file of an existing OKE cluster, used to access and configure the cluster. Copy the kubeconfig file to the SCM instance at this location: '/home/opc/siebel', and provide the path for the file, such as '/home/opc/siebel/kubeconfig'. Note:
For more information, see Using Vault for Managing Secrets. |
kubeconfig_path | infrastructure > kubernetes > byo_ocne infrastructure > kubernetes > byo_other |
(Required when 'kubernetes_type' is BYO_OCNE or BYO_OTHER) Specifies the path of the kubeconfig file of an existing Kubernetes cluster (other than OKE, for example, an OCNE cluster), used to access and configure the cluster. Copy the kubeconfig file to the SCM instance at this location: '/home/opc/siebel', and provide the path for the file, such as '/home/opc/siebel/kubeconfig'. Note: The SCM instance must have access to the Kubernetes cluster to perform any operation on cluster-related resources. |
ingress_service_type | infrastructure > ingress_controller |
Specifies the ingress service type to be provisioned during Siebel CRM deployment. Allowed values are LoadBalancer or NodePort. |
ingress_controller_service_annotations | infrastructure > ingress_controller |
(Optional) Specifies annotations that need to be added to the ingress service. Note: When ingress_service_type is LoadBalancer, for the 'BYO OKE' or 'BYO OCNE' use case the 'service.beta.kubernetes.io/oci-load-balancer-subnet1' annotation is required under the 'ingress_controller_service_annotations' sub-section. |
siebfs_mt_export_paths |
infrastructure > mounttarget_exports |
(Required if the "Use existing resources" option is chosen during SCM stack creation) The list of mount_target_private_ip and export_path information to be used for the Siebel file system, matching the number of siebel_file_system_count in the source environment. The payload structure would be:
"infrastructure": {
    "mounttarget_exports": {
        "siebfs_mt_export_paths": [
            {"mount_target_private_ip": ****, "export_path": "/exttest2-siebfs0"},
            {"mount_target_private_ip": ****, "export_path": "/exttest2-siebfs1"},
            {"mount_target_private_ip": ****, "export_path": "/exttest2-siebfs1"}
        ]
    },
    (other infrastructure payload parameters)
} |
migration_package_mt_export_path |
infrastructure > mounttarget_exports |
(Required if the "Use existing resources" option is chosen during SCM stack creation) The mount_target_private_ip and export_path information to be used for Migration storage. The payload structure would be:
Note: If this parameter is not provided for an SCM-created Siebel deployment, SCM creates a dedicated export path for migration storage with the path /<env_namespace>-migration. This can be mounted in target environments. |
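The inline payload structure for migration_package_mt_export_path is omitted above. Based on the sibling siebfs_mt_export_paths parameter, a minimal sketch of the expected shape might look like the following; the IP and export path values are placeholders, not values from this guide:

```python
import json

# Hypothetical sketch of the mounttarget_exports section with a single
# migration_package_mt_export_path entry. Field names follow the sibling
# siebfs_mt_export_paths parameter documented above; values are placeholders.
payload_fragment = {
    "infrastructure": {
        "mounttarget_exports": {
            "migration_package_mt_export_path": {
                "mount_target_private_ip": "10.0.2.15",   # placeholder IP
                "export_path": "/exttest2-migration",     # placeholder path
            }
        }
    }
}
print(json.dumps(payload_fragment, indent=2))
```

Confirm the exact structure against your SCM release before use.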
db_type |
database |
Specifies one of the following:
For ATP, also include options under database > atp. For DBCS_VM, also include options under database > dbcs_vm. For BYOD, also include options under database > byod. For more information, see Notes on BYOD (Bring Your Own Database). |
siebel_admin_username |
database > auth_info |
(Mandatory) The username of the Siebel administrative user. |
siebel_admin_password |
database > auth_info |
(Mandatory) OCID of the secret containing the Siebel administration user password. The password must be 9 to 30 characters long and contain at least 2 uppercase characters, 2 lowercase characters, 2 digits, and 2 special characters from _, #, -. The password must not contain the username. For more information, see Using Vault for Managing Secrets. |
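The password policy above (and the same policy repeated for the table owner and other user passwords below) can be pre-checked locally before storing the secret in the vault. A minimal sketch; the function name is illustrative and not part of SCM:

```python
def meets_siebel_password_policy(password: str, username: str) -> bool:
    """Check the documented Siebel password rules locally.

    Rules: 9-30 characters, >=2 uppercase, >=2 lowercase, >=2 digits,
    >=2 special characters from _ # -, and the username must not
    appear inside the password.
    """
    if not (9 <= len(password) <= 30):
        return False
    if username and username.lower() in password.lower():
        return False
    upper = sum(c.isupper() for c in password)
    lower = sum(c.islower() for c in password)
    digits = sum(c.isdigit() for c in password)
    specials = sum(c in "_#-" for c in password)
    return upper >= 2 and lower >= 2 and digits >= 2 and specials >= 2

print(meets_siebel_password_policy("AAbb12_#x", "sadmin"))    # True: all rules met
print(meets_siebel_password_policy("sadminAB12_#", "sadmin")) # False: contains username
```

Running such a check before creating the vault secret avoids a failed deployment stage caused by a rejected password.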
table_owner_user |
database > auth_info |
(Mandatory) The table owner into which the Siebel schema will be imported. |
table_owner_password |
database > auth_info |
(Mandatory) OCID of the secret containing the login password used for the Siebel table owner. The password must be 9 to 30 characters long and contain at least 2 uppercase characters, 2 lowercase characters, 2 digits, and 2 special characters from _, #, -. The password must not contain the username. For more information, see Using Vault for Managing Secrets. |
default_user_password |
database > auth_info |
(Mandatory) OCID of the secret containing the default user password, which is updated for all users. The password must be 9 to 30 characters long and contain at least 2 uppercase characters, 2 lowercase characters, 2 digits, and 2 special characters from _, #, -. For more information, see Using Vault for Managing Secrets. |
anonymous_user_password |
database > auth_info |
(Mandatory) OCID of the secret containing the anonymous user password, which will be updated. The password must be 9 to 30 characters long and contain at least 2 uppercase characters, 2 lowercase characters, 2 digits, and 2 special characters from _, #, -. For more information, see Using Vault for Managing Secrets. |
admin_password |
database > atp |
OCID of the secret for the password of the ATP database administrator user. The password must be 12 to 30 characters long and contain at least 1 uppercase character, 1 lowercase character, and 1 number. The password cannot contain the double quotation mark (") or the word "admin". Review the password policy for shared ATP infrastructure in OCI and provide a valid password. For more information about Oracle Autonomous Database, see https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html on Oracle Help Center. For more information, see Using Vault for Managing Secrets. |
wallet_password |
database > atp |
(Required) OCID of the secret containing the password for the ATP wallet download. The password can contain alphanumeric characters and must be 8 to 60 characters long. For more information, see Using Vault for Managing Secrets. |
cpu_cores |
database > atp |
(Required) Specifies the ATP database's allocated OCPUs. The minimum value is 1. |
whitelist_cidrs | database > atp |
Specifies the CIDRs to be added to the ATP DB ACL list when Cloud Manager creates the database. Cloud Manager creates the Autonomous Database with the 'Secure access from allowed IPs and VCNs only' option, so you can restrict network access by defining Access Control Lists (ACLs). When using a bring-your-own flow such as BYO OCNE, and you want to include the CIDRs of bring-your-own components in the ACL list of the ATP DB to establish a connection between them, you can use this parameter. Example: "whitelist_cidrs": "[129.0.0.0/8]" |
storage_in_tbs |
database > atp |
(Required) Specifies the ATP database's disk storage, in terabytes. The minimum value is 1. |
wallet_path |
database > byod |
(Required for a user-provided database if the "Use existing resources" option is chosen during SCM stack creation) The absolute path of the Oracle Net Services configuration files or Oracle client credentials (wallet) required for connecting to the database. The wallet files have to be copied inside the SCM container. The wallet must contain at least the tnsnames.ora file to be considered valid; during environment provisioning, the wallet is validated to confirm that it contains tnsnames.ora. TLS-enabled wallets are also supported. The provided wallet path is copied inside the environment directory for usage. For more information, see Notes on BYOD (Bring Your Own Database). |
tns_connection_name |
database > byod |
(Required for a user-provided database if the "Use existing resources" option is chosen during SCM stack creation) This is the connection identifier that the Siebel CRM application uses to establish a connection to the database. The provided connection identifier is validated to confirm that it is present in tnsnames.ora. For more information, see Notes on BYOD (Bring Your Own Database). |
drg_ocid | database > byod | (Optional) OCID of the DRG to be attached to the OKE nodes subnet to allow traffic from the VCN where the database resides, provided that the DB VCN and CM VCN are peered. For more information, see Using Vault for Managing Secrets. |
destination_db_cidr_block | database > byod | (Optional) Destination CIDR block to which traffic has to be routed from the OKE nodes subnet to the VCN where the database resides, provided that the DB VCN and CM VCN are peered. |
availability_domain |
database > dbcs_vm |
(Optional) The availability domain in which the database is to be used. Possible availability domains are 1, 2, and 3, depending on the region. Defaults to 1. |
cpu_count |
database > dbcs_vm |
(Optional) The OCPU count for the DBCS database node. Possible values are 4 to 64. The required memory is calculated as 16 GB times the number of OCPUs. The currently supported flex type relevant to this setting is VM.Standard.E4.Flex. |
data_storage_size_in_gbs |
database > dbcs_vm |
(Required) The storage size of the database instance, in gigabytes. The different storage sizes are: 256, 512, 1024, 2048, 4096, 6144, 8192, 10240, 12288, 14336, 16384, 18432, 20480, 22528, 24576, 26624, 28672, 30720, 32768, 34816, 36864, 38912, or 40960. |
database_edition |
database > dbcs_vm |
(Optional) The edition of Oracle Database to be used. Currently supported editions are:
|
db_admin_username |
database > dbcs_vm |
(Required) Username for the Oracle schema user to be created with DBA privileges for administration activities. The username must be 6 to 15 characters long and contain only letters. |
db_admin_password |
database > dbcs_vm |
(Required) OCID of the secret for the password of the Oracle schema user. The password must be 9 to 30 characters long and contain at least 2 uppercase characters, 2 lowercase characters, 2 digits, and 2 special characters from _, #, -. The password must not contain the username. For more information, see Using Vault for Managing Secrets. |
mount_target_ip | database > dbcs_vm | (Required when infrastructure > mounttarget_exports is provided) IP address of the mount target used for creating the database directory in the DB node. |
export_path | database > dbcs_vm | (Required when infrastructure > mounttarget_exports is provided) Export path in the mount target used for creating the database directory in the DB node. Note: This export path will be used for copying the database dumps and database directory for the import in the database shifting stage. |
db_version |
database > dbcs_vm |
(Optional) The version of Oracle Database to be used. Currently supported versions are 19.x.0.0 and 21.x.0.0. Defaults to 19.x.0.0. |
shape |
database > dbcs_vm |
(Required) The shape of the node for the Oracle Database instance. The different shapes in which the database can be provisioned can be found in the Limits, Quotas, and Usage section in the OCI console. |
cpu |
size > ses_resource_limits |
(Optional) Specifies CPU resource limits of SES containers. This parameter specifies the max number of CPU units that can be allocated to the container. It can be given as a whole number like "1" or as a decimal number like "0.5" or in milliCPU units like "500m". The default is "2". Precision finer than "1m" is not allowed. For more information, refer to Kubernetes documentation. If not specified in payload, default value is used. ses_resource_limits must be greater than or equal to the value of ses_resource_requests parameter. |
memory |
size > ses_resource_limits |
(Optional) Specifies memory resource limits of SES containers. This parameter specifies the maximum amount of memory that can be allocated to the container. It can be given in Ki, Mi, Gi, or Ti units. The default is "4Gi". Specify in multiples of 2, such as 4, 8, 16, and so on. For more information, refer to Kubernetes documentation. If not specified in the payload, the default value is used. ses_resource_limits must be greater than or equal to the value of the ses_resource_requests parameter. |
cpu |
size > ses_resource_requests |
(Optional) Specifies the minimum guaranteed amount of CPU resources that is to be reserved for SES containers. It can be given as a whole number or with a decimal point like "0.5" or in milliCPU units like "500m". The default is "1". A request with a decimal point, such as "0.1", is converted to "100m" (100 milliCPU) by the API. Precision finer than "1m" is not allowed. For more information, refer to Kubernetes documentation. If not specified in payload, default value is used. ses_resource_limits must be greater than or equal to the value of ses_resource_requests parameter. |
memory |
size > ses_resource_requests |
(Optional) Specifies the minimum guaranteed amount of memory resources to be reserved for SES containers. It can be given in Ki, Mi, Gi, or Ti units. The default is "4Gi". Specify in multiples of 2, such as 4, 8, 16, and so on. For more information, refer to Kubernetes documentation. If not specified in the payload, the default value is used. ses_resource_limits must be greater than or equal to the value of the ses_resource_requests parameter. |
cpu |
size > cgw_resource_limits |
(Optional) Specifies CPU resource limits of Siebel Cloud Gateway containers. Default value is "2". If not specified in payload, default value is used. cgw_resource_limits must be greater than or equal to the value of cgw_resource_requests parameter. |
memory |
size > cgw_resource_limits |
(Optional) Specifies memory resource limits of Siebel Cloud Gateway containers. Default value is "4Gi". If not specified in payload, default value is used. cgw_resource_limits must be greater than or equal to the value of cgw_resource_requests parameter. |
cpu |
size > cgw_resource_requests |
(Optional) Specifies the minimum guaranteed amount of CPU resources that is to be reserved for Siebel Cloud Gateway containers. Default value is "1". If not specified in payload, default value is used. cgw_resource_limits must be greater than or equal to the value of cgw_resource_requests parameter. |
memory |
size > cgw_resource_requests |
(Optional) Specifies the minimum guaranteed amount of memory resources to be reserved for Siebel Cloud Gateway containers. Default value is "4Gi". If not specified in the payload, the default value is used. cgw_resource_limits must be greater than or equal to the value of the cgw_resource_requests parameter. |
cpu |
size > sai_resource_limits |
(Optional) Specifies CPU resource limits reserved for Siebel Application Interface containers (SAI). Default value is "2". If not specified in payload, default value is used. sai_resource_limits must be greater than or equal to the value of sai_resource_requests parameter. |
memory |
size > sai_resource_limits |
(Optional) Specifies memory resource limits of Siebel Application Interface containers (SAI). Default value is "4Gi". If not specified in payload, default value is used. sai_resource_limits must be greater than or equal to the value of sai_resource_requests parameter. |
cpu |
size > sai_resource_requests |
(Optional) Specifies the minimum guaranteed amount of CPU resources that is to be reserved for Siebel Application Interface containers (SAI). Default value is "1". If not specified in payload, default value is used. sai_resource_limits must be greater than or equal to the value of sai_resource_requests parameter. |
memory |
size > sai_resource_requests |
(Optional) Specifies the minimum guaranteed amount of memory resources that is to be reserved for Siebel Application Interface containers (SAI). Default value is "4Gi". If not specified in payload, default value is used. sai_resource_limits must be greater than or equal to the value of sai_resource_requests parameter. |
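Each *_resource_limits value above must be greater than or equal to its *_resource_requests counterpart. Because CPU values may be given as whole numbers, decimals, or milliCPU units ("500m"), a local sanity check of the size section might look like this; the helper names are illustrative and not part of SCM:

```python
def cpu_to_millicores(value: str) -> int:
    """Convert a Kubernetes-style CPU quantity ("2", "0.5", "500m") to millicores."""
    if value.endswith("m"):
        return int(value[:-1])
    return int(float(value) * 1000)

def check_size_section(size: dict) -> None:
    """Assert CPU limits >= requests for every component in the payload's size section."""
    for component in ("ses", "cgw", "sai"):
        limits = size[f"{component}_resource_limits"]
        requests = size[f"{component}_resource_requests"]
        assert cpu_to_millicores(limits["cpu"]) >= cpu_to_millicores(requests["cpu"]), component

# Fragment mirroring the example payloads later in this topic.
size = {
    "ses_resource_limits": {"cpu": "2", "memory": "4Gi"},
    "ses_resource_requests": {"cpu": "1.0", "memory": "4Gi"},
    "cgw_resource_limits": {"cpu": "2", "memory": "4Gi"},
    "cgw_resource_requests": {"cpu": "1000m", "memory": "4Gi"},
    "sai_resource_limits": {"cpu": "1", "memory": "4Gi"},
    "sai_resource_requests": {"cpu": "1", "memory": "4Gi"},
}
check_size_section(size)
print(cpu_to_millicores("500m"), cpu_to_millicores("0.5"))  # 500 500
```

Note that "1000m", "1.0", and "1" all denote the same quantity, which is why a normalizing comparison is useful.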
siebel_monitoring |
observability |
(Optional) Set this value to true if you want to enable the Siebel CRM Observability – Monitoring feature. Set this value to false to disable the monitoring feature entirely. |
enable_oci_monitoring |
observability |
(Optional) Set this value to true to send metrics from Prometheus to the OCI monitoring service and create an OCI Application Performance Monitoring (APM) dashboard in OCI. Set this value to false to restrict sending the metrics from Prometheus to the OCI monitoring service and to restrict creating the OCI APM dashboard. Notes: The OCI infrastructure metrics for OCI resources will be available in OCI irrespective of the value of this parameter. siebel_monitoring should be 'true' and the oci_config parameter must be configured when enable_oci_monitoring is set to 'true'. |
send_alerts |
observability |
(Optional) Set this value to true if you want to enable the alerting feature in Siebel CRM Observability – Monitoring. Set this value to false to disable the alerting feature in Siebel CRM Observability – Monitoring. Note: siebel_monitoring should be 'true' when send_alerts is set to 'true' in the payload. |
siebel_logging |
observability |
(Optional) Set this value to true if you want to enable Siebel CRM Observability – Log Analytics feature. Set this value to false to disable Siebel CRM Observability – Log Analytics feature. |
enable_oci_log_analytics |
observability |
Set this value to true if you want to enable log streaming to OCI Logging Analytics. Set this value to false to disable log streaming to OCI Logging Analytics. Note: siebel_logging should be 'true' when enable_oci_log_analytics is set to 'true' in payload. |
enable_oracle_opensearch |
observability |
Set this value to true if you want to create Oracle OpenSearch infrastructure and enable log streaming to Oracle OpenSearch. Set this value to false to disable log streaming to Oracle OpenSearch. Note: siebel_logging should be 'true' when enable_oracle_opensearch is set to 'true' in payload. |
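The observability flags above carry dependencies: enable_oci_monitoring and send_alerts require siebel_monitoring to be true, while enable_oci_log_analytics and enable_oracle_opensearch require siebel_logging to be true. A small pre-flight check of these dependencies might look like the following; the function is illustrative, not an SCM API:

```python
def validate_observability(obs: dict) -> list:
    """Return a list of violated flag dependencies in the observability section."""
    rules = [
        ("enable_oci_monitoring", "siebel_monitoring"),
        ("send_alerts", "siebel_monitoring"),
        ("enable_oci_log_analytics", "siebel_logging"),
        ("enable_oracle_opensearch", "siebel_logging"),
    ]
    return [
        f"{flag} requires {prereq}=true"
        for flag, prereq in rules
        if obs.get(flag) and not obs.get(prereq)
    ]

print(validate_observability({"siebel_logging": True, "enable_oci_log_analytics": True}))  # []
print(validate_observability({"send_alerts": True}))  # ['send_alerts requires siebel_monitoring=true']
```

Validating the section before submission surfaces dependency mistakes earlier than a failed provisioning stage would.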
oci_log_analytics |
observability |
Required only for enabling OCI Logging Analytics for BYOR scenario, else optional. This section provides identifiers for various input parameters needed for enabling OCI Logging Analytics when BYOR ("Use existing resource") option is chosen during SCM installation. |
smc_log_group_id |
observability > oci_log_analytics |
OCID of the log group in OCI Logging Analytics to send all SMC logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
sai_log_group_id |
observability > oci_log_analytics |
OCID of the log group in OCI Log Analytics to push all SAI related logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
ses_log_group_id |
observability > oci_log_analytics |
OCID of the log group in OCI Log Analytics to push all SES related logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
gateway_log_group_id |
observability > oci_log_analytics |
OCID of the log group in OCI Log Analytics to push all Gateway related logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
node_logs_log_group_id |
observability > oci_log_analytics |
OCID of the log group in OCI Log Analytics to push all Pod logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
log_source_name |
observability > oci_log_analytics |
Name of the log source in OCI Log Analytics for identifying the origin of logs. This is required only when enable_oci_log_analytics is set to 'true' in "Siebel CRM Observability – Monitoring and Log Analytics" solution and "Use existing resources" option is selected. |
mount_target_private_ip |
observability > monitoring_mt_export_path |
Mount target private IP details required for monitoring component. |
export_path |
observability > monitoring_mt_export_path |
Mount target export path details required for monitoring component. |
storage_class_name |
observability > prometheus observability > oracle_opensearch |
(Optional) In the SCM Observability feature, Prometheus and Oracle OpenSearch use block volumes. Block volumes can be provisioned in one of the following two ways.
If your Kubernetes cluster does not support dynamic provisioning of block volumes, and you want to use the local storage of a node for Prometheus or Oracle OpenSearch, you can provide local-storage as the storage_class_name. You can also provide your own custom integration storage type by passing the name of the storage class in this parameter. The default value for this field is 'oci-bv'. |
local_storage |
observability > prometheus > local_storage_info observability > oracle_opensearch > local_storage_info |
If storage_class_name is local-storage, then this parameter specifies the local storage path. |
kubernetes_node_hostname |
observability > prometheus > local_storage_info observability > oracle_opensearch > local_storage_info |
If storage_class_name is local-storage, then this parameter specifies the hostname in which the local storage path is present. |
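Putting the three local-storage parameters together, an observability fragment for Prometheus backed by node-local storage might look like the following; the path and hostname values are placeholders you would replace with your own:

```python
import json

# Hypothetical observability fragment using node-local storage for Prometheus.
# Parameter names follow the table above; the path and hostname are placeholders.
observability_fragment = {
    "observability": {
        "prometheus": {
            "storage_class_name": "local-storage",
            "local_storage_info": {
                "local_storage": "/mnt/prometheus-data",      # placeholder path
                "kubernetes_node_hostname": "worker-node-1",  # placeholder hostname
            },
        }
    }
}
print(json.dumps(observability_fragment, indent=2))
```

The same shape applies under observability > oracle_opensearch when OpenSearch also uses local storage.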
oci_config_path |
observability > oci_config |
Specifies the path to the OCI config file. This is required only when either siebel_monitoring or enable_oci_log_analytics is enabled. Note: The region defined in the OCI configuration file provided as the oci_config_path parameter should be the same as the region where SCM is deployed. |
oci_private_api_key_path |
observability > oci_config |
Specifies the path to the oci private key file. This is required only when either siebel_monitoring or enable_oci_log_analytics is enabled for Siebel CRM Observability – Monitoring and Log Analytics solution. |
oci_config_profile_name |
observability > oci_config |
Specifies the profile name to be used in the oci config file. This is required only when either siebel_monitoring or enable_oci_log_analytics is enabled for Siebel CRM Observability – Monitoring and Log Analytics solution. |
smtp_host |
observability > alertmanager_email_config |
Specifies the SMTP host name required for SMTP configuration. This is required only when send_alerts is set to 'true' in Siebel CRM Observability – Monitoring and Log Analytics solution. |
smtp_from_email |
observability > alertmanager_email_config |
Specifies the SMTP from-email address from which alert email is sent, required for SMTP configuration. This is required only when send_alerts is set to 'true' in the Siebel CRM Observability – Monitoring and Log Analytics solution. |
smtp_auth_username |
observability > alertmanager_email_config |
Specifies the SMTP auth username required for SMTP configuration. This is required only when send_alerts is set to 'true' in Siebel CRM Observability – Monitoring and Log Analytics solution. |
smtp_auth_password_vault_ocid |
observability > alertmanager_email_config |
Specifies the OCID of the secret containing the SMTP auth password required for SMTP configuration. This is required only when send_alerts is set to 'true' in the Siebel CRM Observability – Monitoring and Log Analytics solution. |
to_email |
observability > alertmanager_email_config |
Specifies the email to which alerts should be sent. This is required only when send_alerts is set to 'true' in Siebel CRM Observability – Monitoring and Log Analytics solution. |
Executing the Payload to Deploy Siebel CRM
This topic describes how to execute the payload to deploy Siebel CRM. This topic is part of Deploying Siebel CRM on OCI.
To execute the payload to deploy Siebel CRM
-
Create an application/json body with the payload information. For an example, see Example Payload to Deploy Siebel CRM.
-
Do a
POST
API call like the following:
POST https://<CM_Instance_IP>:16690/scm/api/v1.0/environment
Note: Specify a payload appropriate for your use case. For an example payload and for usage guidelines, see Example Payload to Deploy Siebel CRM. -
Use Basic Auth and provide credentials like the following:
User: "admin"
Password: "<Password available in the file /home/opc/cm_app/{CM_RESOURCE_PREFIX}/config/api_creds.ini>"
Environment information is displayed. Copy the
selfLink
value for monitoring purposes. For example:
"selfLink": "https://<CM_Instance_IP>:16690/scm/api/v1.0/environment/4ZZYX5"
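The POST call above can be scripted. The sketch below only builds the request and does not send it; the instance IP, password, and payload are placeholders, with the real password coming from the api_creds.ini file as described in step 3:

```python
import base64
import json
import urllib.request

# Placeholders: substitute your SCM instance IP and the password from
# /home/opc/cm_app/{CM_RESOURCE_PREFIX}/config/api_creds.ini.
cm_instance_ip = "203.0.113.10"
password = "example-password"
payload = {"name": "DevExample"}  # see Example Payload to Deploy Siebel CRM

# Basic Auth header for the "admin" user, as required by the API.
token = base64.b64encode(f"admin:{password}".encode()).decode()
request = urllib.request.Request(
    url=f"https://{cm_instance_ip}:16690/scm/api/v1.0/environment",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would submit the deployment; the response
# contains the selfLink value used for monitoring.
print(request.get_method(), request.full_url)
```

Any HTTP client works equally well; the essential parts are the application/json body and the Basic Auth credentials.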
Example Payload to Deploy Siebel CRM
To deploy Siebel CRM on OCI, you can prepare a payload like the following to be executed by SCM. Note the following usage guidelines:
-
To deploy Siebel CRM with the default configuration (greenfield deployment use case 1), omit the
config_id
parameter. -
To create a Siebel CRM configuration to customize (greenfield deployment use case 2), use the
POST
API command in Creating the Configuration and Obtaining the Configuration ID. Include all the same payload parameters you would use in greenfield deployment use case 1. -
To deploy Siebel CRM with a customized configuration (greenfield deployment use case 2), use the
POST
API command in Executing the Payload to Deploy Siebel CRM. Include in the payload only the config_id
parameter (set to the configuration ID you obtained when you created the configuration) and the name parameter. Omit all other parameters. -
For usage guidance on additional parameters required for the lift and shift use case or for greenfield deployments, see Parameters in Payload Content.
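For greenfield deployment use case 2, the deployment payload therefore shrinks to just two parameters. A minimal sketch with placeholder values:

```python
import json

# Minimal payload for deploying with a previously created custom configuration;
# both values are placeholders to be replaced with your own.
payload = {
    "config_id": "<config_id of custom configuration>",
    "name": "DevExample",
}
print(json.dumps(payload, indent=2))
```

All other settings are taken from the stored configuration identified by config_id.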
Example Payload when "Do not use Vault" Checkbox is Selected
{
"config_id": "<config_id of custom configuration>",
"name": "DevExample",
"siebel": {
"registry_url": "iad.ocir.io",
"siebel_architecture": "CRM",
"registry_user": "deploygroup/user.name@example.com",
"registry_password": "<xxxxxx>",
"bucket_url": "https://objectstorage.us-example-1.oraclecloud.com/p/s0EgeDE9-NMc2lTazIY3LuXO1IbGx5ASAilKxJexLHNjirdl4AKJh8RBxou1J4S1/n/deploygroup/b/bucket_example/o/",
"keystore" : {
"siebel_keystore_path": "/home/opc/test/ca/siebelcerts/keystore.jks",
"siebel_keystore_password": "<xxxxxx>",
"siebel_truststore_path": "/home/opc/test/ca/siebelcerts/truststore.jks",
"siebel_truststore_password": "<xxxxxx>"
}
},
"infrastructure": {
"gitlab_url": "https://<IP address>",
"gitlab_accesstoken": "<yyyyy>",
"gitlab_user": "user.name",
"gitlab_selfsigned_cacert": "/home/opc/certs/rootCA.crt",
"siebel_cluster_subnet_ocid": "<cluster_subnet_ocid>",
"siebel_lb_subnet_ocid": "<lb_subnet_ocid>",
"siebel_private_subnet_ocid": "<private_subnet_ocid>",
"siebel_db_subnet_ocid": "<db_subnet_ocid>",
"vcn_ocid_of_db_subnet": "<VCN_ocid_of_worker_node>",
"load_balancer_type": "public",
"kubernetes": {
"kubernetes_type": "OKE",
"oke": {
"oke_node_count": 3,
"oke_node_shape": "VM.Standard.E4.Flex",
"oke_node_shape_config": {
"memory_in_gbs": 60,
"ocpus": 4
}
}
}
},
"database": {
"db_type": "ATP",
"atp": {
"admin_password": "<Plain-text of your admin password>",
"storage_in_tbs": 1,
"cpu_cores": 3,
"wallet_password": "<Plain-text of your wallet password>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<Your Siebel admin password's secret in plain-text>",
"default_user_password": "<Your default user password's secret in plain-text>",
"table_owner_password": "<Your table owner password's secret in plain-text>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<Your anonymous user password's secret in plain-text>"
}
},
"size": {
"ses_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"ses_resource_requests": {
"cpu": "1.0",
"memory": "4Gi"
},
"cgw_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"cgw_resource_requests": {
"cpu": "1000m",
"memory": "4Gi"
},
"sai_resource_limits": {
"cpu": "1",
"memory": "4Gi"
},
"sai_resource_requests": {
"cpu": "1",
"memory": "4Gi"
}
}
}
Example Payload when "Use existing resources" Checkbox is Not Selected
{
"config_id": "<config_id of custom configuration>",
"name": "DevExample",
"siebel": {
"registry_url": "iad.ocir.io",
"siebel_architecture": "CRM",
"registry_user": "deploygroup/user.name@example.com",
"registry_password": "<xxxxxx>",
"bucket_url": "https://objectstorage.us-example-1.oraclecloud.com/p/s0EgeDE9-NMc2lTazIY3LuXO1IbGx5ASAilKxJexLHNjirdl4AKJh8RBxou1J4S1/n/deploygroup/b/bucket_example/o/",
"keystore" : {
"siebel_keystore_path" : "/home/opc/test/ca/siebelcerts/keystore.jks",
"siebel_keystore_password": "<xxxxxx>",
"siebel_truststore_path": "/home/opc/test/ca/siebelcerts/truststore.jks",
"siebel_truststore_password": "<xxxxxx>"
}
},
"infrastructure": {
"gitlab_url": "https://<IP address>",
"gitlab_accesstoken": "<yyyyy>",
"gitlab_user": "user.name",
"gitlab_selfsigned_cacert": "/home/opc/certs/rootCA.crt",
"siebel_lb_subnet_cidr" : "10.0.1.0/24",
"siebel_private_subnet_cidr" : "10.0.2.0/24",
"siebel_db_subnet_cidr" : "10.0.3.0/24",
"siebel_cluster_subnet_cidr" : "10.0.4.0/24",
"load_balancer_type": "public",
"kubernetes": {
"kubernetes_type": "OKE",
"oke": {
"oke_node_count": 3,
"oke_node_shape": "VM.Standard.E3.Flex",
"oke_node_shape_config": {
"memory_in_gbs": "60",
"ocpus": "4"
}
}
}
},
"database": {
"db_type": "ATP",
"atp": {
"admin_password": "<OCID of your admin password>",
"storage_in_tbs": 1,
"cpu_cores": 3,
"wallet_password": "<OCID of your wallet password's secret>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
"size": {
"ses_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"ses_resource_requests": {
"cpu": "1.0",
"memory": "4Gi"
},
"cgw_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"cgw_resource_requests": {
"cpu": "1000m",
"memory": "4Gi"
},
"sai_resource_limits": {
"cpu": "1",
"memory": "4Gi"
},
"sai_resource_requests": {
"cpu": "1",
"memory": "4Gi"
}
}
}
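The payload above groups settings into a fixed set of top-level sections. As an illustrative aid (not part of SCM itself), a small script can confirm those sections are present before the payload is submitted; the section names below are taken from the example, and `config_id` is treated as optional:

```python
import json

# Top-level sections used in the example payloads above.
# "config_id" is optional (only needed for a custom configuration).
REQUIRED_SECTIONS = ["name", "siebel", "infrastructure", "database", "size"]

def missing_sections(payload: dict) -> list:
    """Return the required top-level sections absent from the payload."""
    return [s for s in REQUIRED_SECTIONS if s not in payload]

# Skeleton payload (values trimmed for brevity).
payload = json.loads("""
{
  "name": "DevExample",
  "siebel": {"siebel_architecture": "CRM"},
  "infrastructure": {"load_balancer_type": "public"},
  "database": {"db_type": "ATP"},
  "size": {}
}
""")
print(missing_sections(payload))        # → []
print(missing_sections({"name": "x"}))  # → ['siebel', 'infrastructure', 'database', 'size']
```

A check like this catches a payload whose resource-sizing keys were accidentally left at the top level instead of inside `size`.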
Example Payload when "Use existing VCN" Checkbox is Selected
{
"config_id": "<config_id of custom configuration>",
"name": "DevExample",
"siebel": {
"registry_url": "iad.ocir.io",
"siebel_architecture": "CRM",
"registry_user": "deploygroup/user.name@example.com",
"registry_password": "<xxxxxx>",
"bucket_url": "https://objectstorage.us-example-1.oraclecloud.com/p/s0EgeDE9-NMc2lTazIY3LuXO1IbGx5ASAilKxJexLHNjirdl4AKJh8RBxou1J4S1/n/deploygroup/b/bucket_example/o/",
"keystore" :
{
"siebel_keystore_path" : "/home/opc/test/ca/siebelcerts/keystore.jks",
"siebel_keystore_password": "<xxxxxx>",
"siebel_truststore_path": "/home/opc/test/ca/siebelcerts/truststore.jks",
"siebel_truststore_password": "<xxxxxx>"
}
},
"infrastructure": {
"gitlab_url": "https://<IP address>",
"gitlab_accesstoken": "<yyyyy>",
"gitlab_user": "user.name",
"gitlab_selfsigned_cacert": "/home/opc/certs/rootCA.crt",
"siebel_cluster_subnet_ocid": "<cluster_subnet_ocid>",
"siebel_lb_subnet_ocid": "<lb_subnet_ocid>",
"siebel_private_subnet_ocid": "<private_subnet_ocid>",
"siebel_db_subnet_ocid": "<db_subnet_ocid>",
"vcn_ocid_of_db_subnet": "<VCN_ocid_of_db_subnet>",
"load_balancer_type": "public",
"kubernetes": {
"kubernetes_type": "OKE",
"oke": {
"oke_node_count": 3,
"oke_node_shape": "VM.Standard.E3.Flex",
"oke_node_shape_config": {
"memory_in_gbs": "60",
"ocpus": "4"
}
}
}
},
"database": {
"db_type": "ATP",
"atp": {
"admin_password": "<OCID of your admin password's secret>",
"storage_in_tbs": 1,
"cpu_cores": 3,
"wallet_password": "<OCID of your wallet password's secret>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
"size": {
"ses_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"ses_resource_requests": {
"cpu": "1.0",
"memory": "4Gi"
},
"cgw_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"cgw_resource_requests": {
"cpu": "1000m",
"memory": "4Gi"
},
"sai_resource_limits": {
"cpu": "1",
"memory": "4Gi"
},
"sai_resource_requests": {
"cpu": "1",
"memory": "4Gi"
}
}
}
Example Payload when "Use existing resources" Checkbox is Selected
The following is an example payload sent to SCM to deploy Siebel CRM using user-provided information about existing infrastructure. Specific sections, for example for OKE and for the mount target, are shown as separate examples in the subsequent sections.
{
"name": "test1",
"siebel": {
"siebel_architecture": "CRM",
"registry_url": "iad.ocir.io",
"registry_user": "<registry_user>",
"registry_password": "<registry_password>",
"database_type": "Vanilla",
"industry": "Telecommunications",
"keystore" :
{
"siebel_keystore_path" : "/home/opc/test/ca/siebelcerts/keystore.jks",
"siebel_keystore_password": "<xxxxxx>",
"siebel_truststore_path": "/home/opc/test/ca/siebelcerts/truststore.jks",
"siebel_truststore_password": "<xxxxxx>"
}
},
"infrastructure": {
"gitlab_url": "https://150.mmm.xxx.yyy",
"gitlab_accesstoken": "<gitlab_token>",
"gitlab_user": "root",
"gitlab_selfsigned_cacert": "/home/opc/certs/rootCa.crt",
"load_balancer_type": "public",
"siebel_lb_subnet_cidr" : "10.0.1.0/24",
"siebel_private_subnet_cidr" : "10.0.2.0/24",
"siebel_db_subnet_cidr" : "10.0.3.0/24",
"siebel_cluster_subnet_cidr" : "10.0.4.0/24",
"kubernetes": {
"kubernetes_type": "BYO_OKE",
"byo_oke": {
"oke_cluster_id": "<cluster-ocid>",
"oke_endpoint": "PRIVATE",
"oke_kubeconfig_path": "<path-to-kubeconfig-file>"
}
},
"mounttarget_exports": {
"siebfs_mt_export_paths": [
{"mount_target_private_ip" : "10.0.255.171","export_path": "/siebfs0"}
]
}
},
"database": {
"db_type": "BYOD",
"byod": {
"wallet_path": "/home/opc/certs/wallet",
"tns_connection_name": "<provide tns connection name value>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
"size": {
"ses_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"ses_resource_requests": {
"cpu": "1.0",
"memory": "4Gi"
},
"cgw_resource_limits": {
"cpu": "2",
"memory": "4Gi"
},
"cgw_resource_requests": {
"cpu": "1000m",
"memory": "4Gi"
},
"sai_resource_limits": {
"cpu": "1",
"memory": "4Gi"
},
"sai_resource_requests": {
"cpu": "1",
"memory": "4Gi"
}
}
}
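Across the examples above, the `db_type` value decides which nested object the `database` section must carry (`atp`, `byod`, or `dbcs_vm`), alongside the common `auth_info` block. The pairing can be sketched as follows; this is an illustrative helper derived from the examples, not SCM source code:

```python
# Pairing of "db_type" with its nested settings object, as used in the
# example payloads (illustrative; not taken from SCM itself).
DB_TYPE_SECTION = {
    "ATP": "atp",
    "BYOD": "byod",
    "DBCS_VM": "dbcs_vm",
}

def database_section_key(database: dict) -> str:
    """Return the nested key that must accompany database['db_type']."""
    key = DB_TYPE_SECTION[database["db_type"]]
    if key not in database:
        raise KeyError(f"db_type {database['db_type']!r} requires a {key!r} object")
    return key

byod = {"db_type": "BYOD", "byod": {"wallet_path": "/home/opc/certs/wallet"}}
print(database_section_key(byod))  # → byod
```

For example, a payload that declares `"db_type": "ATP"` but omits the `atp` object would fail this check before ever being submitted.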
Example Database Sections for DBCS_VM Database Type for a BYOD Case
"database": {
"db_type": "DBCS_VM",
"dbcs_vm": {
"db_version": "21.0.0.0",
"database_edition": "ENTERPRISE_EDITION_HIGH_PERFORMANCE",
"availability_domain": "1",
"db_home_admin_password": "<OCID of your db home admin password's secret>",
"shape": "VM.Standard1.1",
"data_storage_size_in_gbs": "512",
"db_admin_username": "<provide your own values>",
"db_admin_password": "<OCID of your db admin password's secret>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
The following is an example database section of the payload for the DBCS_VM database type, using a VM flex shape type:
"database": {
"db_type": "DBCS_VM",
"dbcs_vm": {
"db_version": "21.0.0.0",
"database_edition": "ENTERPRISE_EDITION_HIGH_PERFORMANCE",
"availability_domain": "1",
"db_home_admin_password": "<OCID of your db home admin password's secret>",
"shape": "VM.Standard.E4.Flex",
"cpu_count": "2",
"data_storage_size_in_gbs": "512",
"db_admin_username": "<provide your own values>",
"db_admin_password": "<OCID of your db admin password's secret>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
The following is an example database section of the payload for the DBCS_VM database type, using BYO-FS with the payload parameters dbcs_vm > mount_target_private_ip and export_path included:
"database": {
"db_type": "DBCS_VM",
"dbcs_vm": {
"db_version": "21.0.0.0",
"database_edition": "ENTERPRISE_EDITION_HIGH_PERFORMANCE",
"availability_domain": "1",
"db_home_admin_password": "<OCID of your db home admin password's secret>",
"shape": "VM.Standard.E4.Flex",
"cpu_count": "2",
"data_storage_size_in_gbs": "512",
"db_admin_username": "<provide your own values>",
"db_admin_password": "<OCID of your db admin password's secret>",
"mount_target_private_ip": "<IP address of your mount target>",
"export_path": "<Export path in the mount target for using in DATA DIR>"
},
"auth_info": {
"siebel_admin_username": "<provide your own values>",
"siebel_admin_password": "<OCID of your Siebel admin password's secret>",
"default_user_password": "<OCID of your default user password's secret>",
"table_owner_password": "<OCID of your table owner password's secret>",
"table_owner_user": "<provide your own values>",
"anonymous_user_password": "<OCID of your anonymous user password's secret>"
}
},
Example Kubernetes Cluster Sections for BYO-Kubernetes
The following are example payloads for each of the Kubernetes cluster options.
Example payload when the user chooses to have SCM create the OKE cluster during environment provisioning:
{
"infrastructure": {
"kubernetes": {
"kubernetes_type": "OKE",
"oke": {
"oke_node_count": 3,
"oke_node_shape": "VM.Standard.E3.Flex",
"oke_node_shape_config": {
"memory_in_gbs": "60",
"ocpus": "4"
}
}
}
}
}
Example payload when the user chooses to use their own cluster for environment provisioning and the Kubernetes type is BYO_OKE:
{
"infrastructure": {
"kubernetes": {
"kubernetes_type": "BYO_OKE",
"byo_oke": {
"oke_cluster_id": "ocid1.****",
"oke_endpoint": "PRIVATE",
"oke_kubeconfig_path": "/home/opc/siebel/kubeconfig.yaml"
},
"ingress_controller": {
"ingress_service_type": "LoadBalancer",
"ingress_controller_service_annotations": {
"oci.oraclecloud.com/load-balancer-type": "lb",
"service.beta.kubernetes.io/oci-load-balancer-internal": "false",
"service.beta.kubernetes.io/oci-load-balancer-shape": "flexible",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-min": "10",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-max": "100",
"service.beta.kubernetes.io/oci-load-balancer-ssl-ports": "443",
"service.beta.kubernetes.io/oci-load-balancer-tls-secret": "lb-tls-certificate",
"service.beta.kubernetes.io/oci-load-balancer-subnet1": "ocid1.subnet.oc1.iad.aaaaaaaayt53nlge54fhrhvrnvyvvgqvtenngwz4tqljvpn2chn7ws4chm6q"
}
}
}
}
}
Example payload when the user chooses to use their own cluster for environment provisioning and the Kubernetes type is BYO_OCNE:
{
"infrastructure": {
"kubernetes": {
"kubernetes_type": "BYO_OCNE",
"byo_ocne": {
"kubeconfig_path": "/home/opc/siebel/kubeconfig.yaml"
}
},
"ingress_controller": {
"ingress_service_type": "LoadBalancer",
"ingress_controller_service_annotations": {
"oci.oraclecloud.com/load-balancer-type": "lb",
"service.beta.kubernetes.io/oci-load-balancer-internal": "false",
"service.beta.kubernetes.io/oci-load-balancer-shape": "flexible",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-min": "10",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-max": "100",
"service.beta.kubernetes.io/oci-load-balancer-ssl-ports": "443",
"service.beta.kubernetes.io/oci-load-balancer-tls-secret": "lb-tls-certificate",
"service.beta.kubernetes.io/oci-load-balancer-subnet1": "ocid1.subnet.oc1.iad.aaaaaaaayt53nlge54fhrhvrnvyvvgqvtenngwz4tqljvpn2chn7ws4chm6q"
}
}
}
}
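The `ingress_controller_service_annotations` map shown above is applied as Kubernetes annotations on the ingress controller's `LoadBalancer` Service, which is how OCI learns the shape, SSL ports, and subnet for the load balancer it provisions. A minimal sketch of the resulting Service manifest (the Service name is an assumed placeholder for illustration):

```python
import json

# A subset of the OCI load-balancer annotations from the example payload.
annotations = {
    "oci.oraclecloud.com/load-balancer-type": "lb",
    "service.beta.kubernetes.io/oci-load-balancer-shape": "flexible",
    "service.beta.kubernetes.io/oci-load-balancer-ssl-ports": "443",
}

# Minimal Service manifest carrying those annotations.
# "siebel-ingress" is an assumed name, not one created by SCM.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "siebel-ingress", "annotations": annotations},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 443, "targetPort": 443}],
    },
}
print(json.dumps(service, indent=2))
```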
Example payload when the user chooses to use their own cluster for environment provisioning, the Kubernetes type is BYO_OCNE, observability is enabled, and local-storage is used for Prometheus and oracle-opensearch:
{
"infrastructure": {
"kubernetes": {
"kubernetes_type": "BYO_OCNE",
"byo_ocne": {
"kubeconfig_path": "/home/opc/siebel/kubeconfig.yaml"
}
},
"ingress_controller": {
"ingress_service_type": "LoadBalancer",
"ingress_controller_service_annotations": {
"oci.oraclecloud.com/load-balancer-type": "lb",
"service.beta.kubernetes.io/oci-load-balancer-internal": "false",
"service.beta.kubernetes.io/oci-load-balancer-shape": "flexible",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-min": "10",
"service.beta.kubernetes.io/oci-load-balancer-shape-flex-max": "100",
"service.beta.kubernetes.io/oci-load-balancer-ssl-ports": "443",
"service.beta.kubernetes.io/oci-load-balancer-tls-secret": "lb-tls-certificate",
"service.beta.kubernetes.io/oci-load-balancer-subnet1": "ocid1.subnet.oc1.iad.aaaaaaaayt53nlge54fhrhvrnvyvvgqvtenngwz4tqljvpn2chn7ws4chm6q"
}
}
},
"observability": {
"siebel_monitoring": true,
"oci_config": {
"oci_config_path": "/home/opc/config/config1",
"oci_private_api_key_path": "/home/opc/config/oci_api_key.pem",
"oci_config_profile_name": "DEFAULT"
},
"prometheus": {
"storage_class_name": "local-storage",
"local_storage_info": {
"local_storage": "/mnt/test",
"kubernetes_node_hostname": "olcne-worknode-1"
}
},
"oracle_opensearch": {
"storage_class_name": "local-storage",
"local_storage_info": [
{
"local_storage": "/mnt/test1",
"kubernetes_node_hostname": "olcne-worknode-2"
},
{
"local_storage": "/mnt/test2",
"kubernetes_node_hostname": "olcne-worknode-2"
},
{
"local_storage": "/mnt/test3",
"kubernetes_node_hostname": "olcne-worknode-2"
}
]
},
"monitoring_mt_export_path": {
"mount_target_private_ip": "10.0.1.168",
"export_path": "/olcne-migration"
}
}
}