Create Non-Replicated TimesTen Databases
This example shows you how to create non-replicated TimesTen Classic databases using the ttclassic
Helm chart. The example uses a YAML manifest file and assumes you have created the kube_files/helm/customyaml
directory for the file.
- On your development host, change to the helm directory.

    cd kube_files/helm

- Create a YAML file that defines the variables for your non-replicated configuration.

    vi customyaml/norepsamplehelm.yaml

    storageClassName: oci-bv
    storageSize: 10Gi
    image:
      repository: container-registry.oracle.com/timesten/timesten
      tag: "22.1.1.34.0"
    imagePullSecret: sekret
    replicationTopology: none
    replicas: 3
    rollingUpdatePartition: 2
    dbConfigMap:
    - name: norepsamplehelm
      directory: cm
  Note the following:
  - The storageClassName is oci-bv. Replace oci-bv with the name of your storage class.
  - The storageSize is 10Gi. Replace 10Gi with the amount of storage to request for each Pod to hold TimesTen.
  - For the image variable:
    - repository: The repository is container-registry.oracle.com/timesten/timesten. Replace container-registry.oracle.com/timesten/timesten with the name and location of your TimesTen container image.
    - tag: The tag is 22.1.1.34.0. Replace 22.1.1.34.0 with the tag for your TimesTen release.
  - The imagePullSecret is sekret. Replace sekret with the image pull secret that Kubernetes uses to fetch the TimesTen container image.
  - For a non-replicated configuration:
    - The replicationTopology is none, indicating a non-replicated configuration that consists of replicas number of Pods. Each Pod contains an independent TimesTen database.
    - The number of replicas is 3, indicating the number of Pods, each of which contains a TimesTen database. Replace 3 with the number of Pods you want provisioned. Valid values are between 1 and 3, with 1 being the default.
    - The rollingUpdatePartition is 2. This variable is specific to upgrades and determines the number of TimesTen databases to upgrade. Kubernetes upgrades Pods with an ordinal value that is greater than or equal to the rollingUpdatePartition value. For example, if you have three non-replicated Pods (replicas is 3 and the Pods are norepsamplehelm-0, norepsamplehelm-1, and norepsamplehelm-2) and you set rollingUpdatePartition to 2, the norepsamplehelm-2 Pod is upgraded, but the norepsamplehelm-1 and norepsamplehelm-0 Pods are not. There are examples in the upgrade section that show you how rollingUpdatePartition works. You have the option of changing the value during the upgrade process (see the sketch after this list).
  - The name of the ConfigMap is norepsamplehelm. The metadata files are located in the cm directory, which is within the kube_files/helm/ttclassic directory tree.
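  Before installing, you can preview what the chart renders from these values, and you can later override an individual value without editing the file. The following is a minimal sketch using standard Helm commands, not part of the documented procedure; it assumes you run it from the kube_files/helm directory and that the value names match norepsamplehelm.yaml above.

    # Render the Kubernetes objects the chart would create, without installing anything.
    helm template norepsamplehelm ./ttclassic -f customyaml/norepsamplehelm.yaml

    # During a later upgrade, override rollingUpdatePartition for that one operation.
    # Setting it to 0 makes every Pod ordinal (0, 1, 2) eligible to be upgraded.
    helm upgrade norepsamplehelm ./ttclassic \
      -f customyaml/norepsamplehelm.yaml \
      --set rollingUpdatePartition=0

  Because --set values take precedence over values supplied with -f, the override applies only to the command where you specify it.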
- Install the ttclassic chart.

    helm install -f customyaml/norepsamplehelm.yaml norepsamplehelm ./ttclassic

  Let's look at this helm install command:
  - The -f option indicates that a YAML file is passed to the helm install command.
  - The name of the YAML file that contains the customizations is norepsamplehelm.yaml, which is located in the customyaml directory.
  - The name of the release is norepsamplehelm.
  - The name of the chart is ttclassic, which is located in the kube_files/helm/ttclassic directory.
  Let's look at the output from the helm install command.

    NAME: norepsamplehelm
    LAST DEPLOYED: Thu Jan 16 17:42:47 2025
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    NOTES:
    Version 2211340.1.0 of the ttclassic chart has been installed.

    This release is named "norepsamplehelm".

    To learn more about the release, try:

      $ helm status norepsamplehelm
      $ helm get all norepsamplehelm
      $ helm history norepsamplehelm

    To rollback to a previous version of the chart, run:

      $ helm rollback norepsamplehelm <REVISION>
        - run 'helm history norepsamplehelm' for a list of revisions.

  Note the following:
  - The ttclassic chart version is 2211340.1.0, corresponding to TimesTen release 22.1.1.34.0.
  - The release name is norepsamplehelm.
  - The status of the release is deployed.
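  To see exactly what was deployed for the release, you can use Helm's standard inspection commands. This is a sketch, not part of the documented procedure; it relies only on the release name from this example.

    # Show the values the release was installed with.
    helm get values norepsamplehelm

    # Show the rendered Kubernetes manifests that Helm applied.
    helm get manifest norepsamplehelm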
- (Optional) Verify the release.

    helm list

  The output is similar to the following:

    NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                  APP VERSION
    norepsamplehelm  default    1         2025-01-16 17:42:47.180635098 +0000 UTC  deployed  ttclassic-2211340.1.0  22.1.1.34.0

  The helm list command shows the norepsamplehelm release exists and is installed in your namespace.

- Monitor the progress.

    kubectl get ttc norepsamplehelm

  The output is similar to the following:
    NAME              STATE             ACTIVE   AGE
    norepsamplehelm   NoReplicasReady   N/A      94s

  The provisioning starts, but is not yet complete, as indicated by the NoReplicasReady state.

  Wait a few minutes. Then, check again.

    kubectl get ttc norepsamplehelm

  The output is similar to the following:

    NAME              STATE              ACTIVE   AGE
    norepsamplehelm   AllReplicasReady   N/A      12m

  The provisioning process is complete. Databases are up and running and operational, as indicated by the AllReplicasReady state.
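  If the object remains in the NoReplicasReady state longer than expected, standard kubectl commands can help you watch and troubleshoot provisioning. This is a sketch, not part of the documented procedure; it uses only the object and Pod names from this example.

    # Watch the Pods as they are created and become ready (Ctrl+C to stop).
    kubectl get pods -w

    # Review the status details and events recorded for the object.
    kubectl describe ttc norepsamplehelm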
- Confirm the ConfigMap and the metadata files exist.

    kubectl get configmap norepsamplehelm

  The output is similar to the following:

    NAME              DATA   AGE
    norepsamplehelm   4      15m

  Check the metadata files.

    kubectl describe configmap norepsamplehelm

  The output is similar to the following:

    Name:         norepsamplehelm
    Namespace:    default
    Labels:       app.kubernetes.io/managed-by=Helm
    Annotations:  meta.helm.sh/release-name: norepsamplehelm
                  meta.helm.sh/release-namespace: default

    Data
    ====
    testUser:
    ----
    sampletestuser/sampletestuserpwd1

    adminUser:
    ----
    adminuser/adminuserpwd

    db.ini:
    ----
    PermSize=200
    DatabaseCharacterSet=AL32UTF8

    schema.sql:
    ----
    create table adminuser.emp (id number not null primary key, name char (32));

    BinaryData
    ====

    Events:  <none>

  The norepsamplehelm ConfigMap exists and contains the metadata files. Since the testUser file exists, you can use Helm to test TimesTen. See Test TimesTen for a Non-Replicated Configuration.
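  If you want to view a single metadata file rather than the whole ConfigMap, kubectl can print an individual key. A small sketch using standard kubectl options; the key names are the ones shown in the describe output above.

    # Print just the db.ini metadata file (the dot in the key name must be escaped).
    kubectl get configmap norepsamplehelm -o jsonpath='{.data.db\.ini}'

    # Or dump the entire ConfigMap as YAML.
    kubectl get configmap norepsamplehelm -o yaml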
- (Optional) Confirm the Pods.

    kubectl get pods

  The output is similar to the following:

    NAME                READY   STATUS    RESTARTS   AGE
    norepsamplehelm-0   3/3     Running   0          17m
    norepsamplehelm-1   3/3     Running   0          17m
    norepsamplehelm-2   3/3     Running   0          17m
    ...
  There are three Pods running in your namespace, each of which contains an independent TimesTen database.
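  To confirm that the databases accept connections, you can open a ttIsql session inside one of the Pods. This sketch makes two assumptions you should verify for your deployment: that the TimesTen container in each Pod is named tt, and that the database DSN matches the object name (norepsamplehelm).

    # Assumption: the TimesTen container is named tt and the DSN is norepsamplehelm.
    kubectl exec -it norepsamplehelm-0 -c tt -- ttIsql norepsamplehelm

    # From the ttIsql prompt, a quick check against the table created by schema.sql:
    # Command> select count(*) from adminuser.emp;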
You have successfully used the ttclassic chart to create a non-replicated configuration. TimesTen databases are up and running and fully operational.