Migrate Your Applications Manually
Create a Docker image based on your Oracle Application Container Cloud Service application, and then deploy it to Oracle Cloud Infrastructure Container Engine for Kubernetes.
Topics:
- Create a Kubernetes Cluster
- Configure Kubectl
- Build the Docker Image
- Install Additional Linux Packages
- Push the Docker Image to Oracle Cloud Infrastructure Registry
- Set up the Environment Variables
- Configure Java EE System and Service Binding Properties
- Enable Connectivity between the Kubernetes Cluster and Oracle Cloud Services
- Create the Kubernetes Configuration Files
- Set Up the Docker Registry Secret and the SSL Certificate
- Deploy the Application
- Set up a Custom URL
- Set up an Ingress Controller
Create a Kubernetes Cluster
To migrate your Oracle Application Container Cloud Service applications to Oracle Cloud Infrastructure, you can create a Kubernetes cluster or use an existing cluster.
An existing Kubernetes cluster should have the following characteristics:
- The Virtual Cloud Network (VCN) that the cluster uses should include an Internet Gateway to enable remote access.
- The Load Balancing (LB) subnets in the cluster should have ingress security rules that allow incoming traffic from the Internet to the subnet on port 80 or 443 (if SSL is enabled).
- The shape and quantity of nodes in the cluster should be large enough to meet the CPU and memory capacity requirements for all of the applications that you want to migrate to this cluster.
You can use the Oracle Cloud Infrastructure console to create new Kubernetes clusters. You must specify details like the cluster name and the Kubernetes version to install on master nodes.
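Clusters can also be created from the command line with the OCI CLI. The sketch below only assembles and echoes a minimal `oci ce cluster create` call (the OCIDs are hypothetical placeholders, and a real invocation also needs node pool and subnet details); remove the `echo` to run it:

```shell
# Hypothetical OCIDs -- replace with values from your tenancy.
COMPARTMENT_ID="ocid1.compartment.oc1..exampleuniqueid"
VCN_ID="ocid1.vcn.oc1..exampleuniqueid"
CLUSTER_NAME="accs-migration-cluster"
K8S_VERSION="v1.14.8"   # pick a version supported in your region

# Assemble the CLI call; drop the echo to actually create the cluster.
CMD="oci ce cluster create --name $CLUSTER_NAME \
  --kubernetes-version $K8S_VERSION \
  --compartment-id $COMPARTMENT_ID \
  --vcn-id $VCN_ID"
echo "$CMD"
```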
Configure Kubectl
You need to download the kubeconfig file so that you can access the cluster by using kubectl.
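With the OCI CLI installed, the kubeconfig can be fetched with `oci ce cluster create-kubeconfig`. The cluster OCID below is a placeholder; the command is echoed rather than executed in this sketch:

```shell
CLUSTER_ID="ocid1.cluster.oc1.eu-frankfurt-1.exampleuniqueid"  # hypothetical
KUBECONFIG_FILE="$HOME/.kube/config"

CMD="oci ce cluster create-kubeconfig --cluster-id $CLUSTER_ID \
  --file $KUBECONFIG_FILE --token-version 2.0.0"
echo "$CMD"
# After the file is written, a quick sanity check:
echo "kubectl get nodes"
```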
Build the Docker Image
To build your Docker image, you need a Dockerfile that contains the instructions to assemble it. Based on your application runtime, you can use a template to create your Dockerfile.
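As an illustration only, a minimal Dockerfile for a Java application might look like the following. The base image, package name, and start command are assumptions for the sketch; the runtime templates mentioned above are authoritative:

```shell
# Write a sketch Dockerfile into a scratch directory; adapt it to your runtime.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/Dockerfile" <<'EOF'
FROM oraclelinux:7-slim
# Java runtime for the application (assumption: Java SE runtime).
RUN yum -y install java-1.8.0-openjdk && yum clean all
# ACCS applications keep their files under /u01/app.
COPY . /u01/app
WORKDIR /u01/app
EXPOSE 8080
CMD ["sh", "bin/startapp.sh", "-config", "conf/app.properties"]
EOF

# Build command (webapp01 is a sample image name).
echo "docker build -t webapp01:latest $WORKDIR"
```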
Install Additional Linux Packages
You can install additional Linux packages by creating the linux-packages.txt file and bundling it with your application.
In your project directory, create the package-installer.sh file, and paste the contents of the following script:
#!/bin/bash
# Step 1: Check whether the file linux-packages.txt exists at /u01/app.
# Step 2: Iterate through linux-packages.txt to verify that each package exists in the Oracle repository.
# If step 2 succeeds, install all packages. If step 2 fails, exit with a failure and list the invalid packages.
# The install function is called with a 20-minute hard timeout. If all packages are installed
# within the stipulated time, the function's return code is caught by the parent process,
# and an appropriate message is displayed depending on the return status.
# Return codes for the scenarios below:
#   Syntax error: 2
#   Validation failure: 3
#   Success: 4
#   Transaction error: 5
timeout_value=20
cust_loc=/u01/app
export cust_loc
cd $cust_loc
if [ ! -s $cust_loc/linux-packages.txt ] || [ ! -f $cust_loc/linux-packages.txt ]
then
exit 0
fi
export uuid=`date +%Y%m%d%H%M%S`
export LOG_NAME="/tmp/output_$uuid.log"
export PKG_LOGNAME="/tmp/pkgoutput_$uuid.log"
export GRP_LOGNAME="/tmp/grpoutput_$uuid.log"
export ERR_LOG_NAME="/tmp/error_$uuid.log"
rm -rf $LOG_NAME
function cleanup()
{
/bin/rm -rf $PKG_LOGNAME
/bin/rm -rf $GRP_LOGNAME
/bin/rm -rf /tmp/tmp_log /tmp/tmp_syn /tmp/tmp_succ /tmp/tmp_val /tmp/tmp_err
}
function install_packages()
{
cur_loc=`pwd`
ret_flag=0
syn_flag=0
sucpack_list=""
sucgroup_list=""
synpack_list=""
errpack_list=""
sucdisp_list=""
grpdisp_list=""
install_pkgs=""
failed_pkgs=""
#Fix for file created in notepad++
sed -i -e '$a\' $cust_loc/linux-packages.txt
echo "VALIDATION_CHECK_START" > $LOG_NAME
while read package_rec
do
if [[ "$package_rec" != "#"* ]] && [[ ! -z "$package_rec" ]]
then
echo "Record Picked: $package_rec" >> $LOG_NAME
package_name=`echo $package_rec | awk -F':' '{print $2}' | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'`
install_type=`echo $package_rec | awk -F':' '{print $1}' | tr -d '[:space:]'`
if [ "package_install" == "$install_type" ];then
yum list $package_name 1>>$LOG_NAME 2>&1
var=$?
if [ $var -eq 1 ]
then
if [[ -z "$errpack_list" ]];then
errpack_list=$package_name
else
errpack_list=$errpack_list","$package_name
fi
ret_flag=1
elif [ $var -eq 0 ]
then
if [[ -z "$sucdisp_list" ]];then
sucdisp_list=$package_name
else
sucdisp_list=$sucdisp_list","$package_name
fi
sucpack_list=$sucpack_list" "$package_name
fi
elif [ "group_install" == "$install_type" ];then
yum grouplist "$package_name" 1>>$LOG_NAME 2>&1 1>/tmp/tmp_log
cat /tmp/tmp_log | grep -E -iw -q "Available Groups|Installed Groups:"
var=$?
if [ $var -eq 1 ]
then
if [[ -z "$errpack_list" ]];then
errpack_list=$package_name
else
errpack_list=$errpack_list","$package_name
fi
ret_flag=1
elif [ $var -eq 0 ]
then
if [[ -z "$grpdisp_list" ]];then
grpdisp_list=$package_name
else
grpdisp_list=$grpdisp_list","$package_name
fi
if [[ -z "$sucgroup_list" ]];then
sucgroup_list=$package_name
else
sucgroup_list=$sucgroup_list","$package_name
fi
fi
else
#Syntax failure scenario if not properly provided
if [[ -z "$synpack_list" ]];then
synpack_list=$package_rec
else
synpack_list=$synpack_list","$package_rec
fi
ret_flag=-1
syn_flag=1
fi
fi
done < $cust_loc/linux-packages.txt
echo "VALIDATION_CHECK_END" >> $LOG_NAME
if [ $syn_flag -eq 1 ]
then
echo "Syntax Error: $synpack_list" > /tmp/tmp_syn
return 2
fi
if [ $ret_flag -eq 1 ]
then
echo "Valid Packages: $sucdisp_list,$grpdisp_list" > /tmp/tmp_val
echo "Invalid Packages: $errpack_list" >> /tmp/tmp_val
return 3
fi
if [ $ret_flag -eq 0 ]
then
echo "INSTALL_START" >> $LOG_NAME
if [ ! -z "$sucpack_list" ];then
yum -y install $sucpack_list 1>>$PKG_LOGNAME 2>&1
resp=$?
if [ $resp -eq 1 ];then
/bin/rm -rf $PKG_LOGNAME
ret_flag=2
for pkg_name in $sucpack_list
do
yum -y install $pkg_name 1>>$PKG_LOGNAME 2>/tmp/tmp_log
res=$?
if [ $res -eq 1 ];then
if [ -z "$failed_pkgs" ];then
failed_pkgs=$pkg_name
else
failed_pkgs=$failed_pkgs","$pkg_name
fi
echo "Package Name: $pkg_name" >> $ERR_LOG_NAME
cat /tmp/tmp_log >> $ERR_LOG_NAME
cat /tmp/tmp_log >> $PKG_LOGNAME
elif [ $res -eq 0 ];then
if [ -z "$install_pkgs" ];then
install_pkgs=$pkg_name
else
install_pkgs=$install_pkgs","$pkg_name
fi
fi
done
cat $PKG_LOGNAME >> $LOG_NAME
fi
if [ $resp -eq 0 ]
then
cat $PKG_LOGNAME >> $LOG_NAME
install_pkgs=$sucdisp_list
fi
fi
if [ ! -z "$sucgroup_list" ];then
yum -y groupinstall "$sucgroup_list" 1>>$GRP_LOGNAME 2>&1
resp=$?
if [ $resp -eq 1 ];then
ret_flag=2
/bin/rm -rf $GRP_LOGNAME
IFS=","
for grp_name in $sucgroup_list
do
yum -y groupinstall "$grp_name" 1>>$GRP_LOGNAME 2>/tmp/tmp_log
ret_res=$?
if [ $ret_res -eq 1 ];then
if [ -z "$failed_pkgs" ];then
failed_pkgs=$grp_name
else
failed_pkgs=$failed_pkgs","$grp_name
fi
echo "Group Name: $grp_name" >> $ERR_LOG_NAME
cat /tmp/tmp_log >> $ERR_LOG_NAME
cat /tmp/tmp_log >> $GRP_LOGNAME
elif [ $ret_res -eq 0 ];then
if [ -z "$install_pkgs" ];then
install_pkgs=$grp_name
else
install_pkgs=$install_pkgs","$grp_name
fi
fi
done
cat $GRP_LOGNAME >> $LOG_NAME
fi
if [ $resp -eq 0 ]
then
cat $GRP_LOGNAME >> $LOG_NAME
if [ -z "$install_pkgs" ];then
install_pkgs=$sucgroup_list
else
install_pkgs=$install_pkgs","$sucgroup_list
fi
fi
fi
echo "INSTALL_END" >> $LOG_NAME
fi
if [ -z "$failed_pkgs" ];then
ret_flag=0
fi
if [ $ret_flag -eq 0 ];then
echo "Installed Packages: $sucdisp_list,$grpdisp_list" > /tmp/tmp_succ
return 4
fi
if [ $ret_flag -eq 2 ];then
echo "Installable Packages: $install_pkgs" > /tmp/tmp_err
echo "Failed Packages: $failed_pkgs" >> /tmp/tmp_err
return 5
fi
}
#End of install_package function
export -f install_packages
timeout "$timeout_value"m bash -c install_packages
rest_status=$?
# Timeout scenario
if [ $rest_status -eq 124 ]
then
echo "RESULT_START"
echo "SYNTAX_ERROR"
echo "Error Message : Timed out while installing & configuring linux packages/groups. Reduce the number of specified linux packages/groups."
echo "RESULT_END"
cleanup
exit 1
fi
# Syntax error scenario
if [ $rest_status -eq 2 ]
then
echo "RESULT_START"
echo "SYNTAX_ERROR"
cat /tmp/tmp_syn
echo "RESULT_END"
cleanup
exit 1
fi
#Validation error scenario
if [ $rest_status -eq 3 ]
then
echo "RESULT_START"
echo "VALIDATION_FAILURE"
cat /tmp/tmp_val
echo "RESULT_END"
cat $LOG_NAME
cleanup
exit 1
fi
#Success scenario
if [ $rest_status -eq 4 ]
then
echo "RESULT_START"
echo "SUCCESS"
cat /tmp/tmp_succ
echo "RESULT_END"
cleanup
cat $LOG_NAME
exit 0
fi
#Transaction error scenario
if [ $rest_status -eq 5 ]
then
echo "RESULT_START"
echo "ERROR_PACKAGE"
cat /tmp/tmp_err
echo "RESULT_END"
echo "ERROR_PKGS_START"
cat $ERR_LOG_NAME
echo "ERROR_PKGS_END"
cleanup
cat $LOG_NAME
exit 1
fi
Push the Docker Image to Oracle Cloud Infrastructure Registry
After you build your Docker image, you can push it to Oracle Cloud Infrastructure Registry and make it available in the cloud.
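The push is the usual docker login / tag / push sequence against the OCIR endpoint. The region code, tenancy namespace, and user below are sample values matching the examples later in this guide; the commands are echoed rather than executed in this sketch:

```shell
REGION_CODE="fra"      # sample region key (Frankfurt)
TENANCY="tenancy1"     # hypothetical tenancy namespace
OCI_USER="oci_user1"   # hypothetical user
APP_NAME="webapp01"

IMAGE="$REGION_CODE.ocir.io/$TENANCY/accs/$OCI_USER/$APP_NAME:latest"

# Log in with your auth token as the password, then tag and push.
echo "docker login $REGION_CODE.ocir.io -u $TENANCY/$OCI_USER"
echo "docker tag $APP_NAME:latest $IMAGE"
echo "docker push $IMAGE"
```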
Set up the Environment Variables
Migrate the environment variables that are required for your Oracle Application Container Cloud Service application to a Kubernetes configuration.
- Oracle Application Container Cloud Service environment variables: Oracle Application Container Cloud Service created these variables automatically when you deployed your application. These environment variables are required and you can't remove them from the configuration file. See Configure Environment Variables.
- Custom environment variables: You defined these variables for your application in the deployment.json file or by using the Oracle Application Container Cloud Service console.
- Service binding environment variables: Oracle Application Container Cloud Service created these variables automatically when you added service bindings in your application.
Configure Java EE System and Service Binding Properties
If your Java Enterprise Edition (Java EE) application requires system or service binding properties, then you must specify them in the env.properties file.
- To use system properties for your Java EE application, define the EXTRA_JAVA_PROPERTIES property in the env.properties file:
  EXTRA_JAVA_PROPERTIES=<value>
- If your Java EE application uses any of the following JNDI service binding properties, then you must add them to the env.properties file:
  - jndi-name
  - max-capacity
  - min-capacity
  - driver-properties
  <ocic-service-type>_SERVICE_BINDING_NAME=<service-name>
  <ocic-service-type>_PROPERTIES=jndi-name:<jndi-name>|max-capacity:<max-capacity>|min-capacity:<min-capacity>|driver-properties:<driver-properties>
- <ocic-service-type>: Service type, for example: DBAAS or MYSQLCS.
- <service-name>: Name of your service.
- <jndi-name>: JNDI name of your service, in the format "jdbc/<value>", for example: "jdbc/dbcs".
- <max-capacity>: Maximum capacity of the connection pool.
- <min-capacity>: Minimum capacity of the connection pool.
- <driver-properties>: Semicolon-separated list of the JDBC driver properties.
Example:
# ACCS environment variables(DO NOT REMOVE)
HOSTNAME=webapp01-service:443
APP_HOME=/u01/app
PORT=8080
ORA_PORT=8080
ORA_APP_NAME=webapp01
# Application environment variables
APP_LIB_FOLDER=./lib
# Service bindings environment variables
MYSQLCS_CONNECT_STRING=10.x.x.1:3306/mydb
MYSQLCS_MYSQL_PORT=3306
MYSQLCS_USER_PASSWORD=Password1
MYSQLCS_USER_NAME=TestUser
DBAAS_DEFAULT_CONNECT_DESCRIPTOR=10.x.x.x:1521/mydb
DBAAS_USER_NAME=TestUser
DBAAS_USER_PASSWORD=Password1
DBAAS_LISTENER_HOST_NAME=10.x.x.x
DBAAS_LISTENER_PORT=1521
DBAAS_DEFAULT_SID=ORCL
DBAAS_DEFAULT_SERVICE_NAME=mydb
# System properties
# Only for "Java EE" runtime. Remove for other runtimes.
EXTRA_JAVA_PROPERTIES=-DconfigPath=/u01/app/conf/-DlogFile=/u01/app/logs/app.log
# Service binding properties
# Only for "Java EE" runtime. Remove for other runtimes.
DBAAS_SERVICE_BINDING_NAME=dbaasDb
DBAAS_PROPERTIES=jndi-name:jdbc/dbcs|max-capacity:5|min-capacity:1|driver-properties:user=admin;database=test|
MYSQLCS_SERVICE_BINDING_NAME=mysqlDb
MYSQLCS_PROPERTIES=jndi-name:jdbc/mysqlcs|max-capacity:10|min-capacity:1|driver-properties:user=oci;database=app|
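One way to hand these variables to your pods (the deployment template later in this guide reads them through a ConfigMap named <app-name>-config-var-map) is `kubectl create configmap --from-env-file`. A sketch with sample values, echoing the command rather than running it:

```shell
APP_NAME="webapp01"   # sample application name
ENV_DIR=$(mktemp -d)

# A trimmed-down env.properties for the sketch.
cat > "$ENV_DIR/env.properties" <<'EOF'
APP_HOME=/u01/app
PORT=8080
ORA_PORT=8080
EOF

# Each key=value line in the file becomes one entry in the ConfigMap.
echo "kubectl create configmap $APP_NAME-config-var-map --from-env-file=$ENV_DIR/env.properties"
```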
Enable Connectivity between the Kubernetes Cluster and Oracle Cloud Services
If your application in Oracle Application Container Cloud Service uses service bindings to enable communication with other Oracle Cloud services, then you need to ensure that after the migration your application is able to communicate with those services.
- If your service is in Oracle Cloud Infrastructure Classic, then in the Oracle Cloud Infrastructure Classic service, create an access rule that allows the public IP address of the NAT gateway attached to the worker nodes of the Kubernetes cluster to connect to the service. For example, if your application uses Oracle Database Classic Cloud Service, then see Managing Network Access to Database Cloud Service.
  Note: The public IP address of the NAT gateway can be located in the Oracle Cloud Infrastructure console from the menu: Developer Services, Container Clusters (OKE), Cluster Details, Node Pool Section, Node Instance Details, Virtual Cloud Network Details, NAT Gateways, Public IP Address.
- If your service is in Oracle Cloud Infrastructure, then locate the VCN and subnets in which the service is deployed. Ensure that an ingress security rule exists to allow traffic from the Kubernetes cluster to the service. See Security Lists in the Oracle Cloud Infrastructure documentation.
Create the Kubernetes Configuration Files
Before you deploy your application to Oracle Cloud Infrastructure Container Engine for Kubernetes, you need to create the deployment and service configuration files for your application.
The deployment and service configuration files provide instructions for Kubernetes to create and update instances of your application. You can create and manage a deployment by using the Kubernetes command line interface.
Create the Deployment Configuration File
- Create the deployment.yaml file using the following template:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: "<app-name>-deployment"
  spec:
    replicas: ${replicas}
    selector:
      matchLabels:
        app: "<app-name>-selector"
    template:
      metadata:
        labels:
          app: "<app-name>-selector"
      spec:
        containers:
        - name: "<app-name>"
          image: "<region-code>.ocir.io/<tenancy>/accs/<oci-account-username>/<app-name>:latest"
          command: ["${command}"]
          args: ["${args}"]
          ports:
          - containerPort: 8080
          env:
          - name: ORA_INSTANCE_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          envFrom:
          - configMapRef:
              name: "<app-name>-config-var-map"
          resources:
            limits:
              memory: "${memory}i"
            requests:
              memory: "${memory}i"
          # The following section "livenessProbe" should be removed if Health Check URL
          # is not available.
          livenessProbe:
            httpGet:
              path: "${healthCheckHttpPath}"
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 600
            timeoutSeconds: 30
            failureThreshold: 3
        imagePullSecrets:
        - name: "<app-name>-secret"
- Provide the appropriate values for the <app-name>, <region-code>, <tenancy>, and <oci-account-username> placeholders.
- Provide the appropriate values for the variables:
- ${replicas}: Specify the number of application replicas. If the property is missing, then the default value is 2.
- ${command}: Specify the operating system command used to launch the application.
  - Enter the first word of the command property in the manifest.json file. For example, if the value of the command property is "sh bin/startapp.sh -config conf/app.properties", then the ${command} value is sh.
  - If this property is missing in the manifest.json file, then leave the value empty: command: [].
- ${args}: Specify the arguments used to start the application.
  - Enter a comma-separated list of all the words starting from the second word of the command property in the manifest.json file. For example, if the value of the command property in the manifest.json file is "sh bin/startapp.sh -config conf/app.properties", then the ${args} value is "bin/startapp.sh", "-config", "conf/app.properties".
  - If this property is missing in the manifest.json file, then leave the value empty: args: [].
- ${memory}: Specify the amount of memory required for the application. If the property is missing, then the default value is 2G.
- ${healthCheckHttpPath}: Specify the HTTP path used to perform health checks for the application. The URL on this HTTP path should return an HTTP response code greater than or equal to 200 and less than 400 to indicate success.
  - Enter the healthCheck.http-endpoint property in the manifest.json file.
  - If this property is missing in the manifest.json file, then enter the value "/".
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "webapp01-deployment"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "webapp01-selector"
  template:
    metadata:
      labels:
        app: "webapp01-selector"
    spec:
      containers:
      - name: "webapp01"
        image: "fra.ocir.io/tenancy1/accs/oci_user1/webapp01:latest"
        command: ["sh"]
        args: ["bin/startapp.sh", "-config", "conf/app.properties"]
        ports:
        - containerPort: 8080
        env:
        - name: ORA_INSTANCE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: "webapp01-config-var-map"
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "2Gi"
        # The following section "livenessProbe" should be removed if Health Check
        # is not required.
        livenessProbe:
          httpGet:
            path: "/"
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 300
          timeoutSeconds: 30
          failureThreshold: 3
      imagePullSecrets:
      - name: "webapp01-secret"
Create the Service Configuration File
- Create the service.yaml file with the following template:

  kind: Service
  apiVersion: v1
  metadata:
    name: "<app-name>-service"
    # The following section "annotations" should be removed if SSL endpoint
    # is not required.
    annotations:
      service.beta.kubernetes.io/oci-load-balancer-ssl-ports: '443'
      service.beta.kubernetes.io/oci-load-balancer-tls-secret: "<app-name>-tls-certificate"
  spec:
    type: "${type}"
    ports:
    - port: ${port}
      protocol: TCP
      targetPort: 8080
    selector:
      app: "<app-name>-selector"
- Provide an appropriate value for the <app-name> placeholder.
- Provide the appropriate values for the ${variable} placeholders:
  - ${type}: Specify the application type. If the Oracle Application Container Cloud Service application is of type web, then enter the value LoadBalancer. If the application is of type worker, then enter the value ClusterIP.
  - ${port}: Specify the public port of the application. If an SSL endpoint is required, then enter 443; otherwise, enter 80. For worker applications, the value is 80.
Example:
kind: Service
apiVersion: v1
metadata:
  name: "webapp01-service"
  # The following section "annotations" should be removed if SSL endpoint
  # is not required.
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: "webapp01-tls-certificate"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: "webapp01-selector"
Set Up the Docker Registry Secret and the SSL Certificate
In order for Kubernetes to pull an image from Oracle Cloud Infrastructure Registry when deploying an application, you need to create a Kubernetes secret. If your application requires an SSL endpoint, then you need to create a TLS secret using the certificate and the private key for your application.
Before you create the secret, make sure that you can log in to Oracle Cloud Infrastructure Registry with the docker login command, including your authentication token.
- To create a Docker registry secret, in a command-line window, run the following command. Provide appropriate values for the <app-name>, <region-code>, <tenancy>, <oci-account-username>, and <auth-token> placeholders.

  kubectl create secret docker-registry <app-name>-secret --docker-server="<region-code>.ocir.io" --docker-username=<tenancy>/<oci-account-username> --docker-password='<auth-token>' --docker-email=<oci-account-username>

  Example:

  $ kubectl create secret docker-registry webapp01-secret --docker-server="fra.ocir.io" --docker-username=tenancy1/oci_user1@example.com --docker-password='cIv3s8Aw2klYZ:QOcyFA' --docker-email=oci_user1@example.com
  secret/webapp01-secret created
- To create the TLS secret, in a command-line window, run the following command. Provide appropriate values for the <app-name>, <path-to-tls-key-file>, and <path-to-tls-cert-file> placeholders.

  kubectl create secret tls <app-name>-tls-certificate --key <path-to-tls-key-file> --cert <path-to-tls-cert-file>

  Example:

  $ kubectl create secret tls webapp01-tls-certificate --key /home/user1/kubernetes/tls.key --cert /home/user1/kubernetes/tls.crt
  secret/webapp01-tls-certificate created
Note: The <path-to-tls-cert-file> and <path-to-tls-key-file> are the absolute paths to your public certificate and key files, respectively.
Deploy the Application
Deploy your application by using the deployment.yaml and service.yaml files.
If you want to customize HTTP to HTTPS redirection, the load balancer policy, or session stickiness for your application, then you need to deploy your application by using the steps in Set up an Ingress Controller.
- Create the Kubernetes deployment.
kubectl create -f <path-to-kubernetes-deployment-yaml>
Example:
$ kubectl create -f /home/user1/kubernetes/deployment.yaml deployment.apps/webapp01-deployment created
- Check the deployment roll-out status.

  kubectl rollout status deployment <app-name>-deployment

  Example:

  $ kubectl rollout status deployment webapp01-deployment
  Waiting for deployment "webapp01-deployment" rollout to finish: 0 of 1 updated replicas are available...
  deployment "webapp01-deployment" successfully rolled out

  The deployment check may take several minutes to complete.
- Create the Kubernetes service.
kubectl create -f <path-to-kubernetes-service-yaml>
Example:
$ kubectl create -f /home/user1/kubernetes/service.yaml service/webapp01-service created
- Check the service status.

  kubectl get svc <app-name>-service

  Example:

  $ kubectl get svc webapp01-service
  NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
  webapp01-service   LoadBalancer   10.x.x.x     132.x.x.x     443:31174/TCP   1m

  The service may take several minutes to start. After it is completed, write down the public IP address under the EXTERNAL-IP column.
Note: If you have deployed a cluster application, your service instances can communicate with each other by using the service name. The service name is available in your application as the HOSTNAME environment variable.
Set up a Custom URL
If you want to use a custom URL for your application, then you need a public domain name. You need to map the domain name to the public IP address of your application.
Create a DNS Zone
A Domain Name System (DNS) zone is a contiguous portion of the global DNS that is managed by a specific organization or administrator.
If you already created a DNS zone, then these steps aren't required.
Add a DNS Record
After you create the DNS zone, you need to add a DNS record to create the sub-domain name.
- From the Zone Information page, navigate to Records and click Manage Records. Then, click Add Record and enter or select the following values:
- RECORD TYPE: A - IPv4 Address
- Name: Enter your application name.
- Address: Enter the public IP address of your application.
- Click Submit.
- Click Publish Changes.
The custom URL for your application has this form: http[s]://<name>.<zone-name>[:<port>]/. For example, if you enter webapp01 for the name of your record, and the zone name is example.com, then the HTTPS URL is https://webapp01.example.com.
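Once the record is published, the mapping can be checked from a terminal. The sketch below uses the sample name and zone from above and echoes the verification commands rather than running them:

```shell
NAME="webapp01"       # sample record name from above
ZONE="example.com"    # sample zone name from above
FQDN="$NAME.$ZONE"

echo "dig +short $FQDN        # should print the application's public IP address"
echo "curl -I https://$FQDN/  # should return an HTTP success status"
```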
Set up an Ingress Controller
If you want to customize HTTP to HTTPS redirection, the load balancer policy, or session stickiness for your application, then you need to set up an Nginx ingress controller. The ingress controller handles the traffic between an external load balancer and your application.
Configure Role-Based Access
You create the cluster-admin role and assign it to your Oracle Cloud Infrastructure administrator user. Then use the rbac.yaml file to create a service account, a cluster role, and a cluster role binding.
- To assign the cluster-admin role to your Oracle Cloud Infrastructure administrator user, in a command-line window, run the following command. Provide an appropriate value for the <user-ocid> placeholder.

  kubectl create clusterrolebinding ingress-binding --clusterrole=cluster-admin --user=<user-ocid>

  Example:

  $ kubectl create clusterrolebinding ingress-binding --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaa3vslnnqmypaummkievofbe5r4afaihdh4wi503dig3bkys2msgaq
  clusterrolebinding.rbac.authorization.k8s.io/ingress-binding created
- Create the rbac.yaml file using the following content:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nginx-ingress-serviceaccount
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: nginx-ingress-clusterrole
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - endpoints
    - nodes
    - pods
    - secrets
    verbs:
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - nodes
    verbs:
    - get
  - apiGroups:
    - ""
    resources:
    - services
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - "extensions"
    resources:
    - ingresses
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - events
    verbs:
    - create
    - patch
  - apiGroups:
    - "extensions"
    resources:
    - ingresses/status
    verbs:
    - update
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: Role
  metadata:
    name: nginx-ingress-role
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - pods
    - secrets
    - namespaces
    verbs:
    - get
  - apiGroups:
    - ""
    resources:
    - configmaps
    resourceNames:
    # Defaults to "<election-id>-<ingress-class>"
    # Here: "<ingress-controller-leader>-<nginx>"
    # This has to be adapted if you change either parameter
    # when launching the nginx-ingress-controller.
    - "ingress-controller-leader-nginx"
    verbs:
    - get
    - update
  - apiGroups:
    - ""
    resources:
    - configmaps
    verbs:
    - create
  - apiGroups:
    - ""
    resources:
    - endpoints
    verbs:
    - get
    - create
    - update
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: RoleBinding
  metadata:
    name: nginx-ingress-role-nisa-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: nginx-ingress-role
  subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: nginx-ingress-clusterrole-nisa-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: nginx-ingress-clusterrole
  subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default
- To create a service account, a cluster role, and a cluster role binding between the service account and the cluster role, run the following command. Provide the appropriate value for the <path-to-rbac-yaml> placeholder.

  kubectl create -f <path-to-rbac-yaml>

  Example:

  $ kubectl create -f /home/user1/kubernetes/rbac.yaml
  serviceaccount/nginx-ingress-serviceaccount created
  clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
  role.rbac.authorization.k8s.io/nginx-ingress-role created
  rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
  clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
Create an Ingress Controller
Create a Kubernetes deployment and service for the Nginx ingress controller.
Create an Ingress Controller Deployment
- Create the nginx-deployment.yaml file using the following template.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-ingress-controller
    labels:
      app: nginx-ingress-controller
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx-ingress-controller
    template:
      metadata:
        labels:
          app: nginx-ingress-controller
      spec:
        terminationGracePeriodSeconds: 60
        serviceAccountName: nginx-ingress-serviceaccount
        containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          name: nginx-ingress-controller
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
          - containerPort: 80
            hostPort: 80
          - containerPort: 443
            hostPort: 443
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          args:
          - /nginx-ingress-controller
          - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- Create the Kubernetes deployment for the Nginx ingress controller.

  kubectl create -f <path-to-nginx-deployment-yaml>

  Example:

  $ kubectl create -f /home/user1/kubernetes/nginx-deployment.yaml
  deployment.apps/nginx-ingress-controller created
- Check the deployment roll-out status.

  kubectl rollout status deployment nginx-ingress-controller

  Example:

  $ kubectl rollout status deployment nginx-ingress-controller
  Waiting for deployment "nginx-ingress-controller" rollout to finish: 0 of 1 updated replicas are available...
  deployment "nginx-ingress-controller" successfully rolled out

  The deployment check may take several minutes to complete.
Create an Ingress Controller Service
- Create the nginx-service.yaml file using the following template:

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-ingress-controller
    namespace: default
    labels:
      app: nginx-ingress-controller
  spec:
    type: LoadBalancer
    ports:
    - port: 80
      name: http
    # Required only for SSL endpoint
    - port: 443
      name: https
    selector:
      app: nginx-ingress-controller
- To create a service for the Nginx ingress controller, run the following command. Provide the appropriate value for the <path-to-nginx-service-yaml> placeholder.

  kubectl create -f <path-to-nginx-service-yaml>

  Example:

  $ kubectl create -f /home/user1/kubernetes/nginx-service.yaml
  service/nginx-ingress-controller created
- Check the service status.

  kubectl get svc nginx-ingress-controller

  Example:

  $ kubectl get svc nginx-ingress-controller
  NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
  nginx-ingress-controller   LoadBalancer   10.x.x.x     131.x.x.x     443:31155/TCP   1m

  The service may take several minutes to start. After it is completed, write down the public IP address under the EXTERNAL-IP column.
- Verify that your Nginx ingress controller is configured properly.
  curl -I http://<public-ip-address>/healthz

  Example:

  $ curl -I http://131.x.x.x/healthz
  HTTP/1.1 200 OK

After you deploy your application and create an ingress resource, you can access your application using the public URL.
Deploy Your Application with an Ingress Controller Setup
Deploy your application by using the steps in Deploy the Application. To create the service, use the following configuration:
- Create the service.yaml file with the following template:

  kind: Service
  apiVersion: v1
  metadata:
    name: "<app-name>-service"
  spec:
    type: ClusterIP
    ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: "<app-name>-selector"
- Provide an appropriate value for the <app-name> placeholder.

  Example:

  kind: Service
  apiVersion: v1
  metadata:
    name: "webapp01-service"
  spec:
    type: ClusterIP
    ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: "webapp01-selector"
- Create the Kubernetes service.
kubectl create -f <path-to-kubernetes-service-yaml>
- Check the service status.
kubectl get svc <app-name>-service
The service may take several minutes to start.
Create an Ingress Resource
Create an ingress resource that the Nginx ingress controller uses to forward the traffic to your application.
Customize an Ingress Resource
You can customize HTTP to HTTPS redirection, load balancer policy, and session stickiness in the nginx-ingress.yaml
file.
- HTTP to HTTPS redirection is enabled by default. If you want to disable it, then in the annotations section, add the following: ingress.kubernetes.io/ssl-redirect: "false".
- The default load balancer policy is round_robin. To use the ip_hash load balancer policy, use the annotation: nginx.ingress.kubernetes.io/upstream-hash-by: "$binary_remote_addr".
- To enable session stickiness, you need to use ip_hash as the load balancer policy.
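Taken together, the customizations above live in the metadata section of the ingress resource. A sketch of the annotations block, using only the annotation keys named above (the values shown are the non-default choices):

```yaml
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Disable the default HTTP-to-HTTPS redirection.
    ingress.kubernetes.io/ssl-redirect: "false"
    # ip_hash-style stickiness: hash backend selection on the client address.
    nginx.ingress.kubernetes.io/upstream-hash-by: "$binary_remote_addr"
```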
Create an Ingress Resource for Multiple Applications
If you want to deploy multiple applications in a Kubernetes cluster, then you can use the same ingress controller to configure load balancing for those applications. You can configure name-based virtual hosting, in which the single IP address of the Nginx ingress controller service routes traffic to the host names of multiple services.
Example:
You have two services, webapp01-service and webapp02-service, created in a Kubernetes cluster. The DNS names webapp01.example.com and webapp02.example.com are mapped to the public IP address of the ingress service. Then, you can set up an ingress controller so that requests to https://webapp01.example.com/ are forwarded to webapp01-service, and requests to https://webapp02.example.com/ are forwarded to webapp02-service.
You can configure these two services in the nginx-ingress.yaml file.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/load-balance: "round_robin"
spec:
# Required only for SSL endpoint
tls:
- hosts:
- webapp01.example.com
- webapp02.example.com
secretName: ingress-tls-certificate
rules:
- host: webapp01.example.com
http:
paths:
- path: /
backend:
serviceName: webapp01-service
servicePort: 80
- host: webapp02.example.com
http:
paths:
- path: /
backend:
serviceName: webapp02-service
servicePort: 80