Migrate Your Applications Manually

Create a Docker image based on your Oracle Application Container Cloud Service application, and then deploy it to Oracle Cloud Infrastructure Container Engine for Kubernetes.

Create a Kubernetes Cluster

To migrate your Oracle Application Container Cloud Service applications to Oracle Cloud Infrastructure, you can create a Kubernetes cluster or use an existing cluster.

An existing Kubernetes cluster should have the following characteristics:

  • The Virtual Cloud Network (VCN) that the cluster uses should include an Internet Gateway to enable remote access.
  • The Load Balancing (LB) subnets in the cluster should have ingress security rules that allow incoming traffic from the Internet to the subnet on port 80 or 443 (if SSL is enabled).
  • The shape and quantity of nodes in the cluster should be large enough to meet the CPU and memory capacity requirements for all of the applications that you want to migrate to this cluster.

You can use the Oracle Cloud Infrastructure console to create new Kubernetes clusters. You must specify details like the cluster name and the Kubernetes version to install on master nodes.

  1. From the Oracle Cloud Infrastructure console, open the navigation menu. Under Developer Services, go to Containers & Artifacts and then click Kubernetes Clusters (OKE).
  2. On the Cluster page, select your compartment, and then click Create Cluster.
  3. In the Cluster Creation dialog, enter and select the following values:
    1. NAME: Accept the default name or enter a name of your choice.
    2. KUBERNETES VERSION: Select the version of Kubernetes to run on the master nodes and worker nodes of the cluster.
    3. QUICK CREATE: Selected
    4. SHAPE: Select the shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node.
    5. QUANTITY PER SUBNET: Enter the number of worker nodes to create for the node pool in each subnet.
    6. PUBLIC SSH KEY: (Optional) Enter the public key of the key pair that you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, then Oracle Cloud Infrastructure Container Engine for Kubernetes provides one. However, since you don't have the corresponding private key, you can't have SSH access to the worker nodes.
  4. Click Create.

    It might take several minutes to create the cluster. After the cluster is created, its status changes to Active.

Configure Kubectl

You need to download the kubeconfig file so that you can access the cluster by using kubectl.

  1. On the Cluster page, click the name of the cluster you created in the previous section.
  2. Click Access Cluster.
  3. Follow the instructions displayed in the Access Your Cluster dialog box.
  4. Verify that your cluster is accessible using kubectl.
    $ kubectl get services
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.x.x.x     <none>        443/TCP   10m

Build the Docker Image

To build your Docker image, you need a Dockerfile that contains the instructions to assemble it. Based on your application runtime, you can use a template to create your Dockerfile.

  1. Extract the contents of your application archive.
  2. Identify the runtime type for your application.

    Note:

    It should be one of Java, Node, PHP, Java EE, DotNet, Ruby, Python, or Go.
  3. Open the manifest.json file and locate the runtime version.

    Example:

    {
      "runtime": {
        "majorVersion": "8"
      },
      ...
    }

    The runtime version in the example is 8.

  4. In your project directory, create a Dockerfile and use the template for your runtime:
    • Java:
      FROM iad.ocir.io/psmsvc3/accs/java<runtime-version>:latest
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      USER root
      COPY . /u01/app
      RUN if [ -f /u01/app/linux-packages.txt ] ; then  chmod 755 /u01/app/package-installer.sh && /u01/app/package-installer.sh ; fi
      RUN mkdir -p /u01/scripts /u01/logs/ \
       && chown -R apaas:apaas /u01/
      USER apaas
    • Node:
      FROM iad.ocir.io/psmsvc3/accs/node<runtime-version>:latest
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      USER root
      COPY . /u01/app
      RUN if [ -f /u01/app/linux-packages.txt ] ; then  chmod 755 /u01/app/package-installer.sh && /u01/app/package-installer.sh ; fi
      RUN mkdir -p /u01/scripts /u01/logs/ \
       && chown -R apaas:apaas /u01/
      USER apaas
    • Java EE:
      FROM iad.ocir.io/psmsvc3/accs/javaee<runtime-version>:latest
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      USER root
      COPY . /u01/app
      RUN if [ -f /u01/app/linux-packages.txt ] ; then  chmod 755 /u01/app/package-installer.sh && /u01/app/package-installer.sh ; fi
      RUN mkdir -p /u01/logs/ \
       && chown -R apaas:apaas /u01/
      USER apaas
      ENTRYPOINT ["sh", "-c", "$SCRIPT_HOME/post-install.sh && $DOMAIN_HOME/startServer.sh"]
    • PHP:
      FROM iad.ocir.io/psmsvc3/accs/php<runtime-version>:latest
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      USER root
      COPY . /u01/app
      RUN if [ -f /u01/app/linux-packages.txt ] ; then  chmod 755 /u01/app/package-installer.sh && /u01/app/package-installer.sh ; fi
      RUN mkdir -p /u01/scripts /u01/logs/ \
       && chown -R apaas:apaas /u01/
      USER apaas
      ENTRYPOINT ["sh", "-c", "apache2-run"]
    • DotNet:
      FROM microsoft/dotnet:<runtime-version>-runtime
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      RUN mkdir -p /u01/app/
      COPY . /u01/app
      RUN mkdir -p /u01/data/ \
       && mkdir -p /u01/logs/ \
       && groupadd apaas \
       && groupadd builds \
       && useradd -m -b /home -g apaas -G builds apaas \
       && chown -R apaas:apaas /u01/ \
       && chgrp -hR builds /usr/local
      USER apaas
    • Ruby:
      FROM ruby:<runtime-version>
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      RUN mkdir -p /u01/app/
      COPY . /u01/app
      RUN mkdir -p /u01/data/ \
       && mkdir -p /u01/logs/ \
       && groupadd apaas \
       && groupadd builds \
       && useradd -m -b /home -g apaas -G builds apaas \
       && chown -R apaas:apaas /u01/ \
       && chgrp -hR builds /usr/local
      USER apaas
    • Python:
      FROM python:<runtime-version>
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      RUN mkdir -p /u01/app/
      COPY . /u01/app
      RUN mkdir -p /u01/data/ \
       && mkdir -p /u01/logs/ \
       && groupadd apaas \
       && groupadd builds \
       && useradd -m -b /home -g apaas -G builds apaas \
       && chown -R apaas:apaas /u01/ \
       && chgrp -hR builds /usr/local
      USER apaas
    • Go:
      FROM golang:<runtime-version>
      ENV APP_HOME=/u01/app
      WORKDIR /u01/app
      EXPOSE 8080
      RUN mkdir -p /u01/app/
      COPY . /u01/app
      RUN mkdir -p /u01/data/ \
       && mkdir -p /u01/logs/ \
       && groupadd apaas \
       && groupadd builds \
       && useradd -m -b /home -g apaas -G builds apaas \
       && chown -R apaas:apaas /u01/ \
       && chgrp -hR builds /usr/local
      USER apaas
  5. Replace the <runtime-version> placeholder in the file with the runtime version of your application.
  6. If your application needs additional Linux packages, in your project directory, include the package-installer.sh script along with your linux-packages.txt file. See Install Additional Linux Packages.
  7. Open a command-line window, go to the project directory, and build the Docker image. Replace the <local-image-name> placeholder with the name of your image.
    sudo docker build -t <local-image-name>:latest .

    Note:

    If you want to maintain multiple versions of your local images, you can use an appropriate tag instead of "latest".

    Example:

    $ cd /home/user1/webapp01/
    $ sudo docker build -t webapp01-image:latest .
    [sudo] password for user1:
    Sending build context to Docker daemon  13.81MB
    Step 1/8 : FROM iad.ocir.io/psmsvc3/accs/java8:latest
     ---> 0f6c786aee03
    Step 2/8 : ENV APP_HOME=/u01/app
     ---> Using cache
     ---> a75170744b32
    Step 3/8 : WORKDIR /u01/app
     ---> Using cache
     ---> 5d78fc7b6c4b
    Step 4/8 : EXPOSE 8080
     ---> Using cache
     ---> 07860c3576c0
    Step 5/8 : USER root
     ---> Using cache
     ---> f7775cdf5760
    Step 6/8 : COPY . /u01/app
     ---> Using cache
     ---> ea04804d0b6e
    Step 7/8 : RUN mkdir -p  /u01/scripts /u01/logs/ && chown -R apaas:apaas /u01/
     ---> Using cache
     ---> e4f1d2d0b53a
    Step 8/8 : USER apaas
     ---> Using cache
     ---> 79880a9d2c15
    Successfully built 79880a9d2c15
    Successfully tagged webapp01-image:latest
  8. Verify that the Docker image was created.
    sudo docker image ls | grep <local-image-name>

    Example:

    sudo docker image ls | grep webapp01-image
    REPOSITORY            TAG             IMAGE ID           CREATED            SIZE
    webapp01-image        latest          79880a9d2c15       1 minute ago       1.15GB
  9. Verify the contents of your application's Docker image.
    sudo docker run -it <local-image-name>:latest sh

    Example:

    $ sudo docker run -it webapp01-image:latest sh
     
    sh-4.2$ ls
    build.sh  clustered_app.sh  manifest.json  target
  10. Exit the application container.
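Steps 3, 5, and 7 above can be scripted. The following is a minimal sketch, not part of the original procedure: it assumes a manifest.json like the earlier example, extracts the runtime version with sed (avoiding a jq dependency), and substitutes it into a one-line stand-in for the Dockerfile template. The sample manifest.json and the single-line Dockerfile are illustrative only.

```shell
# Sample manifest.json for illustration (your application archive supplies the real one)
cat > manifest.json <<'EOF'
{
  "runtime": {
    "majorVersion": "8"
  }
}
EOF

# Extract the majorVersion value with sed
runtime_version=$(sed -n 's/.*"majorVersion"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' manifest.json)

# Stand-in for the Dockerfile template's first line, then replace the placeholder
printf 'FROM iad.ocir.io/psmsvc3/accs/java<runtime-version>:latest\n' > Dockerfile
sed -i "s/<runtime-version>/$runtime_version/" Dockerfile

head -n 1 Dockerfile
# The image would then be built as in step 7:
#   sudo docker build -t <local-image-name>:latest .
```

This keeps the placeholder substitution repeatable if you rebuild against a different runtime version.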

Install Additional Linux Packages

You can install additional Linux packages by creating the linux-packages.txt file, and bundling it with your application.

In your project directory, create the package-installer.sh file, and paste the contents of the following script:

#!/bin/bash

#Step 1: Check if the file linux-packages.txt exists at /u01/app
#Step 2: Iterate through linux-packages.txt to verify that each package exists in the Oracle repository
#If step 2 succeeds, install all packages. If step 2 fails, exit with failure and display the failed packages.
 
# The install_packages function is called with a 20-minute hard timeout. If all the packages are
# installed within the stipulated time, the function's return value is caught in the parent process,
# and an appropriate message is displayed depending on the return status.
# Return codes:
# Syntax error: 2
# Validation failure: 3
# Success: 4
# Transaction error: 5
 
timeout_value=20
 
cust_loc=/u01/app
export cust_loc
cd $cust_loc
if [ ! -s $cust_loc/linux-packages.txt ] || [ ! -f $cust_loc/linux-packages.txt ]
then
        exit 0
fi
 
 
export uuid=`date +%Y%m%d%H%M%S`
export LOG_NAME="/tmp/output_$uuid.log"
export PKG_LOGNAME="/tmp/pkgoutput_$uuid.log"
export GRP_LOGNAME="/tmp/grpoutput_$uuid.log"
export ERR_LOG_NAME="/tmp/error_$uuid.log"
rm -rf $LOG_NAME
 
function cleanup()
{
/bin/rm -rf $PKG_LOGNAME
/bin/rm -rf $GRP_LOGNAME
/bin/rm -rf /tmp/tmp_log /tmp/tmp_syn /tmp/tmp_succ /tmp/tmp_val /tmp/tmp_err
}
 
function install_packages()
{
cur_loc=`pwd`
ret_flag=0
syn_flag=0
sucpack_list=""
sucgroup_list=""
synpack_list=""
errpack_list=""
sucdisp_list=""
grpdisp_list=""
install_pkgs=""
failed_pkgs=""
 
#Fix for file created in notepad++
sed -i -e '$a\' $cust_loc/linux-packages.txt
 
echo "VALIDATION_CHECK_START" > $LOG_NAME
while read package_rec
do
        if [[ "$package_rec" != "#"* ]] && [[ ! -z "$package_rec" ]]
        then
                echo "Record Picked: $package_rec" >> $LOG_NAME
                package_name=`echo $package_rec | awk -F':' '{print $2}' | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'`
                install_type=`echo $package_rec | awk -F':' '{print $1}' | tr -d '[:space:]'`
                if [ "package_install" == "$install_type" ];then
                        yum list $package_name 1>>$LOG_NAME 2>&1
                        var=$?
                        if [ $var -eq 1 ]
                        then
                                if [[ -z "$errpack_list" ]];then
                                        errpack_list=$package_name
                                else
                                        errpack_list=$errpack_list","$package_name
                                fi
                                ret_flag=1
                        elif [ $var -eq 0 ]
                        then
                                if [[ -z "$sucdisp_list" ]];then
                                        sucdisp_list=$package_name
                                else
                                        sucdisp_list=$sucdisp_list","$package_name
                                fi
                                sucpack_list=$sucpack_list" "$package_name
                        fi
                elif [ "group_install" == "$install_type" ];then
                        yum grouplist "$package_name" 1>/tmp/tmp_log 2>>$LOG_NAME
                        cat /tmp/tmp_log | grep -E -iw -q "Available Groups|Installed Groups:"
                        var=$?
                        if [ $var -eq 1 ]
                        then
                                if [[ -z "$errpack_list" ]];then
                                        errpack_list=$package_name
                                else
                                        errpack_list=$errpack_list","$package_name
                                fi
                                ret_flag=1
                        elif [ $var -eq 0 ]
                        then
                                if [[ -z "$grpdisp_list" ]];then
                                        grpdisp_list=$package_name
                                else
                                        grpdisp_list=$grpdisp_list","$package_name
                                fi
                                if [[ -z "$sucgroup_list" ]];then
                                        sucgroup_list=$package_name
                                else
                                        sucgroup_list=$sucgroup_list","$package_name
                                fi
                        fi
                else
                        #Syntax failure scenario if the record is not properly provided
                        if [[ -z "$synpack_list" ]];then
                                synpack_list=$package_rec
                        else
                                synpack_list=$synpack_list","$package_rec
                        fi
                        ret_flag=-1
                        syn_flag=1
                fi
        fi
done < $cust_loc/linux-packages.txt
echo "VALIDATION_CHECK_END" >> $LOG_NAME
if [ $syn_flag -eq 1 ]
then
        echo "Syntax Error: $synpack_list" > /tmp/tmp_syn
        return 2
fi
 
if [ $ret_flag -eq 1 ]
then
        echo "Valid Packages: $sucdisp_list,$grpdisp_list"  > /tmp/tmp_val
        echo "Invalid Packages: $errpack_list"          >> /tmp/tmp_val
        return 3
fi
if [ $ret_flag -eq 0 ]
then
        echo "INSTALL_START" >> $LOG_NAME
        if [ ! -z "$sucpack_list" ];then
                yum -y install $sucpack_list 1>>$PKG_LOGNAME 2>&1
                resp=$?
                if [ $resp -eq 1 ];then
                        /bin/rm -rf $PKG_LOGNAME
                        ret_flag=2
                        for pkg_name in $sucpack_list
                        do
                                yum -y install $pkg_name 1>>$PKG_LOGNAME 2>/tmp/tmp_log
                                res=$?
                                if [ $res -eq 1 ];then
                                        if [ -z "$failed_pkgs" ];then
                                                failed_pkgs=$pkg_name
                                        else
                                                failed_pkgs=$failed_pkgs","$pkg_name
                                        fi
                    echo "Package Name: $pkg_name" >> $ERR_LOG_NAME
                                        cat /tmp/tmp_log >> $ERR_LOG_NAME
                                    cat /tmp/tmp_log >> $PKG_LOGNAME
                                elif [ $res -eq 0 ];then
                                        if [ -z "$install_pkgs" ];then
                                                install_pkgs=$pkg_name
                                        else
                                                install_pkgs=$install_pkgs","$pkg_name
                                        fi
                                fi
                        done
                        cat $PKG_LOGNAME >> $LOG_NAME
                fi
                if [ $resp -eq 0 ]
                then
                        cat $PKG_LOGNAME >> $LOG_NAME
                        install_pkgs=$sucdisp_list
                fi
        fi
        if [ ! -z "$sucgroup_list" ];then
                yum -y groupinstall "$sucgroup_list" 1>>$GRP_LOGNAME 2>&1
                resp=$?
                if [ $resp -eq 1 ];then
                        ret_flag=2
                        /bin/rm -rf $GRP_LOGNAME
            IFS=","
                        for grp_name in $sucgroup_list
                        do
                                yum -y groupinstall "$grp_name" 1>>$GRP_LOGNAME 2>/tmp/tmp_log
                                ret_res=$?
                                if [ $ret_res -eq 1 ];then
                                        if [ -z "$failed_pkgs" ];then
                                                failed_pkgs=$grp_name
                                        else
                                                failed_pkgs=$failed_pkgs","$grp_name
                                        fi
                                        echo "Group Name: $grp_name" >> $ERR_LOG_NAME
                                        cat /tmp/tmp_log >> $ERR_LOG_NAME
                                    cat /tmp/tmp_log >> $GRP_LOGNAME
                                elif [ $ret_res -eq 0 ];then
                                        if [ -z "$install_pkgs" ];then
                                                install_pkgs=$grp_name
                                        else
                                                install_pkgs=$install_pkgs","$grp_name
                                        fi
                                fi
                        done
                        cat $GRP_LOGNAME >> $LOG_NAME
                fi
                if [ $resp -eq 0 ]
                then
                        cat $GRP_LOGNAME >> $LOG_NAME
                        if [ -z "$install_pkgs" ];then
                                install_pkgs=$sucgroup_list
            else
                                install_pkgs=$install_pkgs","$sucgroup_list
                        fi
                fi
        fi
       echo "INSTALL_END" >> $LOG_NAME
fi
if [ -z "$failed_pkgs" ];then
    ret_flag=0
fi
if [ $ret_flag -eq 0 ];then
       echo "Installed Packages: $sucdisp_list,$grpdisp_list" > /tmp/tmp_succ
       return 4
fi
if [ $ret_flag -eq 2 ];then
       echo "Installable Packages: $install_pkgs"   > /tmp/tmp_err
       echo "Failed Packages: $failed_pkgs"     >> /tmp/tmp_err
       return 5
fi
 
} 
#End of install_packages function
 
export -f install_packages
timeout "$timeout_value"m bash -c install_packages
rest_status=$?
 
# Timeout scenario
if [ $rest_status -eq 124 ]
then
    echo "RESULT_START"
    echo "SYNTAX_ERROR"
    echo "Error Message : Timed out while installing & configuring linux packages/groups. Reduce the number of specified linux packages/groups."
    echo "RESULT_END"
    cleanup
    exit 1
fi
 
# Syntax error scenario
if [ $rest_status -eq 2 ]
then
        echo "RESULT_START"
        echo "SYNTAX_ERROR"
    cat /tmp/tmp_syn
        echo "RESULT_END"
        cleanup
    exit 1
fi
 
#Validation error scenario
if [ $rest_status -eq 3 ]
then
        echo "RESULT_START"
        echo "VALIDATION_FAILURE"
    cat /tmp/tmp_val
        echo "RESULT_END"
        cat $LOG_NAME
        cleanup
    exit 1
fi
 
#Success scenario
if [ $rest_status -eq 4 ]
then
    echo "RESULT_START"
        echo "SUCCESS"
    cat /tmp/tmp_succ
        echo "RESULT_END"
        cleanup
        cat $LOG_NAME
    exit 0
fi
 
#Transaction error scenario
if [ $rest_status -eq 5 ]
then
        echo "RESULT_START"
        echo "ERROR_PACKAGE"
    cat /tmp/tmp_err
        echo "RESULT_END"
        echo "ERROR_PKGS_START"
        cat $ERR_LOG_NAME
        echo "ERROR_PKGS_END"
        cleanup
        cat $LOG_NAME
    exit 1
fi
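The script above splits each record of linux-packages.txt on the first colon: the text before it is the install type (package_install or group_install) and the text after it is the package or group name. A sketch of a valid linux-packages.txt, together with the same awk/sed pipeline the script applies to each record (the package names here are illustrative):

```shell
# Illustrative linux-packages.txt in the format the script parses:
# each non-comment line is "package_install: <package>" or "group_install: <group>".
cat > linux-packages.txt <<'EOF'
# Packages required by the application
package_install: libaio
group_install: Development Tools
EOF

# The same splitting logic used inside install_packages
record='package_install: libaio'
install_type=$(echo $record | awk -F':' '{print $1}' | tr -d '[:space:]')
package_name=$(echo $record | awk -F':' '{print $2}' | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
echo "$install_type -> $package_name"
```

Any record that does not start with one of the two install types falls into the script's syntax-error branch (return code 2).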

Push the Docker Image to Oracle Cloud Infrastructure Registry

After you build your Docker image, you can push it to Oracle Cloud Infrastructure Registry and make it available in the cloud.

  1. In a command-line window, create a tag for your local Docker image. Use the following format.
    image_tag="<region-code>.ocir.io/<tenancy>/accs/<oci-account-username>/<app-name>:latest"

    Example:

    $ image_tag="fra.ocir.io/tenancy1/accs/oci_user1/webapp01:latest"
  2. Assign the tag to the Docker image.
    sudo docker tag <local-image-name>:latest $image_tag

    Example:

    $ sudo docker tag webapp01-image:latest $image_tag
  3. Log in to Oracle Cloud Infrastructure Registry. Specify your Oracle Cloud Infrastructure tenancy, user name, and authentication token.
    sudo docker login <region-code>.ocir.io --username=<tenancy_name>/<username> --password="<auth-token>"

    Example:

    $ sudo docker login fra.ocir.io --username=tenancy1/oci_user1@oracle.com --password="<auth_token>" 
    WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    WARNING! Your password will be stored unencrypted in /home/user1/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    Login Succeeded
  4. Push the Docker image to the registry.
    sudo docker push $image_tag

    Example:

    $ sudo docker push $image_tag
    The push refers to repository [fra.ocir.io/tenancy1/accs/oci_user1/webapp01]
    3d260c7695cb: Pushed
    0393ac67fc5a: Pushed
    091f5ab4d968: Pushed
    baf7d2d1e99b: Pushed
    9e35d0ce4d4b: Pushed
    406954a29cb9: Pushed
    922b485ea6a7: Pushed
    latest: digest: sha256:7dc38495f2be3513e5da3996efe12f2e864e197d68561582a7fa3b72a67f48f4 size: 1792

Set up the Environment Variables

Migrate the environment variables that are required for your Oracle Application Container Cloud Service application to a Kubernetes configuration.

There are three types of environment variables:
  • Oracle Application Container Cloud Service environment variables: Oracle Application Container Cloud Service created these variables automatically when you deployed your application. These environment variables are required and you can't remove them from the configuration file. See Configure Environment Variables.
  • Custom environment variables: You defined these variables for your application in the deployment.json file or by using the Oracle Application Container Cloud Service console.
  • Service binding environment variables: Oracle Application Container Cloud Service created these variables automatically when you added service bindings in your application.
To migrate the environment variables for your application:
  1. If your application has one or more service bindings configured, locate the environment variables for each service binding. In the Oracle Application Container Cloud Service console, select your application deployment and identify the service binding variables in the Environment Variables section.
  2. In your project directory, create the env.properties file.
  3. Edit the env.properties file and add the environment variables for your application. Use the following template:
    
    # ACCS environment variables (DO NOT REMOVE)
    HOSTNAME=<app-name>-service:<port>
    APP_HOME=/u01/app
    PORT=8080
    ORA_PORT=8080
    ORA_APP_NAME=<app-name>
     
    # Custom environment variables
    <key>=<value>
    <key>=<value>
     
    # Service bindings environment variables
    <key>=<value>
    <key>=<value>
  4. Provide appropriate values for the <app-name> and <port> placeholders. For web applications, the <port> value is 80, or 443 if SSL is enabled. For worker applications, the port is 80.

    Example:

    # ACCS environment variables (DO NOT REMOVE)
    HOSTNAME=webapp01-service:443
    APP_HOME=/u01/app
    PORT=8080
    ORA_PORT=8080
    ORA_APP_NAME=webapp01
     
    # Application environment variables
    APP_LIB_FOLDER=./lib
     
    # Service bindings environment variables
    MYSQLCS_CONNECT_STRING=10.x.x.1:3306/mydb
    MYSQLCS_MYSQL_PORT=3306
    MYSQLCS_USER_PASSWORD=Password1
    MYSQLCS_USER_NAME=TestUser
    DBAAS_DEFAULT_CONNECT_DESCRIPTOR=10.x.x.2:1521/mydb
    DBAAS_USER_NAME=TestUser
    DBAAS_USER_PASSWORD=Password1
    DBAAS_LISTENER_HOST_NAME=10.x.x.2
    DBAAS_LISTENER_PORT=1521
    DBAAS_DEFAULT_SID=ORCL
    DBAAS_DEFAULT_SERVICE_NAME=mydb
  5. If your application is a Java EE application, see Configure Java EE System and Service Binding Properties.
  6. Create a Kubernetes configuration map from your environment variables file. Replace the <app-name> and <path-to-kubernetes-env-properties-file> placeholders with your values.
    kubectl create configmap <app-name>-config-var-map --from-env-file=<path-to-kubernetes-env-properties-file>

    Example:

    $ kubectl create configmap webapp01-config-var-map --from-env-file=/home/user1/kubernetes/env.properties
    configmap/webapp01-config-var-map created
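Because kubectl rejects an env file containing malformed entries, a quick format check before creating the configmap can save a round trip. This is a hypothetical helper, not part of the documented procedure: it writes a sample env.properties matching the earlier example and counts any non-comment line that is not in KEY=VALUE form.

```shell
# Sample env.properties (illustrative; use your real file)
cat > env.properties <<'EOF'
# ACCS environment variables (DO NOT REMOVE)
HOSTNAME=webapp01-service:443
APP_HOME=/u01/app
PORT=8080
ORA_PORT=8080
ORA_APP_NAME=webapp01
EOF

# Drop comments and blank lines, then count lines that do not start with KEY=
bad_lines=$(grep -vE '^[[:space:]]*(#|$)' env.properties | grep -cv '^[A-Za-z_][A-Za-z0-9_]*=') || true
echo "malformed lines: $bad_lines"
```

A count of 0 means every entry is a plain KEY=VALUE pair, which is what --from-env-file expects.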

Configure Java EE System and Service Binding Properties

If your Java Enterprise Edition (Java EE) application requires system or service binding properties, then you must specify them in the env.properties file.

  1. To use system properties for your Java EE application, define the EXTRA_JAVA_PROPERTIES property in the env.properties file.
    EXTRA_JAVA_PROPERTIES=<value>
  2. If your Java EE application uses any of the following JNDI service binding properties, then you must add them to the env.properties file.
    • jndi-name
    • max-capacity
    • min-capacity
    • driver-properties
    <ocic-service-type>_SERVICE_BINDING_NAME=<service-name>
    <ocic-service-type>_PROPERTIES=jndi-name:<jndi-name>|max-capacity:<max-capacity>|min-capacity:<min-capacity>|driver-properties:<driver-properties>
    Placeholder Description
    <ocic-service-type> Service type, for example DBAAS or MYSQLCS.
    <service-name> Name of your service.
    <jndi-name> JNDI name of your service. It should be in format "jdbc/<value>", for example: "jdbc/dbcs".
    <max-capacity> Maximum capacity of the connection pool.
    <min-capacity> Minimum capacity of the connection pool.
    <driver-properties> Semicolon-separated list of JDBC driver properties.

Example:

# ACCS environment variables(DO NOT REMOVE)
HOSTNAME=webapp01-service:443
APP_HOME=/u01/app
PORT=8080
ORA_PORT=8080
ORA_APP_NAME=webapp01

# Application environment variables
APP_LIB_FOLDER=./lib 

# Service bindings environment variables
MYSQLCS_CONNECT_STRING=10.x.x.1:3306/mydb
MYSQLCS_MYSQL_PORT=3306
MYSQLCS_USER_PASSWORD=Password1
MYSQLCS_USER_NAME=TestUser
DBAAS_DEFAULT_CONNECT_DESCRIPTOR=10.x.x.x:1521/mydb
DBAAS_USER_NAME=TestUser
DBAAS_USER_PASSWORD=Password1
DBAAS_LISTENER_HOST_NAME=10.x.x.x
DBAAS_LISTENER_PORT=1521
DBAAS_DEFAULT_SID=ORCL
DBAAS_DEFAULT_SERVICE_NAME=mydb 

# System properties
# Only for "Java EE" runtime. Remove for other runtimes.
EXTRA_JAVA_PROPERTIES=-DconfigPath=/u01/app/conf/-DlogFile=/u01/app/logs/app.log 

# Service binding properties
# Only for "Java EE" runtime. Remove for other runtimes.
DBAAS_SERVICE_BINDING_NAME=dbaasDb
DBAAS_PROPERTIES=jndi-name:jdbc/dbcs|max-capacity:5|min-capacity:1|driver-properties:user=admin;database=test|
MYSQLCS_SERVICE_BINDING_NAME=mysqlDb
MYSQLCS_PROPERTIES=jndi-name:jdbc/mysqlcs|max-capacity:10|min-capacity:1|driver-properties:user=oci;database=app|
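The <ocic-service-type>_PROPERTIES value is a pipe-delimited list of key:value pairs, as the example above shows. As a sketch of how such a string decomposes (the runtime performs this parsing internally; this is for illustration only):

```shell
# Pipe-delimited service binding properties string from the example above
props='jndi-name:jdbc/dbcs|max-capacity:5|min-capacity:1|driver-properties:user=admin;database=test'

# Split on "|" and then on the first ":" of each pair
IFS='|'
for pair in $props; do
    key=${pair%%:*}      # text before the first colon
    value=${pair#*:}     # text after the first colon
    echo "$key = $value"
done
unset IFS
```

Note that only the first colon separates key from value, so values such as jdbc/dbcs or user=admin;database=test pass through intact.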

Enable Connectivity between the Kubernetes Cluster and Oracle Cloud Services

If your application in Oracle Application Container Cloud Service uses service bindings to enable communication with other Oracle Cloud services, then you need to ensure that after the migration your application is able to communicate with those services.

There are two scenarios:
  1. If your service is in Oracle Cloud Infrastructure Classic, then in the Oracle Cloud Infrastructure Classic service, create an access rule that allows the public IP address of the NAT gateway attached to the worker nodes of the Kubernetes cluster to connect to the service. For example, if your application uses Oracle Database Classic Cloud Service, see Managing Network Access to Database Cloud Service.

    Note:

    The public IP address of the NAT gateway can be found in the Oracle Cloud Infrastructure console by navigating through: Developer Services, Container Clusters (OKE), Cluster Details, Node Pool section, Node Instance Details, Virtual Cloud Network Details, NAT Gateways, Public IP Address.
  2. If your service is in Oracle Cloud Infrastructure, then locate the VCN and subnets in which the service is deployed. Ensure that an ingress security rule exists to allow traffic from the Kubernetes cluster to the service. See Security Lists in the Oracle Cloud Infrastructure documentation.

Create the Kubernetes Configuration Files

Before you deploy your application to Oracle Cloud Infrastructure Container Engine for Kubernetes, you need to create the deployment and service configuration files for your application.

The deployment and service configuration files provide instructions for Kubernetes to create and update instances of your application. You can create and manage a deployment by using the Kubernetes command line interface.

Create the Deployment Configuration File

  1. Create the deployment.yaml file using the following template:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: "<app-name>-deployment"
    spec:
      replicas: ${replicas}
      selector:
        matchLabels:
          app: "<app-name>-selector"
      template:
        metadata:
          labels:
            app: "<app-name>-selector"
        spec:
          containers:
          - name: "<app-name>"
            image: "<region-code>.ocir.io/<tenancy>/accs/<oci-account-username>/<app-name>:latest"
            command: ["${command}"]
            args: ["${args}"]
            ports:
            - containerPort: 8080
            env:
            - name: ORA_INSTANCE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            envFrom:
            - configMapRef:
                name: "<app-name>-config-var-map"
            resources:
              limits:
                memory: "${memory}i"
              requests:
                memory: "${memory}i"
            # The following section "livenessProbe" should be removed if Health Check URL
            # is not available.
            livenessProbe:
              httpGet:
                path: "${healthCheckHttpPath}"
                port: 8080
              initialDelaySeconds: 5
              periodSeconds: 600
              timeoutSeconds: 30
              failureThreshold: 3
          imagePullSecrets:
          - name: "<app-name>-secret"
  2. Provide the appropriate values for the <app-name>, <region-code>, <tenancy>, and <oci-account-username> placeholders.
  3. Provide the appropriate values for the variables:
    Placeholder Description
    ${replicas} Specify the number of application replicas. If the property is missing, then the default value is 2.
    ${command} Specify the operating system command used to launch the application.
    • Enter the first word of the command property in the manifest.json file. For example, if the value of the command property is: "sh bin/startapp.sh -config conf/app.properties", then the ${command} value is sh.
    • If this property is missing in the manifest.json file, then leave the value empty. For example, command: [].
    ${args} Specify the arguments used to start the application.
    • Enter a comma-separated list of all the words starting from the second word of the command property in the manifest.json file. For example, if the value of the command property in the manifest.json file is "sh bin/startapp.sh -config conf/app.properties", then the ${args} value is "bin/startapp.sh", "-config", "conf/app.properties".
    • If this property is missing in the manifest.json file, then leave the value empty. For example, args: [].
    ${memory} Specify the amount of memory required for the application. If the property is missing, then the default value is 2G.
    ${healthCheckHttpPath} Specify the HTTP path used to perform health checks for the application. The URL on this HTTP path should return an HTTP response code greater than or equal to 200 and less than 400 to indicate success.
    • Enter the healthCheck.http-endpoint property in the manifest.json file.
    • If this property is missing in the manifest.json file, then enter the value "/".

    Example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: "webapp01-deployment"
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: "webapp01-selector"
      template:
        metadata:
          labels:
            app: "webapp01-selector"
        spec:
          containers:
          - name: "webapp01"
            image: "fra.ocir.io/tenancy1/accs/oci_user1/webapp01:latest"
            command: ["sh"]
            args: ["bin/startapp.sh", "-config", "conf/app.properties"]
            ports:
            - containerPort: 8080
            env:
            - name: ORA_INSTANCE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            envFrom:
            - configMapRef:
                name: "webapp01-config-var-map"
            resources:
              limits:
                memory: "2Gi"
              requests:
                memory: "2Gi"
            # The following section "livenessProbe" should be removed if Health Check
            # is not required.
            livenessProbe:
              httpGet:
                path: "/"
                port: 8080
              initialDelaySeconds: 5
              periodSeconds: 300
              timeoutSeconds: 30
              failureThreshold: 3
          imagePullSecrets:
          - name: "webapp01-secret"
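The splitting rule for the ${command} and ${args} placeholders can be sketched in code. The snippet below is a hypothetical helper, not part of any migration tool, that applies the rule from the placeholder table to the command property of a manifest.json file:

```python
import json

def split_command(manifest_json):
    """Split the ACCS manifest.json "command" property into the Kubernetes
    container command (first word) and args (remaining words), as described
    in the placeholder table above. Hypothetical helper for illustration."""
    manifest = json.loads(manifest_json)
    words = manifest.get("command", "").split()
    if not words:
        # Property missing: command: [] and args: []
        return [], []
    return [words[0]], words[1:]

command, args = split_command(
    '{"command": "sh bin/startapp.sh -config conf/app.properties"}'
)
# command is ["sh"]; args is ["bin/startapp.sh", "-config", "conf/app.properties"]
```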

Create the Service Configuration File

  1. Create the service.yaml file with the following template:
    kind: Service
    apiVersion: v1
    metadata:
      name: "<app-name>-service"
      # The following section "annotations" should be removed if SSL endpoint
      # is not required.
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-ssl-ports: '443'
        service.beta.kubernetes.io/oci-load-balancer-tls-secret: "<app-name>-tls-certificate"
    spec:
      type: "${type}"
      ports:
      - port: ${port}
        protocol: TCP
        targetPort: 8080
      selector:
        app: "<app-name>-selector"
  2. Provide an appropriate value for the <app-name> placeholder.
  3. Provide the appropriate values for the ${variable} placeholders:
    Placeholder Description
    ${type} Specify the application type. If the Oracle Application Container Cloud Service application is of type web, then enter the value LoadBalancer. If the application is of type worker, then enter the value ClusterIP.
    ${port} Specify the public port of the application. If an SSL endpoint is required, then enter 443; otherwise, enter 80. For worker applications, enter 80.

    Example:

    kind: Service
    apiVersion: v1
    metadata:
      name: "webapp01-service"
      # The following section "annotations" should be removed if SSL endpoint
      # is not required.
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-ssl-ports: '443'
        service.beta.kubernetes.io/oci-load-balancer-tls-secret: "webapp01-tls-certificate"
    spec:
      type: LoadBalancer
      ports:
      - port: 443
        protocol: TCP
        targetPort: 8080
      selector:
        app: "webapp01-selector"
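The ${type} and ${port} choices from the table can be summarized as a small decision function. This is a hypothetical helper for illustration only:

```python
def service_settings(app_type, ssl_required=False):
    """Map an ACCS application type to the Kubernetes Service type and
    public port, following the placeholder table above."""
    if app_type == "web":
        return {"type": "LoadBalancer", "port": 443 if ssl_required else 80}
    # worker applications are only reachable inside the cluster
    return {"type": "ClusterIP", "port": 80}

# A web application with an SSL endpoint:
settings = service_settings("web", ssl_required=True)
# settings == {"type": "LoadBalancer", "port": 443}
```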

Set Up the Docker Registry Secret and the SSL Certificate

In order for Kubernetes to pull an image from Oracle Cloud Infrastructure Registry when deploying an application, you need to create a Kubernetes secret. If your application requires an SSL endpoint, then you need to create a TLS secret using the certificate and the private key for your application.

The secret includes all of the same details that you would provide if you were manually logging in to Oracle Cloud Infrastructure Registry using the docker login command, including your authentication token.
  1. To create a Docker registry secret, in a command-line window, run the following command. Provide appropriate values for the <app-name>, <region-code>, <tenancy>, <oci-account-username>, and <auth-token> placeholders.
    kubectl create secret docker-registry <app-name>-secret --docker-server="<region-code>.ocir.io" --docker-username=<tenancy>/<oci-account-username> --docker-password='<auth-token>' --docker-email=<oci-account-username>
    Example:
    $ kubectl create secret docker-registry webapp01-secret --docker-server="fra.ocir.io" --docker-username=tenancy1/oci_user1@example.com --docker-password='cIv3s8Aw2klYZ:QOcyFA' --docker-email=oci_user1@example.com
    secret/webapp01-secret created
  2. To create the TLS secret, in a command-line window, run the following command. Provide appropriate values for the <app-name>, <path-to-tls-key-file> and <path-to-tls-cert-file> placeholders.
    kubectl create secret tls <app-name>-tls-certificate --key <path-to-tls-key-file> --cert <path-to-tls-cert-file>
    Example:
    $ kubectl create secret tls webapp01-tls-certificate --key /home/user1/kubernetes/tls.key --cert /home/user1/kubernetes/tls.crt
    secret/webapp01-tls-certificate created

    Note:

    The <path-to-tls-cert-file> and <path-to-tls-key-file> are the absolute paths to your public certificate and key files respectively.
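    If you do not yet have a certificate, you can generate a self-signed key and certificate pair to try out the TLS secret. This is for testing only; production certificates should come from a certificate authority. The CN value below is a sample host name:

```shell
# Generate a self-signed key and certificate for testing only.
# Replace webapp01.example.com with your own host name.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=webapp01.example.com"
```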

Deploy the Application

Deploy your application by using the deployment.yaml and service.yaml files.

If you want to customize HTTP to HTTPS redirection, the load balancer policy, or session stickiness for your application, you need to deploy your application by using the steps in Set up an Ingress Controller.

  1. Create the Kubernetes deployment.
    kubectl create -f <path-to-kubernetes-deployment-yaml>

    Example:

    $ kubectl create -f /home/user1/kubernetes/deployment.yaml
    deployment.apps/webapp01-deployment created
  2. Check the deployment roll-out status.
    kubectl rollout status deployment <app-name>-deployment
    Example:
    $ kubectl rollout status deployment webapp01-deployment
    Waiting for deployment "webapp01-deployment" rollout to finish: 0 of 1 updated replicas are available...
    deployment "webapp01-deployment" successfully rolled out

    The deployment check may take several minutes to complete.

  3. Create the Kubernetes service.
    kubectl create -f <path-to-kubernetes-service-yaml>

    Example:

    $ kubectl create -f /home/user1/kubernetes/service.yaml
    service/webapp01-service created
  4. Check the services status.
    kubectl get svc <app-name>-service

    Example:

    $ kubectl get svc webapp01-service
    NAME               TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)         AGE
    webapp01-service   LoadBalancer   10.x.x.x       132.x.x.x         443:31174/TCP   1m

    The service may take several minutes to start. After it starts, note the public IP address in the EXTERNAL-IP column.

Note:

If you deployed a clustered application, its service instances can communicate with each other using the service name. The service name is available in your application as the HOSTNAME environment variable.
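Based on the note above, a clustered application could build the URL of a peer instance from that environment variable. This is a hypothetical sketch; the variable name and port are taken from the examples in this section:

```python
import os

def peer_url(port=8080):
    """Build a URL for a peer service instance from the HOSTNAME
    environment variable described in the note above."""
    service_name = os.environ.get("HOSTNAME", "localhost")
    return "http://%s:%d/" % (service_name, port)
```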

Set up a Custom URL

If you want to use a custom URL for your application, then you need a public domain name. You need to map the domain name to the public IP address of your application.

Create a DNS Zone

A Domain Name System (DNS) zone is a contiguous portion of the global DNS that is managed by a specific organization or administrator.

If you already created a DNS zone, then these steps aren't required.

  1. From the Oracle Cloud Infrastructure console, open the navigation menu. Under Networking, go to DNS Management and click Zones.
  2. Click Create Zone, enter a Zone name and leave the default values for the other fields.
  3. Click Create.

    Note:

    A list of name servers that host the DNS records for your DNS zone is displayed.
  4. Add these name servers to your public DNS hosting provider's configuration.

Add a DNS Record

After you create the DNS zone, you need to add a DNS record to create the sub-domain name.

  1. From the Zone Information Page, navigate to Records and click Manage Records. Then, click Add Record and enter or select the following values:
    1. RECORD TYPE: A - IPv4 Address
    2. Name: Enter your application name.
    3. Address: Enter the public IP address of your application.
  2. Click Submit.
  3. Click Publish Changes.

The custom URL for your application has this form: http[s]://<name>.<zone-name>[:<port>]/. For example, if you enter webapp01 for the name of your record, and the zone name is example.com, then the HTTPS URL is https://webapp01.example.com.
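The URL form above can be expressed as a small helper. This is a hypothetical function for illustration, using the record name and zone name from the example:

```python
def custom_url(name, zone_name, ssl=True, port=None):
    """Build the application URL in the form
    http[s]://<name>.<zone-name>[:<port>]/ described above."""
    scheme = "https" if ssl else "http"
    suffix = ":%d" % port if port else ""
    return "%s://%s.%s%s/" % (scheme, name, zone_name, suffix)

# custom_url("webapp01", "example.com") -> "https://webapp01.example.com/"
```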

Set up an Ingress Controller

If you want to customize HTTP to HTTPS redirection, the load balancer policy, or session stickiness for your application, then you need to set up an Nginx ingress controller. The ingress controller handles the traffic between an external load balancer and your application.

Configure Role-Based Access

You assign the cluster-admin role to your Oracle Cloud Infrastructure administrator user. Then you use the rbac.yaml file to create a service account, a cluster role, and the role bindings.

  1. To assign the cluster-admin role to your Oracle Cloud Infrastructure administrator user, in a command-line window, run the following command. Provide an appropriate value for the <user-ocid> placeholder.
    kubectl create clusterrolebinding ingress-binding --clusterrole=cluster-admin --user=<user-ocid>
    Example:
    $ kubectl create clusterrolebinding ingress-binding --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaa3vslnnqmypaummkievofbe5r4afaihdh4wi503dig3bkys2msgaq
    clusterrolebinding.rbac.authorization.k8s.io/ingress-binding created
  2. Create the rbac.yaml file using the following content:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nginx-ingress-serviceaccount
      namespace: default
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: nginx-ingress-clusterrole
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - endpoints
          - nodes
          - pods
          - secrets
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - create
          - patch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses/status
        verbs:
          - update
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: nginx-ingress-role
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - pods
          - secrets
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - configmaps
        resourceNames:
          # Defaults to "<election-id>-<ingress-class>"
          # Here: "<ingress-controller-leader>-<nginx>"
          # This has to be adapted if you change either parameter
          # when launching the nginx-ingress-controller.
          - "ingress-controller-leader-nginx"
        verbs:
          - get
          - update
      - apiGroups:
          - ""
        resources:
          - configmaps
        verbs:
          - create
      - apiGroups:
          - ""
        resources:
          - endpoints
        verbs:
          - get
          - create
          - update
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: nginx-ingress-role-nisa-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: nginx-ingress-role
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: nginx-ingress-clusterrole-nisa-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: nginx-ingress-clusterrole
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: default
     
    ---
  3. To create a service account, a cluster role, and a cluster role binding between the service account and the cluster role, run the following command. Provide the appropriate value for the <path-to-rbac-yaml> placeholder.
    kubectl create -f <path-to-rbac-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/rbac.yaml
    serviceaccount/nginx-ingress-serviceaccount created
    clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
    role.rbac.authorization.k8s.io/nginx-ingress-role created
    rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
    clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created

Create a Default Backend

Create a Kubernetes deployment and service for a default backend.

  1. Create the default-deployment.yaml file using the following content:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        app: default-http-backend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: default-http-backend
      template:
        metadata:
          labels:
            app: default-http-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            # Any image is permissible as long as:
            # 1. It serves a 404 page at /
            # 2. It serves 200 on a /healthz endpoint
            image: gcr.io/google_containers/defaultbackend:1.0
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            ports:
            - containerPort: 8080
            resources:
              limits:
                memory: 128Mi
              requests:
                memory: 128Mi
  2. Create the Kubernetes deployment for a default backend.
    kubectl create -f <path-to-default-deployment-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/default-deployment.yaml
    deployment.apps/default-http-backend created
  3. Check the deployment roll-out status.
    kubectl rollout status deployment default-http-backend
    Example:
    $ kubectl rollout status deployment default-http-backend
    Waiting for deployment "default-http-backend" rollout to finish: 0 of 1 updated replicas are available...
    deployment "default-http-backend" successfully rolled out

    The deployment check may take several minutes to complete.

  4. Create the default-service.yaml file using the following content:
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      labels:
        app: default-http-backend
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: default-http-backend
  5. Create the Kubernetes service for the default backend.
    kubectl create -f <path-to-default-service-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/default-service.yaml
    service/default-http-backend created
  6. Check the service roll-out status.
    kubectl get svc default-http-backend
    Example:
    $ kubectl get svc default-http-backend
    NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)         AGE
    default-http-backend    ClusterIP      10.x.x.x       <none>             80/TCP          5s

    The deployment may take several minutes to complete.

Create an Ingress Controller

Create a Kubernetes deployment and service for the Nginx ingress controller.

Create an Ingress Controller Deployment

  1. Create the nginx-deployment.yaml file using the following template.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      labels:
        app: nginx-ingress-controller
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-ingress-controller
      template:
        metadata:
          labels:
            app: nginx-ingress-controller
        spec:
          terminationGracePeriodSeconds: 60
          serviceAccountName: nginx-ingress-serviceaccount
          containers:
          - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
            name: nginx-ingress-controller
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              timeoutSeconds: 1
            ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  2. Create the Kubernetes deployment for the Nginx ingress controller.
    kubectl create -f <path-to-nginx-deployment-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/nginx-deployment.yaml
    deployment.apps/nginx-ingress-controller created
  3. Check the deployment roll-out status.
    kubectl rollout status deployment nginx-ingress-controller
    Example:
    $ kubectl rollout status deployment nginx-ingress-controller
    Waiting for deployment "nginx-ingress-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "nginx-ingress-controller" successfully rolled out

    The deployment check may take several minutes to complete.

Create an Ingress Controller Service

  1. Create the nginx-service.yaml file using the following template:
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
      namespace: default
      labels:
        app: nginx-ingress-controller
    spec:
      type: LoadBalancer
      ports:
      - port: 80
        name: http
      # Required only for SSL endpoint
      - port: 443
        name: https
      selector:
        app: nginx-ingress-controller
  2. To create a service for the Nginx ingress controller, run the following command. Provide the appropriate value for the <path-to-nginx-service-yaml> placeholder.
    kubectl create -f <path-to-nginx-service-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/nginx-service.yaml
    service/nginx-ingress-controller created
  3. Check the service status.
    kubectl get svc nginx-ingress-controller
    Example:
    $ kubectl get svc nginx-ingress-controller
    NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)         AGE
    nginx-ingress-controller          LoadBalancer   10.x.x.x       131.x.x.x         443:31155/TCP   1m

    The service may take several minutes to start. After it is completed, write down the public IP address under the EXTERNAL-IP column.

  4. Verify that your Nginx ingress controller is configured properly.
    curl -I http://<public-ip-address>/healthz
    Example:
    $ curl -I http://131.x.x.x/healthz
    HTTP/1.1 200 OK

    After you deploy your application and create an ingress resource, you can access your application using the public URL.

Deploy Your Application with an Ingress Controller Setup

Deploy your application using the steps mentioned in Deploy the Application. To create the service, use the following configuration:

  1. Create the service.yaml file with the following template:
    kind: Service
    apiVersion: v1
    metadata:
      name: "<app-name>-service"
    spec:
      type: ClusterIP
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: "<app-name>-selector"
  2. Provide an appropriate value for the <app-name> placeholder.
    Example:
    kind: Service
    apiVersion: v1
    metadata:
      name: "webapp01-service"
    spec:
      type: ClusterIP
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: "webapp01-selector"
  3. Create the Kubernetes service.
    kubectl create -f <path-to-kubernetes-service-yaml>
  4. Check the services status.
    kubectl get svc <app-name>-service

    The service may take several minutes to start.

Create an Ingress Resource

Create an ingress resource that the Nginx ingress controller uses to forward the traffic to your application.

  1. (Optional) If your application requires an SSL endpoint, run the following command to create a secret for the TLS key and certificate. Provide appropriate values for the <path-to-tls-key-file> and <path-to-tls-cert-file> placeholders.
    kubectl create secret tls ingress-tls-certificate --key <path-to-tls-key-file> --cert <path-to-tls-cert-file>
    Example:
    $ kubectl create secret tls ingress-tls-certificate --key /home/user1/kubernetes/tls.key --cert /home/user1/kubernetes/tls.crt
    secret/ingress-tls-certificate created
  2. Create the nginx-ingress.yaml file using the following template:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-resource
      annotations:
        kubernetes.io/ingress.class: "nginx"
        nginx.ingress.kubernetes.io/load-balance: "round_robin"
    spec:
      # Required only for SSL endpoint
      tls:
        - hosts:
          - <host-name>
          secretName: ingress-tls-certificate
      rules:
        - host: <host-name>
          http:
            paths:
            - path: /
              backend:
                serviceName: <app-name>-service
                servicePort: 80
  3. Provide appropriate values for the <app-name> and <host-name> placeholders.
    Example:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-resource
      annotations:
        kubernetes.io/ingress.class: "nginx"
        nginx.ingress.kubernetes.io/load-balance: "round_robin"
    spec:
      # Required only for SSL endpoint
      tls:
        - hosts:
          - webapp01.example.com
          secretName: ingress-tls-certificate
      rules:
        - host: webapp01.example.com
          http:
            paths:
            - path: /
              backend:
                serviceName: webapp01-service
                servicePort: 80
  4. To create the ingress resource, in a command-line window, run the following command. Provide an appropriate value for the <path-to-nginx-ingress-yaml> placeholder.
    kubectl create -f <path-to-nginx-ingress-yaml>
    Example:
    $ kubectl create -f /home/user1/kubernetes/nginx-ingress.yaml
    ingress.extensions/ingress-resource created
  5. To check the ingress resource status, run the following command.
    kubectl get ingress ingress-resource
    Example:
    $ kubectl get ingress ingress-resource
    NAME              HOSTS                  ADDRESS  PORTS    AGE
    ingress-resource  webapp01.example.com            80, 443  1m
  6. Test your application using the public URL of your Nginx ingress controller service.

Customize an Ingress Resource

You can customize HTTP to HTTPS redirection, load balancer policy, and session stickiness in the nginx-ingress.yaml file.

  • HTTP to HTTPS redirection is enabled by default. If you want to disable it, then in the annotations section, add the following: nginx.ingress.kubernetes.io/ssl-redirect: "false".
  • The default load balancer policy is round_robin. To use the ip_hash policy instead, add the annotation: nginx.ingress.kubernetes.io/upstream-hash-by: "$binary_remote_addr".
  • To enable session stickiness, use ip_hash as the load balancer policy.
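Putting these options together, the metadata section of the nginx-ingress.yaml file might look like the following. This is a sketch; add only the annotations you need:

```yaml
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Disable the default HTTP to HTTPS redirection.
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Use the ip_hash policy (sticks each client IP to one backend).
    nginx.ingress.kubernetes.io/upstream-hash-by: "$binary_remote_addr"
```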

Create an Ingress Resource for Multiple Applications

If you want to deploy multiple applications in a Kubernetes cluster, then you can use the same ingress controller to configure load balancing for those applications. You can configure name-based virtual hosting, in which the single IP address of the Nginx ingress controller service routes traffic to the host names of multiple services.

Example:

You have two services, webapp01-service and webapp02-service, created in a Kubernetes cluster. The DNS names webapp01.example.com and webapp02.example.com are mapped to the public IP address of the ingress controller service. You can then set up an ingress resource so that requests to https://webapp01.example.com/ are forwarded to webapp01-service, and requests to https://webapp02.example.com/ are forwarded to webapp02-service. You can configure these two services in the nginx-ingress.yaml file.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/load-balance: "round_robin"
spec:
  # Required only for SSL endpoint
 
  tls:
    - hosts:
      - webapp01.example.com
      - webapp02.example.com
      secretName: ingress-tls-certificate
  rules:
    - host: webapp01.example.com
      http:
        paths:
        - path: /
          backend:
            serviceName: webapp01-service
            servicePort: 80
    - host: webapp02.example.com
      http:
        paths:
        - path: /
          backend:
            serviceName: webapp02-service
            servicePort: 80