Design a Microservices-Based Application

When you design an application by using the microservices architecture, consider adhering to the microservices best practices and the 12-factor methodology for developing applications. Select a suitable data-persistence pattern, and understand the value of running the microservices in orchestrated containers.

Learn the Best Practices for Designing Microservices

By following certain best practices when you design microservices, you can ensure that your application is easy to scale, deploy, and maintain. Note that not all of the best practices discussed here might be relevant to your application.

Each microservice must implement a single piece of the application’s functionality. The development teams must define the limits and responsibilities of each microservice. One approach is to define a microservice for each frequently requested task in the application. An alternative approach is to divide the functionality by business tasks and then define a microservice for each area.

Consider the following requirements in your design:

  • Responsive microservices: A microservice must return a response to the requesting clients, even when the service fails.
  • Backward compatibility: As you add or update the functionality of a microservice, the changes in the API methods and parameters must not affect the clients. The REST API must remain backward-compatible.
  • Flexible communication: Each microservice can specify the protocol that must be used for communication between the clients and the API gateway and for communication between the microservices.
  • Idempotency: If a client calls a microservice multiple times with the same request, each call must produce the same outcome, as shown in the sketch after this list.
  • Efficient operation: The design must facilitate easy monitoring and troubleshooting. A logging system is commonly used to implement this requirement.
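
For example, the following minimal sketch shows one way to satisfy the idempotency requirement: the service caches the result of each request under a client-supplied idempotency key, so retries return the same outcome instead of repeating the side effect. The endpoint path, header name, and order logic are illustrative assumptions, and the sketch uses only the JDK's built-in HTTP server.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class OrderService {
        // Results of previously processed requests, keyed by the client-supplied idempotency key.
        private static final ConcurrentMap<String, String> processed = new ConcurrentHashMap<>();

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", OrderService::handleCreateOrder);
            server.start();
        }

        private static void handleCreateOrder(HttpExchange exchange) throws IOException {
            // The client sends the same key when it retries, so a retry cannot create a duplicate order.
            String key = exchange.getRequestHeaders().getFirst("Idempotency-Key");
            if (key == null) {
                reply(exchange, 400, "Missing Idempotency-Key header");
                return;
            }
            // computeIfAbsent runs the business logic only for the first call with a given key;
            // later calls with the same key return the stored result, producing the same outcome.
            String result = processed.computeIfAbsent(key, k -> createOrder());
            reply(exchange, 200, result);
        }

        private static String createOrder() {
            // Placeholder for the real business logic (for example, inserting an order record).
            return "{\"orderId\":\"" + UUID.randomUUID() + "\",\"status\":\"CREATED\"}";
        }

        private static void reply(HttpExchange exchange, int status, String body) throws IOException {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(status, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        }
    }

In a production service, the processed keys would be kept in the service's data store rather than in memory, so that idempotency survives restarts and scale-out.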

Understand the 12-Factor Methodology for Developing Applications

The 12-factor methodology for developing applications is a set of rules and guidelines for developing Software as a Service (SaaS) and cloud-native applications.

Microservices should adhere to the following guidelines:

  • Codebase: Every microservice needs a single codebase that is tracked in revision control. Microservices must not share codebases.
  • Dependencies: Each microservice must explicitly declare, isolate, and package its dependencies.
  • Configuration: The application configuration (for example, credentials) might change between deployments. Store this configuration data outside the microservice, so that the microservice uses the appropriate configuration that’s specific to the deployment environment. A sketch that combines this factor with port binding follows this list.
  • Backing services: A microservice must consume its backing services (for example, databases, caches, and message brokers) through URLs over the network, and it must not make a distinction between local and third-party services.
  • Build, release, and run: Treat each stage in the application development and deployment process as a distinct step. In the build stage, the code is translated into an executable bundle (the build). In the release stage, the build is combined with the configuration of the target deployment environment (development, testing, staging, or production). At runtime, the application runs in the execution environment against the selected release.
  • Processes: Microservices are stateless and follow the shared-nothing model. State exists only in an external cache or a data store.
  • Port binding: A microservice runs in a container and exposes all of its interfaces through ports that listen for requests.
  • Concurrency: A microservice process scales out to handle higher demand by adding running copies of the microservice. A container orchestration engine can help in this process.
  • Disposability: The microservice processes can be started or stopped immediately whenever necessary.
  • Development and production parity: Keep the development and production environments as similar as possible.
  • Logs: Logs are handled as event streams. A microservice writes logs to its event stream, which is time-ordered and unbuffered. The microservice must never handle routing or storage of its output stream, and it must not manage the log files.
  • Admin processes: The maintenance and administrative tasks (for example, database migrations) must run as one-off processes in an environment that is identical to that of the microservices.
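
As an illustration of the configuration and port-binding factors, the following minimal sketch reads its deployment-specific settings from the environment instead of bundling them with the code. The PORT and DB_URL variable names are placeholder assumptions, not a prescribed convention.

    public class AppConfig {
        public static void main(String[] args) {
            // Each deployment environment (development, staging, production) injects its own values,
            // so the same build runs unchanged everywhere.
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
            String dbUrl = System.getenv("DB_URL");
            if (dbUrl == null) {
                throw new IllegalStateException("DB_URL must be set by the deployment environment");
            }
            System.out.printf("Starting service on port %d against %s%n", port, dbUrl);
            // ... bind the HTTP listener to 'port' and connect to 'dbUrl' here.
        }
    }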

Select a Data-Persistence Pattern

The recommended pattern to implement persistence for a microservice is to use a single database. For each microservice, keep the persistent data private, and create the database as a part of the microservice implementation.

In this pattern, the private persistent data is accessible only through the microservice API.

The following illustration shows the persistence design for microservices.

The following variants of this microservice implementation apply to relational databases:
  • Private tables: Each service owns a set of tables.
  • Schema: Each service owns a private database schema.
  • Database: Each service owns a database server, as shown in the illustration.

A persistence anti-pattern for your microservices is to share one database schema across multiple microservices. With a shared schema, you can use atomic, consistent, isolated, and durable (ACID) transactions to enforce data consistency, and the database setup is simple. The disadvantages are that the microservices might interfere with each other when they access the database, and that development cycles slow down because the developers of different microservices must coordinate schema changes, which also increases the dependencies between services.

Your microservices can connect to an Oracle Database instance that runs on Oracle Cloud Infrastructure. The Oracle multitenant architecture supports multiple pluggable databases (PDBs) within a single container database. This makes it a strong choice for the microservices persistence layer, providing bounded-context isolation of data, security, and high availability. In many cases, you can use fewer PDBs and rely on schema-level isolation instead.
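
For example, the following minimal sketch shows a microservice that keeps its persistent data in its own PDB, reached through a JDBC service name. The host name, the INVENTORYPDB service name, the table, and the environment variable names are placeholders, and the Oracle JDBC driver is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class InventoryRepository {
        // Each microservice connects only to its own PDB (here, a hypothetical INVENTORYPDB),
        // so its persistent data stays private to the service and is exposed only through its API.
        private static final String JDBC_URL =
                System.getenv().getOrDefault("INVENTORY_DB_URL",
                        "jdbc:oracle:thin:@//dbhost.example.com:1521/INVENTORYPDB");

        public int countItems() throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                         JDBC_URL, System.getenv("DB_USER"), System.getenv("DB_PASSWORD"));
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM inventory_items")) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }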

Understand the Value of Deploying Microservices in Containers

After you build your microservice, you must containerize it. A microservice running in its own container doesn’t affect the microservices deployed in the other containers.

A container is a standardized unit of software that is used to develop, ship, and deploy applications.

Containers are managed using a container engine, such as Docker. The container engine provides the tools that are necessary to bundle all the application dependencies as a container.

You can use the Docker engine to create, deploy, and run your microservices applications in containers. Microservices running in Docker containers have the following characteristics:

  • Standard: The microservices are portable. They can run anywhere.
  • Lightweight: Docker shares the operating system (OS) kernel, doesn’t require an OS for each instance, and runs as a lightweight process.
  • Secure: Each container runs as an isolated process, so the microservices are isolated from one another.

The process of containerizing a microservice involves creating a Dockerfile, building a container image that includes the microservice's dependencies and environment configuration, deploying the image to a Docker engine, and uploading the image to a container registry for storage and retrieval.

The illustration docker_container_process.png shows this containerization process.
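
For example, a minimal Dockerfile sketch for a Java-based microservice might look like the following. The base image, JAR name, and port are placeholders for your own build output.

    FROM eclipse-temurin:17-jre
    # Copy the packaged microservice into the image (the JAR name is a placeholder).
    COPY target/order-service.jar /app/order-service.jar
    # Document the port that the microservice listens on (port binding).
    EXPOSE 8080
    ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]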

Learn About Orchestrating Microservices Using Kubernetes

The microservices that are running in containers must be able to interact and integrate to provide the required application functionalities. This integration can be achieved through container orchestration.

Container orchestration enables you to start, stop, and group containers in clusters. It also enables high availability and scaling. Kubernetes is one of the container orchestration platforms that you can use to manage containers.

After you containerize your microservices, you can deploy them to Oracle Cloud Infrastructure Container Engine for Kubernetes.

Before you deploy your containerized microservices application to the cloud, you must deploy and test it in a local Kubernetes engine, as follows:

  • Create your microservices application.
  • Build Docker images to containerize your microservices.
  • Run your microservices in your local Docker engine.
  • Push your container images to a container registry, as shown in the command sketch after this list.
  • Deploy and run your microservices in a local Kubernetes engine, such as Minikube.
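
The following command sketch illustrates these local steps. The image name is a placeholder, and the registry path uses the Oracle Cloud Infrastructure Registry form region-key.ocir.io/namespace/repository with placeholder values; it also assumes that you have already authenticated to the registry with docker login.

    # Build the image from the Dockerfile in the current directory.
    docker build -t order-service:1.0 .

    # Run the container locally, mapping the container's port 8080 to the host.
    docker run -d -p 8080:8080 order-service:1.0

    # Tag and push the image to a container registry (placeholder region key and namespace).
    docker tag order-service:1.0 phx.ocir.io/mytenancy/order-service:1.0
    docker push phx.ocir.io/mytenancy/order-service:1.0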

After testing the application in a local Kubernetes engine, deploy it to Oracle Cloud Infrastructure Container Engine for Kubernetes as follows:

  • Create a cluster.
  • Download the kubeconfig file.
  • Install the kubectl tool on a local device.
  • Prepare the deployment.yaml file (see the sketch after this list).
  • Deploy the microservice to the cluster.
  • Test the microservice.
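
For reference, a minimal deployment.yaml sketch for a single microservice follows. The names, labels, replica count, image path, and ports are placeholders to adapt to your application and registry.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service                 # placeholder name for the microservice
    spec:
      replicas: 2                         # scale out by running more copies (concurrency)
      selector:
        matchLabels:
          app: order-service
      template:
        metadata:
          labels:
            app: order-service
        spec:
          containers:
          - name: order-service
            image: phx.ocir.io/mytenancy/order-service:1.0   # placeholder registry path
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: order-service
    spec:
      type: LoadBalancer                  # exposes the microservice outside the cluster for testing
      selector:
        app: order-service
      ports:
      - port: 80
        targetPort: 8080

You can then apply the file with kubectl apply -f deployment.yaml and verify the result with kubectl get pods and kubectl get services.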

The following diagram shows the process for deploying a containerized microservices application to Oracle Cloud Infrastructure Container Engine for Kubernetes.
