The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

Chapter 3 Using a Service Mesh

Istio automatically populates its service registry with all services you create in the service mesh, so it knows all possible service endpoints. By default, the Envoy proxy sidecars manage traffic by sending requests to each service instance in turn in a round-robin fashion. You can configure the management of this traffic to suit your own application requirements using the Istio traffic management APIs. The APIs are accessed using Kubernetes custom resource definitions (CRDs), which you set up and deploy using YAML files.

The Istio traffic management API features available are:

  • Virtual services: Configure request routing to services within the service mesh. Each virtual service can contain a series of routing rules that are evaluated in order.

  • Destination rules: Configure the destination of routing rules within a virtual service. Destination rules are evaluated and applied after the virtual service routing rules, for example, to route traffic to a particular version of a service (see the sketch after this list).

  • Gateways: Configure inbound and outbound traffic for services in the mesh. Gateways are configured as standalone Envoy proxies, running at the edge of the mesh. An ingress and an egress gateway are deployed automatically when you install the Istio module.

  • Service entries: Add services that are outside the service mesh to the Istio service registry, which allows you to manage traffic to them as if they were in the mesh. Services in the mesh are added to the service registry automatically; service entries let you bring in outside services.

  • Sidecars: Configure sidecar proxies to set the ports, protocols, and services to which a microservice can connect.
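
For example, the following sketch shows how a virtual service and a destination rule work together: the virtual service routes all HTTP traffic for a service to a subset, and the destination rule defines that subset by a pod label. The my-service name, the my-namespace namespace, and the version: v2 label are assumptions for this illustration only:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-virtualservice
  namespace: my-namespace
spec:
  hosts:
  - my-service
  http:
  # Route all HTTP traffic for my-service to the v2 subset.
  - route:
    - destination:
        host: my-service
        subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-destinationrule
  namespace: my-namespace
spec:
  host: my-service
  # Define the v2 subset as the pods carrying the version: v2 label.
  subsets:
  - name: v2
    labels:
      version: v2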

These Istio traffic management APIs are well documented in the upstream documentation at:

https://istio.io/docs/concepts/traffic-management/

3.1 Enabling Proxy Sidecars

Istio enables network communication between services to be abstracted from the services themselves and to instead be handled by proxies. Istio uses a sidecar design, which means that communication proxies run in their own containers alongside every service container.

To enable the use of a service mesh in your Kubernetes applications, you need to enable automatic proxy sidecar injection. This injects proxy sidecar containers into pods you create.

To put automatic sidecar injection into effect, the namespace to be used by an application must be labeled with istio-injection=enabled. For example, to enable automatic sidecar injection for the default namespace:

kubectl label namespace default istio-injection=enabled
namespace/default labeled
kubectl get namespace -L istio-injection
NAME                           STATUS   AGE   ISTIO-INJECTION
default                        Active   29h   enabled
externalip-validation-system   Active   29h
istio-system                   Active   29h
kube-node-lease                Active   29h
kube-public                    Active   29h
kube-system                    Active   29h
kubernetes-dashboard           Active   29h

Any application deployed into the default namespace has automatic sidecar injection enabled, and the proxy sidecar runs alongside the application container in the pod. For example, create a simple NGINX deployment:

kubectl create deployment --image nginx hello-world
deployment.apps/hello-world created

Show the details of the pod to see that an istio-proxy container is also deployed with the application:

kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-5fcdb6bc85-wph7h   2/2     Running   0          7m40s
kubectl describe pods hello-world-5fcdb6bc85-wph7h
...
  Normal  Started  13s  kubelet, worker1.example.com  Started container nginx
  Normal  Started  12s  kubelet, worker1.example.com  Started container istio-proxy
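
As a quicker check, you can list only the container names in the pod. The pod name shown here is from the example above and differs in your deployment:

kubectl get pod hello-world-5fcdb6bc85-wph7h -o jsonpath='{.spec.containers[*].name}'
nginx istio-proxy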

3.2 Setting up a Load Balancer for an Ingress Gateway

If you are deploying the Istio module, you may also want to set up a load balancer to handle the Istio ingress gateway traffic. This section shows you how to set up a load balancer to manage access to services from outside the cluster using the Istio ingress gateway.

The load balancer port mapping in this section sets ports for HTTP and HTTPS. That is, the load balancer listens for HTTP traffic on port 80 and redirects it to the Istio ingress gateway NodePort number for http2. To find the port number to set for http2, enter the following on a control plane node:

kubectl describe svc istio-ingressgateway -n istio-system |grep http2
Port:      http2  80/TCP
NodePort:  http2  32681/TCP

In this example, the NodePort is 32681. So the load balancer must be configured to listen for HTTP traffic on port 80 and redirect it to the istio-ingressgateway service on port 32681.

For HTTPS traffic, the load balancer listens on port 443 and redirects it to the Istio ingress gateway NodePort number for https. To find the port numbers to set for https, enter:

kubectl describe svc istio-ingressgateway -n istio-system |grep https
Port:      https  443/TCP
NodePort:  https  31941/TCP

In this example, the NodePort is 31941. So the load balancer must be configured to listen for HTTPS traffic on port 443 and redirect it to the istio-ingressgateway service on port 31941.
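
If you prefer to extract only the port numbers, for example for use in scripts, a jsonpath query such as the following should also work. The output values shown are the example NodePorts used in this section:

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
32681
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
31941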

The load balancer should be set up with the following configuration for HTTP traffic (a configuration sketch follows these lists):

  • The listener listening on TCP port 80.

  • The distribution set to round robin.

  • The target set to the TCP port for http2 on the worker nodes. In this example it is 32681.

  • The health check set to TCP.

For HTTPS traffic:

  • The listener listening on TCP port 443.

  • The distribution set to round robin.

  • The target set to the TCP port for https on the worker nodes. In this example it is 31941.

  • The health check set to TCP.
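
For example, if you use HAProxy as your load balancer, a minimal configuration implementing this port mapping might look like the following sketch. The worker node host names and the NodePort numbers are assumptions based on the examples in this section; substitute the values for your own cluster (global and defaults sections are omitted):

# HTTP: listen on port 80 and forward to the http2 NodePort on the workers.
frontend istio-http
    bind *:80
    mode tcp
    default_backend istio-http-workers

backend istio-http-workers
    mode tcp
    balance roundrobin
    server worker1 worker1.example.com:32681 check
    server worker2 worker2.example.com:32681 check

# HTTPS: listen on port 443 and forward to the https NodePort on the workers.
frontend istio-https
    bind *:443
    mode tcp
    default_backend istio-https-workers

backend istio-https-workers
    mode tcp
    balance roundrobin
    server worker1 worker1.example.com:31941 check
    server worker2 worker2.example.com:31941 check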

For more information on setting up your own load balancer, see Oracle® Linux 8: Setting Up Load Balancing, or Oracle® Linux 7: Administrator's Guide.

If you are deploying to Oracle Cloud Infrastructure, you can either set up a new load balancer or, if you have one, use the load balancer you set up for the Kubernetes module.

To set up a load balancer on Oracle Cloud Infrastructure for HTTP traffic:
  1. Add a backend set to the load balancer using weighted round robin.

  2. Add the worker nodes to the backend set. Set the port for the worker nodes to the TCP port for http2. In this example it is 32681.

  3. Create a listener for the backend set using TCP port 80.

To set up a load balancer on Oracle Cloud Infrastructure for HTTPS traffic:
  1. Add a backend set to the load balancer using weighted round robin.

  2. Add the worker nodes to the backend set. Set the port for the worker nodes to the TCP port for https. In this example it is 31941.

  3. Create a listener for the backend set using TCP port 443.

For more information on setting up a load balancer in Oracle Cloud Infrastructure, see the Oracle Cloud Infrastructure documentation.

3.3 Setting up an Ingress Gateway

An Istio ingress gateway allows you to define entry points into the service mesh through which all incoming traffic flows. An ingress gateway allows you to manage access to services from outside the cluster. You can monitor and set routing rules for the traffic entering the cluster.

This section contains a simple example that configures the automatically created ingress gateway for an NGINX web server application. The example assumes you have a load balancer available at lb.example.com that connects to the istio-ingressgateway service on TCP port 32681. The load balancer listener is set to listen on HTTP port 80, which is the port for the NGINX web server application used in the virtual service in this example.

To set up an ingress gateway:
  1. Create a deployment file for the NGINX web server application. Create a file named my-nginx.yml containing:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: my-webserver
      name: my-nginx
      namespace: my-namespace
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-webserver
      template:
        metadata:
          labels:
            app: my-webserver
        spec:
          containers:
          - image: nginx
            name: my-nginx
            ports:
            - containerPort: 80
  2. Create a service for the deployment. Create a file named my-nginx-service.yml containing:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-http-ingress-service
      namespace: my-namespace
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: my-webserver
      type: ClusterIP
  3. Create an ingress gateway for the service. Create a file named my-nginx-gateway.yml containing:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: my-nginx-gateway
      namespace: my-namespace
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
          - "mynginx.example.com"
  4. Create a virtual service for the ingress gateway. Create a file named my-nginx-virtualservice.yml containing:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: my-nginx-virtualservice
      namespace: my-namespace
    spec:
      hosts:
      - "mynginx.example.com"
      gateways:
      - my-nginx-gateway
      http:
      - match:
        - uri:
            prefix: /
        route:
        - destination:
            port:
              number: 80
            host: my-http-ingress-service
  5. Create a namespace for the application, named my-namespace, and enable automatic proxy sidecar injection:

    kubectl create namespace my-namespace
    kubectl label namespaces my-namespace istio-injection=enabled
  6. Apply the deployment, service, ingress gateway, and virtual service files:

    kubectl apply -f my-nginx.yml 
    kubectl apply -f my-nginx-service.yml 
    kubectl apply -f my-nginx-gateway.yml 
    kubectl apply -f my-nginx-virtualservice.yml
  7. You can see the ingress gateway is running using:

    kubectl get gateways.networking.istio.io -n my-namespace
    NAME               AGE
    my-nginx-gateway   33s
  8. You can see the virtual service is running using:

    kubectl get virtualservices.networking.istio.io -n my-namespace
    NAME                      GATEWAYS             HOSTS                   AGE
    my-nginx-virtualservice   [my-nginx-gateway]   [mynginx.example.com]   107s
  9. To confirm the ingress gateway is serving the application to the load balancer, use:

    curl -I -HHost:mynginx.example.com lb.example.com:80/
    HTTP/1.1 200 OK
    Date: Fri, 06 Mar 2020 00:39:16 GMT
    Content-Type: text/html
    Content-Length: 612
    Connection: keep-alive
    last-modified: Tue, 03 Mar 2020 14:32:47 GMT
    etag: "5e5e6a8f-264"
    accept-ranges: bytes
    x-envoy-upstream-service-time: 15

3.4 Setting up an Egress Gateway

The Istio egress gateway allows you to set up access to external HTTP and HTTPS services from applications inside the service mesh. External services are called using the sidecar container.

The Istio egress gateway is deployed automatically. You do not need to manually deploy it. You can confirm the Istio egress gateway service is running using:

kubectl get svc istio-egressgateway -n istio-system
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
istio-egressgateway   ClusterIP   10.111.233.121   <none>        80/TCP,443/TCP,15443/TCP   9m26s
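
For example, a service entry such as the following minimal sketch adds an external host to the Istio service registry so that traffic to it can be managed like traffic to services in the mesh. The host and resource names shown are placeholders for this illustration:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-example
  namespace: my-namespace
spec:
  # External host to register with the service mesh.
  hosts:
  - www.example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  # Resolve the host using DNS.
  resolution: DNS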

The upstream documentation provides an example that shows you how to set up and use an Istio egress gateway:

https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/

3.5 Testing Network Resilience

Istio network resilience and testing features allow you to set up failure recovery and to inject faults to test that recovery. You set up these features dynamically at runtime to improve the reliability of your applications in the service mesh. The network resilience and testing features available in this release are:

  • Timeouts: The amount of time that a sidecar proxy should wait for replies from a service. You can set up a virtual service to configure specific timeouts for a service. The default timeout for HTTP requests is 15 seconds.

  • Retries: The number of retries allowed by the sidecar proxy to connect to a service after an initial connection failure. You can set up a virtual service to enable and configure the number of retries for a service. By default, no retries are allowed.

  • Fault injection: Set up fault injection mechanisms to test the failure recovery capacity of applications. You can configure a virtual service to inject faults into a service: delays to mimic network latency or an overloaded upstream service, and aborts to mimic crashes in an upstream service (see the sketches after this list).
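
To illustrate, the following are minimal sketches of virtual services using these features. The first sets a timeout and retries for the my-http-ingress-service service used earlier in this chapter; the second injects a fixed delay into a percentage of requests to a hypothetical my-other-service. The values shown are illustrative only:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-timeout-retries
  namespace: my-namespace
spec:
  hosts:
  - my-http-ingress-service
  http:
  - route:
    - destination:
        host: my-http-ingress-service
    # Wait at most 10 seconds for a reply instead of the 15 second default.
    timeout: 10s
    # Allow up to 3 retries, each waiting at most 2 seconds.
    retries:
      attempts: 3
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-fault-injection
  namespace: my-namespace
spec:
  hosts:
  - my-other-service
  http:
  # Delay 10% of requests by a fixed 5 seconds to mimic network latency.
  - fault:
      delay:
        percentage:
          value: 10
        fixedDelay: 5s
    route:
    - destination:
        host: my-other-service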