
The Application Lab

In this lab, you will deploy an application to your mesh.

  • The application consists of two microservices, web-frontend and customers.

Tip

The canonical example in the official Istio docs is the BookInfo application.

For this workshop, we felt that an application with fewer microservices would be clearer.

  • The customers service exposes a REST endpoint that returns a list of customers in JSON format. The web-frontend service calls customers to retrieve the list, which it renders as an HTML table.

  • The respective Docker images for these services have already been built and pushed to a Docker registry.

  • You will deploy the application to the default Kubernetes namespace.
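As an aside, the interaction between the two services can be sketched in a few lines. The sketch below is hypothetical (the real web-frontend is not shown here, and the `name` field is an assumption), but it captures the flow: customers returns JSON, and web-frontend turns it into an HTML table.

```python
import json

# Hypothetical sketch of the web-frontend logic: take the JSON list that
# the customers service returns and render it as rows of an HTML table.
# The "name" field is an assumption, not taken from the real service.
def render_customers(raw_json: str) -> str:
    customers = json.loads(raw_json)
    rows = "".join(f"<tr><td>{c['name']}</td></tr>" for c in customers)
    return f"<table><tr><th>Name</th></tr>{rows}</table>"

# A payload in the shape the customers service might return:
payload = '[{"name": "Ada"}, {"name": "Grace"}]'
print(render_customers(payload))
```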

Before proceeding, we must enable sidecar injection.

Enable automatic sidecar injection

There are two options for sidecar injection: automatic and manual.

In this lab, we will use automatic injection, which involves labeling the pods’ namespace.

  1. Label the default namespace.

    kubectl label namespace default istio-injection=enabled
    
  2. Verify that the label is applied:

    kubectl get ns -L istio-injection
    
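To make the injection rule concrete, here is a simplified model of the decision (an illustration only, not Istio's actual source code): the namespace label opts every pod in, and an individual pod can still opt out with the `sidecar.istio.io/inject: "false"` label.

```python
# Simplified model of the automatic injection decision. Istio's real
# logic considers more inputs (revision labels, the webhook's
# namespaceSelector, etc.); this sketch covers only the common case
# exercised in this lab.
def should_inject(ns_labels: dict, pod_labels: dict) -> bool:
    if pod_labels.get("sidecar.istio.io/inject") == "false":
        return False  # per-pod opt-out takes precedence
    return ns_labels.get("istio-injection") == "enabled"

print(should_inject({"istio-injection": "enabled"}, {}))  # True
print(should_inject({}, {}))                              # False
```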

Deploy the application

  1. Study the two Kubernetes YAML files: web-frontend.yaml and customers.yaml.

    web-frontend.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: web-frontend
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
      labels:
        app: web-frontend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
            version: v1
        spec:
          serviceAccountName: web-frontend
          containers:
            - image: gcr.io/tetratelabs/web-frontend:1.0.0
              imagePullPolicy: Always
              name: web
              ports:
                - containerPort: 8080
              env:
                - name: CUSTOMER_SERVICE_URL
                  value: "http://customers.default.svc.cluster.local"
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: web-frontend
      labels:
        app: web-frontend
    spec:
      selector:
        app: web-frontend
      ports:
        - port: 80
          name: http
          targetPort: 8080
    
    customers.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: customers
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: customers-v1
      labels:
        app: customers
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: customers
          version: v1
      template:
        metadata:
          labels:
            app: customers
            version: v1
        spec:
          serviceAccountName: customers
          containers:
            - image: gcr.io/tetratelabs/customers:1.0.0
              imagePullPolicy: Always
              name: svc
              ports:
                - containerPort: 3000
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: customers
      labels:
        app: customers
    spec:
      selector:
        app: customers
      ports:
        - port: 80
          name: http
          targetPort: 3000
    

    Each file defines its corresponding deployment, service account, and ClusterIP service.
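Note how web-frontend discovers customers: the deployment sets the CUSTOMER_SERVICE_URL environment variable to the in-cluster DNS name of the customers service. A sketch of how an application typically consumes such a variable (hypothetical; the real service's code is not shown here, and the local-development default is an assumption):

```python
import os

# The deployment sets CUSTOMER_SERVICE_URL to the in-cluster DNS name of
# the customers service. The app reads it at startup; the fallback value
# here (for running outside the cluster) is a made-up assumption.
os.environ.setdefault("CUSTOMER_SERVICE_URL", "http://localhost:3000")
customer_service_url = os.environ["CUSTOMER_SERVICE_URL"]
print(customer_service_url)
```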

  2. Apply the two files to your Kubernetes cluster.

    kubectl apply -f customers.yaml
    
    kubectl apply -f web-frontend.yaml
    

Confirm that:

  • Two pods are running, one for each service.
  • Each pod consists of two containers: one running the service image, plus the Envoy sidecar.

    kubectl get pod
    

Question: How did each pod end up with two containers?

Istio installs a Kubernetes object known as a [mutating webhook admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/): logic that intercepts Kubernetes object creation requests and that has permission to alter (mutate) what ends up stored in etcd, in this case the pod spec.

You can list the mutating webhooks in your Kubernetes cluster and confirm that the sidecar injector is present.

```{.shell .language-shell}
kubectl get mutatingwebhookconfigurations
```
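For a sense of what the webhook actually sends back to the API server, the sketch below builds the kind of admission response a mutating webhook returns: a base64-encoded JSONPatch. The injected container here is drastically simplified (a real istio-proxy sidecar carries an image reference, args, volume mounts, and much more); it is illustrative only.

```python
import base64
import json

# Build a minimal admission-response body: "patch" is a base64-encoded
# JSONPatch that appends a container to the pod's containers list.
def build_admission_response() -> dict:
    patch = [{
        "op": "add",
        "path": "/spec/containers/-",      # append to the containers list
        "value": {"name": "istio-proxy"},  # heavily simplified sidecar
    }]
    return {
        "allowed": True,
        "patchType": "JSONPatch",
        "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
    }

resp = build_admission_response()
decoded = json.loads(base64.b64decode(resp["patch"]))
print(decoded[0]["path"])  # /spec/containers/-
```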

Verify access to each service

To check the reachability of services within the cluster, we deploy a pod that runs a curl image. The Istio distribution provides sleep, a sample app for exactly this purpose.

  1. Deploy sleep to the default namespace.

    sleep.yaml
    # Copyright Istio Authors
    #
    #   Licensed under the Apache License, Version 2.0 (the "License");
    #   you may not use this file except in compliance with the License.
    #   You may obtain a copy of the License at
    #
    #       http://www.apache.org/licenses/LICENSE-2.0
    #
    #   Unless required by applicable law or agreed to in writing, software
    #   distributed under the License is distributed on an "AS IS" BASIS,
    #   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    #   See the License for the specific language governing permissions and
    #   limitations under the License.
    
    ##################################################################################################
    # Sleep service
    ##################################################################################################
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image: curlimages/curl
            command: ["/bin/sleep", "infinity"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
    ---
    
    kubectl apply -f sleep.yaml
    
  2. Capture the name of the sleep pod to an environment variable.

    SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
    
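The jsonpath expression simply walks the JSON document that `kubectl get pod -o json` produces. The equivalent traversal in plain Python, against a trimmed-down sample of that output (the pod name below is a made-up placeholder):

```python
import json

# A trimmed-down sample of `kubectl get pod -l app=sleep -o json` output;
# the pod name is a made-up placeholder.
sample = json.loads('{"items": [{"metadata": {"name": "sleep-12345-abcde"}}]}')

# Equivalent of the jsonpath expression {.items[0].metadata.name}:
pod_name = sample["items"][0]["metadata"]["name"]
print(pod_name)  # sleep-12345-abcde
```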
  3. Use the kubectl exec command to call the customers service.

    kubectl exec $SLEEP_POD -it -- curl customers
    

    The console output should show a list of customers in JSON format.

  4. Call the web-frontend service.

    kubectl exec $SLEEP_POD -it -- curl web-frontend | head
    

    The console output should show the start of an HTML page listing customers in an HTML table.

Next

Now that we understand the interaction between the control plane and the data plane, let's explore exposing our mesh to inbound traffic using an ingress gateway.