
Sidecar resource

As described in Istio’s documentation, you can inject a sidecar proxy into your workload’s pod in two main ways:

  • Automatically by labeling the namespace.
  • Manually using istioctl (see the sketch after this list).

There is a third option:

  • Using the Sidecar CRD; read more here.
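
For reference, the first two options look roughly like this. This is a minimal sketch that assumes a workload manifest named deployment.yaml and the default namespace; the third option is the sidecar-resource.yaml shown in the next step:

# Option 1: label the namespace so newly created pods get the sidecar injected automatically
kubectl label namespace default istio-injection=enabled
kubectl rollout restart deployment -n default   # restart existing workloads to pick up the sidecar

# Option 2: inject the sidecar into a manifest manually with istioctl
istioctl kube-inject -f deployment.yaml | kubectl apply -f -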

With the CRD, you can precisely manage the sidecar configuration. The following example declares a global default Sidecar in the istio-system namespace. It configures all sidecars to allow egress traffic only to other workloads in the same namespace and to services in the istio-system namespace.

  1. Study the Kubernetes YAML file sidecar-resource.yaml:

    sidecar-resource.yaml
    apiVersion: networking.istio.io/v1beta1
    kind: Sidecar
    metadata:
      name: default
      namespace: istio-system
    spec:
      egress:
      - hosts:
        # See everything on my namespace
        - "./*"
        # See everything on istio-system ns
        - "istio-system/*"
    

For this test, we will use the Sidecar resource to improve resource utilization in our cluster, with the bookinfo app deployed in the default namespace. We will also monitor our cluster resources using the Kubernetes cluster monitoring Grafana dashboard (315).
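
If Grafana was installed through the Istio addons, one quick way to open it (assuming the grafana service runs in istio-system on port 3000) is:

istioctl dashboard grafana
# or port-forward the service and open http://localhost:3000
kubectl -n istio-system port-forward svc/grafana 3000:3000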

As explained in the architecture chapter, Istio’s control plane pushes configuration to the Envoy proxies. This configuration is translated from Istio’s API resources into Envoy resources. In the Envoy configuration world, we need to be aware of these four main building blocks:

  • Listeners: named network locations, such as an IP address and port or a Unix Domain Socket path, on which Envoy accepts connections and requests.
  • Routes: match incoming requests by inspecting their metadata (URI, headers, …) and define where traffic is sent based on that.
  • Clusters: groups of similar upstream hosts that accept traffic; a cluster defines a group of endpoints.
  • Endpoints: the hosts or IP addresses on which your services are listening.

Now that the Envoy basics are covered, we can use istioctl, Istio’s preferred tool for configuration management, to inspect our Envoy proxy configuration, specifically the set of clusters associated with a proxy.
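
Each of these building blocks can be inspected with its own istioctl proxy-config subcommand against any sidecar-injected pod, for example:

istioctl proxy-config listeners <pod-name>    # listeners the proxy accepts traffic on
istioctl proxy-config routes <pod-name>       # route tables mapping requests to clusters
istioctl proxy-config clusters <pod-name>     # upstream clusters the proxy knows about
istioctl proxy-config endpoints <pod-name>    # resolved endpoints behind each cluster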

After deploying the bookinfo app in the default namespace, check the Envoy clusters to count how many are configured for the productpage pod:

POD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

35

Knowing the number of clusters configured for the productpage pod, let’s check the memory consumption using the Grafana dashboard and take note of the memory usage for all pods:
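
If you prefer a command-line snapshot over the dashboard, something like the following also works (assuming metrics-server is installed in the cluster):

# Memory usage per container, filtered down to the sidecar proxies
kubectl top pod --containers -A | grep istio-proxy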

Now let’s use the following script to deploy 50 sleep pods in 50 namespaces:

deploy-svc.sh
#!/bin/bash
for i in {1..50}
do
   echo "Deploying sleep svc at sleep-ns-$i"
   kubectl create ns sleep-ns-$i
   kubectl label ns sleep-ns-$i istio-injection=enabled --overwrite
   kubectl -n sleep-ns-$i apply -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
#    kubectl scale --replicas=3 deployment/sleep -n sleep-ns-$i
done
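
While the script runs (or once it finishes), you can confirm that the sleep pods are coming up with their sidecars injected; for example:

# Each sleep pod should eventually report 2/2 containers (sleep plus the injected istio-proxy)
kubectl get pods -A -l app=sleep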

As the services are deploying, check the memory consumption of the cluster:

After all the services have been deployed, double-check the number of clusters in the productpage Envoy proxy configuration.

istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

85

As you may have noticed, the configuration has grown considerably. Istio operates under the assumption that all proxies within the mesh should be able to communicate with each other by default, so it generates configuration for each proxy that grows in proportion to the total number of services in the mesh.
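
To get a feel for how large the proxy configuration has become, you can also measure the size of the full Envoy config dump. This is a minimal sketch, assuming the pilot-agent admin helper is available inside the istio-proxy container:

# Size (in bytes) of the full Envoy configuration held by the productpage proxy
kubectl exec $POD -c istio-proxy -- pilot-agent request GET config_dump | wc -c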

To limit how the proxies interact with each other, we are going to apply the Sidecar resource described previously, which restricts each proxy’s egress configuration to the istio-system namespace and the proxy’s own namespace:

kubectl apply -f sidecar-resource.yaml

After applying this resource, double-check the number of clusters for the productpage Envoy proxy:

istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

34

The lower number indicates that we only have cluster configuration for the istio-system namespace and the proxy’s own namespace. Check the memory after applying the resource:

Even though the sidecar resource served its purpose, the memory is still allocated, as shown above. You can either wait and observe how the memory is reclaimed or forcefully purge it by executing this script:

purge-mem.sh
#!/bin/bash
# Signal every process in the sleep pods' istio-proxy containers so the allocated memory is released
for i in {1..50}
do
    echo "Purging pod's mem at sleep-ns-$i"
    SLEEP_PODS=$(kubectl get pods -n sleep-ns-$i | grep -i running | awk '{ print $1 }')
    for pod in $SLEEP_PODS; do
        kubectl exec $pod -n sleep-ns-$i -c istio-proxy -- /sbin/killall5
        echo $pod "mem purged"
    done
done
# Do the same for the bookinfo pods in the default namespace
BOOKINFO_PODS=$(kubectl get pod -n default | grep -i running | awk '{ print $1 }')
for pod in $BOOKINFO_PODS; do
    kubectl exec $pod -c istio-proxy -- /sbin/killall5
    echo $pod "mem purged"
done

The script above iterates over the sleep and bookinfo pods and signals the processes in their istio-proxy containers, releasing the allocated memory shown on the Grafana dashboard:
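
If you would rather not signal the proxy processes directly, a rolling restart achieves a similar effect, since replacement pods start with the smaller, Sidecar-scoped configuration; a sketch:

# Alternative: restart the deployments instead of signalling processes in place
for i in {1..50}; do
    kubectl rollout restart deployment sleep -n sleep-ns-$i
done
kubectl rollout restart deployment -n default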

For cleanup, run the following script:

delete-svc.sh
#!/bin/bash
for i in {1..50}
do
   echo "Deleting sleep svc at sleep-ns-$i"
   kubectl delete svc sleep -n sleep-ns-$i
   kubectl delete deployment sleep -n sleep-ns-$i
   kubectl delete ns sleep-ns-$i --force
done
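
If you also want to undo the mesh-wide default introduced in this section, remove the Sidecar resource as well:

kubectl delete -f sidecar-resource.yaml
# equivalently: kubectl delete sidecar default -n istio-system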