
Sidecar resource

As described in Istio’s documentation, you can inject a sidecar proxy into your workload’s pod in two main ways:

  • Automatically by labeling the namespace.
  • Manually using istioctl.

There is a third option:

  • Using the Sidecar CRD; read more here.

With the Sidecar CRD, you can precisely manage the sidecar proxy configuration. The following example declares a global default Sidecar in the istio-system namespace, which configures all sidecars to allow egress traffic only to other workloads in the same namespace and to services in the istio-system namespace.
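For orientation, a Sidecar resource implementing that policy typically looks like the following minimal sketch, modeled on Istio’s documented example; the sidecar-resource.yaml used in this lab may differ in its details:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istio-system
spec:
  egress:
  - hosts:
    - "./*"            # workloads in the proxy's own namespace
    - "istio-system/*" # services in the istio-system namespace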

  1. Study the Kubernetes YAML file sidecar-resource.yaml:

    sidecar-resource.yaml
    --8<-- "./sidecar-resource.yaml"
    

For this test, we will use the Sidecar resource to improve resource utilization in our cluster, with the bookinfo app deployed in the default namespace. We will also monitor our cluster resources using the Kubernetes cluster monitoring Grafana dashboard (ID 315).
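If Grafana was installed from Istio’s bundled addons, you can likely reach it with a port-forward and then open the dashboard with ID 315 (this assumes the addon’s default grafana service and port in istio-system; adjust for your own setup):

kubectl -n istio-system port-forward svc/grafana 3000:3000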

As explained in the architecture chapter, Istio’s control plane pushes configuration to the Envoy proxies, translating Istio’s API resources into Envoy resources. In the Envoy configuration world, we need to be aware of these four main building blocks:

  • Listeners: Named network locations, such as an IP address and port or a Unix Domain Socket path, on which Envoy accepts connections and requests.
  • Routes: Rules that match incoming requests on their metadata (URI, headers, …) and define where the traffic is sent.
  • Clusters: Groups of similar upstream hosts that accept traffic; a cluster defines a set of endpoints.
  • Endpoints: Hosts or IP addresses on which your services are listening.

With the Envoy basics covered, we can use istioctl, Istio’s preferred tool for configuration management, to inspect our Envoy proxy configuration, specifically the set of clusters associated with a proxy.
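Each building block has a matching istioctl proxy-config subcommand, so you can inspect any of them for a given pod (the pod name below is a placeholder):

istioctl proxy-config listeners <pod-name>
istioctl proxy-config routes <pod-name>
istioctl proxy-config clusters <pod-name>
istioctl proxy-config endpoints <pod-name>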

After deploying the bookinfo app in the default namespace, check the productpage pod’s Envoy clusters to count how many of them are configured:

POD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

35

Now that we know the number of clusters for the productpage pod, let’s check the memory consumption using the Grafana dashboard and take note of the memory usage for all pods:
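If you also want a command-line view, and the metrics-server is available in your cluster, a rough equivalent is to list per-container memory for the sidecars (a convenience check, not part of the lab’s Grafana workflow):

kubectl top pod -A --containers | grep istio-proxy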

Now let’s use the following script to deploy 50 sleep pods in 50 namespaces:

deploy-svc.sh
--8<-- "./deploy-svc.sh"
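For reference, a hypothetical version of such a script could look like the following; the sleep-<n> namespace naming, the injection label, and the local path to Istio’s sleep sample are assumptions, not necessarily what deploy-svc.sh actually uses:

#!/usr/bin/env bash
# Hypothetical sketch: create 50 injection-enabled namespaces and deploy
# the Istio "sleep" sample into each one.
for i in $(seq 1 50); do
  kubectl create namespace "sleep-${i}"
  kubectl label namespace "sleep-${i}" istio-injection=enabled
  kubectl apply -n "sleep-${i}" -f samples/sleep/sleep.yaml  # assumes a local Istio release directory
done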

As the services are deploying, check the memory consumption of the cluster:

After all the services have been deployed, double-check the number of clusters in the productpage Envoy proxy configuration.

istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

85

As you may have noticed, the configuration has grown considerably. Istio operates under the assumption that all proxies within the mesh should communicate with each other by default. Therefore, it generates a proportional configuration for each mesh proxy.
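To see where the growth comes from, filter the cluster list for the newly added services; this assumes the workloads deployed by the script are named sleep, as in Istio’s sample:

istioctl proxy-config clusters $POD | grep sleep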

To limit how the proxies interact with each other, we are going to apply the Sidecar resource described previously, which restricts that interaction to the istio-system namespace and the proxy’s own namespace:

kubectl apply -f sidecar-resource.yaml
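Before re-counting, you can verify that the Sidecar resource exists in the root namespace:

kubectl get sidecar -n istio-system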

After applying this resource, double-check the number of clusters for the productpage Envoy proxy:

istioctl proxy-config clusters $POD | grep -c ".*"

Your expected result:

34
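The count alone does not show which clusters remain; listing them should now reveal only services in the default and istio-system namespaces, plus Envoy’s built-in clusters such as BlackHoleCluster and PassthroughCluster:

istioctl proxy-config clusters $POD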

The lower number indicates we only have cluster configuration for services in the istio-system namespace and the proxy’s own namespace. Check the memory after applying the resource:

Even though the sidecar resource served its purpose, the memory is still allocated, as shown above. You can either wait and observe how the memory is reclaimed or forcefully purge it by executing this script:

purge-mem.sh
--8<-- "./purge-mem.sh"

The script above goes through the sleep and bookinfo pods and clears the memory usage shown on the Grafana dashboard:
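If you prefer to trigger the purge yourself, one plausible approach, offered here only as an assumption about how such a purge can work rather than a transcript of purge-mem.sh, is to restart the workloads so their sidecars start with a clean slate (the sleep-<n> namespace naming matches the earlier sketch):

#!/usr/bin/env bash
# Hypothetical sketch: restart the bookinfo and sleep workloads so their
# sidecars are recreated and the previously allocated memory is released.
kubectl rollout restart deployment -n default
for i in $(seq 1 50); do
  kubectl rollout restart deployment -n "sleep-${i}"
done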

For cleanup, run the following script:

delete-svc.sh
--8<-- "./delete-svc.sh"
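As with the deploy script, a hypothetical cleanup would simply remove the namespaces created earlier (again assuming the sleep-<n> naming from the sketch above):

#!/usr/bin/env bash
# Hypothetical sketch: delete the namespaces created for the sleep pods.
for i in $(seq 1 50); do
  kubectl delete namespace "sleep-${i}" --wait=false
done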