Traffic Shifting Lab

Version 2 of the customers service is complete and ready to deploy to production. Version 1 presents a list of customer names; version 2 adds each customer's city.

Deploying customers, v2

We want to deploy the new service without immediately directing production traffic to it.

It is advisable to separate the task of deploying the new service from the task of directing traffic to it.

Labels

The customers service is labeled with app=customers.

Verify this with:

kubectl get pod -Lapp,version

Note the selector on the customers service:

kubectl get svc customers -o wide

If we deploy v2, the selector will match both versions.

DestinationRules

We can inform Istio that two distinct subsets of the customers service exist, and we can use the version label as the discriminator.

customers-destinationrule.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  1. Apply the above destination rule to the cluster.

  2. Verify that it has been applied.

    kubectl get destinationrule
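To build intuition for what the destination rule does, here is a minimal Python sketch (not Istio's actual implementation; the pod names are hypothetical) of how the subset labels partition the pods already matched by the service's app=customers selector:

```python
# Simplified model: a Service selector picks the pods, and each
# DestinationRule subset further narrows them by the version label.
SERVICE_SELECTOR = {"app": "customers"}
SUBSETS = {"v1": {"version": "v1"}, "v2": {"version": "v2"}}

# Hypothetical pods, labeled the way the deployments in this lab label them.
pods = [
    {"name": "customers-v1-abc", "labels": {"app": "customers", "version": "v1"}},
    {"name": "customers-v2-def", "labels": {"app": "customers", "version": "v2"}},
]

def matches(labels, selector):
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def endpoints(subset_name):
    """Pods matched by the Service selector AND the subset's labels."""
    return [p["name"] for p in pods
            if matches(p["labels"], SERVICE_SELECTOR)
            and matches(p["labels"], SUBSETS[subset_name])]

print(endpoints("v1"))  # ['customers-v1-abc']
print(endpoints("v2"))  # ['customers-v2-def']
```

Without the subsets, the service selector alone would treat both pods as interchangeable endpoints; the version label is what lets Istio tell them apart.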
    

VirtualServices

With the VirtualService custom resource, we can set up a routing rule that directs all traffic to the v1 subset. Because the destination rule defines two distinct subsets, we retain the flexibility to shift traffic between them later.

customers-virtualservice.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1

Above, note how the route specifies subset v1.

  1. Apply the virtual service to the cluster.

  2. Verify that it’s been applied.

    kubectl get virtualservice
    

Finally, deploy customers, v2

Apply the following Kubernetes deployment to the cluster.

customers-v2.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      serviceAccountName: customers
      containers:
        - image: gcr.io/tetratelabs/customers:2.0.0
          imagePullPolicy: Always
          name: svc
          ports:
            - containerPort: 3000

Check that traffic routes strictly to v1

  1. Generate traffic.

    siege --delay=3 --concurrent=3 --time=20M http://$GATEWAY_IP/
    
  2. Open a separate terminal, launch the Kiali dashboard, and open a new browser tab to kiali.bigbang.dev.

In Kiali, open the Graph view and select the default namespace.

The graph should show all traffic going to v1.

Route to customers, v2

Proceed with caution: before customers see version 2, we need to ensure the service functions properly.

Expose “debug” traffic to v2

Review this proposed updated routing specification.

customers-v2-debug.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - match:
    - headers:
        user-agent:
          exact: debug
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1

We are telling Istio to check an HTTP header: if the user-agent is set to debug, the request is routed to v2; otherwise, it is routed to v1.
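The rule evaluation order matters: Istio checks the http routes top to bottom, and the first matching rule wins. A minimal Python sketch of that logic (illustrative only, not Istio's implementation):

```python
# The VirtualService above, reduced to a list of rules evaluated in order.
# A rule with no match clause acts as the catch-all.
ROUTES = [
    {"match": {"user-agent": "debug"}, "subset": "v2"},
    {"match": None, "subset": "v1"},
]

def route(headers):
    """Return the subset for the first rule whose match clause succeeds."""
    for rule in ROUTES:
        match = rule["match"]
        if match is None or all(headers.get(k) == v for k, v in match.items()):
            return rule["subset"]

print(route({"user-agent": "debug"}))    # v2
print(route({"user-agent": "Mozilla"}))  # v1
```

Note that if the catch-all rule were listed first, it would shadow the debug rule entirely, which is why the specific match comes before the default route in the YAML.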

Open a new terminal and apply the above resource to the cluster. Applying it overwrites the currently defined VirtualService, since both YAML files use the same resource name.

kubectl apply -f customers-v2-debug.yaml

Testing

Open a browser and visit the application.

If you need to recapture the gateway IP:

GATEWAY_IP=$(kubectl get svc -n istio-system istio-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[0].ip}')

The main difference is that v2 splits customer names and cities into two separate columns.

If you’re using Chrome or Firefox, you can customize the user-agent header as follows:

  1. Open the browser’s developer tools by pressing F12.
  2. Open the “three dots” menu and select More tools -> Network conditions.
  3. The network conditions panel will open.
  4. Under User agent, uncheck Use browser default.
  5. Select Custom… and in the text field enter debug.

Refresh the page. Traffic should direct to v2.

Tip

Run siege again with the debug user-agent and wait 15-30 seconds; you should see some of that v2 traffic in Kiali:

siege --delay=3 --concurrent=3 --time=20M --user-agent=debug http://$GATEWAY_IP/

Canary

The outlook for v2 is good, so we can proceed with its release to the public.

Start by siphoning 10% of traffic over to v2.

customers-v2-canary.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 10
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 90

Above, note the weight fields sending 10% of traffic to v2 and 90% to v1.

  • Apply the above resource.
  • In your browser: undo the user-agent customization and refresh the page several times.

Most requests still go to v1, but some are now directed to v2. Kiali should show traffic going to both v1 and v2.
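To see why only roughly one request in ten lands on v2, here is a small Python simulation of weighted routing (an approximation of what the sidecar proxies do, not Istio's actual algorithm):

```python
import random

# Weights from the canary VirtualService: 10% v2, 90% v1.
WEIGHTS = {"v2": 10, "v1": 90}

def pick_subset(rng):
    """Weighted random choice among the subsets."""
    return rng.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]

rng = random.Random(42)  # seeded so the simulation is reproducible
hits = [pick_subset(rng) for _ in range(10_000)]
v2_share = hits.count("v2") / len(hits)
print(f"v2 share: {v2_share:.1%}")  # close to 10%
```

With only a handful of manual browser refreshes, you may not see an exact 90/10 split; the ratio converges over many requests, which is why the siege traffic gives Kiali a clearer picture.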

Check Grafana

Before opening the floodgates, we need to assess how v2 performs. Open grafana.bigbang.dev in a new browser tab.

In Grafana, visit the Istio Workload Dashboard and specifically look at the customers v2 workload. Look at the request rate, incoming success rate, and latencies.

If everything looks good, shift the traffic split from 90/10 to 50/50.

Observe the changes in request volume by clicking the “refresh dashboard” button in the upper right-hand corner.

Finally, switch all traffic over to v2.

customers-virtualservice-final.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2

After you apply the above YAML, go to your browser and make sure all requests land on v2 (two-column output). Within a minute, the Kiali dashboard should also reflect that all traffic goes to the customers v2 service.

Though it no longer receives traffic, we leave v1 running for some time before retiring it.