# Security Lab
In this lab we explore some of the security features of the Istio service mesh.
## Mutual TLS
By default, Istio is configured such that when a service is deployed onto the mesh, it will take advantage of mutual TLS:
- the service is given an identity as a function of its associated service account and namespace
- an X.509 certificate is issued to the workload (and regularly rotated) and used to identify the workload in calls to other services
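This identity is encoded as a SPIFFE URI in the certificate's subject alternative name. Assuming the default trust domain `cluster.local` (a mesh-configurable value), the identity follows this pattern:

```text
spiffe://cluster.local/ns/<namespace>/sa/<service-account>
```

For example, a workload running under the `customers` service account in the `default` namespace would be identified as `spiffe://cluster.local/ns/default/sa/customers` (assuming that service account name).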
??? info

    It is important to note that this only happens if your workload has an Envoy sidecar proxy, whether it was injected automatically or manually: the Envoy proxy is what enables mTLS in Istio. Workloads without an Envoy proxy will not get mTLS.
In the observability lab, we looked at the Kiali dashboard and noted the lock icons indicating that traffic was secured with mTLS.
### Can a workload receive plain-text requests?
We can test whether a mesh workload, such as the customers service, will allow a plain-text request as follows:
- Create a separate namespace that is not configured with automatic sidecar injection:

    ```shell
    kubectl create ns otherns
    ```

- Deploy `sleep` to that namespace:

    ```shell
    kubectl apply -f sleep.yaml -n otherns
    ```

- Verify that the sleep pod has no sidecars:

    ```shell
    kubectl get pod -n otherns
    ```

- Call the customers service from that pod:

    ```shell
    SLEEP_POD=$(kubectl get pod -l app=sleep -n otherns -ojsonpath='{.items[0].metadata.name}')
    kubectl exec -n otherns $SLEEP_POD -it -- curl customers.default
    ```

    The output should look like a list of customers in JSON format.
We conclude that Istio is configured by default to allow plain-text requests. This is called permissive mode, and it is specifically designed to allow services that have not yet been fully onboarded onto the mesh to participate.
### Enable strict mode
Istio provides the `PeerAuthentication` resource to define peer authentication policy.

- Apply the following peer authentication policy.

    ```yaml title="mtls-strict.yaml" linenums="1"
    --8<-- "mtls-strict.yaml"
    ```

    ```shell
    kubectl apply -f mtls-strict.yaml
    ```
??? info

    Strict mTLS can be enabled globally by setting the policy's namespace to the Istio root namespace, which by default is `istio-system`.
- Verify that the peer authentication has been applied:

    ```shell
    kubectl get peerauthentication
    ```
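For reference, a minimal strict-mode policy resembles the sketch below. The resource name `default` and the `default` namespace are assumptions; defer to the actual contents of `mtls-strict.yaml`:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default        # assumed name; "default" is conventional for a namespace-wide policy
  namespace: default   # assumed; the namespace whose workloads should require mTLS
spec:
  mtls:
    mode: STRICT       # reject plain-text traffic; PERMISSIVE is Istio's default
```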
### Verify that plain-text requests are no longer permitted
```shell
kubectl exec -n otherns $SLEEP_POD -it -- curl customers.default
```
The console output should indicate that the connection was reset by peer.
## Security in depth
Another important layer of security is an authorization policy, in which we allow only specific services to communicate with other services.
At the moment, any container can, for example, call the `customers` service or the `web-frontend` service.
- Capture the name of the sleep pod running in the default namespace:

    ```shell
    SLEEP_POD=$(kubectl get pod -l app=sleep -ojsonpath='{.items[0].metadata.name}')
    ```

- Call the `customers` service:

    ```shell
    kubectl exec $SLEEP_POD -it -- curl customers
    ```

- Call the `web-frontend` service:

    ```shell
    kubectl exec $SLEEP_POD -it -- curl web-frontend | head
    ```

Both calls succeed.
We wish to apply a policy in which only `web-frontend` is allowed to call `customers`, and only the ingress gateway can call `web-frontend`.
Study the below authorization policy.

```yaml title="authz-policy-customers.yaml" linenums="1"
--8<-- "authz-policy-customers.yaml"
```
- The `selector` section specifies that the policy applies to the `customers` service.
- Note how the rules have a "from: source:" section indicating who is allowed in.
- The nomenclature for the value of the `principals` field comes from the SPIFFE standard. Note how it captures the service account name and namespace associated with the `web-frontend` service. This identity is associated with the X.509 certificate used by each service when making secure mTLS calls to one another.
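Based on the description above, the policy likely resembles the sketch below. The resource name, the `default` namespace, and the `web-frontend` service account name are assumptions; defer to the actual `authz-policy-customers.yaml`:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-web-frontend-to-customers   # assumed name
  namespace: default                      # assumed namespace
spec:
  selector:
    matchLabels:
      app: customers        # the policy applies to the customers workload
  action: ALLOW
  rules:
  - from:
    - source:
        # SPIFFE-style principal: <trust-domain>/ns/<namespace>/sa/<service-account>
        principals: ["cluster.local/ns/default/sa/web-frontend"]
```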
Tasks:

- [ ] Apply the policy to your cluster.

    ??? tldr "hint"

        ```shell
        kubectl apply -f authz-policy-customers.yaml
        ```

- [ ] Verify that you are no longer able to reach the `customers` pod from the `sleep` pod.

    ??? tldr "hint"

        ```shell
        kubectl exec $SLEEP_POD -it -- curl customers
        ```
??? info

    The configuration may take a minute to propagate; while this happens, some requests to the customers service might still get through.
## Challenge
Can you come up with a similar authorization policy for `web-frontend`?

- Use a copy of the `customers` authorization policy as a starting point.
- Give the resource an apt name.
- Revise the selector to match the `web-frontend` service.
- Revise the rule to match the principal of the ingress gateway.
??? tldr "hint"

    The ingress gateway has its own identity.
    Here is a command which can help you find the name of the service account associated with its identity:

    ```shell
    kubectl get pod -n istio-system -l istio=ingressgateway -o yaml | grep serviceAccountName
    ```

    Use this service account name together with the namespace that the ingress gateway is running in to specify the value for the `principals` field.
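Putting the hints together, a solution might look like the sketch below. The service account name `istio-ingressgateway-service-account` is the default in standard Istio installs but is an assumption here; substitute whatever the command above reports. The resource name and namespace are likewise assumed:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-ingress-to-web-frontend   # assumed "apt name" per the challenge
  namespace: default                    # assumed namespace
spec:
  selector:
    matchLabels:
      app: web-frontend     # the policy now targets the web-frontend workload
  action: ALLOW
  rules:
  - from:
    - source:
        # the gateway runs in istio-system; service account name from the hint command
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
```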
### Test it
Don't forget to verify that the policy is enforced.

- Call both services again from the sleep pod and ensure communication is no longer allowed.

    ??? tldr "hint"

        ```shell
        kubectl exec $SLEEP_POD -it -- curl customers
        kubectl exec $SLEEP_POD -it -- curl web-frontend | head
        ```

    The console output should contain the message "RBAC: access denied".

- Test that you can still reach the `web-frontend` service through the ingress gateway.

    ??? tldr "hint"

        ```shell
        curl $GATEWAY_IP | head
        ```
## Next

In the next section we learn how to use Istio's traffic management features.