Using the same KIND cluster from the Cilium install, let’s deploy the Postgres database (database.yaml) with the following YAML and kubectl:
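A minimal sketch of what database.yaml might look like; the labels, image tag, and credentials here are illustrative assumptions, not the exact manifest:

```yaml
# database.yaml -- illustrative sketch; labels, image tag, and
# credentials are assumptions rather than the original manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432   # Postgres default port
        env:
        - name: POSTGRES_PASSWORD
          value: example        # demo only; use a Secret in practice
```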
Here we deploy our web server as a Kubernetes deployment (web.yaml) to our KIND cluster:
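A sketch of what web.yaml could contain, assuming the web server is a simple HTTP app listening on port 8080 (the name, label, and image are placeholders, not the original manifest):

```yaml
# web.yaml -- illustrative sketch; the deployment name, label,
# image, and port are assumptions for this walkthrough.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example/web-api:latest   # placeholder image
        ports:
        - containerPort: 8080
```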
To run connectivity tests inside the cluster network, we will deploy and use a dnsutils pod (dnsutils.yaml) that has basic networking tools like ping and curl:
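The dnsutils.yaml manifest is likely similar to the pod spec used in the Kubernetes DNS-debugging documentation; the image below is the canonical one from those docs, though the book’s manifest may differ:

```yaml
# dnsutils.yaml -- sketch based on the Kubernetes DNS debugging
# docs; the original manifest may use a different image or name.
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"   # keep the pod running so we can exec into it
```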
Since we are not deploying a service with an ingress, we can use kubectl port-forward to test connectivity to our web server:
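A sketch of the port-forward, assuming the web deployment is named app and listens on port 8080 (both names are assumptions from the earlier deploy step):

```shell
# Forward local port 8080 to the web server (names are assumptions)
kubectl port-forward deployment/app 8080:8080

# In a second terminal, exercise the API locally
curl localhost:8080/
curl localhost:8080/data
```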
Now from our local terminal, we can reach our API:
Let’s test connectivity to our web server inside the cluster from other pods. To do that, we need to get the IP address of our web server pod:
Now we can test L4 and L7 connectivity to the web server from the dnsutils pod:
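The L4 and L7 checks might look like the following, assuming the web pod’s IP has been retrieved and the dnsutils image ships nc and wget (substitute the real pod IP for the placeholder):

```shell
# Layer 4: verify the TCP port is open (replace <web-pod-ip>)
kubectl exec dnsutils -- nc -z -vv <web-pod-ip> 8080

# Layer 7: issue an HTTP request against the API
kubectl exec dnsutils -- wget -qO- http://<web-pod-ip>:8080/
```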
From our dnsutils pod, we can test the layer 7 HTTP API access:
We can also test this on the database pod. First, we have to retrieve the IP address of the database pod, 10.244.2.25. We can use kubectl with a combination of labels and options to get this information:
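One way to do this, assuming the database pods carry an app=postgres label (the label name is an assumption):

```shell
# Wide output includes the pod IP column
kubectl get pods -l app=postgres -o wide

# Or print just the pod IP with a jsonpath output option
kubectl get pods -l app=postgres \
  -o jsonpath='{.items[0].status.podIP}'
```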
Again, let’s use the dnsutils pod to test connectivity to the Postgres database over its default port, 5432:
The port is open for all to use since no network policies are in place. Now let’s restrict this with a Cilium network policy. The following commands deploy the network policies so that we can test secure network connectivity. First, let’s restrict access to the database pod to only the web server. Apply the network policy (layer_3_net_pol.yaml) that allows traffic to the database only from the web server pod:
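layer_3_net_pol.yaml is plausibly a CiliumNetworkPolicy along these lines; the rule name l3-rule-app-to-db matches the text, but the pod labels are assumptions:

```yaml
# layer_3_net_pol.yaml -- sketch; the endpoint labels are
# assumptions, only the policy name is taken from the text.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l3-rule-app-to-db
spec:
  # Select the database pods this policy protects
  endpointSelector:
    matchLabels:
      app: postgres
  # Allow ingress only from the web server pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: app
```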
Deploying Cilium objects creates resources that can be retrieved with kubectl just like pods. With kubectl describe ciliumnetworkpolicies.cilium.io l3-rule-app-to-db, we can see all the information about the rule deployed via the YAML:
With the network policy applied, the dnsutils pod can no longer reach the database pod; we can see this in the timeout when trying to reach the DB port from the dnsutils pod:
The web server pod, however, is still connected to the database pod: the /data route connects the web server to the database, and the network policy allows it:
Now let’s apply the layer 7 policy. Cilium is layer 7 aware, so we can block or allow specific requests on HTTP URI paths. In our example policy, we allow HTTP GETs on / and /data but do not allow them on /healthz; let’s test that:
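The layer 7 policy is plausibly structured like the sketch below, using Cilium’s HTTP rules; the policy name, label, and port are assumptions, while the allowed paths / and /data come from the text:

```yaml
# Sketch of the L7 policy -- name, label, and port are assumed;
# only the allowed GET paths / and /data come from the text.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-rule
spec:
  # Select the web server pods this policy applies to
  endpointSelector:
    matchLabels:
      app: app
  ingress:
  - toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      # L7 rules: only these GET requests are allowed;
      # anything else (e.g., GET /healthz) is denied
      rules:
        http:
        - method: GET
          path: "/"
        - method: GET
          path: "/data"
```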
We can see the policy applied just like any other Kubernetes object in the API:
As we can see, / and /data are available but /healthz is not, precisely what we expect from the network policy:
These small examples show how powerfully Cilium network policies can enforce network security inside the cluster. We highly recommend that administrators select a CNI that supports network policies and enforce developers’ use of them. Network policies are namespaced, and if teams have similar setups, cluster administrators can and should enforce that developers define network policies for added security.
We used two aspects of the Kubernetes API, labels and selectors; in our next section, we will provide more examples of how they are used inside a cluster.