Enforcing Network Policies using kube-router on AKS

Corporate security policy often requires that the flow of traffic between Kubernetes pods be restricted, much as switch access control lists restrict traffic between physical servers. In Kubernetes this traffic flow is configured using network policies.
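As a quick illustration, a minimal network policy that denies all ingress traffic to every pod in a namespace looks like this (the name and namespace are illustrative, not from my deployment):

```yaml
# Hypothetical example: deny all ingress traffic to pods in the "web" namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: web
spec:
  podSelector: {}   # an empty selector matches all pods in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied
```

Note that the policy object is only a declaration; nothing is actually blocked unless something in the cluster enforces it, which is the subject of this post.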

There are a number of projects that support network policy enforcement. The majority require a specific network plugin to be deployed. As the Azure Kubernetes Service (AKS) is a managed service, we do not have the flexibility to choose the network plugin that is deployed. The default is kubenet, or, if using advanced networking, AKS uses the Azure CNI plugin; for more details see https://docs.microsoft.com/en-us/azure/aks/networking-overview. At the time of writing the Azure CNI plugin does not support network policy enforcement.

With the above in mind I often get asked how network policies can be enforced on AKS. This led me to search for a project that can enforce network policies without requiring a specific CNI plugin. Kube-router (https://github.com/cloudnativelabs/kube-router) can be deployed as a daemonset and offers this functionality.

Please note: The steps below are a result of my attempts to enable kube-router to work on AKS. My initial impressions are that it functions correctly, but please be aware kube-router is still in beta and no support is offered by Microsoft for the project, see https://docs.microsoft.com/en-us/azure/aks/networking-overview#frequently-asked-questions.

I took one of the sample daemonset deployment files, removed any references to CNI configuration, and modified the volume mounts to match the configuration of AKS nodes:

        - name: kubeconfig
          hostPath:
            path: /var/lib/kubelet/kubeconfig
        - name: kubecerts
          hostPath:
            path: /etc/kubernetes/certs/

I also added a node selector to ensure kube-router is only installed on Linux nodes (it does not support Windows):

        beta.kubernetes.io/os: linux
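For context, in a daemonset this selector sits under the pod template's spec, along the lines of this abbreviated sketch:

```yaml
# Abbreviated sketch of where the node selector lives in the daemonset manifest.
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux   # only schedule kube-router pods on Linux nodes
```

If this label is missing from your nodes, the daemonset will schedule zero pods, which is worth bearing in mind when troubleshooting.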

After some trial and error I came up with a configuration that would deploy successfully. My deployment file can be found here: https://github.com/marrobi/kube-router/blob/marrobi/aks-yaml/daemonset/kube-router-firewall-daemonset-aks.yaml

This can be installed onto your AKS cluster using the following command:

kubectl apply -f https://raw.githubusercontent.com/marrobi/kube-router/marrobi/aks-yaml/daemonset/kube-router-firewall-daemonset-aks.yaml

The status of the kube-router daemonset can be checked by running:

kubectl get daemonset kube-router -n kube-system

Once all pods in the daemonset are running I verified functionality using examples in Ahmet Alp Balkan's Network Policy recipes repository.

The samples, covering both ingress and egress policies, all performed as expected.
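To give a flavour of the egress side, a policy in the same vein as the recipes (all names here are illustrative) that restricts a set of pods to outbound DNS lookups only might look like:

```yaml
# Hypothetical example: allow only DNS egress from pods labelled app=web.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
  namespace: web
spec:
  podSelector:
    matchLabels:
      app: web        # applies only to pods with this label
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53        # permit DNS queries
      protocol: UDP
```

With kube-router enforcing this, any other outbound connection from the matching pods should be dropped.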

I’d be interested in feedback from others regarding the pros and cons of using kube-router to enforce network policies on AKS.


  1. gopal

    it worked but then every pod in every namespace started crashing and being recreated?!?

    I am using AKS with version 1.9.2.

    1. Marcus (Post author)

      Any logs of any kind? Not seen that happen. Are you blocking traffic that’s needed for the pod to run?

  2. gopal

    i ran the network policy in only one namespace… suddenly pods were getting recreated in all namespaces… kube became unresponsive… sorry, no logs

    1. Marcus (Post author)

      Definitely not seen that. What happens if you run a “kubectl describe” on the deployment? Without any logs it’s hard to troubleshoot.

  3. gopal

    ok, removed all the dead pods, restarted agent nodes, reran the daemonset and described it,


    this time no kube-router pods were created.


    PS C:\WINDOWS\system32> kubectl -n kube-system describe daemonset "kube-router"
    Name:           kube-router
    Selector:       k8s-app=kube-router
    Node-Selector:  beta.kubernetes.io/os=linux
    Labels:         k8s-app=kube-router
    Annotations:    <none>
    Desired Number of Nodes Scheduled: 0
    Current Number of Nodes Scheduled: 0
    Number of Nodes Scheduled with Up-to-date Pods: 0
    Number of Nodes Scheduled with Available Pods: 0
    Number of Nodes Misscheduled: 0
    Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:       k8s-app=kube-router
      Annotations:  scheduler.alpha.kubernetes.io/critical-pod=
      Init Containers:
        Image:  busybox
        Port:   <none>
          set -e -x;
        Environment:  <none>
        Mounts:       <none>
        Image:  cloudnativelabs/kube-router
        Port:   <none>
        Liveness:  http-get http://:20244/healthz delay=10s timeout=1s period=3s #success=1 #failure=3
          NODE_NAME:   (v1:spec.nodeName)
          /etc/kubernetes/certs/ from kubecerts (ro)
          /var/lib/kube-router/kubeconfig from kubeconfig (ro)
        Type:          HostPath (bare host directory volume)
        Path:          /lib/modules
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/kubelet/kubeconfig
        Type:          HostPath (bare host directory volume)
        Path:          /etc/kubernetes/certs/
    Events:            <none>

    1. Marcus (Post author)

      Odd, I wonder if the node-selector is preventing them being scheduled. Let me test it out with a new cluster.

  4. gopal

    deleting the pod selector worked for me

  5. gopal

    sorry, meant the node selector; after deleting it the kube-router pods restarted.

    1. Marcus (Post author)

      Just deploying a fresh cluster now, can you do a "kubectl describe nodes" and let me know what Labels are listed. Thanks.

      1. Marcus (Post author)

        Just confirmed I get the appropriate labels, and the YAML above works correctly and I get running pods. Be interested to see the labels assigned to your nodes. Thanks.

