Enforcing Network Policies using kube-router on AKS

Corporate security policy often requires that the flow of traffic between Kubernetes pods be restricted, much as switch access control lists restrict traffic between physical servers. In Kubernetes this traffic flow is configured using network policies.
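
As an illustration, a policy like the following (the app=web and app=api labels are just examples) allows pods labelled app=web to accept traffic only from pods labelled app=api, and denies all other ingress traffic to them:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: web-allow-api
    spec:
      podSelector:
        matchLabels:
          app: web
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: api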

There are a number of projects that support network policy enforcement, but the majority require a specific network plugin to be deployed. As the Azure Kubernetes Service (AKS) is a managed service, we do not have the flexibility to choose which network plugin is deployed: the default is kubenet, or, if using advanced networking, the Azure CNI plugin. For more details see https://docs.microsoft.com/en-us/azure/aks/networking-overview. At the time of writing the Azure CNI plugin does not support network policy enforcement.

With the above in mind, I often get asked how network policies can be enforced on AKS. This led me to search for a project that can enforce network policies without requiring a specific CNI plugin. Kube-router (https://github.com/cloudnativelabs/kube-router) can be deployed as a daemonset and offers this functionality.

Please note: The steps below are a result of my attempts to enable kube-router to work on AKS. My initial impressions are that it functions correctly, but please be aware kube-router is still in beta and no support is offered by Microsoft for the project, see https://docs.microsoft.com/en-us/azure/aks/networking-overview#frequently-asked-questions.

I took one of the sample daemonset deployment files, removed any references to CNI configuration, and modified the volume mounts to match the configuration of AKS nodes:

        - name: kubeconfig
          hostPath:
            path: /var/lib/kubelet/kubeconfig
        - name: kubecerts
          hostPath:
            path: /etc/kubernetes/certs/
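
These volumes are mounted read-only into the kube-router container, with the kubeconfig ending up at the path passed to kube-router's --kubeconfig argument. The corresponding volumeMounts section looks like this:

        volumeMounts:
        - name: kubeconfig
          mountPath: /var/lib/kube-router/kubeconfig
          readOnly: true
        - name: kubecerts
          mountPath: /etc/kubernetes/certs/
          readOnly: true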

I also added a node selector to ensure kube-router only gets installed on Linux nodes (it does not support Windows):

    nodeSelector:
        beta.kubernetes.io/os: linux
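
If the kube-router pods fail to be scheduled, it is worth checking that your nodes actually carry this label:

kubectl get nodes --show-labels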

After some trial and error I came up with a configuration that would deploy successfully. My deployment file can be found here: https://github.com/marrobi/kube-router/blob/marrobi/aks-yaml/daemonset/kube-router-firewall-daemonset-aks.yaml

This can be installed onto your AKS cluster using the following command:

kubectl apply -f https://raw.githubusercontent.com/marrobi/kube-router/marrobi/aks-yaml/daemonset/kube-router-firewall-daemonset-aks.yaml

The status of the kube-router daemonset can be seen by running:

kubectl get daemonset kube-router -n kube-system
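
The individual pods can also be listed by filtering on the k8s-app=kube-router label that the daemonset applies to them:

kubectl get pods -n kube-system -l k8s-app=kube-router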

Once all pods in the daemonset were running, I verified functionality using the examples in Ahmet Alp Balkan's Network Policy recipes repository (https://github.com/ahmetb/kubernetes-network-policy-recipes).

The samples, covering both ingress and egress policies, all performed as expected.
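
For example, one of the recipes denies all ingress traffic to an application. A quick way to reproduce that test (the web and test names below are just examples) is to apply a policy like this:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: web-deny-all
    spec:
      podSelector:
        matchLabels:
          app: web
      ingress: []

With the policy in place, a wget from a temporary pod to an nginx pod labelled app=web should time out rather than return the nginx welcome page:

kubectl run web --image=nginx --labels app=web --expose --port 80
kubectl run test --rm -it --image=busybox -- wget -qO- -T 2 http://web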

I’d be interested in feedback from others with regards to the pros and cons of using kube-router to enforce network policies on AKS.

10 Comments

  1. gopal

    it worked but then every pod in every namespace started crashing and being recreated?!?

    I am using AKS with version 1.9.2.

    1. Marcus (Post author)

      Any logs of any kind? Not seen that happen. Are you blocking traffic that’s needed for the pod to run?

  2. gopal

    I ran the network policy in only one namespace… suddenly pods were getting recreated in all namespaces… kube became unresponsive… sorry, no logs.

    1. Marcus (Post author)

      Definitely not seen that. What happens if you run a “kubectl describe” on the deployment? Without any logs it’s hard to troubleshoot.

  3. gopal

    ok removed all the dead pods, restarted agent nodes, reran the daemonset and described it:

    this time no kube-router pods were created.

    PS C:\WINDOWS\system32> kubectl -n kube-system describe daemonset "kube-router"
    Name:           kube-router
    Selector:       k8s-app=kube-router
    Node-Selector:  beta.kubernetes.io/os=linux
    Labels:         k8s-app=kube-router
    Annotations:    <none>
    Desired Number of Nodes Scheduled: 0
    Current Number of Nodes Scheduled: 0
    Number of Nodes Scheduled with Up-to-date Pods: 0
    Number of Nodes Scheduled with Available Pods: 0
    Number of Nodes Misscheduled: 0
    Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:       k8s-app=kube-router
      Annotations:  scheduler.alpha.kubernetes.io/critical-pod=
      Init Containers:
       install-cni:
        Image:  busybox
        Port:   <none>
        Command:
          /bin/sh
          -c
          set -e -x;
        Environment:  <none>
        Mounts:       <none>
      Containers:
       kube-router:
        Image:  cloudnativelabs/kube-router
        Port:   <none>
        Args:
          --run-router=false
          --run-firewall=true
          --run-service-proxy=false
          --kubeconfig=/var/lib/kube-router/kubeconfig
        Liveness:  http-get http://:20244/healthz delay=10s timeout=1s period=3s #success=1 #failure=3
        Environment:
          NODE_NAME:   (v1:spec.nodeName)
        Mounts:
          /etc/kubernetes/certs/ from kubecerts (ro)
          /var/lib/kube-router/kubeconfig from kubeconfig (ro)
      Volumes:
       lib-modules:
        Type:          HostPath (bare host directory volume)
        Path:          /lib/modules
        HostPathType:
       kubeconfig:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/kubelet/kubeconfig
        HostPathType:
       kubecerts:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/kubernetes/certs/
        HostPathType:
    Events:            <none>

    1. Marcus (Post author)

      Odd, wonder if the node-selector is preventing them being scheduled. Let me test it out with a new cluster.

  4. gopal

    deleting the pod selector worked for me

  5. gopal

    sorry, I meant the node selector; after deleting it the kube-router pods restarted.

    1. Marcus (Post author)

      Just deploying a fresh cluster now, can you do a "kubectl describe nodes" and let me know what Labels are listed. Thanks.

      1. Marcus (Post author)

        Just confirmed I get the appropriate labels, the YAML above works correctly, and I get running pods. I'd be interested to see the labels assigned to your nodes. Thanks.
