Stanislav Levchenko

March 8, 2023 ・ Kubernetes

Kubernetes Security Best Practices. Hardening Application Environment (2 of 3)

In the previous article of our Kubernetes Security Best Practices series, we delved into hardening the core components of a Kubernetes cluster: the Kubernetes API server, Kubelet, and etcd encryption. These elements form the backbone of any Kubernetes environment, and securing them is crucial in building a resilient foundation. However, securing your Kubernetes cluster doesn't end at these foundational elements.

As we progress into the second part of this series, our focus shifts from the core components of the Kubernetes infrastructure to the application environment itself. We will take a closer look at how to protect your applications running on Kubernetes by implementing robust security measures that enforce stringent rules and policies.

In this article, we will explore various security facets such as Network Policies, Role-Based Access Control (RBAC), Security Contexts, Admission Controllers, and Runtime Classes. Each of these plays a vital role in safeguarding the application environment within your Kubernetes cluster.

By understanding and implementing these security practices, you can better protect your applications from potential threats, reduce the attack surface, and maintain the integrity of your Kubernetes deployments. Let's embark on the next leg of our Kubernetes security journey, exploring the best practices for securing the application environment in a Kubernetes cluster.

Network Policy in Kubernetes

Network Policies in Kubernetes provide a way to control network traffic in and out of your pods and between different services within your cluster. They are designed to secure your cluster by defining rules to allow or block specific traffic. Network policies operate at the level of the pod, which means they can offer granular control over network communication. It's important to note that Network Policies are implemented by the network plugin, so you must be using a networking solution which supports NetworkPolicy.

By default, pods are non-isolated: they accept traffic from any source in your cluster. Let's look at a very simple example. Imagine that you have an application that consists of three elements: a frontend, a backend and a database. To reduce security risks, you want to minimize communication between them. For example, the frontend should be reachable from outside on port 80 and/or 443 and should not talk to the database directly, while the backend should make direct requests to the database.

The database, in turn, should accept connections only from the backend pod and send data only to it. We can meet these requirements with the help of network policies. A network policy is applied to pods by labels, and traffic can be regulated by direction with the Ingress (incoming) and Egress (outgoing) policy types. Once rules are defined for Ingress and/or Egress, the policy denies all traffic that does not match them. The source (for ingress) or destination (for egress) can be defined by IP addresses, pod labels or namespaces. Following this approach, we can assign the label role=backend to the backend pod and role=db to the database pod, and write a simple manifest like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-network-policy
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: backend
      ports:
        - protocol: TCP
          port: 5432

When we apply this network policy to the database pod, it blocks all incoming traffic except requests from the backend to port 5432, and it does not restrict any outgoing traffic. So we can say that ensuring only the minimum of sufficient connectivity between pods is good practice, and network policies are the way to achieve it.
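
If you also want to restrict the database pod's outgoing traffic to the backend only, you can add an Egress policy. The manifest below is a minimal sketch that assumes the same my-app namespace and labels as above; the policy name is only an example, and you would typically also need to allow egress to DNS if the pod resolves names.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-db-egress
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              role: backend

Note that network policies apply to connections rather than individual packets, so responses to connections already allowed by the ingress rule are not affected.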

Role-Based Access Control in Kubernetes

Role-Based Access Control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an organization. In Kubernetes, RBAC allows admins to define which actions a user or application can perform in the cluster, and on which resources. To use RBAC we need two kinds of Kubernetes resources. The first one is the Role. In a Role manifest we define access rules: we can specify resource types, like pods, configmaps, secrets and so on, or even a particular resource by name, and the actions that are allowed on them. It may look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

This Role allows getting, watching and listing pods in the my-app namespace. To use this Role we need to create another resource, a RoleBinding. In a RoleBinding we assign a certain role to a certain subject, which can be a user, a group or a service account. It can look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-app
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

It is good practice to assign the minimum set of permissions to a subject: for example, if a developer only needs to create pods in a certain namespace, don't grant permissions for secrets, and if an app only needs to read a ConfigMap, don't allow it to write to it. It is also a good idea to split read permissions (actions that only retrieve information about resources, like get, list and watch) and write permissions (actions that can modify resources, like create, update and delete) into separate roles. This makes access to your cluster easier to control.
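
As an illustration of that split, here is a hypothetical pod-writer Role that complements the pod-reader Role above; the name and the exact verb list are assumptions you would adjust to your own needs.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app
  name: pod-writer
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["create", "update", "patch", "delete"] # write verbs only, no read access

A developer who only inspects workloads would get a binding to pod-reader, while, for example, a deployment pipeline's service account could be bound to pod-writer.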

As you can see, Roles apply only within a namespace. There is another resource, ClusterRole, which lets you grant access to cluster-scoped entities such as nodes and namespaces. But it usually grants too many privileges, and it is recommended not to use it for non-admin users.

And, of course, you should regularly review Roles, ClusterRoles, RoleBindings and ClusterRoleBindings to make sure they are still needed and grant only the necessary level of permissions.

Security Context

A Security Context is a set of security options that you can define for a particular pod or container. There are many options, but we will discuss the most important ones. The runAsUser and runAsGroup options define the user and group under which the pod or container will run; it is good practice to use a non-root user. You may specify a particular UID (GID), or simply set the runAsNonRoot option to true. When we start a pod, the container(s) inside it run with some level of privileges, and a process may try to escalate them to gain access, for example, to the host system or to other containers. That is a serious security risk, and to prevent such behavior you should set the allowPrivilegeEscalation option to false. You can also protect the container file system by adding readOnlyRootFilesystem: true to the security context. Finally, the capabilities option lets you define which kernel capabilities to allow and which to drop for a container. It is a good idea to grant only the minimum necessary capabilities; for example, you would normally want to disallow CAP_SYS_ADMIN for containers.

One more thing to remember is that a security context may be applied at the pod level and at the container level. The container-level security context overrides the pod-level one wherever they conflict.
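
To tie these options together, here is a minimal sketch of a pod with both pod-level and container-level security contexts; the pod name, image and UID/GID values are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app # illustrative name
  namespace: my-app
spec:
  securityContext: # pod-level settings, inherited by all containers
    runAsNonRoot: true
    runAsUser: 10001 # example non-root UID
    runAsGroup: 10001 # example GID
  containers:
    - name: app
      image: registry.example.com/my-app:1.0 # hypothetical trusted image
      securityContext: # container-level settings override pod-level ones
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"] # drop everything, add back only what is required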

Runtime Classes

RuntimeClass is a feature in Kubernetes that allows you to select the container runtime configuration to use for pods running in a cluster. This feature gives you more control over your container runtimes, enabling you to choose the best runtime for different workloads and promote greater security. The most commonly used container runtimes are the following:

  • Docker: Docker is the most commonly used container runtime. It's known for its ease of use and wide range of features, but it doesn't provide as strong isolation as some other options.

  • containerd: containerd is a lightweight container runtime that's designed to be embedded into a larger system. It's part of the Docker stack but can also be used independently.

  • CRI-O: CRI-O is a lightweight container runtime specifically for Kubernetes. It aligns with Kubernetes release cycles and is designed to be simple and support a wide variety of use cases.

  • gVisor: gVisor is a container runtime that provides a strong isolation boundary between the host OS and the application running within the container by using a userspace kernel, reducing the kernel attack surface. It's suitable for running untrusted code or isolating multi-tenant workloads.

  • Kata Containers: Kata Containers is another container runtime that provides strong isolation by using lightweight virtual machines. It can be used for running sensitive workloads or for increasing the security of containerized applications.

  • Firecracker: Firecracker is a virtualization technology that uses microVMs to provide secure and fast virtualization for containers. It's developed by AWS and is used in AWS Lambda and AWS Fargate.

Different container runtimes can provide different security features. For example, gVisor and Kata Containers are container runtimes designed to provide more isolation between containers than the standard Docker runtime. This isolation can be helpful for running untrusted workloads or for protecting sensitive workloads from potentially compromised neighbor containers.
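
To make an alternative runtime available to workloads, you define a RuntimeClass object and reference it from the pod spec. The sketch below assumes a node whose container runtime is configured with a gVisor handler named runsc; the handler name must match your node configuration, and the pod name and image are hypothetical.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc # must match the handler configured in the node's CRI
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload # illustrative name
  namespace: my-app
spec:
  runtimeClassName: gvisor # run this pod with the gVisor runtime
  containers:
    - name: app
      image: registry.example.com/untrusted:1.0 # hypothetical image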

Admission Controllers

We have discussed some best practices for running apps in Kubernetes. We know that it is necessary to run pods as a non-root user, prevent privilege escalation and disable unused kernel capabilities. Other obvious practices are to run images only from trusted registries and to use base images for your applications that are as thin as possible. But usually it is not only administrators who run pods in Kubernetes. How can we make other users follow strict security standards? How can we control the way pods are deployed? This is where a mechanism called the admission controller comes to our aid.

Admission controllers in Kubernetes are crucial parts of the Kubernetes API server. They intercept requests to the Kubernetes API server before the persistence of the object, but after the request is authenticated and authorized.

Admission controllers can be validating or mutating. Validating admission controllers validate the objects in requests and reject those that do not meet specific conditions. Mutating admission controllers, on the other hand, modify the objects in requests.

We can also create custom admission controllers that check all requests against our own rules and expose them as webhooks. This is a very powerful tool for upholding your security standards, and it is always better to prevent security risks than to find and fix them later. That is why you should use admission controllers in your Kubernetes cluster.
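
A minimal sketch of how such a custom validating webhook could be registered is shown below; the webhook name, service name, namespace and path are hypothetical, and the caBundle and the service that actually implements the check are assumed to exist already.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook # hypothetical name
webhooks:
  - name: pod-policy.example.com # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail # reject the request if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: pod-policy-service # hypothetical service that runs the check
        namespace: my-app
        path: /validate
      caBundle: "<base64-encoded CA certificate>" # placeholder

With this configuration in place, every pod creation or update is sent to the webhook, which can reject anything that violates your rules, for example pods running as root or pulling images from untrusted registries.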

Conclusion

As we wrap up the second article of our series on Kubernetes security best practices, we've delved into the critical topics of Network Policies, Role-Based Access Control (RBAC), Security Contexts, Runtime Classes, and Admission Controllers. These elements are all essential pieces of the Kubernetes security puzzle.

Understanding and applying Network Policies enables us to control the traffic flow between pods, enhancing our cluster's network security. RBAC, on the other hand, allows us to manage the permissions of different users and applications in our cluster, effectively implementing the principle of least privilege.

With Security Contexts, we can set important security parameters for our pods and containers, limiting their privileges and creating a more secure operating environment. By choosing the correct Runtime Class, we can ensure the right balance between performance and security isolation for our containerized applications.

Lastly, Admission Controllers serve as gatekeepers to our Kubernetes API server, providing an additional layer of security by enforcing various policies and modifying requests to the API server, thus ensuring the overall integrity of our cluster operations.

Continuing to harden your Kubernetes environment is a step-by-step process, and we're here to guide you along that path. Stay tuned for our future articles as we continue to explore more ways to bolster your Kubernetes security.
