APP.4.4 Kubernetes

Description

Introduction

Kubernetes has established itself as the de facto standard for orchestrating containers in public and private clouds. Kubernetes is also used for IoT and other use cases; for example, K3S is an edition designed for very small servers such as single-board computers. The so-called Cloud Native Stack, which consists of many different components, also builds on the standard established by Kubernetes.

The term container refers to a technology whereby a host system runs applications in parallel within separated environments (operating system level virtualization). In most cases, the monitoring, starting, stopping, and further management of containers is handled by management software, which thus performs what is known as orchestration. Kubernetes groups one or more related containers together in a Pod. Since this building block focuses on Kubernetes, the following discussion refers only to Pods and not to individual containers. Orchestration typically takes place in groups of jointly managed Kubernetes nodes within one or more so-called clusters.
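The Pod concept described above can be sketched with a minimal manifest (all names and images are illustrative); both containers share the Pod's network namespace and can therefore reach each other via localhost:

```yaml
# Illustrative Pod grouping two related containers; they share the
# Pod's network namespace and are scheduled onto a node together.
apiVersion: v1
kind: Pod
metadata:
  name: example-app              # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.27          # example image
      ports:
        - containerPort: 80
    - name: log-shipper          # example sidecar container
      image: fluent/fluent-bit:3.0
```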

Several products for operating and managing Pod orchestration have become established that allow even very large environments to be managed. At their core, however, all of these products are built on Kubernetes. Here, a distinction must be made between the runtime, which runs the processes on the Kubernetes nodes, and the orchestration, which controls the runtimes across multiple Kubernetes nodes.

In addition to these two central components, operating Kubernetes in most cases also involves a specialized infrastructure, which includes, for example, registries, code versioning and storage, automation tools, management servers, storage systems, and virtual networks.

The following terms are used in this building block with the following meanings:

  • Application refers to a collection of multiple programs that together fulfill a task
  • Cluster refers to operating environments for containers with multiple nodes
  • Container refers to processes started from an image that run within operating system namespaces
  • Container Network Interface (CNI) refers to the interface for managing virtual networks in the cluster
  • Container Storage Interface (CSI) refers to the interface to the mostly external storage systems that Kubernetes can provide to Pods
  • Control Plane refers to all applications used for managing, i.e. orchestrating, the nodes, runtimes, and clusters
  • Images are all software packages compliant with the Open Container Initiative (OCI), including base images for custom images as well as images that are used without modification
  • Node refers to a server installed and optimized for running the runtime
  • Pod refers to a collection of multiple containers running within the same operating system namespaces
  • Registry is the umbrella term for code management and storage of images
  • Runtime refers to the software that runs the software contained in an image as a container

Objective

The objective of this building block is to protect information that is processed, offered, or transmitted in Kubernetes clusters.

Scope and Modeling

The building block APP.4.4 Kubernetes must always be applied together with the building block SYS.1.6 Containerization. In terms of the focus of this building block, it is not relevant which container runtime is in use or which additional applications are part of the control plane.

The building block contains fundamental requirements for the setup, operation, and orchestration with Kubernetes, as well as for the specialized infrastructure required for operation. The latter includes registries, CSI/CNI, nodes, and automation software, insofar as they interact directly with the cluster. The requirements for these applications primarily relate to the interfaces, but also include requirements that concern the operation of these applications insofar as they directly affect cluster security. Other services commonly found in the Kubernetes environment, such as automation for CI/CD pipelines and code management in, e.g., Git, are not addressed in depth in this building block.

The building block comprehensively models a cluster. The control plane applications, automation services, and nodes are to be viewed and treated here as a single group.

Security requirements for services operated within Kubernetes clusters, such as web servers (APP.3.2 Web Server) or email servers (see APP.5.3 General Email Client and Server), are the subject of their own building blocks.

Threat Landscape

Since IT-Grundschutz building blocks cannot address individual information domains, typical scenarios are used to represent the threat landscape. The following specific threats and vulnerabilities are of particular relevance for the building block APP.4.4 Kubernetes.

Inadequate Authentication and Authorization in the Control Plane

To manage runtimes, nodes, and Kubernetes itself, both administrators and tool-based provisioning require administrative access. These access points are implemented either as Unix sockets or network ports. Authentication and encryption mechanisms for administrative access are often available but are not enabled by default in all products.

If unauthorized parties gain access to the data network or to the nodes, they can execute commands via unsecured administrative access points that can damage the availability, confidentiality, and integrity of the processed data.

Loss of Confidentiality of Access Credentials

Pods often require access credentials (access tokens) for Kubernetes. Through an attack on the Pod, these credentials can fall into unauthorized hands. With these credentials, it is possible during attacks to interact with the control plane in an authenticated manner and, if the permissions are sufficient, to also make changes to the orchestration.

Resource Conflicts on Nodes

Individual Pods can overload the node or even the orchestration, thereby threatening the availability of all other Pods on the node or the operation of the node itself.

Unauthorized Changes to Clusters

Automation with CI/CD and the resulting need to grant privileged access permissions to tools entails the risk of unauthorized changes to clusters. For example, a new version of an application may be deployed to the cluster that has not been sufficiently tested or has not gone through the approval process. Errors in permissions on the CI/CD environment can also allow malware to infiltrate clusters and read, delete, or modify data there.

Unauthorized Communication

All Pods in a cluster are in principle capable of communicating with each other, with the nodes in their own cluster, and with any other IT systems. If this communication is not restricted, it can be exploited to attack, for example, the control plane, other Pods, or the nodes.

There is also the risk that Pods in the cluster are undesirably reachable from outside. This means an attack can be carried out from outside against services that should only be reachable within the cluster. This threat is compounded by the lesser attention often paid to internal services. For example, if a vulnerability in an internally deployed service is tolerated but that service is also reachable from outside, this significantly endangers the entire cluster.

Requirements

The following are the specific requirements of the building block APP.4.4 Kubernetes. The Information Security Officer (ISO) is responsible for ensuring that all requirements are fulfilled and verified in accordance with the established security concept. The ISO MUST always be involved in strategic decisions.

Further roles are defined in the IT-Grundschutz Compendium. They should be filled insofar as this is reasonable and appropriate.

Responsibilities                Roles
Primarily responsible           IT Operations
Additional responsibilities     None

Exactly one role should be Primarily responsible. Beyond that, there may be Additional responsibilities. If one of these additional roles is primarily responsible for fulfilling a requirement, this role is listed in square brackets after the requirement heading. The use of singular or plural says nothing about how many people should fill these roles.

Basic Requirements

The following requirements MUST be fulfilled with priority for this building block.

APP.4.4.A1 Planning the Separation of Applications (B)

Before going into operation, a plan MUST be made for how the applications operated in the Pods and their different test and production operating environments are to be separated. Based on the protection needs of the applications, it MUST be determined which architecture of namespaces, meta-tags, clusters, and networks adequately addresses the risks, and whether virtualized servers and networks should also be used.

The plan MUST include rules for network, CPU, and persistent storage separation. The separation SHOULD also take into account and be aligned with the network zone concept and protection needs.

Applications SHOULD each run in their own Kubernetes namespace that encompasses all programs of the application. Only applications with similar protection needs and similar potential attack vectors SHOULD share a Kubernetes cluster.
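As a sketch of this separation, each application can receive its own Kubernetes namespace; the labels shown here are hypothetical and can later serve as anchors for network policies and resource quotas:

```yaml
# Sketch: a dedicated namespace per application. Name and labels are
# illustrative; the protection-level label is a hypothetical meta-tag.
apiVersion: v1
kind: Namespace
metadata:
  name: shop-frontend
  labels:
    app.kubernetes.io/part-of: shop   # example grouping label
    protection-level: normal          # hypothetical protection-needs tag
```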

APP.4.4.A2 Planning Automation with CI/CD (B)

If automation of the operation of applications in Kubernetes using CI/CD takes place, this MUST ONLY occur after appropriate planning. The plan MUST cover the entire lifecycle from commissioning to decommissioning, including development, testing, operation, monitoring, and updates. The roles and rights concept as well as the securing of Kubernetes Secrets MUST be part of the plan.

APP.4.4.A3 Identity and Access Management in Kubernetes (B)

Kubernetes and all other control plane applications MUST authenticate and authorize every action by a user or, in automated operation, by corresponding software, regardless of whether the actions are performed via a client, a web interface, or through a corresponding interface (API). Administrative actions MUST NOT be performed anonymously.

Users MUST ONLY be granted the minimum necessary rights. Permissions without restrictions MUST be assigned very restrictively.

Only a small group of persons SHOULD be authorized to define automation processes. Only selected administrators SHOULD be granted the right in Kubernetes to create or modify persistent storage allocations (Persistent Volumes).
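A minimal-rights assignment can be sketched with Kubernetes RBAC; the Role below only allows reading Pods in one namespace and is bound to a single group (all names are illustrative):

```yaml
# Sketch: least-privilege RBAC. Namespace, role, and group names are
# hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: shop-frontend
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: shop-frontend
subjects:
  - kind: Group
    name: app-operators               # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```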

APP.4.4.A4 Separation of Pods (B)

The operating system kernel of the nodes MUST have isolation mechanisms for restricting the visibility and resource usage of Pods from one another (cf. Linux Namespaces and cgroups). The separation MUST cover at minimum the IDs of processes and users, inter-process communication, the filesystem, and the network including hostname.
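The cgroups side of this isolation is surfaced in Kubernetes as resource requests and limits per container, which the kubelet enforces via the kernel; a sketch with illustrative values:

```yaml
# Sketch: CPU and memory restrictions enforced via cgroups.
# Name, image, and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      resources:
        requests:                # guaranteed share for scheduling
          cpu: "250m"
          memory: "256Mi"
        limits:                  # hard ceiling enforced by the kernel
          cpu: "500m"
          memory: "512Mi"
```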

APP.4.4.A5 Data Backup in the Cluster (B)

A data backup of the cluster MUST be performed. The data backup MUST include:

  • Persistent Volumes,
  • configuration files of Kubernetes and the other programs of the control plane,
  • the current state of the Kubernetes cluster including extensions,
  • configuration databases, specifically etcd,
  • all infrastructure applications necessary for the operation of the cluster and the services within it, and
  • the data storage of the code and image registries.

Snapshots for the operation of applications SHOULD also be considered. Snapshots MUST NOT replace the data backup.
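Where the storage backend supports it via CSI, such snapshots can be requested declaratively; this presupposes an installed snapshot controller and a matching VolumeSnapshotClass (class and PVC names are hypothetical):

```yaml
# Sketch: a CSI volume snapshot of one Persistent Volume Claim.
# Requires the snapshot CRDs and controller to be installed.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot-example          # illustrative name
  namespace: shop-frontend
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical class
  source:
    persistentVolumeClaimName: db-data     # hypothetical PVC
```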

Standard Requirements

Together with the basic requirements, the following requirements correspond to the state of the art for this building block. They SHOULD generally be fulfilled.

APP.4.4.A6 Initialization of Pods (S)

If initialization takes place in the Pod at startup, e.g. of an application, this SHOULD occur in a dedicated init container. It SHOULD be ensured that the initialization terminates all already-running processes. Kubernetes SHOULD only start the further containers upon successful initialization.
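Such a dedicated init container can be sketched as follows; Kubernetes runs it to completion before starting the further containers (image and command are hypothetical):

```yaml
# Sketch: initialization in a dedicated init container. The main
# container only starts after the init container exits successfully.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: init-schema
      image: registry.example.com/db-migrate:1.0   # hypothetical image
      command: ["/migrate", "--wait-for-db"]       # hypothetical command
  containers:
    - name: app
      image: registry.example.com/app:1.0          # hypothetical image
```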

APP.4.4.A7 Separation of Networks in Kubernetes (S)

The networks for administration of nodes, the control plane, and the individual networks of application services SHOULD be separated.

ONLY the network ports of the Pods necessary for operation SHOULD be released to the designated networks. With multiple applications on one Kubernetes cluster, all network connections between Kubernetes namespaces SHOULD initially be prohibited and only required network connections SHOULD be permitted (whitelisting). The network ports required for administering the nodes, the runtime, and Kubernetes including its extensions SHOULD ONLY be reachable from the administration network and from Pods that require them.

Only selected administrators SHOULD be authorized in Kubernetes to manage the CNI and to create or modify rules for the network.
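The whitelisting approach above can be sketched with two NetworkPolicy objects: first deny all traffic in a namespace, then permit only the required connections. Enforcement depends on the CNI plugin in use supporting NetworkPolicy; names, labels, and ports are illustrative:

```yaml
# Sketch: deny all ingress and egress in the namespace by default ...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: shop-frontend        # illustrative namespace
spec:
  podSelector: {}                 # selects every Pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
# ... then explicitly allow only what operation requires,
# e.g. ingress to the frontend Pods on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-ingress
  namespace: shop-frontend
spec:
  podSelector:
    matchLabels:
      app: frontend               # hypothetical label
  policyTypes: ["Ingress"]
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
```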

APP.4.4.A8 Securing Configuration Files in Kubernetes (S)

The configuration files of the Kubernetes cluster, including all extensions and applications, SHOULD be versioned and annotated.

Access rights to the configuration file management software SHOULD be granted minimally. Access rights for read and write access to the configuration files of the control plane SHOULD be assigned and restricted with particular care.

APP.4.4.A9 Use of Kubernetes Service Accounts (S)

Pods SHOULD NOT use the “default” service account. The “default” service account SHOULD NOT be granted any rights. Pods for different applications SHOULD each run under their own service accounts. Permissions for the service accounts of application Pods SHOULD be limited to the strictly necessary rights.

Pods that do not need a service account SHOULD NOT be able to view it and SHOULD NOT have access to corresponding tokens.

Only Pods of the control plane and Pods that strictly require it SHOULD use privileged service accounts.

Automation programs SHOULD each receive their own tokens, even if they use a shared service account due to similar tasks.
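A per-application service account whose token is not mounted by default can be sketched as follows; Pods that genuinely need the token can opt in explicitly (names are illustrative):

```yaml
# Sketch: dedicated service account; the token is not mounted
# automatically, so Pods without a need cannot read it.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shop-frontend-sa           # hypothetical per-application account
  namespace: shop-frontend
automountServiceAccountToken: false
---
# A Pod referencing the account; it could opt in to token mounting
# by setting automountServiceAccountToken: true in its own spec.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: shop-frontend
spec:
  serviceAccountName: shop-frontend-sa
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
```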

APP.4.4.A10 Securing Automation Processes (S)

All processes of the automation software, such as CI/CD and its pipelines, SHOULD only operate with the strictly necessary rights. If different groups of users can modify the configuration via the automation software or start Pods, this SHOULD be carried out for each group through separate processes that only have the rights necessary for the respective group.

APP.4.4.A11 Monitoring Containers (S)

In Pods, each container SHOULD define a health check for startup and operation (“readiness” and “liveness”). These checks SHOULD provide information about the availability of the software executing in the Pod. The checks SHOULD fail if the monitored software cannot properly perform its tasks. For each of these checks, an appropriate time period for the service operated in the Pod SHOULD be defined. Based on these checks, Kubernetes SHOULD delete or restart the Pods.
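Readiness and liveness checks are declared per container; the endpoints and timings below are illustrative and must be adapted to the service operated in the Pod:

```yaml
# Sketch: health checks for startup ("readiness") and operation
# ("liveness"). Paths, ports, and periods are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      readinessProbe:                 # gates traffic to the Pod
        httpGet:
          path: /healthz/ready        # hypothetical endpoint
          port: 8080
        periodSeconds: 10
      livenessProbe:                  # triggers a container restart on failure
        httpGet:
          path: /healthz/live         # hypothetical endpoint
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
        failureThreshold: 3
```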

APP.4.4.A12 Securing Infrastructure Applications (S)

If a private registry for images or software for automation, storage management, storage of configuration files, or similar is in use, its security SHOULD consider at minimum:

  • Use of personal and service accounts for access,
  • encrypted communication on all network ports,
  • minimal assignment of permissions to users and service accounts,
  • logging of changes, and
  • regular data backup.

Requirements for High Protection Needs

The following are exemplary proposals for requirements for this building block that go beyond the level of protection that corresponds to the state of the art. The proposals SHOULD be considered when there are high protection needs. The specific determination is made within the context of an individual risk analysis.

APP.4.4.A13 Automated Auditing of Configuration (H)

An automated audit of the settings of nodes, Kubernetes, and the application Pods SHOULD be performed against a defined list of permitted settings and against standardized benchmarks.

Kubernetes SHOULD enforce the established rules in the cluster by connecting appropriate tools.
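One built-in option for such enforcement is the Pod Security admission controller, configured via namespace labels; external policy engines can be connected for finer-grained rule sets. The namespace name is illustrative:

```yaml
# Sketch: enforcing the "restricted" Pod Security Standard in a
# namespace via the built-in Pod Security admission controller.
apiVersion: v1
kind: Namespace
metadata:
  name: shop-frontend                         # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
```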

APP.4.4.A14 Use of Dedicated Nodes (H)

In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and each SHOULD only operate Pods assigned to the respective task.

Bastion nodes SHOULD handle all incoming and outgoing data connections from applications to other networks.

Management nodes SHOULD operate the Pods of the control plane and SHOULD only handle the data connections of the control plane.

If used, storage nodes SHOULD only operate the Pods of the persistent storage services in the cluster.
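Such task assignment can be sketched with node taints and matching tolerations plus a node selector; the taint, labels, and images below are hypothetical:

```yaml
# Sketch: a storage-service Pod that only schedules onto nodes carrying
# a hypothetical storage label and taint (e.g. role=storage:NoSchedule).
apiVersion: v1
kind: Pod
metadata:
  name: storage-service
spec:
  nodeSelector:
    node-role.example.com/storage: "true"   # hypothetical node label
  tolerations:
    - key: "role"                 # matches a hypothetical node taint
      operator: "Equal"
      value: "storage"
      effect: "NoSchedule"
  containers:
    - name: storage
      image: registry.example.com/storage:1.0   # hypothetical image
```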

APP.4.4.A15 Separation of Applications at the Node and Cluster Level (H)

Applications with very high protection needs SHOULD each use dedicated Kubernetes clusters or dedicated nodes that are not available for other applications.

APP.4.4.A16 Use of Operators (H)

The automation of operational tasks in operators SHOULD be used for particularly critical applications and the programs of the control plane.

APP.4.4.A17 Attestation of Nodes (H)

Nodes SHOULD send a cryptographically and, where possible, TPM-verified status report to the control plane. The control plane SHOULD only admit nodes to the cluster that have successfully demonstrated their integrity.

APP.4.4.A18 Use of Micro-Segmentation (H)

Pods SHOULD be able to communicate with each other within a Kubernetes namespace only via the necessary network ports. Rules SHOULD exist within the CNI that prevent all network connections within the Kubernetes namespace except those necessary for operation. These rules SHOULD precisely define the source and target of connections and for this SHOULD use at minimum one of the following criteria: service name, metadata (“labels”), Kubernetes service accounts, or certificate-based authentication.

All criteria that serve as labels for these connections SHOULD be secured so that they can only be modified by authorized persons and management services.
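A label-based segmentation rule within one namespace can be sketched as follows: only Pods labelled app=frontend may reach the database Pods, and only on the database port (labels, namespace, and port are illustrative):

```yaml
# Sketch: micro-segmentation inside a namespace using Pod labels as
# source and target criteria. All names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-from-frontend-only
  namespace: shop-frontend
spec:
  podSelector:
    matchLabels:
      app: db                     # target: database Pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # source: frontend Pods only
      ports:
        - protocol: TCP
          port: 5432
```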

APP.4.4.A19 High Availability of Kubernetes (H)

Operations SHOULD be structured so that in the event of a site failure, the clusters and thus the applications in the Pods can either continue running without interruption or restart at another site within a short time.

For recovery, all necessary configuration files, images, user data, network connections, and other resources required for operation, including the hardware required for operation, SHOULD already be available at that site.

For uninterrupted cluster operation, the control plane of Kubernetes, the cluster infrastructure applications, and the application Pods SHOULD be distributed across multiple fire compartments based on node location data so that the failure of a fire compartment does not lead to application failure.

APP.4.4.A20 Encrypted Data Storage for Pods (H)

The filesystems with the persistent data of the control plane (especially etcd) and the application services SHOULD be encrypted.

APP.4.4.A21 Regular Restart of Pods (H)

In the case of elevated risk of external interference and very high protection needs, Pods SHOULD be regularly stopped and restarted. No Pod SHOULD run for longer than 24 hours. The availability of applications in the Pod SHOULD be ensured in doing so.

Additional Information

Good to Know

Further information on threats and security measures in the area of containers can be found, among others, in the following publications: