Pavel Rykov

July 31, 2023 ・ Kubernetes

Kubernetes Metrics Server: A comprehensive guide

The advancement in computing has allowed for the growth and development of numerous technologies that enable businesses to improve their efficiency and effectiveness. Among these technologies is Kubernetes, an open-source container orchestration system for automating deployment, scaling, and management of containerized applications.

Within the Kubernetes ecosystem, one component that plays a vital role in monitoring and scaling your applications is the Metrics Server. This article will provide a comprehensive guide to the Kubernetes Metrics Server, including its background, installation, configuration, and usage.

What is Kubernetes Metrics Server?

Metrics Server is a scalable, efficient source of container resource metrics. These metrics, such as CPU usage and memory consumption, are necessary for features like the Horizontal Pod Autoscaler (HPA) and the Kubernetes dashboard for monitoring resource usage.

Read more about optimizing performance of Kubernetes in Kubernetes Performance Tuning article.

Before the Metrics Server, Kubernetes used Heapster for gathering and exposing metrics. Heapster, however, was deprecated because of architectural limitations and its reliance on external storage backends. Kubernetes introduced the Metrics Server as a more efficient, lightweight replacement. It collects metrics from the kubelet running on each node, aggregates them, and stores them in memory.

Installation of Metrics Server

Before starting the installation, make sure that you have a running Kubernetes cluster. If you do not have one, you can set it up using various services like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or tools like minikube for local development.
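For local experimentation, minikube is a quick way to get a cluster running (minikube even ships a bundled metrics-server addon, though this guide installs the Metrics Server manually):

```shell
# Start a local single-node cluster (requires minikube and a container runtime)
minikube start

# Verify the cluster is reachable before installing anything
kubectl cluster-info
kubectl get nodes
```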

To install the Metrics Server, you can use Helm, a package manager for Kubernetes, or you can apply the necessary YAML files directly. In this guide, we will apply the YAML file directly:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
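After applying the manifest, you can confirm that the deployment is healthy and that the Metrics API has been registered. The resource names below match the defaults in the official manifest:

```shell
# The Metrics Server runs as a Deployment in the kube-system namespace
kubectl get deployment metrics-server -n kube-system

# The APIService should report Available=True once metrics start flowing
kubectl get apiservice v1beta1.metrics.k8s.io
```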


The Metrics Server collects data from the Kubelet's summary API on each node. However, depending on your cluster's configuration, additional settings may be needed. You can configure the Metrics Server by modifying the components.yaml file before applying it.

For example, if your Kubernetes nodes use self-signed certificates, the Metrics Server must skip certificate validation to collect data. To do this, add the --kubelet-insecure-tls argument to the metrics-server container in the Deployment:

      containers:
      - name: metrics-server
        args:
          - --kubelet-insecure-tls


Using the Metrics Server

Once the Metrics Server is up and running, it will start collecting and storing metrics data. Kubernetes components can now access this data through the Metrics API.
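To see the raw data behind these abstractions, you can query the Metrics API directly through the API server:

```shell
# Node metrics for the whole cluster
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Pod metrics in a specific namespace (kube-system here as an example)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"
```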

You can view node and pod metrics using the kubectl top command:

kubectl top nodes
kubectl top pods

These commands will show CPU and memory usage for your nodes and pods.
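kubectl top also accepts a few useful flags, for example sorting by usage or breaking pods down by container:

```shell
# Sort nodes by CPU usage
kubectl top nodes --sort-by=cpu

# Show per-container usage for pods in a namespace
kubectl top pods -n kube-system --containers

# Sort pods by memory consumption
kubectl top pods --sort-by=memory
```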


The Metrics Server is also required by the Horizontal Pod Autoscaler (HPA), which automatically scales the number of pods in a deployment, replica set, or replication controller based on observed CPU utilization (and, with the autoscaling/v2 API, memory and custom metrics).
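As a quick sketch, you could create an HPA for a hypothetical deployment named web (the deployment name and thresholds below are placeholders, not values from this guide):

```shell
# Scale the "web" deployment between 2 and 10 replicas,
# targeting 50% average CPU utilization
kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=10

# Watch the HPA pick up metrics from the Metrics Server
kubectl get hpa web --watch
```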


Conclusion

The Metrics Server is an essential component of the Kubernetes ecosystem. It provides the metrics required for scaling and monitoring applications, replacing the deprecated Heapster with a more efficient, scalable solution. By understanding the installation, configuration, and usage of the Metrics Server, you are now better equipped to monitor and optimize your Kubernetes clusters.

Remember, the complexity of Kubernetes demands continuous learning and adaptation. The Kubernetes community is continually improving, so stay updated by frequently visiting the Kubernetes GitHub repository and the official Kubernetes documentation.

  • Kubernetes
  • Basics
  • Infrastructure