At Netdata, we've built Kubernetes monitoring tools that add visibility without complexity while also helping you actively troubleshoot anomalies or outages. This guide walks you through each of the visualizations and offers best practices on how to use them to start Kubernetes monitoring in a matter of minutes, not hours or days.
Netdata's Kubernetes monitoring solution uses a handful of complementary tools and collectors for peeling back the many complex layers of a Kubernetes cluster, entirely for free. These methods work together to give you every metric you need to troubleshoot performance or availability issues across your Kubernetes infrastructure.
While Kubernetes (k8s) might simplify the way you deploy, scale, and load-balance your applications, not all clusters come with "batteries included" when it comes to monitoring. Doubly so for a monitoring stack that helps you actively troubleshoot issues with your cluster.
Some k8s providers, like GKE (Google Kubernetes Engine), do deploy clusters bundled with monitoring capabilities, such as Google Stackdriver Monitoring. However, these pre-configured solutions might not offer the depth of metrics, customization, or integration with your preferred alerting methods.
Without this visibility, it's like you built an entire house and then smashed your way through the finished walls to add windows.
In this tutorial, you'll learn how to navigate Netdata's Kubernetes monitoring features, using robot-shop as an example deployment. Deploying robot-shop is purely optional. You can also follow along with your own Kubernetes deployment if you choose. While the metrics might be different, the navigation and best practices are the same for every cluster.
To follow this tutorial, you need:
- A free Netdata Cloud account. Sign up if you don't have one already.
- A working cluster running Kubernetes v1.9 or newer, with a Netdata deployment and claimed parent/child nodes. See our Kubernetes deployment process for details on deployment and claiming.
- The kubectl command-line tool, within one minor version of your cluster, on an administrative system.
- The Helm package manager v3.0.0 or newer on the same administrative system.
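Before proceeding, it's worth confirming your tooling meets these requirements. A minimal sketch, assuming kubectl and helm are already on your PATH and your kubeconfig points at the cluster:

```shell
# Check client versions against the prerequisites above.
kubectl version --client   # should be within one minor version of the cluster
helm version --short       # should report v3.0.0 or newer

# Confirm the cluster is reachable and running Kubernetes v1.9 or newer.
kubectl get nodes -o wide
```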
robot-shop demo (optional)
Begin by downloading the robot-shop code and using helm to create a new deployment. Once it's up, kubectl get pods shows both the Netdata and robot-shop deployments.
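The deployment steps can be sketched as follows. The repository URL and chart path come from the upstream robot-shop project and may change over time, so treat them as assumptions:

```shell
# Clone Stan's Robot Shop (assumed upstream repository).
git clone https://github.com/instana/robot-shop.git
cd robot-shop/K8s/helm

# Install the chart into its own namespace.
kubectl create namespace robot-shop
helm install robot-shop --namespace robot-shop .

# Both the Netdata and robot-shop pods should now be listed.
kubectl get pods --all-namespaces
```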
The Netdata Helm chart deploys and enables everything you need for monitoring Kubernetes on every layer. Once you deploy Netdata and claim your cluster's nodes, you're ready to check out the visualizations with zero configuration.
To get started, sign in to your Netdata Cloud account, then head over to the War Room you claimed your cluster to (General by default).
Netdata Cloud is already visualizing your Kubernetes metrics, streamed in real-time from each node, in the Overview:
Let's walk through monitoring each layer of a Kubernetes cluster using the Overview as our framework.
The gauges and time-series charts you see right away in the Overview show aggregated metrics from every node in your cluster.
For example, the apps.cpu chart (in the Applications menu item) visualizes the CPU utilization of the various applications and services running on each of the nodes in your cluster. The X Nodes dropdown shows which nodes contribute to the chart and links to jump to a single-node dashboard for further investigation.
For example, the chart above shows a spike in CPU utilization from rabbitmq every minute or so, on top of a baseline CPU utilization of 10-15% across the cluster.
Click on the Kubernetes xxxxxxx... section to jump down to Netdata Cloud's unique Kubernetes visualizations, which show real-time resource utilization metrics from your Kubernetes pods and containers.
The first visualization is the health map, which places each container into its own box, then varies the intensity of their color to visualize the resource utilization. By default, the health map shows the average CPU utilization as a percentage of the configured limit for every container in your cluster.
Let's explore the most colorful box by hovering over it.
The Context tab shows rabbitmq-5bb66bb6c9-6xr5b as the container's image name, which means this container is running a RabbitMQ workload.
Click the Metrics tab to see real-time metrics from that container. Unsurprisingly, it shows a spike in CPU utilization at regular intervals.
Beneath the health map is a variety of time-series charts that help you visualize resource utilization over time, which is useful for targeted troubleshooting.
The default is to display metrics grouped by the k8s_namespace label, which shows resource utilization based on your Kubernetes namespaces. Each composite chart has a definition bar for complete customization. For example, grouping the top chart by k8s_container_name instead reveals new information.
Netdata has a service discovery plugin, which discovers and creates configuration files for compatible services and any endpoints covered by our generic Prometheus collector. Netdata uses these files to collect metrics from any compatible application as they run inside of a pod. Service discovery happens without manual intervention as pods are created, destroyed, or moved between nodes.
Service metrics show up on the Overview as well, beneath the Kubernetes section, and are labeled according to the service in question. For example, the RabbitMQ section has numerous charts from the rabbitmq collector.
The robot-shop cluster runs more supported services, such as MySQL, that are not visible with zero configuration. This is usually because a service runs on a non-default port, uses a non-default name, or requires a password. Read up on configuring service discovery to collect more service metrics.
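As one hedged example of the kind of setup involved: Netdata's MySQL collector needs a database user it can authenticate as. Assuming a MySQL pod named mysql-0 in the robot-shop namespace (both names hypothetical), you could create a minimal-privilege netdata user like this:

```shell
# Create a read-only user for the Netdata MySQL collector
# (pod and namespace names are hypothetical; adjust for your cluster).
kubectl exec -it mysql-0 -n robot-shop -- \
  mysql -u root -p -e "CREATE USER 'netdata'@'%'; \
    GRANT USAGE, REPLICATION CLIENT, PROCESS ON *.* TO 'netdata'@'%'; \
    FLUSH PRIVILEGES;"
```

Once the collector can authenticate, service discovery can pick the service up; see Netdata's service discovery documentation for the exact configuration keys.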
Service metrics are essential to infrastructure monitoring, as they're the best indicator of the end-user experience, and key signals for troubleshooting anomalies or issues.
Netdata also automatically collects metrics from two essential Kubernetes processes.
The k8s kubelet section visualizes metrics from the Kubernetes agent responsible for managing every pod on a given node. This also happens without any configuration thanks to the kubelet collector.
Monitoring each node's kubelet can be invaluable when diagnosing issues with your Kubernetes cluster. For example, you can see if the number of running containers/pods has dropped, which could signal a fault or crash in a particular Kubernetes service or deployment (see kubectl get services or kubectl get deployments for more details). If the number of pods increases, it may be because of something more benign, like another team member scaling up a deployment.
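The same checks are available from the command line; a quick sketch, assuming kubectl access to the cluster:

```shell
# Compare desired vs. actual pod counts for each deployment
# (the READY column shows running/desired replicas).
kubectl get deployments --all-namespaces
kubectl get services --all-namespaces

# Surface pods that are not running, plus recent cluster events.
kubectl get pods --all-namespaces --field-selector=status.phase!=Running
kubectl get events --sort-by=.metadata.creationTimestamp
```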
You can also view charts for the Kubelet API server, the volume of runtime/Docker operations by type, configuration-related errors, and the actual vs. desired numbers of volumes, plus a lot more.
The k8s kube-proxy section displays metrics about the network proxy that runs on each node in your Kubernetes cluster. kube-proxy lets pods communicate with each other and accept sessions from outside your cluster. Its metrics are collected by the kube-proxy collector.
With Netdata, you can monitor how often your k8s proxies are syncing proxy rules between nodes. Dramatic changes in these figures could indicate an anomaly in your cluster that's worthy of further investigation.
After reading this guide, you should now be able to monitor any Kubernetes cluster with Netdata, including nodes, pods, containers, services, and more.
With the health map, time-series charts, and the ability to drill down into individual nodes, you can see hundreds of
per-second metrics with zero configuration and less time remembering all the
kubectl options. Netdata moves with your
cluster, automatically picking up new nodes or services as your infrastructure scales. And it's entirely free for
clusters of all sizes.