
Kubernetes API Server

Plugin: go.d.plugin Module: k8s_apiserver

Overview

This collector monitors Kubernetes API Server health, performance, and request metrics.

It collects metrics from the kube-apiserver's /metrics endpoint, providing insights into:

  • Request rates, latencies, and error rates
  • Current inflight and long-running requests
  • Admission controller and webhook performance
  • etcd backend health and object counts
  • Controller work queue depths and latencies
  • Authentication and audit events
  • Go runtime and process metrics

The collector scrapes Prometheus-format metrics from the Kubernetes API Server's metrics endpoint. Authentication is typically done using a ServiceAccount bearer token.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

The ServiceAccount used must have permissions to access the /metrics endpoint. In most clusters, this requires cluster-admin or a custom ClusterRole with metrics access.


Default Behavior

Auto-Detection

When running inside a Kubernetes cluster, the collector attempts to connect to https://kubernetes.default.svc:443/metrics using the pod's ServiceAccount token.

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per Kubernetes API Server instance

These metrics refer to the entire monitored API server instance.

This scope has no labels.

Metrics:

Metric | Dimensions | Unit
k8s_apiserver.requests_total | requests | requests/s
k8s_apiserver.requests_dropped | dropped | requests/s
k8s_apiserver.requests_by_verb | a dimension per HTTP verb | requests/s
k8s_apiserver.requests_by_code | a dimension per HTTP status code | requests/s
k8s_apiserver.requests_by_resource | a dimension per Kubernetes resource | requests/s
k8s_apiserver.request_latency | p50, p90, p99 | milliseconds
k8s_apiserver.response_size | p50, p90, p99 | bytes
k8s_apiserver.inflight_requests | mutating, read_only | requests
k8s_apiserver.longrunning_requests | longrunning | requests
k8s_apiserver.rest_client_requests_by_code | a dimension per HTTP status code | requests/s
k8s_apiserver.rest_client_requests_by_method | a dimension per HTTP method | requests/s
k8s_apiserver.rest_client_latency | p50, p90, p99 | milliseconds
k8s_apiserver.admission_step_latency | validate, admit | milliseconds
k8s_apiserver.etcd_object_counts | a dimension per resource type | objects
k8s_apiserver.audit_events | events, rejected | events/s
k8s_apiserver.authentication_requests | authenticated | requests/s
k8s_apiserver.goroutines | goroutines | goroutines
k8s_apiserver.threads | threads | threads
k8s_apiserver.process_memory | resident, virtual | bytes
k8s_apiserver.heap_memory | alloc, inuse, stack | bytes
k8s_apiserver.gc_duration | min, p25, p50, p75, max | seconds
k8s_apiserver.open_fds | open, max | file descriptors
k8s_apiserver.cpu_usage | cpu | seconds/s

Per workqueue

These metrics refer to controller work queues.

Labels:

Label | Description
controller | Controller name

Metrics:

Metric | Dimensions | Unit
k8s_apiserver.workqueue_depth | depth | items
k8s_apiserver.workqueue_latency | p50, p90, p99 | microseconds
k8s_apiserver.workqueue_adds | adds, retries | items/s
k8s_apiserver.workqueue_duration | p50, p90, p99 | microseconds

Per admission controller

These metrics refer to admission controllers.

Labels:

Label | Description
name | Admission controller name

Metrics:

Metric | Dimensions | Unit
k8s_apiserver.admission_controller_latency | 5ms, 25ms, 100ms, 500ms, 1s, 2.5s, +Inf | events/s

Per admission webhook

These metrics refer to admission webhooks.

Labels:

Label | Description
name | Webhook name

Metrics:

Metric | Dimensions | Unit
k8s_apiserver.admission_webhook_latency | 5ms, 25ms, 100ms, 500ms, 1s, 2.5s, +Inf | events/s

Alerts

The following alerts are available:

Alert name | On metric | Description
k8s_apiserver_request_errors | k8s_apiserver.requests_by_code | high rate of API server request errors (5xx responses)
k8s_apiserver_latency | k8s_apiserver.request_latency | API server request latency is high

Setup

You can configure the k8s_apiserver collector in two ways:

Method | Best for | How to
UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for k8s_apiserver, then click + to add a job.
File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/k8s_apiserver.conf and add a job.
Important: UI configuration requires a paid Netdata Cloud plan.

Prerequisites

ServiceAccount with metrics access

The Netdata pod must have a ServiceAccount with permissions to read metrics from the API server. You can create a ClusterRole and ClusterRoleBinding for this purpose.
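The exact RBAC objects depend on your cluster, but a minimal sketch might look like the following. The `netdata-metrics-reader` name is illustrative, and the ServiceAccount name and namespace in the binding are assumptions you must adjust to match your Netdata deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: netdata-metrics-reader   # illustrative name
rules:
  # /metrics is a non-resource URL, so it needs a nonResourceURLs rule
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netdata-metrics-reader   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: netdata-metrics-reader
subjects:
  - kind: ServiceAccount
    name: netdata       # assumption: the ServiceAccount the Netdata pod runs as
    namespace: default  # assumption: adjust to the namespace where Netdata runs
```

Apply with kubectl apply -f, after adjusting the subject to the ServiceAccount your Netdata pod actually uses.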

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

Config options
Group | Option | Description | Default | Required
Collection | update_every | Data collection interval (seconds). | 1 | no
Collection | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no
Target | url | Target endpoint URL. | https://kubernetes.default.svc:443/metrics | yes
Target | timeout | HTTP request timeout (seconds). | 2 | no
HTTP Auth | username | Username for Basic HTTP authentication. | | no
HTTP Auth | password | Password for Basic HTTP authentication. | | no
HTTP Auth | bearer_token_file | Path to a file containing a bearer token (used for Authorization: Bearer). | /var/run/secrets/kubernetes.io/serviceaccount/token | no
TLS | tls_skip_verify | Skip TLS certificate and hostname verification (insecure). | no | no
TLS | tls_ca | Path to CA bundle used to validate the server certificate. | /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | no
TLS | tls_cert | Path to client TLS certificate (for mTLS). | | no
TLS | tls_key | Path to client TLS private key (for mTLS). | | no
Proxy | proxy_url | HTTP proxy URL. | | no
Proxy | proxy_username | Username for proxy Basic HTTP authentication. | | no
Proxy | proxy_password | Password for proxy Basic HTTP authentication. | | no
Request | method | HTTP method to use. | GET | no
Request | body | Request body (e.g., for POST/PUT). | | no
Request | headers | Additional HTTP headers (one per line as key: value). | | no
Request | not_follow_redirects | Do not follow HTTP redirects. | no | no
Request | force_http2 | Force HTTP/2 (including h2c over TCP). | no | no
Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no
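As a sketch of how several of these options combine in a single job (the values below are illustrative, not recommendations):

```yaml
jobs:
  - name: apiserver
    url: https://kubernetes.default.svc:443/metrics
    timeout: 5                    # seconds; default is 2
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_ca: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    headers:
      Accept: text/plain          # illustrative extra header
```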

via UI

Configure the k8s_apiserver collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the k8s_apiserver data-collection job to run, then click Configure this node. That node will run the data collection.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type k8s_apiserver (or scroll the list) to locate the k8s_apiserver collector.
  5. Click the + next to the k8s_apiserver collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/k8s_apiserver.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/k8s_apiserver.conf
Examples

In-cluster (default)

Default configuration when running inside a Kubernetes cluster.

jobs:
  - name: local
    url: https://kubernetes.default.svc:443/metrics
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_ca: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

External access with kubectl proxy

Access API server metrics via kubectl proxy running on localhost.

Config
jobs:
  - name: via-proxy
    url: http://127.0.0.1:8001/metrics

Direct access with token

Direct access to API server with a bearer token.

Config
jobs:
  - name: direct
    url: https://api.example.com:6443/metrics
    bearer_token_file: /path/to/token
    tls_skip_verify: yes
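If the API server requires client certificates, a variant of the same job can use mTLS via tls_cert and tls_key instead of skipping verification. This is a sketch; all paths are placeholders:

```yaml
jobs:
  - name: direct-mtls
    url: https://api.example.com:6443/metrics
    tls_ca: /path/to/ca.crt       # validate the server instead of skipping verification
    tls_cert: /path/to/client.crt
    tls_key: /path/to/client.key
```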

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the k8s_apiserver collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn't working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
  • Switch to the netdata user.

    sudo -u netdata -s
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m k8s_apiserver

    To debug a specific job:

    ./go.d.plugin -d -m k8s_apiserver -j jobName

Getting Logs

If you're encountering problems with the k8s_apiserver collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep k8s_apiserver

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep k8s_apiserver /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:

docker logs netdata 2>&1 | grep k8s_apiserver

Connection refused

The API server may not be accessible. Check that:

  1. The URL is correct
  2. Network policies allow access
  3. The ServiceAccount has proper RBAC permissions

401 Unauthorized

Authentication failed. Verify:

  1. The bearer token file exists and is readable
  2. The token is valid and not expired
  3. The ServiceAccount has metrics access permissions

Certificate errors

TLS verification failed. Options:

  1. Provide the correct CA certificate path in tls_ca
  2. Set tls_skip_verify: yes (not recommended for production)
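For option 1, when running in-cluster, the CA bundle mounted into the pod by the ServiceAccount is typically the right one. A minimal sketch:

```yaml
jobs:
  - name: local
    url: https://kubernetes.default.svc:443/metrics
    tls_ca: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt  # in-cluster CA bundle
```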
