
Docker discovery

Kind: docker

Overview

Netdata can automatically discover running Docker containers on the local Docker daemon and generate collector jobs for the services running inside them. The discoverer queries the Docker API on a fixed interval, builds one target per container port, and applies your services: rules to render collector job YAML — typically picking the right go.d module from the container image (nginx, postgres, redis, …).

This page covers Docker-specific setup. For the broader Service Discovery model and the shared template-helper reference, see Service Discovery.

How it works

Each discovery cycle, the discoverer:

  1. Calls ContainerList on the Docker API at the configured address.
  2. Builds one target per (container, network, port) triple for every container that has at least one network and at least one published port. Containers running in network: host mode are intentionally skipped — those are picked up by the net_listeners discoverer instead.
  3. Exposes target fields: .Name, .Image, .Command, .Labels, .PrivatePort, .PublicPort, .PublicPortIP, .PortProtocol, .NetworkMode, .NetworkDriver, .IPAddress, .Address (the convenience IPAddress:PrivatePort).
  4. Runs the services: rules against each target. The default stock conf carries curated rules for ~40 popular images (nginx, postgres, redis, rabbitmq, etc.) keyed on .Image patterns.
  5. Reconciles disappeared containers — when a container exits, its target is removed and the corresponding collector job stops on the next reconcile.
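The cycle above can be sketched in miniature. This is a hypothetical, simplified Python model for illustration only; the real discoverer is part of go.d and talks to the Docker API, and all names below are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Target:
    name: str           # -> .Name
    image: str          # -> .Image
    ip: str             # -> .IPAddress (IP on the matched network)
    private_port: str   # -> .PrivatePort

    @property
    def address(self) -> str:
        # .Address is the convenience "IPAddress:PrivatePort"
        return f"{self.ip}:{self.private_port}"

def build_targets(containers):
    """One target per (container, network, port); host networking is skipped."""
    targets = []
    for c in containers:
        if c["network_mode"] == "host":
            continue  # left to the net_listeners discoverer
        for ip in c["networks"]:
            for port in c["ports"]:
                targets.append(Target(c["name"], c["image"], ip, port))
    return targets

# Illustrative container list shaped like the fields discovery actually uses.
containers = [
    {"name": "web", "image": "nginx:1.25", "network_mode": "bridge",
     "networks": ["172.17.0.2"], "ports": ["80"]},
    {"name": "agent", "image": "netdata/netdata", "network_mode": "host",
     "networks": [], "ports": []},
]
targets = build_targets(containers)
```

A container attached to two networks with three ports would yield six targets; when a container disappears from the list, its targets (and the jobs rendered from them) are dropped on the next reconcile.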

Limitations

  • Containers in network: host mode are not produced as Docker targets. Configure the net_listeners discoverer to pick them up via the host's process table.
  • Only TCP ports are typically useful; the stock conf's first rule explicitly skips non-TCP, missing-port, and IPv6-mapped entries.
  • Ports do not need to be published to the host. A container that exposes ports only inside a Docker network, without a -p mapping, still produces a target via its private port and network IP; for such a target .PublicPort and .PublicPortIP are simply empty.
  • The discoverer reads the live container list; it does not inspect image manifests, healthcheck output, or process tables inside the container. Anything beyond labels/image/ports must be inferred via service rules.
  • Only the local Docker daemon is supported (Unix socket or TCP). There is no docker-swarm or remote-cluster discovery mode.

Setup

You can configure the docker discoverer in two ways:

| Method | Best for | How to |
| --- | --- | --- |
| UI | Fast setup without editing files | Go to Collectors -> go.d -> ServiceDiscovery -> docker, then add a discovery pipeline. |
| File | File-based configuration or automation | Edit /etc/netdata/go.d/sd/docker.conf and define the discoverer: and services: blocks. |

Prerequisites

Access to the Docker socket

The Netdata Agent must be able to reach the Docker daemon. The default address is unix:///var/run/docker.sock. If you run Netdata in a container, mount the socket: -v /var/run/docker.sock:/var/run/docker.sock:ro. The Netdata user (or the container) must have read access to the socket.

Discovery is enabled by default

The stock conf at /etc/netdata/go.d/sd/docker.conf ships with disabled: no and a curated set of services: rules covering ~40 popular images. To turn discovery off, set disabled: yes at the top of the file.

Configuration

Options

The configuration file has two top-level blocks: discoverer: (the options below) and services: (rules that turn discovered containers into collector jobs — see Service Rules).

After editing the file, restart the Netdata Agent to load the updated discovery pipeline.

| Option | Description | Default | Required |
| --- | --- | --- | --- |
| address | Docker daemon address. | unix:///var/run/docker.sock | no |
| timeout | Maximum time to wait for a Docker API response (per request). | 2s | no |
address

Supports both Unix-socket (unix:///var/run/docker.sock) and TCP (tcp://hostname:2375) endpoints.

If address is not set, Netdata honors the DOCKER_HOST environment variable when present.
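The precedence can be illustrated with a small sketch. resolve_address is a hypothetical helper, not a Netdata API; it only mirrors the documented order (explicit option, then DOCKER_HOST, then the default socket):

```python
DEFAULT_ADDRESS = "unix:///var/run/docker.sock"

def resolve_address(configured, env):
    """Explicit `address` wins; otherwise DOCKER_HOST; otherwise the default socket."""
    if configured:
        return configured
    return env.get("DOCKER_HOST", DEFAULT_ADDRESS)
```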

via UI

  1. Open the Netdata Dynamic Configuration UI.
  2. Go to Collectors -> go.d -> ServiceDiscovery -> docker.
  3. Add a new discovery pipeline and give it a name.
  4. Fill in the discoverer-specific settings and the service rules.
  5. Save the discovery pipeline.

via File

Define the discovery pipeline in /etc/netdata/go.d/sd/docker.conf.

The file has two top-level blocks: discoverer: (the options above) and services: (rules that turn discovered targets into collector jobs — see Service Rules).

After editing the file, restart the Netdata Agent to load the updated discovery pipeline.

Examples
Default (Unix socket)

Use the default local Docker socket and the stock services rules.

```yaml
disabled: no
discoverer:
  docker:
    address: unix:///var/run/docker.sock
services:
  # See the stock conf for the full curated rule set.
  - id: skip
    match: |
      {{ or (eq .NetworkMode "host") (not (eq .PortProtocol "tcp")) (empty .PrivatePort) }}
  - id: nginx
    match: '{{ match "sp" .Image "nginx nginx:*" }}'
    config_template: |
      name: docker_{{.Name}}
      url: http://{{.Address}}/stub_status
```

Remote daemon over TCP

Point the discoverer at a remote Docker daemon. TLS is not yet wired into the discoverer; either expose the daemon on a trusted internal network or use a stunnel/socat proxy.

```yaml
disabled: no
discoverer:
  docker:
    address: tcp://docker.internal:2375
    timeout: 5s
services:
  - id: skip
    match: '{{ or (eq .NetworkMode "host") (not (eq .PortProtocol "tcp")) (empty .PrivatePort) }}'
  - id: redis
    match: '{{ match "sp" .Image "redis redis:* */redis */redis:*" }}'
    config_template: |
      name: docker_{{.Name}}
      address: redis://@{{.Address}}
```

Service Rules

A services: rule turns each discovered container target into one or more collector jobs. Most rules match on .Image, using the match "sp" simple-pattern helper to cover the typical four-form family (image, image:*, */image, */image:*); some also gate on .PrivatePort, and a few use .Labels to honor user intent.

The shared rule model — function reference (match, glob, sprig, toYaml), config_template rendering rules, and the missingkey=error failure semantics — lives on the Service Discovery hub page. The notes below are Docker-specific.

How rules are evaluated

Quick reference — see Rule evaluation semantics on the hub page for the full model.

  • The first rule in the stock conf is a skip rule: it drops targets that are unreachable or uninteresting (host networking, non-TCP, missing port, IPv6-mapped public IP). Keep it as the first rule; every subsequent rule assumes it has filtered out the noise.
  • Match on .Image with match "sp" — The simple-patterns matcher (match "sp" .Image "nginx nginx:* */nginx */nginx:*") is the idiomatic way to handle the four-form image family (bare, tagged, namespaced, namespaced-tagged). Use glob if you only need shell-style globbing without the simple-patterns engine.
  • Module inference from rule id — For Docker, set id: <module-name> (e.g. id: nginx) so the rendered job inherits the module name automatically. Use a different id only when you also include module: explicitly in the template.
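As a mental model for how match "sp" evaluates the four-form family, here is a minimal Python approximation of simple patterns (space-separated glob terms, optional leading "!" for negation, first matching term wins). It is a sketch of the documented behaviour, not the actual matcher:

```python
import fnmatch

def sp_match(value, patterns):
    """Approximate Netdata simple patterns: the first matching term decides."""
    for term in patterns.split():
        negative = term.startswith("!")
        if negative:
            term = term[1:]
        if fnmatch.fnmatchcase(value, term):
            return not negative
    return False

IMAGE_FAMILY = "nginx nginx:* */nginx */nginx:*"
```

With this family, bare (nginx), tagged (nginx:1.25), namespaced (myorg/nginx), and namespaced-tagged (myorg/nginx:1.25) image names all match, while lookalikes such as nginx-exporter do not.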

Template Variables

Available inside both match expressions and config_template bodies for Docker targets.

| Variable | Type | Description |
| --- | --- | --- |
| .Name | string | Container name (without the leading slash). |
| .Image | string | Container image as reported by Docker (e.g. nginx:1.25, myorg/redis:6). |
| .Command | string | Container command line. |
| .Labels | map | All container labels. Use index .Labels "key" or hasKey .Labels "key" to read individual entries. |
| .IPAddress | string | IP of the container on the matched network. |
| .Address | string | Convenience IPAddress:PrivatePort; the canonical address used in most stock rule templates. |
| .PrivatePort | string | Container-side port. |
| .PublicPort | string | Host-side port (empty when the container does not publish a host mapping for this port). |
| .PublicPortIP | string | Host IP the container port is bound to (empty when there is no public mapping). |
| .PortProtocol | string | Port protocol: tcp or udp. Stock rules typically gate on eq .PortProtocol "tcp". |
| .NetworkMode | string | Container network mode (bridge, host, overlay, custom network names, …). The stock skip rule drops host mode. |
| .NetworkDriver | string | Driver of the matched network. |
| .ID | string | Container ID (full hex). |

Examples

Each example shows one or more entries from the services: array. Order matters — see How rules are evaluated.

Skip rule for unreachable / uninteresting targets

The first rule in the stock conf. Drops host networking (those are local-listener targets), non-TCP ports, ports without a private side, and IPv6-mapped public IPs. Place it first.

```yaml
- id: skip
  match: |
    {{ $netNOK := eq .NetworkMode "host" -}}
    {{ $protoNOK := not (eq .PortProtocol "tcp") -}}
    {{ $portNOK := empty .PrivatePort -}}
    {{ $addrNOK := or (empty .IPAddress) (glob .PublicPortIP "*:*") -}}
    {{ or $netNOK $protoNOK $portNOK $addrNOK }}
```
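The same predicate transliterated to Python, assuming a target represented as a plain dict whose keys mirror the template variables (a readability sketch, not Netdata code):

```python
def should_skip(t):
    net_nok = t["NetworkMode"] == "host"
    proto_nok = t["PortProtocol"] != "tcp"
    port_nok = not t["PrivatePort"]
    # glob .PublicPortIP "*:*" flags IPv6-mapped public IPs (they contain a colon)
    addr_nok = (not t["IPAddress"]) or (":" in t["PublicPortIP"])
    return net_nok or proto_nok or port_nok or addr_nok

# A reachable bridge-network target that should survive the skip rule.
reachable = {"NetworkMode": "bridge", "PortProtocol": "tcp", "PrivatePort": "80",
             "IPAddress": "172.17.0.2", "PublicPortIP": "0.0.0.0"}
```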

Nginx — module inferred from rule id

Match the four common image-name forms with match "sp". id: nginx makes the module name infer to nginx automatically — no module: line needed in the template.

```yaml
- id: nginx
  match: '{{ match "sp" .Image "nginx nginx:* */nginx */nginx:*" }}'
  config_template: |
    - name: docker_{{.Name}}
      url: http://{{.Address}}/stub_status
    - name: docker_{{.Name}}
      url: http://{{.Address}}/basic_status
    - name: docker_{{.Name}}
      url: http://{{.Address}}/nginx_status
    - name: docker_{{.Name}}
      url: http://{{.Address}}/status
```

Postgres — explicit module override

When the rule id is something other than the target module name, set module: explicitly inside the rendered job. Here the id merely describes the rule, and module: postgres names the collector:

```yaml
- id: pg-by-port-or-image
  match: '{{ or (eq .PrivatePort "5432") (match "sp" .Image "postgres postgres:* */postgres */postgres:* */postgresql */postgresql:*") }}'
  config_template: |
    module: postgres
    name: docker_{{.Name}}
    dsn: postgres://netdata:postgres@{{.Address}}/postgres
```

Label-driven custom matching

Use container labels to override behaviour without changing rules — e.g. opt a container in or out of monitoring, or pick a non-default endpoint. The example below requires the operator to set the label netdata.go.d/module=mymodule on the container.

```yaml
- id: label-routed
  match: '{{ and (hasKey .Labels "netdata.go.d/module") (eq (index .Labels "netdata.go.d/module") "mymodule") }}'
  config_template: |
    module: mymodule
    name: docker_{{.Name}}
    url: http://{{.Address}}/metrics
```
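The label check reduces to a dictionary lookup. A Python sketch of the rule's logic, with the label key taken from the example above (the hasKey/index pair collapses to a single get):

```python
MODULE_LABEL = "netdata.go.d/module"

def matches_label_rule(labels, wanted="mymodule"):
    """Equivalent of: and (hasKey .Labels KEY) (eq (index .Labels KEY) wanted)."""
    return labels.get(MODULE_LABEL) == wanted
```

An operator opts a container in by starting it with, e.g., docker run --label netdata.go.d/module=mymodule.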

Verify discovery worked

After enabling the discoverer, confirm it is finding containers and producing jobs.

Confirm containers are being listed

Watch the agent log for Docker discoverer messages. With systemd:

```bash
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep "discoverer=docker"
```

On a healthy daemon you should see the agent successfully calling ContainerList. If the log shows error on creating docker client or permission errors, the agent cannot reach /var/run/docker.sock.

Confirm jobs are being created

In the Netdata UI go to Collectors -> go.d -> <module> for whatever modules your service rules target (nginx, redis, postgres, …) — each container that matched a rule should appear as a docker_<container-name> job.

Confirm metrics are being collected

If a job was created but no charts appear, the rendered config_template most likely produced a config that the collector module rejected (wrong DSN, unreachable URL, missing credential). Check the collector's log.

Troubleshooting

Permission denied on docker.sock

The Netdata user must be able to read the Docker socket. On a typical Linux host:

```bash
sudo usermod -aG docker netdata
sudo systemctl restart netdata
```

In containers, mount the socket read-only and verify the file is readable from inside.

No targets discovered for containers in host networking

host-mode containers are intentionally skipped by the Docker discoverer. Enable the net_listeners discoverer instead — it picks up locally-listening processes, which includes host-mode containers.

Wrong module picked for an image

Stock rules match on .Image patterns. Custom forks or in-house image names won't match. Add a rule above the stock catch-alls keyed on your own image name (match "sp" .Image "myorg/nginx myorg/nginx:*") or use a .Labels-driven rule.

Generated jobs fail to start

Common causes: the rendered URL is not reachable from the agent (different network, firewall); credentials baked into the template are wrong; the module's port is not the one Docker reported. Check the rendered job YAML in the agent's debug output.

