
Local listening processes discovery

Kind: net_listeners

Overview

Netdata can automatically discover services running on the local host by inspecting the kernel's listening sockets. This is the discoverer that powers Netdata's "zero-config" experience for most database, cache, web-server, and exporter monitoring — if the service is listening on a known port and runs as a recognisable process, Netdata picks it up and starts collecting metrics without you writing any configuration.

This page covers net_listeners-specific setup. For the broader Service Discovery model and the shared template-helper reference, see Service Discovery.

How it works

Each discovery cycle, the discoverer:

  1. Reads the kernel's TCP/UDP listening-socket table via the bundled local-listeners helper (which reads /proc on Linux, netstat-equivalents elsewhere).
  2. Builds one target per (protocol, IP, port, process) tuple, exposing .Protocol, .IPAddress, .Port, .Comm (process basename), .Cmdline (full command line), and .Address (the convenience IPAddress:Port).
  3. Caches each target for 10 minutes so a brief disappearance (process restart) does not churn collector jobs.
  4. Runs the services: rules against each target. The stock conf carries ~100 curated rules covering the bulk of go.d modules (databases, web servers, caches, message queues, exporters).
  5. Reconciles disappeared listeners — when a process stops listening, its target is removed and the corresponding collector job stops on the next reconcile.
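For illustration, a Redis server listening on the loopback interface would surface as a target shaped roughly like this (field names from step 2; the concrete values below are hypothetical, not actual helper output):

```yaml
# Hypothetical net_listeners target for a local Redis instance
Protocol: TCP
IPAddress: 127.0.0.1
Port: "6379"
Comm: redis-server                            # process basename
Cmdline: /usr/bin/redis-server 127.0.0.1:6379 # full command line
Address: 127.0.0.1:6379                       # convenience IPAddress:Port
```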

Limitations

  • Only local listeners are visible. Discovering services on other hosts requires another discoverer (http, snmp, k8s, or a custom one).
  • The discoverer needs to read kernel socket information. On Linux this works for processes owned by other users only when Netdata can read the appropriate /proc/<pid>/net files; the Netdata installer configures this via the local-listeners setuid helper.
  • Containerised services in host networking appear as listeners and are picked up here, not by the Docker discoverer. Services in private container networks must be discovered by the Docker discoverer instead.
  • The discoverer does not introspect process runtime — anything beyond port/comm/cmdline (e.g. config-file path, version, runtime URL prefix) must be inferred via service rules or known by convention.

Setup

You can configure the net_listeners discoverer in two ways:

| Method | Best for | How to |
|--------|----------|--------|
| UI | Fast setup without editing files | Go to Collectors -> go.d -> ServiceDiscovery -> net_listeners, then add a discovery pipeline. |
| File | File-based configuration or automation | Edit /etc/netdata/go.d/sd/net_listeners.conf and define the discoverer: and services: blocks. |

Prerequisites

Discovery is enabled by default

The stock conf at /etc/netdata/go.d/sd/net_listeners.conf ships with disabled: no and a curated set of rules. To turn discovery off, set disabled: yes at the top of the file.
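For example, with discovery turned off the top of the file would look like this:

```yaml
# /etc/netdata/go.d/sd/net_listeners.conf
disabled: yes   # stock conf ships with "disabled: no"
```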

Trust the curated stock rules

The stock rule set covers most commonly-monitored services out of the box (Apache, Nginx, MySQL, PostgreSQL, Redis, RabbitMQ, MongoDB, Elasticsearch, etc.). Most users do not need to author their own rules — start by enabling the relevant collector module and let the stock rules find local instances.

Configuration

Options

The configuration file has two top-level blocks: discoverer: (the options below) and services: (rules that turn discovered listeners into collector jobs — see Service Rules).

After editing the file, restart the Netdata Agent to load the updated discovery pipeline.

| Option | Description | Default | Required |
|--------|-------------|---------|----------|
| interval | How often to re-scan the listening-socket table. | 2m | no |
| timeout | Maximum time to wait for the local-listeners helper to return. | 5s | no |
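A minimal sketch making both options explicit (these are the documented defaults, so the stanza below behaves the same as omitting them):

```yaml
discoverer:
  net_listeners:
    interval: 2m   # re-scan the listening-socket table every 2 minutes
    timeout: 5s    # give the local-listeners helper up to 5 seconds to return
```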

via UI

  1. Open the Netdata Dynamic Configuration UI.
  2. Go to Collectors -> go.d -> ServiceDiscovery -> net_listeners.
  3. Add a new discovery pipeline and give it a name.
  4. Fill in the discoverer-specific settings and the service rules.
  5. Save the discovery pipeline.

via File

Define the discovery pipeline in /etc/netdata/go.d/sd/net_listeners.conf.

The file has two top-level blocks: discoverer: (the options above) and services: (rules that turn discovered targets into collector jobs — see Service Rules).

After editing the file, restart the Netdata Agent to load the updated discovery pipeline.

Examples
Default — keep stock rules

Most users should not need to touch this file. The stock conf carries ~100 curated rules. Set disabled: yes if you want to disable local-listener discovery entirely.

```yaml
disabled: no
discoverer:
  net_listeners: {}

# services rules ship in the stock conf — the snippet below is illustrative only
services:
  - id: redis
    match: '{{ and (eq .Port "6379") (eq .Comm "redis-server") }}'
    config_template: |
      name: local
      address: redis://@{{.Address}}
```

Faster scan interval

Bump the scan rate to once per minute. Useful for very dynamic environments where services come and go often (e.g. ephemeral test runners).

```yaml
disabled: no
discoverer:
  net_listeners:
    interval: 1m
services: [ ]
```

Service Rules

A services: rule turns each discovered listener into one or more collector jobs. The stock conf carries ~100 curated rules — most rules match on a (.Port, .Comm) pair (or just .Comm / .Cmdline for services with non-default ports), and most templates use name: local plus the canonical .Address to point the collector at the listener.

The shared rule model — function reference (match, glob, sprig, regexFind, trimPrefix, promPort), config_template rendering rules, and the missingkey=error failure semantics — lives on the Service Discovery hub page. The notes below are net_listeners-specific.

How rules are evaluated

Quick reference — see Rule evaluation semantics on the hub page for the full model.

  • Match by .Port AND .Comm where possible — The idiomatic stock pattern is {{ and (eq .Port "PORT") (eq .Comm "PROCESS") }}. Gating on both avoids picking up the wrong process on the right port (e.g. an HTTP exporter listening on :80 is not Apache).
  • For services with non-standard ports, use .Cmdline + glob — Some services (Logstash, RabbitMQ, ZooKeeper, Tomcat, …) listen on default ports but identify themselves through their command line, not their .Comm (which is a generic java, python, …). Stock rules for these use glob .Cmdline "*tomcat*" or glob .Cmdline "*rabbitmq*".
  • Use match "sp" for variant patterns — When you want to match any of several short patterns, simple-patterns (match "sp" .Comm "mysqld mariadbd") is shorter than nested or (eq ...) (eq ...). See the hub page for matcher types.
  • Module inference from rule id — For net_listeners, set id: <module-name> so the rendered job inherits the module name automatically. The stock conf does this throughout (id: nginx, id: postgres, …).
  • The exporter catch-all rule uses promPort — The last rule in the stock conf catches Prometheus exporters by port using the promPort helper, which maps a port number to a known exporter module name (or empty if unknown). Keep this rule at the bottom — it overlaps with everything above it.
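As a sketch, the match "sp" shorthand from the list above reads like this (the rule body is illustrative, not copied from the stock conf):

```yaml
# Simple-patterns matcher: matches if .Comm is either "mysqld" or "mariadbd"
- id: mysql
  match: '{{ match "sp" .Comm "mysqld mariadbd" }}'
  # same effect as the longer: '{{ or (eq .Comm "mysqld") (eq .Comm "mariadbd") }}'
  config_template: |
    name: local
    dsn: netdata@tcp({{.Address}})/
```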

Template Variables

Available inside both match expressions and config_template bodies for net_listeners targets.

| Variable | Type | Description |
|----------|------|-------------|
| .Protocol | string | Protocol — TCP, UDP, TCP6, UDP6. |
| .IPAddress | string | Listening IP address (e.g. 127.0.0.1, 0.0.0.0, ::). |
| .Port | string | Listening port. Stock rules typically gate on this with eq .Port "PORT". |
| .Comm | string | Process basename (comm field — kernel-truncated to 15 chars). Examples: nginx, mysqld, redis-server. Best for native daemons. |
| .Cmdline | string | Full process command line. Use glob .Cmdline "*pattern*" or regexFind (from sprig) to match interpreters that hide the real service name behind java/python/node (e.g. RabbitMQ, Logstash, ZooKeeper, Spigot). |
| .Address | string | Convenience IPAddress:Port — used in nearly every stock rule template. |
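A hypothetical rule combining several of these variables — gate on protocol, port, and process, then use .Address in the template (the stub_status URL follows the usual Nginx collector convention, but this exact rule is illustrative, not the stock one):

```yaml
# Illustrative only: match TCP/TCP6 listeners on port 80 owned by nginx
- id: nginx
  match: '{{ and (glob .Protocol "TCP*") (eq .Port "80") (eq .Comm "nginx") }}'
  config_template: |
    name: local
    url: http://{{.Address}}/stub_status
```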

Examples

Each example shows one or more entries from the services: array. The full curated rule set lives in the stock conf; the snippets below illustrate the common patterns.

Port + Comm (idiomatic)

The most common stock-rule shape. Match the canonical port AND the canonical process name.

```yaml
- id: redis
  match: '{{ and (eq .Port "6379") (eq .Comm "redis-server") }}'
  config_template: |
    name: local
    address: redis://@{{.Address}}
```

Port + Cmdline glob (for interpreted services)

When the process is java/python/node and the real service name lives in the command line.

```yaml
- id: rabbitmq
  match: '{{ or (eq .Port "15672") (glob .Cmdline "*rabbitmq*") }}'
  config_template: |
    name: local
    url: http://{{.Address}}
    username: guest
    password: guest
    collect_queues_metrics: no
```

Multiple jobs from one target (sequence output)

When one running service should produce multiple collector jobs — for example, MySQL exposing both a Unix-socket DSN and a TCP DSN. The rendered YAML is a top-level sequence; each item becomes a separate job.

```yaml
- id: mysql
  match: '{{ or (eq .Port "3306") (eq .Comm "mysqld" "mariadbd") }}'
  config_template: |
    - name: local
      dsn: netdata@unix(/var/run/mysqld/mysqld.sock)/
    - name: local
      dsn: netdata@tcp({{.Address}})/
```

Prometheus-exporter catch-all (promPort)

The last rule in the stock conf catches generic Prometheus exporters by port. promPort .Port returns the well-known module name for that port, or empty. The rule is bottom-most because it overlaps with every preceding curated rule.

```yaml
- id: exporter
  match: '{{ or (and (not (empty (promPort .Port))) (not (eq .Comm "docker-proxy"))) (glob .Comm "*exporter*") }}'
  config_template: |
    {{ $name := promPort .Port -}}
    {{ if empty $name -}}
    {{ $name = printf "%s_%s" .Comm .Port -}}
    {{ end -}}
    module: prometheus
    name: {{$name}}_local
    url: http://{{.Address}}/metrics
```

Verify discovery worked

After enabling the discoverer, confirm it is finding listeners and producing jobs.

Confirm listeners are being scanned

Watch the agent log for discoverer=net_listeners messages. With systemd:

```bash
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep "discoverer=net_listeners"
```

On a healthy host you should see periodic scan activity. If the log shows local-listeners exec failures, the helper binary is missing or not executable.

Confirm the local-listeners helper sees your service

Run the helper manually to see what targets it would emit:

```bash
sudo /usr/libexec/netdata/plugins.d/local-listeners
```

If your service does not appear in the helper output, the kernel's listening-socket table is the place to debug — ss -tlnp and ss -ulnp should show it.

Confirm jobs are being created

In the Netdata UI go to Collectors -> go.d -> <module> for the module your listener should map to. Stock-rule jobs are typically named local.

Troubleshooting

A locally-running service is not picked up

Check, in order:

  • Is the service actually listening (ss -tlnp | grep <name>)?
  • Does the stock conf have a rule for it? See /etc/netdata/go.d/sd/net_listeners.conf.
  • Is the service on a non-standard port? Stock rules typically gate on the canonical port. Add a rule keyed on .Comm or .Cmdline for non-default ports.
  • Is the process name truncated past 15 chars? .Comm is kernel-truncated; use .Cmdline instead.
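For the non-standard-port case, one possible fix is a rule keyed on the process name alone, dropping the port gate — a sketch, reusing the address form from the stock Redis template:

```yaml
# Hypothetical override: catch redis-server on any port, not just 6379
- id: redis
  match: '{{ eq .Comm "redis-server" }}'
  config_template: |
    name: local
    address: redis://@{{.Address}}
```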

Wrong module picked

The stock exporter catch-all rule (last in the file) is greedy by design — anything on a known Prometheus port gets the prometheus module. Add a more specific rule above it if you want a different module to win.

Generated jobs fail to start

The discoverer creates jobs but does not run them. Common causes:

  • The rendered template assumes credentials the local service rejects (e.g. RabbitMQ default guest:guest).
  • 0.0.0.0 listeners produce 0.0.0.0:port addresses that the collector cannot connect to (use 127.0.0.1 in the template if appropriate).
  • The service requires TLS but the template uses plain HTTP.
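For the 0.0.0.0 case, the template itself can rewrite the address — an illustrative sketch using only the documented .IPAddress, .Port, and .Address variables:

```yaml
# Illustrative template: connect via loopback when the service binds all interfaces
config_template: |
  name: local
  {{ if eq .IPAddress "0.0.0.0" -}}
  url: http://127.0.0.1:{{.Port}}
  {{ else -}}
  url: http://{{.Address}}
  {{ end }}
```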

