HTTP endpoint discovery
Kind: http
Overview
Netdata can pull a list of monitorable targets from any HTTP endpoint you control — a CMDB API, an internal asset registry, a static file served by nginx, or a Prometheus-style file_sd export. The discoverer fetches the endpoint, decodes JSON or YAML, and feeds each item into the `services:` rule engine. This is the "bring your own source-of-truth" discoverer.
This page covers HTTP-specific setup. For the broader Service Discovery model and the shared template-helper reference, see Service Discovery.
How it works
Each discovery cycle, the discoverer:
- Fetches the configured `url` over HTTP/HTTPS, honouring all standard go.d collector HTTP options (auth, headers, TLS, proxy, timeout).
- Decodes the response as either JSON or YAML according to `format` (auto / json / yaml). With `format: auto`, the decoder uses `Content-Type` if it is unambiguous, otherwise tries JSON first then YAML.
- Accepts two shapes at the top level: a bare array (`[ item, item, … ]`) or an envelope (`{ "items": [ … ] }`). Anything else is rejected.
- Builds one target per array element, exposing `.Item` (the decoded element — could be a string, a map, a number, …), `.TUID`, and `.Hash`.
- Runs the `services:` rules against each target. The default stock rule passes the item through unchanged via the `toYaml` helper, so an endpoint that already serves go.d job configurations works with zero rule authoring.
- Reconciles disappeared items — when a target is no longer in the response, the corresponding job stops on the next reconcile.
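As a concrete illustration of the two accepted shapes, here is a minimal YAML response in the envelope form (hostnames are hypothetical); serving the same two URLs as a bare top-level array is equivalent:

```yaml
items:
  - https://a.example.com/health
  - https://b.example.com/health
```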
Limitations
- Only one URL per pipeline. To pull from multiple sources, configure multiple HTTP discovery pipelines (each as its own UI entry, or split the file into one job per source).
- Response size is capped at 10 MiB.
- One-shot mode (`interval: 0`) fetches a single time when the pipeline starts. It does not refetch on SD reload — recreate the pipeline to refresh.
- `bearer_token_file` under `/var/run/secrets/` is treated as optional when Netdata is not running in Kubernetes (so the same config can be used in a Helm deployment without erroring out on dev hosts).
- The discoverer does not introspect the items it receives — anything beyond what the upstream endpoint provides must be inferred via service rules.
Setup
You can configure the http discoverer in two ways:
| Method | Best for | How to |
|---|---|---|
| UI | Fast setup without editing files | Go to Collectors -> go.d -> ServiceDiscovery -> http, then add a discovery pipeline. |
| File | File-based configuration or automation | Edit /etc/netdata/go.d/sd/http.conf and define the discoverer: and services: blocks. |
Prerequisites
Endpoint that returns JSON or YAML
Stand up an HTTP endpoint that returns either a top-level array (`[ "https://a/health", "https://b/health" ]`) or an envelope (`{ "items": [...] }`). Items can be primitives (strings, numbers), maps, or any nestable value the rule engine knows how to consume.
Choose a pass-through vs. curated approach
- Pass-through: have your endpoint emit ready-made go.d job configurations and use the stock rule, which renders each item directly via `toYaml`. Zero rule authoring on the Netdata side.
- Curated: have your endpoint emit raw data (URLs, hostnames, tags) and write `services:` rules that map the data to the right collector module. More work, more flexibility.
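To make the pass-through option concrete, here is a hypothetical payload (names and URLs invented for illustration) in which each item is already a complete go.d job, carrying `module:` and `name:` so the stock rule can forward it unchanged:

```yaml
items:
  - module: httpcheck
    name: api
    url: https://api.example.com/health
  - module: httpcheck
    name: auth
    url: https://auth.example.com/health
```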
Configuration
Options
The configuration file has two top-level blocks: discoverer: (the options below) and services: (rules that turn fetched items into collector jobs — see Service Rules).
After editing the file, restart the Netdata Agent to load the updated discovery pipeline.
| Option | Description | Default | Required |
|---|---|---|---|
| url | HTTP/HTTPS endpoint that returns the items. | | yes |
| interval | How often to refetch the endpoint. | 1m | no |
| format | Response format. One of auto, json, yaml. | auto | no |
| timeout | Per-request HTTP timeout. | 2s | no |
| headers / username / password / bearer_token_file / proxy_url / tls_skip_verify / etc. | All standard go.d HTTP options are accepted (basic auth, bearer tokens, custom headers, HTTP proxy, TLS options). | | no |
url
Must be a fully-qualified http:// or https:// URL. The endpoint is expected to return either a bare array or an {"items": [...]} envelope (see Service Rules for the input model).
interval
Set to 0 for one-shot mode — the endpoint is fetched once when the pipeline starts and never again. SD reload does not retrigger; recreate the pipeline to refresh.
format
With auto, the decoder uses Content-Type when it is unambiguous (application/json, application/yaml, *+json, *+yaml), otherwise tries JSON first then YAML.
headers / username / password / bearer_token_file / proxy_url / tls_skip_verify / etc.
See any go.d HTTP-based collector (httpcheck, prometheus, nginx, …) for the full set. Notable: when bearer_token_file points under /var/run/secrets/ and Netdata is not running inside Kubernetes, missing token files are silently ignored.
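The optional-token behaviour can be sketched in a few lines of plain Python (illustrative only; the function and parameter names are invented and this is not Netdata's actual implementation):

```python
def read_bearer_token(path: str, in_kubernetes: bool) -> str:
    """Return the token, or '' when the file is optional and missing.

    Mirrors the documented rule: token files under /var/run/secrets/
    are optional when the agent is NOT running inside Kubernetes.
    """
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        if not in_kubernetes and path.startswith("/var/run/secrets/"):
            return ""  # silently ignored: same config works on dev hosts
        raise  # inside k8s (or outside the secrets dir) a missing file is an error
```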
via UI
- Open the Netdata Dynamic Configuration UI.
- Go to Collectors -> go.d -> ServiceDiscovery -> http.
- Add a new discovery pipeline and give it a name.
- Fill in the discoverer-specific settings and the service rules.
- Save the discovery pipeline.
via File
Define the discovery pipeline in /etc/netdata/go.d/sd/http.conf.
The file has two top-level blocks: discoverer: (the options above) and services: (rules that turn discovered targets into collector jobs — see Service Rules).
After editing the file, restart the Netdata Agent to load the updated discovery pipeline.
Examples
Pass-through go.d jobs (stock rule)
The endpoint serves go.d job configurations directly. Each item must include a module field. The stock rule pipes the item through toYaml unchanged.
```yaml
disabled: no
discoverer:
  http:
    url: https://cmdb.example.com/netdata/jobs.yaml
    interval: 5m
    format: auto
services:
  - id: passthrough
    match: '{{ true }}'
    config_template: |
      {{ .Item | toYaml }}
```
Array of bare URLs → httpcheck
The endpoint returns [ "https://a/health", "https://b/health" ]. Map each URL to an httpcheck job.
```yaml
disabled: no
discoverer:
  http:
    url: https://cmdb.example.com/netdata/health-urls.json
    interval: 1m
services:
  - id: httpcheck
    match: '{{ kindIs "string" .Item }}'
    config_template: |
      name: {{ .TUID }}
      url: {{ .Item }}
```
Array of objects with custom shape
The endpoint returns [ { "name": "api", "url": "https://api.example.com/health" }, … ].
```yaml
disabled: no
discoverer:
  http:
    url: https://cmdb.example.com/netdata/services.json
services:
  - id: httpcheck
    match: '{{ and (kindIs "map" .Item) (hasKey .Item "url") }}'
    config_template: |
      name: {{ .Item.name }}
      url: {{ .Item.url }}
```
Bearer-token authentication
Authenticate against the source-of-truth endpoint using a bearer token from a file.
```yaml
disabled: no
discoverer:
  http:
    url: https://cmdb.example.com/api/v1/netdata/jobs
    bearer_token_file: /etc/netdata/secrets/cmdb-token
    headers:
      Accept: application/yaml
services:
  - id: passthrough
    match: '{{ true }}'
    config_template: |
      {{ .Item | toYaml }}
```
Service Rules
A services: rule turns each fetched item into one or more collector jobs. The HTTP discoverer is unique among SD discoverers in that the target's data shape is defined by the upstream endpoint, not by this discoverer — .Item is whatever JSON/YAML element the endpoint returned.
The shared rule model — function reference (match, sprig including kindIs/hasKey/toYaml), config_template rendering rules, and the missingkey=error failure semantics — lives on the Service Discovery hub page. The notes below are HTTP-specific.
How rules are evaluated
Quick reference — see Rule evaluation semantics on the hub page for the full model.
- Type-check `.Item` first — Because `.Item` is whatever the endpoint serves, write defensive rules that check the type before reading sub-fields. `kindIs "string" .Item`, `kindIs "map" .Item`, and `hasKey .Item "<key>"` are the workhorses. A rule that does `{{ .Item.url }}` on a non-map item will fail at template-render time (missingkey=error) and the rule will be skipped.
- Pass-through requires `module:` in the upstream payload — The pass-through rule (`{{ .Item | toYaml }}`) forwards the item unchanged to the collector subsystem. The collector subsystem requires every job to have `name:` and `module:`. If your endpoint omits `module:`, the resulting job has no module and the agent rejects it. Either include `module:` upstream or wrap with a curated rule that adds it.
- Module inference from rule id — When you write a curated rule and the rendered job omits `module:`, the rule `id` is used as the module name. So `id: httpcheck` is enough to produce httpcheck jobs without writing `module: httpcheck` in every template.
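Combining the last two points, one way to handle upstream items that lack `module:` without changing the payload is a curated wrapper rule (a sketch; the `url` field is assumed from the examples on this page) that forwards the item via `toYaml` and lets the rule `id` supply the module:

```yaml
- id: httpcheck   # rule id is used as the module name
  match: '{{ and (kindIs "map" .Item) (hasKey .Item "url") }}'
  config_template: |
    {{ .Item | toYaml }}
```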
Template Variables
Available inside both match expressions and config_template bodies for HTTP targets.
| Variable | Type | Description |
|---|---|---|
| .Item | any | The decoded array element. Type depends on the upstream endpoint — could be a string, a number, a bool, a map, or a nested structure. Always type-check before reading sub-fields. |
| .TUID | string | Stable per-target ID (http_<endpoint-label>_<hash>). Useful as a job name when the upstream payload does not provide one. |
| .Hash | uint64 | Hash of the item content. Used internally for change detection. |
Examples
Each example shows one or more entries from the services: array. Order matters — see How rules are evaluated.
Pass-through (default stock rule)
The endpoint already returns valid go.d job configurations. Forward each item unchanged via toYaml. Each item must include a module field (and name).
```yaml
- id: passthrough
  match: '{{ true }}'
  config_template: |
    {{ .Item | toYaml }}
```
Array of strings → httpcheck (curated)
Endpoint serves [ "https://a/health", "https://b/health" ]. Use kindIs "string" to gate the rule, then map each string to an httpcheck job. id: httpcheck makes the module infer automatically.
```yaml
- id: httpcheck
  match: '{{ kindIs "string" .Item }}'
  config_template: |
    name: {{ .TUID }}
    url: {{ .Item }}
```
Array of objects → httpcheck (curated)
Endpoint serves [ { "name": "api", "url": "https://api/health" }, … ]. Type-check that the item is a map and has a url key, then map fields into the rendered job.
```yaml
- id: httpcheck
  match: '{{ and (kindIs "map" .Item) (hasKey .Item "url") }}'
  config_template: |
    name: {{ .Item.name }}
    url: {{ .Item.url }}
```
Multiple modules from one endpoint
Your endpoint mixes shapes — some items target httpcheck, some target prometheus. Use hasKey to discriminate, with each rule producing its own module's jobs.
```yaml
- id: prometheus
  match: '{{ and (kindIs "map" .Item) (hasKey .Item "metrics_url") }}'
  config_template: |
    name: {{ .Item.name }}
    url: {{ .Item.metrics_url }}
- id: httpcheck
  match: '{{ and (kindIs "map" .Item) (hasKey .Item "health_url") }}'
  config_template: |
    name: {{ .Item.name }}
    url: {{ .Item.health_url }}
```
Verify discovery worked
After enabling the discoverer, confirm the endpoint is reachable and items are being parsed.
Confirm the endpoint is being fetched
Watch the agent log for discoverer=http messages. With systemd:
```bash
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep "discoverer=http"
```
A successful fetch logs the number of items decoded. Failures (DNS, TLS, auth, parse) appear at warn level.
Reproduce the fetch with curl
When the discoverer log shows a parse error, hit the endpoint with curl to inspect what it returned:
```bash
curl -sS -H "Accept: application/yaml" https://cmdb.example.com/netdata/jobs.yaml | head -40
```
The response must be a top-level array or {"items": [...]} envelope.
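If in doubt, the accepted input model can be checked with a few lines of Python (a sketch covering JSON responses only; the function name is invented):

```python
import json

def extract_items(body: str) -> list:
    """Accept a bare top-level array or an {"items": [...]} envelope;
    reject anything else, mirroring the documented shapes."""
    data = json.loads(body)
    if isinstance(data, list):
        return data
    if isinstance(data, dict) and isinstance(data.get("items"), list):
        return data["items"]
    raise ValueError('expected a top-level array or an {"items": [...]} envelope')
```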
Confirm jobs are being created
In the Netdata UI go to Collectors -> go.d -> <module>. Pass-through jobs use the name your endpoint provided; curated rules use whatever you set in the config_template.
Troubleshooting
parse response as json: ...; parse response as yaml: ...
The response is neither valid JSON nor valid YAML. Common causes: the endpoint returned an HTML error page (check status code and Content-Type), the JSON has trailing garbage, or YAML indentation is wrong. Reproduce with curl -i to see the headers + body.
Items decoded but no jobs created
Your services: rules are not matching, or they match but the rendered template is empty. With pass-through ({{ .Item | toYaml }}), make sure each upstream item includes module: and name:. With curated rules, double-check the type checks (kindIs, hasKey).
TLS/certificate errors against an internal endpoint
Use tls_skip_verify: yes to bypass for testing, then mount the issuing CA and set tls_ca: /path/to/ca.crt for production.
Bearer token file not found
When Netdata runs outside Kubernetes and the configured bearer_token_file points under /var/run/secrets/, missing tokens are silently ignored — this is intentional so the same config works in dev and in Helm. If you are inside k8s, the file must exist.