Zabbix Preprocessing
Plugin: scripts.d.plugin Module: zabbix
Overview
This module runs Zabbix-style data collection jobs natively inside Netdata.
It supports Zabbix's master-item + dependent-pipeline pattern: a single collection step (command, HTTP, or SNMP) produces raw data, and one or more preprocessing pipelines extract individual metrics using Zabbix-compatible preprocessing steps (JSONPath, regex, JavaScript, SNMP walk, Prometheus parsing, CSV, XPath, and more).
For each configured job it collects:
- User-defined metrics: Each dependent pipeline produces a charted metric with configurable context, unit, family, chart type, and dimension algorithm.
- Job state: OK / WARNING / ERROR / UNKNOWN state tracking per job and per discovered instance.
- Low-level discovery (LLD): Optional discovery pipelines that dynamically create instances from JSON arrays, similar to Zabbix LLD.
Collection supports three modes:
- Command: Runs an external script/binary via nd-run and captures stdout.
- HTTP: Performs an HTTP request and captures the response body.
- SNMP: Queries an SNMP agent (GET or WALK) and captures the result.
The raw output is then processed through Zabbix-compatible preprocessing steps to extract metrics.
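The master-item + dependent-pipeline pattern described above can be sketched as one collection step feeding several pipelines. In this hedged example the script path, metric names, and contexts are illustrative placeholders; only the field names shown in this page's own examples (collection, dependent_pipelines, steps, jsonpath) are taken from the source:

```yaml
jobs:
  - name: master_item_example            # illustrative job name
    collection:
      type: command                      # one collection step produces raw JSON...
      command: /usr/local/bin/stats.sh   # hypothetical script emitting JSON on stdout
    dependent_pipelines:                 # ...and each pipeline extracts one metric
      - name: reads
        context: example.io.reads
        dimension: reads
        unit: operations
        steps:
          - type: jsonpath
            params: "$.io.reads"
      - name: writes
        context: example.io.writes
        dimension: writes
        unit: operations
        steps:
          - type: jsonpath
            params: "$.io.writes"
```

The command runs once per collection cycle; both pipelines process the same captured output.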
Zabbix macros ({HOST.NAME}, {HOST.IP}, {#MACRO}, etc.) are expanded before execution.
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
Command-mode plugins run as the netdata user via nd-run.
Default Behavior
Auto-Detection
No auto-detection. Each job must be explicitly configured with a collection definition and one or more dependent_pipelines.
Limits
The default configuration for this integration does not impose any limits on data collection.
Performance Impact
Command-mode jobs spawn a subprocess per execution. HTTP and SNMP modes use in-process clients.
Metrics
Metrics grouped by scope.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Virtual Node Label Conventions
When a job references a vnode, the module reads Zabbix host macros from the virtual node's labels using prefix conventions:
| Label key | Zabbix macro | Description |
|---|---|---|
| _address | {HOST.IP}, {HOST.CONN} | IP address or DNS name of the host |
| _alias | {HOST.ALIAS} | Human-readable host alias |
| Other _-prefixed labels | N/A | Reserved for future use |
The {HOST.NAME}, {HOST.HOST}, and {HOST.DNS} macros are derived from the vnode hostname.
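For reference, a virtual node carrying these labels could be defined like this (the hostname, GUID, and label values are placeholders, and the file layout assumes Netdata's standard vnodes configuration format):

```yaml
- hostname: my-switch                               # becomes {HOST.NAME}, {HOST.HOST}, {HOST.DNS}
  guid: 00000000-0000-0000-0000-000000000001        # placeholder GUID
  labels:
    _address: 192.0.2.10                            # expands {HOST.IP} and {HOST.CONN}
    _alias: core-switch-rack-3                      # expands {HOST.ALIAS}
```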
Per pipeline
Metrics produced by each dependent preprocessing pipeline.
Labels:
| Label | Description |
|---|---|
| zabbix_job | Job name. |
| zabbix_pipeline | Pipeline name. |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| zabbix.{context} | {dimension} | varies |
Per job
Per-job instance state tracking.
Labels:
| Label | Description |
|---|---|
| zabbix_job | Job name. |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| zabbix.{job}.state | ok, collect_failure, lld_failure, extraction_failure, dimension_failure | state |
Alerts
There are no alerts configured by default for this integration.
Setup
Prerequisites
No action required.
Configuration
Options
Each job defines a collection source and one or more dependent preprocessing pipelines.
Config options
| Group | Option | Description | Default | Required |
|---|---|---|---|---|
| Collection | collection.type | Collection mode (command, http, or snmp). | | yes |
| | collection.command | Command to execute (for command type). | | no |
| | collection.http.url | URL to fetch (for http type). | | no |
| | collection.snmp.target | SNMP target host (for snmp type). | | no |
| | collection.snmp.oid | SNMP OID to query. | | no |
| Pipelines | dependent_pipelines[].name | Pipeline name (used for chart identification). | | yes |
| | dependent_pipelines[].context | Netdata chart context. | | yes |
| | dependent_pipelines[].dimension | Dimension name within the chart. | | yes |
| | dependent_pipelines[].unit | Metric unit string. | | yes |
| | dependent_pipelines[].steps | Array of Zabbix preprocessing steps applied to the raw collection output. | [] | yes |
| General | vnode | Virtual node name for host macro resolution. | | no |
| Discovery | lld | Low-level discovery configuration for dynamic instance creation. | | no |
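The lld option is only named in the table above; its exact schema is not documented on this page, so the following is a hedged sketch of what a discovery configuration might look like. Every key below lld: is an assumption modeled on Zabbix LLD conventions (a JSON array of objects whose {#MACRO} values parameterize per-instance pipelines) and should be checked against the module's actual schema:

```yaml
jobs:
  - name: discovered_disks            # illustrative
    collection:
      type: command
      command: /usr/local/bin/list_disks.sh   # hypothetical: prints [{"{#DISK}": "sda"}, ...]
    lld:                              # field names below are assumptions, not documented
      steps: []                       # optional preprocessing before discovery parsing
    dependent_pipelines:
      - name: "disk_{#DISK}"          # {#MACRO} expanded per discovered instance
        context: example.disk.io
        dimension: "{#DISK}"
        unit: operations
        steps:
          - type: jsonpath
            params: "$.disks['{#DISK}'].ops"
```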
via File
The configuration file name for this integration is scripts.d/zabbix.conf.
You can edit the configuration file using the edit-config script from the
Netdata config directory.
```shell
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config scripts.d/zabbix.conf
```
Examples
Command collection with JSONPath extraction
Run a script and extract a metric using JSONPath.
Config
```yaml
jobs:
  - name: api_latency
    collection:
      type: command
      command: /usr/local/bin/get_api_stats.sh
    dependent_pipelines:
      - name: latency
        context: myapp.api.latency
        dimension: p99
        unit: milliseconds
        steps:
          - type: jsonpath
            params: "$.latency.p99"
```
SNMP collection
Query an SNMP OID and chart the result.
Config
```yaml
jobs:
  - name: disk_usage
    vnode: my-switch
    collection:
      type: snmp
      snmp:
        target: "{HOST.IP}"
        oid: ".1.3.6.1.2.1.25.2.3.1.6"
        version: v2c
        community: public
    dependent_pipelines:
      - name: used
        context: zabbix.disk.used
        dimension: used
        unit: bytes
        steps:
          - type: snmp_get_value
```
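HTTP collection with JSONPath extraction

The HTTP mode has no example above, so here is a hedged sketch built only from options documented in this page (collection.http.url and the jsonpath step); the URL, contexts, and metric names are illustrative placeholders:

```yaml
jobs:
  - name: app_health                         # illustrative job name
    collection:
      type: http
      http:
        url: http://127.0.0.1:8080/stats     # hypothetical endpoint returning JSON
    dependent_pipelines:
      - name: requests
        context: myapp.requests
        dimension: total
        unit: requests
        steps:
          - type: jsonpath
            params: "$.requests.total"
```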