
Exporting Reference

This reference guide provides comprehensive information about enabling, configuring, and monitoring Netdata's exporting engine for sending metrics to external time-series databases.

For a quick introduction, read our exporting metrics overview or start with enabling a connector.

Core Capabilities

The exporting engine features a modular structure that supports:

  • Multiple connector instances running simultaneously
  • Different update intervals per connector
  • Custom filters per connector instance
  • Metric resampling to reduce database congestion
Info: When you enable the exporting engine, Netdata exports metrics from the time of the restart onward, not the entire historical database.

Operation Modes

Netdata provides three data export modes:

| Mode | Description | Data Format | Use Case |
|------|-------------|-------------|----------|
| as-collected | Raw metrics in original units | Counters remain counters, gauges remain gauges | Time-series database experts who need raw data |
| average | Normalized metrics from the Netdata database | All metrics sent as gauges in Netdata units | Simplified visualization with Netdata-centric workflows |
| sum/volume | Sum of interpolated values | Aggregated values over the export interval | Long-term trend analysis |
Tip: Choosing the right mode

  • Use as-collected if you're building monitoring around a time-series database and know how to convert units
  • Use average for simpler long-term archiving that matches Netdata's visualization exactly
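
A minimal sketch of selecting the mode per connector instance with the data source option; the instance names and destination below are illustrative:

# raw counters and gauges in their original units
[graphite:raw_archive]
enabled = yes
destination = localhost:2003
data source = as-collected

# normalized gauges that match what Netdata's own charts show
[graphite:dashboard_archive]
enabled = yes
destination = localhost:2003
data source = average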

Supported Connectors

| Connector | Protocol/Format | Metric Format |
|-----------|-----------------|---------------|
| AWS Kinesis | JSON | Stream-based |
| Google Pub/Sub | JSON | Message-based |
| Graphite | Plaintext | prefix.hostname.chart.dimension |
| JSON Databases | JSON | Document-based |
| OpenTSDB | Plaintext/HTTP | prefix.chart.dimension with tags |
| MongoDB | JSON | Document-based |
| Prometheus | HTTP scraping | Prometheus exposition format |
| Prometheus Remote Write | Snappy-compressed protobuf | Binary over HTTP |
| TimescaleDB | JSON streams | Time-series tables |

Configuration Structure

Your exporting.conf file is organized into configuration blocks like the following:

[exporting:global]
enabled = yes
send configured labels = no
send automatic labels = no
update every = 10

[prometheus:exporter]
send names instead of ids = yes
send configured labels = yes
send automatic labels = no
send charts matching = *
send hosts matching = localhost *
prefix = netdata

[graphite:my_graphite_instance]
enabled = yes
destination = localhost:2003
data source = average
prefix = Netdata
hostname = my-name
update every = 10
buffer on failures = 10
timeout ms = 20000
send charts matching = *
send hosts matching = localhost *
send names instead of ids = yes
send configured labels = yes
send automatic labels = yes

[prometheus_remote_write:my_prometheus_remote_write_instance]
enabled = yes
destination = localhost
remote write URL path = /receive

[kinesis:my_kinesis_instance]
enabled = yes
destination = us-east-1
stream name = netdata
aws_access_key_id = my_access_key_id
aws_secret_access_key = my_aws_secret_access_key

[pubsub:my_pubsub_instance]
enabled = yes
destination = pubsub.googleapis.com
credentials file = /etc/netdata/pubsub_credentials.json
project id = my_project
topic id = my_topic

[mongodb:my_mongodb_instance]
enabled = yes
destination = localhost
database = my_database
collection = my_collection

[json:my_json_instance]
enabled = yes
destination = localhost:5448

[opentsdb:my_opentsdb_plaintext_instance]
enabled = yes
destination = localhost:4242

[opentsdb:http:my_opentsdb_http_instance]
enabled = yes
destination = localhost:4242
username = my_username
password = my_password

[opentsdb:https:my_opentsdb_https_instance]
enabled = yes
destination = localhost:8082

Configuration Sections

| Section | Purpose |
|---------|---------|
| [exporting:global] | Default settings for all connectors |
| [prometheus:exporter] | Prometheus API endpoint settings |
| [<type>:<name>] | Individual connector instance configuration |

Connector Types

Available connector types with optional modifiers:

  • graphite | graphite:http | graphite:https
  • opentsdb:telnet | opentsdb:http | opentsdb:https
  • prometheus_remote_write | prometheus_remote_write:http | prometheus_remote_write:https
  • json | json:http | json:https
  • kinesis | pubsub | mongodb
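
Each instance section name combines the connector type (with its optional modifier) and a free-form instance name, following the same pattern as the [opentsdb:http:...] and [opentsdb:https:...] blocks shown above. A sketch with a hypothetical instance name and destination:

# plaintext Graphite protocol over TLS; the port depends on where your TLS endpoint listens
[graphite:https:secure_graphite]
enabled = yes
destination = graphite.example.com:2003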

Configuration Options

Basic Settings

| Option | Values | Description |
|--------|--------|-------------|
| enabled | yes/no | Activates the connector instance |
| data source | as-collected/average/sum | Selects the data export mode |
| hostname | string | Hostname for the external database (default: [global].hostname) |
| prefix | string | Prefix added to all metrics |
| update every | seconds | Export interval with automatic randomization |
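
Taken together, a minimal instance that only sets these basic options might look like this; a destination is included so the instance can actually connect, and the instance name and values are placeholders:

[graphite:archive]
enabled = yes
destination = localhost:2003
data source = average
prefix = netdata
hostname = my-host
update every = 30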

Connection Settings

| Option | Format | Description |
|--------|--------|-------------|
| destination | space-separated list | Target servers in [PROTOCOL:]IP[:PORT] format |
| buffer on failures | iterations | Buffer size (in iterations) used when the database is unavailable |
| timeout ms | milliseconds | Processing timeout (default: 2 * update_every * 1000) |

Destination Examples

IPv4 configuration:

destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242

IPv6 and IPv4 combined:

destination = [ffff:...:0001]:2003 10.11.12.1:2003

Special destinations:

  • Kinesis: AWS region (e.g., us-east-1)
  • MongoDB: MongoDB URI
  • Pub/Sub: Service endpoint
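
A sketch of these non-address destination forms, with placeholder region, URI, and endpoint values:

[kinesis:my_kinesis_instance]
destination = us-east-1

[mongodb:my_mongodb_instance]
destination = mongodb://user:password@localhost:27017

[pubsub:my_pubsub_instance]
destination = pubsub.googleapis.com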

Filtering Options

| Option | Pattern Format | Description |
|--------|----------------|-------------|
| send hosts matching | space-separated patterns | Filter hosts using the * wildcard, ! for negation |
| send charts matching | space-separated patterns | Filter charts by ID or name, ! for negation |
Important: Pattern matching follows first-match logic, so order matters when using negative patterns (!).

Example: !*child* *db* matches all hosts matching *db* except those whose names contain child.
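
Applied inside a connector instance, the pattern above would look like this (the instance name is hypothetical):

[graphite:filtered_instance]
send hosts matching = !*child* *db*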

Label Settings

| Option | Values | Description |
|--------|--------|-------------|
| send names instead of ids | yes/no | Use human-friendly names instead of system IDs |
| send configured labels | yes/no | Include [host labels] from netdata.conf |
| send automatic labels | yes/no | Include auto-generated labels (_os_name, _architecture) |

Chart Filtering

Filter metrics through two methods:

  1. Configuration file:

    [prometheus:exporter]
    send charts matching = system.*
  2. URL parameter:

    http://localhost:19999/api/v1/allmetrics?format=shell&filter=system.*

HTTPS Support

For databases that don't support TLS/SSL natively, configure a reverse proxy to terminate TLS in front of the database.

Performance Considerations

The exporting engine operates independently to avoid slowing down Netdata. However:

Warning: Multiple connector instances running their batches at the same time can consume significant CPU. Configure different update intervals so the instances do not synchronize.
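
A minimal sketch of staggering two hypothetical instances so their export batches don't line up:

[opentsdb:first_instance]
update every = 10

[graphite:second_instance]
update every = 15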

Monitoring the Exporting Engine

Netdata provides monitoring charts under the Netdata Monitoring section:

| Chart | Monitors |
|-------|----------|
| Buffered metrics | Number of metrics added to the dispatch buffer |
| Exporting data size | Data volume (KB) added to the buffer |
| Exporting operations | Number of operations performed |
| Exporting thread CPU usage | CPU resources consumed by the exporting thread |

Exporting engine monitoring

Built-in Alerts

The exporting engine includes three automatic alerts:

| Alert | Monitors |
|-------|----------|
| exporting_last_buffering | Seconds since the last successful buffering of metrics |
| exporting_metrics_sent | Percentage of metrics sent successfully |
| exporting_metrics_lost | Metrics lost due to repeated failures |

Exporting alerts

Fallback Script

Netdata includes nc-exporting.sh for:

  • Saving metrics to disk during database outages
  • Pushing cached metrics when database recovers
  • Monitoring/tracing/debugging metric generation
