
YugabyteDB

Plugin: go.d.plugin Module: yugabytedb

Overview

This collector monitors the activity and performance of YugabyteDB servers.

It sends HTTP requests to the YugabyteDB metric endpoints.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

By default, it detects YugabyteDB instances running on localhost. On startup, it tries to collect metrics from:

  • http://127.0.0.1:7000/prometheus-metrics (Master)
  • http://127.0.0.1:9000/prometheus-metrics (Tablet Server)
  • http://127.0.0.1:12000/prometheus-metrics (YCQL)
  • http://127.0.0.1:13000/prometheus-metrics (YSQL)
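Before configuring the collector, you can verify that the metric endpoints are reachable. This is a rough sketch that probes the default local ports used in the examples below (7000 Master, 9000 Tablet Server, 12000 YCQL, 13000 YSQL); adjust the ports if your deployment differs.

```shell
#!/bin/sh
# Probe the default local YugabyteDB Prometheus metric endpoints.
# Ports are the assumed defaults; change them to match your deployment.
for port in 7000 9000 12000 13000; do
  if curl -fsS --max-time 2 "http://127.0.0.1:${port}/prometheus-metrics" >/dev/null; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: not reachable"
  fi
done
```

A reachable endpoint returns Prometheus-format text, which is what the collector scrapes.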

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per Master Client operations (Master)

Metrics tracking latency and counts of Master Client RPC operations.

Labels:

Label      Description
operation  The specific MasterClient RPC operation name.

Metrics:

Metric                                       Dimensions  Unit
yugabytedb.master_client_operations          operations  ops/s
yugabytedb.master_client_operations_latency  latency     microseconds

Per Master DDL operations

Metrics tracking Data Definition Language (DDL) operations performed on the Master server.

Labels:

Label      Description
operation  The specific DDL operation handler name.

Metrics:

Metric                                    Dimensions  Unit
yugabytedb.master_ddl_operations          operations  ops/s
yugabytedb.master_ddl_operations_latency  latency     microseconds

Per TabletServerService (Master)

Metrics tracking latency and counts of TabletServer RPC operations on the Master server.

Labels:

Label  Description
op     The specific TabletServer RPC operation name.

Metrics:

Metric                                                    Dimensions      Unit
yugabytedb.master_tabletserverservice_operations          operations      ops/s
yugabytedb.master_tabletserverservice_operations_latency  latency         microseconds
yugabytedb.master_tabletserverservice_traffic             received, sent  bytes/s

Per PgClientService (Master)

Metrics tracking latency and counts of PostgreSQL client service RPC operations on the Master server.

Labels:

Label      Description
operation  The specific PgClientService RPC operation name that handles PostgreSQL protocol requests and responses.

Metrics:

Metric                                                  Dimensions      Unit
yugabytedb.master_pgclientservice_operations            operations      ops/s
yugabytedb.master_pgclientservice_operations_latency    latency         microseconds
yugabytedb.master_pgclientservice_traffic               received, sent  bytes/s

Per RemoteBootstrapService (Master)

Metrics tracking latency and counts of RemoteBootstrap RPC operations on the Master server.

Labels:

Label      Description
operation  The specific RemoteBootstrap RPC operation name.

Metrics:

Metric                                                         Dimensions      Unit
yugabytedb.master_remotebootstrapservice_operations            operations      ops/s
yugabytedb.master_remotebootstrapservice_operations_latency    latency         microseconds
yugabytedb.master_remotebootstrapservice_traffic               received, sent  bytes/s

Per Raft operations (Master)

Metrics tracking latency and counts of Raft consensus protocol operations on the Master server.

Labels:

Label      Description
operation  The specific Raft protocol operation name.

Metrics:

Metric                                          Dimensions      Unit
yugabytedb.master_consensus_operations          operations      ops/s
yugabytedb.master_consensus_operations_latency  latency         microseconds
yugabytedb.master_consensus_traffic             received, sent  bytes/s

Per TabletServerService (Tablet Server)

Metrics tracking latency and counts of TabletServer RPC operations on the Tablet server.

Labels:

Label  Description
op     The specific TabletServer RPC operation name.

Metrics:

Metric                                                     Dimensions      Unit
yugabytedb.tserver_tabletserverservice_operations          operations      ops/s
yugabytedb.tserver_tabletserverservice_operations_latency  latency         microseconds
yugabytedb.tserver_tabletserverservice_traffic             received, sent  bytes/s

Per TabletServerAdminService (Tablet Server)

Metrics tracking latency and counts of TabletServerAdmin RPC operations on the Tablet server.

Labels:

Label  Description
op     The specific TabletServerAdmin RPC operation name.

Metrics:

Metric                                                          Dimensions      Unit
yugabytedb.tserver_tabletserveradminservice_operations          operations      ops/s
yugabytedb.tserver_tabletserveradminservice_operations_latency  latency         microseconds
yugabytedb.tserver_tabletserveradminservice_traffic             received, sent  bytes/s

Per TabletServerBackupService (Tablet Server)

Metrics tracking latency and counts of TabletServerBackup RPC operations on the Tablet server.

Labels:

Label  Description
op     The specific TabletServerBackup RPC operation name.

Metrics:

Metric                                                           Dimensions      Unit
yugabytedb.tserver_tabletserverbackupservice_operations          operations      ops/s
yugabytedb.tserver_tabletserverbackupservice_operations_latency  latency         microseconds
yugabytedb.tserver_tabletserverbackupservice_traffic             received, sent  bytes/s

Per PgClientService (Tablet Server)

Metrics tracking latency and counts of PostgreSQL client service RPC operations on the Tablet server.

Labels:

Label      Description
operation  The specific PgClientService RPC operation name that handles PostgreSQL protocol requests and responses.

Metrics:

Metric                                                  Dimensions      Unit
yugabytedb.tserver_pgclientservice_operations           operations      ops/s
yugabytedb.tserver_pgclientservice_operations_latency   latency         microseconds
yugabytedb.tserver_pgclientservice_traffic              received, sent  bytes/s

Per RemoteBootstrapService (Tablet Server)

Metrics tracking latency and counts of RemoteBootstrap RPC operations on the Tablet server.

Labels:

Label      Description
operation  The specific RemoteBootstrap RPC operation name.

Metrics:

Metric                                                          Dimensions      Unit
yugabytedb.tserver_remotebootstrapservice_operations            operations      ops/s
yugabytedb.tserver_remotebootstrapservice_operations_latency    latency         microseconds
yugabytedb.tserver_remotebootstrapservice_traffic               received, sent  bytes/s

Per Raft operations (Tablet Server)

Metrics tracking latency and counts of Raft consensus protocol operations on the Tablet server.

Labels:

Label      Description
operation  The specific Raft protocol operation name.

Metrics:

Metric                                           Dimensions      Unit
yugabytedb.tserver_consensus_operations          operations      ops/s
yugabytedb.tserver_consensus_operations_latency  latency         microseconds
yugabytedb.tserver_consensus_traffic             received, sent  bytes/s

Per SQL Statements (YCQL)

Metrics tracking latency and counts of SQL statements on the YCQL server.

Labels:

Label      Description
statement  The specific SQL statement name.

Metrics:

Metric                                  Dimensions  Unit
yugabytedb.ycql_sql_statements          statements  statements/s
yugabytedb.ycql_sql_statements_latency  latency     microseconds

Per YSQL server

Metrics tracking connections on the YSQL server.

This scope has no labels.

Metrics:

Metric                                   Dimensions       Unit
yugabytedb.ysql_connection_usage         available, used  connections
yugabytedb.ysql_active_connections       active           connections
yugabytedb.ysql_established_connections  established      connections/s
yugabytedb.ysql_over_limit_connections   over_limit       rejects/s

Per SQL Statements (YSQL)

Metrics tracking latency and counts of SQL statements on the YSQL server.

Labels:

Label      Description
statement  The specific SQL statement name.

Metrics:

Metric                                  Dimensions  Unit
yugabytedb.ysql_sql_statements          statements  statements/s
yugabytedb.ysql_sql_statements_latency  latency     microseconds

Alerts

There are no alerts configured by default for this integration.

Setup

You can configure the yugabytedb collector in two ways:

  • UI (fast setup without editing files): go to Nodes → Configure this node → Collectors → Jobs, search for yugabytedb, then click + to add a job.
  • File (if you prefer configuring via file, or need to automate deployments, e.g., with Ansible): edit go.d/yugabytedb.conf and add a job.
important

UI configuration requires a paid Netdata Cloud plan.

Prerequisites

No action required.

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

Config options
Collection
  update_every         Data collection interval (seconds). Default: 5. Required: no.
  autodetection_retry  Autodetection retry interval (seconds). Set 0 to disable. Default: 0. Required: no.

Target
  url      Target endpoint URL. Default: http://127.0.0.1:7000/prometheus-metrics. Required: yes.
  timeout  HTTP request timeout (seconds). Default: 1. Required: no.

HTTP Auth
  username           Username for Basic HTTP authentication. Required: no.
  password           Password for Basic HTTP authentication. Required: no.
  bearer_token_file  Path to a file containing a bearer token (used for Authorization: Bearer). Required: no.

TLS
  tls_skip_verify  Skip TLS certificate and hostname verification (insecure). Default: no. Required: no.
  tls_ca           Path to CA bundle used to validate the server certificate. Required: no.
  tls_cert         Path to client TLS certificate (for mTLS). Required: no.
  tls_key          Path to client TLS private key (for mTLS). Required: no.

Proxy
  proxy_url       HTTP proxy URL. Required: no.
  proxy_username  Username for proxy Basic HTTP authentication. Required: no.
  proxy_password  Password for proxy Basic HTTP authentication. Required: no.

Request
  method                HTTP method to use. Default: GET. Required: no.
  body                  Request body (e.g., for POST/PUT). Required: no.
  headers               Additional HTTP headers (one per line as key: value). Required: no.
  not_follow_redirects  Do not follow HTTP redirects. Default: no. Required: no.
  force_http2           Force HTTP/2 (including h2c over TCP). Default: no. Required: no.

Virtual Node
  vnode  Associates this data collection job with a Virtual Node. Required: no.
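As an illustration, a single job can combine several of the options above. The token and CA paths and the header below are hypothetical placeholders, not defaults:

```yaml
jobs:
  - name: secured
    url: https://127.0.0.1:7000/prometheus-metrics
    timeout: 2
    bearer_token_file: /etc/netdata/yb_token   # hypothetical path
    tls_ca: /etc/ssl/certs/yb-ca.pem           # hypothetical path
    headers:
      X-Scope: yugabytedb                      # hypothetical header
```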

via UI

Configure the yugabytedb collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the yugabytedb data-collection job to run, then click Configure this node. That node will run the data collection.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type yugabytedb (or scroll the list) to locate the yugabytedb collector.
  5. Click the + next to the yugabytedb collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/yugabytedb.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/yugabytedb.conf
Examples
Basic

A basic example configuration.

jobs:
  - name: local
    url: http://127.0.0.1:7000/prometheus-metrics # Master
    # url: http://127.0.0.1:9000/prometheus-metrics # Tablet Server
    # url: http://127.0.0.1:12000/prometheus-metrics # YCQL
    # url: http://127.0.0.1:13000/prometheus-metrics # YSQL
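If several services run on the same host, each endpoint can be its own job rather than a commented-out alternative. A sketch using the default ports from the comments above:

```yaml
jobs:
  - name: master
    url: http://127.0.0.1:7000/prometheus-metrics

  - name: tserver
    url: http://127.0.0.1:9000/prometheus-metrics

  - name: ycql
    url: http://127.0.0.1:12000/prometheus-metrics

  - name: ysql
    url: http://127.0.0.1:13000/prometheus-metrics
```

Each job appears as a separate instance in the dashboard, so Master, Tablet Server, YCQL, and YSQL metrics stay distinguishable.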

HTTP authentication

Basic HTTP authentication.

Config
jobs:
  - name: local
    url: http://127.0.0.1:7000/prometheus-metrics
    username: username
    password: password

HTTPS with self-signed certificate

YugabyteDB endpoint served over HTTPS with a self-signed certificate.

Config
jobs:
  - name: local
    url: https://127.0.0.1:7000/prometheus-metrics
    tls_skip_verify: yes

Multi-instance

Note: When you define multiple jobs, their names must be unique.

Collecting metrics from local and remote instances.

Config
jobs:
  - name: local
    url: http://127.0.0.1:7000/prometheus-metrics

  - name: remote
    url: http://192.0.2.1:7000/prometheus-metrics

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the yugabytedb collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn't working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
  • Switch to the netdata user.

    sudo -u netdata -s
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m yugabytedb

    To debug a specific job:

    ./go.d.plugin -d -m yugabytedb -j jobName

Getting Logs

If you're encountering problems with the yugabytedb collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep yugabytedb

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep yugabytedb /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:

docker logs netdata 2>&1 | grep yugabytedb
