ProxySQL

Plugin: go.d.plugin Module: proxysql

Overview

This collector monitors ProxySQL servers.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

This integration doesn't support auto-detection.

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per ProxySQL instance

These metrics refer to the entire monitored application.

This scope has no labels.

Metrics:

| Metric | Dimensions | Unit |
|---|---|---|
| proxysql.client_connections_count | connected, non_idle, hostgroup_locked | connections |
| proxysql.client_connections_rate | created, aborted | connections/s |
| proxysql.server_connections_count | connected | connections |
| proxysql.server_connections_rate | created, aborted, delayed | connections/s |
| proxysql.backends_traffic | recv, sent | B/s |
| proxysql.clients_traffic | recv, sent | B/s |
| proxysql.active_transactions_count | client | connections |
| proxysql.questions_rate | questions | questions/s |
| proxysql.slow_queries_rate | slow | queries/s |
| proxysql.queries_rate | autocommit, autocommit_filtered, commit_filtered, rollback, rollback_filtered, backend_change_user, backend_init_db, backend_set_names, frontend_init_db, frontend_set_names, frontend_use_db | queries/s |
| proxysql.backend_statements_count | total, unique | statements |
| proxysql.backend_statements_rate | prepare, execute, close | statements/s |
| proxysql.client_statements_count | total, unique | statements |
| proxysql.client_statements_rate | prepare, execute, close | statements/s |
| proxysql.cached_statements_count | cached | statements |
| proxysql.query_cache_entries_count | entries | entries |
| proxysql.query_cache_memory_used | used | B |
| proxysql.query_cache_io | in, out | B/s |
| proxysql.query_cache_requests_rate | read, write, read_success | requests/s |
| proxysql.mysql_monitor_workers_count | workers, auxiliary | threads |
| proxysql.mysql_monitor_workers_rate | started | workers/s |
| proxysql.mysql_monitor_connect_checks_rate | succeed, failed | checks/s |
| proxysql.mysql_monitor_ping_checks_rate | succeed, failed | checks/s |
| proxysql.mysql_monitor_read_only_checks_rate | succeed, failed | checks/s |
| proxysql.mysql_monitor_replication_lag_checks_rate | succeed, failed | checks/s |
| proxysql.jemalloc_memory_used | active, allocated, mapped, metadata, resident, retained | B |
| proxysql.memory_used | auth, sqlite3, query_digest, query_rules, firewall_users_table, firewall_users_config, firewall_rules_table, firewall_rules_config, mysql_threads, admin_threads, cluster_threads | B |
| proxysql.uptime | uptime | seconds |

Per command

These metrics refer to the SQL command.

Labels:

| Label | Description |
|---|---|
| command | SQL command. |

Metrics:

| Metric | Dimensions | Unit |
|---|---|---|
| proxysql.mysql_command_execution_rate | uptime | seconds |
| proxysql.mysql_command_execution_time | time | microseconds |
| proxysql.mysql_command_execution_duration | 100us, 500us, 1ms, 5ms, 10ms, 50ms, 100ms, 500ms, 1s, 5s, 10s, +Inf | microseconds |

Per user

These metrics refer to the user.

Labels:

| Label | Description |
|---|---|
| user | username from the mysql_users table |

Metrics:

| Metric | Dimensions | Unit |
|---|---|---|
| proxysql.mysql_user_connections_utilization | used | percentage |
| proxysql.mysql_user_connections_count | used | connections |

Per hostgroup

These metrics refer to the backends hostgroup.

Labels:

| Label | Description |
|---|---|
| hostgroup | hostgroup identifier |

Metrics:

| Metric | Dimensions | Unit |
|---|---|---|
| proxysql.hostgroup_backends_status | online, shunned, offline_soft, offline_hard | backends |

Per backend

These metrics refer to the backend server.

Labels:

| Label | Description |
|---|---|
| hostgroup | backend server hostgroup |
| host | backend server host |
| port | backend server port |

Metrics:

| Metric | Dimensions | Unit |
|---|---|---|
| proxysql.backend_status | online, shunned, offline_soft, offline_hard | status |
| proxysql.backend_connections_usage | free, used | connections |
| proxysql.backend_connections_rate | succeed, failed | connections/s |
| proxysql.backend_queries_rate | queries | queries/s |
| proxysql.backend_traffic | recv, send | B/s |
| proxysql.backend_latency | latency | microseconds |

Functions

This collector exposes real-time functions for interactive troubleshooting in the Top tab.

Top Queries

Top SQL queries from ProxySQL query digest stats.

It queries the stats_mysql_query_digest table and returns the top entries sorted by the selected column.
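
For orientation, the function reads roughly the same data you can fetch yourself from the ProxySQL admin interface. A minimal sketch with the mysql client, assuming the default admin port 6032 and the stats:stats credentials from the default DSN shown later on this page (sorting by sum_time approximates the default totalTime sort):

# Top 10 query digests by total execution time, read from the ProxySQL admin interface.
# The address and credentials are assumptions; adjust them to your deployment.
mysql -h 127.0.0.1 -P 6032 -u stats -pstats -e "
  SELECT hostgroup, schemaname, username, digest_text, count_star, sum_time
  FROM stats_mysql_query_digest
  ORDER BY sum_time DESC
  LIMIT 10;"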

| Aspect | Description |
|---|---|
| Name | Proxysql:top-queries |
| Performance | Uses ProxySQL stats tables and can be expensive on busy systems. |
| Security | Query text may contain unmasked literals (potential PII). |
| Availability | Available when the collector can query ProxySQL stats; returns errors if the SQL connection is unavailable. |

Prerequisites

Grant access to stats_mysql_query_digest

Ensure the ProxySQL user can read stats_mysql_query_digest.
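
A quick way to confirm access is to run a read against the table with the same credentials the collector uses. A minimal check, assuming the default stats:stats user and admin port 6032:

# Should return a row count rather than a permissions error.
mysql -h 127.0.0.1 -P 6032 -u stats -pstats -e "SELECT COUNT(*) FROM stats_mysql_query_digest;"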

Parameters

| Parameter | Type | Description | Required | Default | Options |
|---|---|---|---|---|---|
| Filter By | select | Select the primary sort column (options are derived from sortable columns in the response). | yes | totalTime | |

Returns

Query digest statistics from ProxySQL.

| Column | Type | Unit | Visibility | Description |
|---|---|---|---|---|
| Digest | string | | hidden | |
| Query | string | | | |
| Schema | string | | | |
| User | string | | hidden | |
| Hostgroup | integer | | hidden | |
| Calls | integer | | | |
| Total Time | duration | milliseconds | | |
| Avg Time | duration | milliseconds | | |
| Min Time | duration | milliseconds | hidden | |
| Max Time | duration | milliseconds | hidden | |
| Rows Affected | integer | | | |
| Rows Sent | integer | | | |
| Errors | integer | | | |
| Warnings | integer | | | |
| First Seen | string | | hidden | |
| Last Seen | string | | hidden | |

Alerts

The following alerts are available:

| Alert name | On metric | Description |
|---|---|---|
| proxysql_hostgroup_no_online_backends | proxysql.hostgroup_backends_status | ProxySQL hostgroup ${label:hostgroup} has no ONLINE backends |
| proxysql_backend_shunned | proxysql.backend_status | ProxySQL backend SHUNNED (${label:host}:${label:port} hostgroup ${label:hostgroup}) |
| proxysql_backend_offline_hard | proxysql.backend_status | ProxySQL backend OFFLINE_HARD (${label:host}:${label:port} hostgroup ${label:hostgroup}) |

Setup

You can configure the proxysql collector in two ways:

| Method | Best for | How to |
|---|---|---|
| UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for proxysql, then click + to add a job. |
| File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/proxysql.conf and add a job. |
important

UI configuration requires a paid Netdata Cloud plan.

Prerequisites

No action required.

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

Config options
| Group | Option | Description | Default | Required |
|---|---|---|---|---|
| Collection | update_every | Data collection frequency (seconds). | 1 | no |
| | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no |
| Target | dsn | ProxySQL server DSN (Data Source Name). See DSN syntax. | stats:stats@tcp(127.0.0.1:6032)/ | yes |
| | timeout | Query timeout (seconds). | 1 | no |
| Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no |

via UI

Configure the proxysql collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the proxysql data-collection job to run and click Configure this node.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type proxysql (or scroll the list) to locate the proxysql collector.
  5. Click the + next to the proxysql collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/proxysql.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/proxysql.conf

Examples

TCP socket

An example configuration.

Config
jobs:
  - name: local
    dsn: stats:stats@tcp(127.0.0.1:6032)/

my.cnf

An example configuration.

Config
jobs:
  - name: local
    my.cnf: '/etc/my.cnf'

Multi-instance

Note: When you define multiple jobs, their names must be unique.

Local and remote instances.

Config
jobs:
  - name: local
    dsn: stats:stats@tcp(127.0.0.1:6032)/

  - name: remote
    dsn: stats:stats@tcp(203.0.113.0:6032)/
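
After saving the file, restart the Netdata service so go.d.plugin picks up the new jobs. On a systemd-managed install this is typically done with the command below (adjust to your service manager):

# Restart Netdata to reload go.d.plugin job definitions.
sudo systemctl restart netdata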

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the proxysql collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn't working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
  • Switch to the netdata user.

    sudo -u netdata -s
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m proxysql

    To debug a specific job:

    ./go.d.plugin -d -m proxysql -j jobName

Getting Logs

If you're encountering problems with the proxysql collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep proxysql

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep proxysql /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:

docker logs netdata 2>&1 | grep proxysql
