

Plugin: python.d.plugin Module: ceph


This collector monitors Ceph metrics: cluster statistics, OSD usage, latency, and pool statistics.

It uses the rados Python module to connect to a Ceph cluster.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

This integration doesn't support auto-detection.

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.


Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per Ceph instance

These metrics refer to the entire monitored application.

This scope has no labels.


| Metric | Dimensions | Unit |
|--------|------------|------|
| ceph.general_usage | avail, used | KiB |
| ceph.general_bytes | read, write | KiB/s |
| ceph.general_operations | read, write | operations |
| ceph.general_latency | apply, commit | milliseconds |
| ceph.pool_usage | a dimension per Ceph Pool | KiB |
| ceph.pool_objects | a dimension per Ceph Pool | objects |
| ceph.pool_read_bytes | a dimension per Ceph Pool | KiB/s |
| ceph.pool_write_bytes | a dimension per Ceph Pool | KiB/s |
| ceph.pool_read_operations | a dimension per Ceph Pool | operations |
| ceph.pool_write_operations | a dimension per Ceph Pool | operations |
| ceph.osd_usage | a dimension per Ceph OSD | KiB |
| ceph.osd_size | a dimension per Ceph OSD | KiB |
| ceph.apply_latency | a dimension per Ceph OSD | milliseconds |
| ceph.commit_latency | a dimension per Ceph OSD | milliseconds |
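The general and per-pool charts above are derived from cluster-level statistics of the kind Ceph's df command reports. As a rough sketch only, not the collector's actual code, and assuming a simplified JSON shape (real ceph df -f json output differs between Ceph releases):

```python
import json

# Hypothetical, abridged sample in the spirit of `ceph df -f json` output
sample = json.loads("""
{"stats": {"total_kb": 1048576, "total_used_kb": 262144, "total_avail_kb": 786432},
 "pools": [{"name": "rbd", "stats": {"kb_used": 2048, "objects": 12}}]}
""")

def general_usage(df):
    """Avail/used in KiB, as charted by ceph.general_usage."""
    s = df["stats"]
    return {"avail": s["total_avail_kb"], "used": s["total_used_kb"]}

def pool_usage(df):
    """One dimension per pool, as charted by ceph.pool_usage."""
    return {p["name"]: p["stats"]["kb_used"] for p in df["pools"]}

print(general_usage(sample))  # -> {'avail': 786432, 'used': 262144}
print(pool_usage(sample))     # -> {'rbd': 2048}
```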


Alerts

The following alerts are available:

| Alert name | On metric | Description |
|------------|-----------|-------------|
| ceph_cluster_space_usage | ceph.general_usage | cluster disk space utilization |



Prerequisites

rados python module

Make sure the rados Python module is installed.
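A quick way to verify is to try importing the module with the same Python interpreter that python.d.plugin uses. The package name python3-rados is an assumption based on common distributions, not taken from this page:

```shell
# Check whether the rados bindings are importable; if not, install the
# distro package (commonly named python3-rados on Debian/Ubuntu/Fedora)
python3 -c 'import rados; print(rados.__name__)' \
  || echo "rados module missing - install e.g. python3-rados"
```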

Grant the ceph group read permission on the keyring file

Execute: chmod 640 /etc/ceph/ceph.client.admin.keyring
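Mode 640 gives the owner read/write and the group read-only access; the netdata user must also be a member of the ceph group for the group-read bit to matter. The effect can be checked on a stand-in file:

```shell
# Demonstrate the 640 permission pattern on a temporary stand-in file
# (the real target is /etc/ceph/ceph.client.admin.keyring)
f="$(mktemp)"
chmod 640 "$f"
stat -c '%a' "$f"   # -> 640 (owner rw, group r, others none)
rm -f "$f"
```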

Create a specific rados_id

You can optionally create a rados_id to use instead of admin.
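As a sketch, a dedicated user could be created with ceph auth get-or-create. The user name client.netdata and the capability set below are assumptions for illustration, not taken from this page; adjust them to your cluster's policy:

```shell
# Hypothetical: create a dedicated read-only Ceph user for Netdata
# (the name "client.netdata" and the mon capability are illustrative)
ceph auth get-or-create client.netdata mon 'allow r' \
    -o /etc/ceph/ceph.client.netdata.keyring
chmod 640 /etc/ceph/ceph.client.netdata.keyring
```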



The configuration file name for this integration is python.d/ceph.conf.

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config python.d/ceph.conf


There are 2 sections:

  • Global variables
  • One or more JOBS that can define multiple different instances to monitor.

The following options can be defined globally: priority, penalty, autodetection_retry, update_every. They can also be defined per JOB to override the global values.

Additionally, the following table lists all the options that can be configured inside a JOB definition.

Every configuration JOB starts with a job_name value which will appear in the dashboard, unless a name parameter is specified.
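For instance, the structure might look like this; the job names local and remote and the remote paths are illustrative, not defaults:

```yaml
# Two illustrative JOB definitions; the top-level keys "local" and
# "remote" are the job_name values shown on the dashboard
local:
  config_file: '/etc/ceph/ceph.conf'
  keyring_file: '/etc/ceph/ceph.client.admin.keyring'

remote:
  name: 'ceph_remote'    # overrides job_name in the dashboard
  config_file: '/etc/ceph/remote.conf'    # hypothetical path
  keyring_file: '/etc/ceph/remote.client.admin.keyring'
```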

Config options

| Name | Description | Default | Required |
|------|-------------|---------|----------|
| update_every | Sets the default data collection frequency. | 5 | no |
| priority | Controls the order of charts at the netdata dashboard. | 60000 | no |
| autodetection_retry | Sets the job re-check interval in seconds. | 0 | no |
| penalty | Indicates whether to apply penalty to update_every in case of failures. | yes | no |
| name | Job name. This value will overwrite the job_name value. JOBS with the same name are mutually exclusive. Only one of them will be allowed running at any time. This allows autodetection to try several alternatives and pick the one that works. | | no |
| config_file | Ceph config file. | | yes |
| keyring_file | Ceph keyring file. The netdata user must be added to the ceph group, and the keyring file must be readable by that group. | | yes |
| rados_id | A rados user id to use for connecting to the Ceph cluster. | admin | no |


Basic local Ceph cluster

A basic configuration to connect to a local Ceph cluster.

config_file: '/etc/ceph/ceph.conf'
keyring_file: '/etc/ceph/ceph.client.admin.keyring'
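If you created a dedicated rados_id instead of using admin, point the job at its keyring. The user name netdata below is an assumed example, not taken from this page:

```yaml
config_file: '/etc/ceph/ceph.conf'
keyring_file: '/etc/ceph/ceph.client.netdata.keyring'    # hypothetical keyring
rados_id: 'netdata'    # hypothetical rados user id
```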


Debug Mode

To troubleshoot issues with the ceph collector, run the python.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn't working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
  • Switch to the netdata user.

    sudo -u netdata -s
  • Run the python.d.plugin to debug the collector:

    ./python.d.plugin ceph debug trace
