NVMe devices
Plugin: go.d.plugin Module: nvme
Overview
This collector monitors the health of NVMe devices. It relies on the `nvme` CLI tool but avoids directly executing the binary. Instead, it utilizes `ndsudo`, a Netdata helper specifically designed to run privileged commands securely within the Netdata environment. This approach eliminates the need to use `sudo`, improving security and potentially simplifying permission management.
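For reference, the collector gathers the same device and SMART/health information that `nvme-cli` itself can print. The commands below are only an illustration of that data if you want to inspect it manually; the collector invokes the tool through `ndsudo`, and the device path is an example:

```bash
# Illustration: inspect the same information manually with nvme-cli
sudo nvme list --output-format=json                    # enumerate NVMe controllers and namespaces
sudo nvme smart-log /dev/nvme0 --output-format=json    # SMART/health log for one controller (example path)
```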
This collector is only supported on the following platforms:
- Linux
- BSD
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
Default Behavior
Auto-Detection
This integration doesn't support auto-detection.
Limits
The default configuration for this integration does not impose any limits on data collection.
Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
Metrics
Metrics grouped by scope.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Per device
These metrics refer to the NVMe device.
Labels:
Label | Description |
---|---|
device | NVMe device name |
model_number | NVMe device model |
Metrics:
Metric | Dimensions | Unit |
---|---|---|
nvme.device_estimated_endurance_perc | used | % |
nvme.device_available_spare_perc | spare | % |
nvme.device_composite_temperature | temperature | celsius |
nvme.device_io_transferred_count | read, written | bytes |
nvme.device_power_cycles_count | power | cycles |
nvme.device_power_on_time | power-on | seconds |
nvme.device_critical_warnings_state | available_spare, temp_threshold, nvm_subsystem_reliability, read_only, volatile_mem_backup_failed, persistent_memory_read_only | state |
nvme.device_unsafe_shutdowns_count | unsafe | shutdowns |
nvme.device_media_errors_rate | media | errors/s |
nvme.device_error_log_entries_rate | error_log | entries/s |
nvme.device_warning_composite_temperature_time | wctemp | seconds |
nvme.device_critical_composite_temperature_time | cctemp | seconds |
nvme.device_thermal_mgmt_temp1_transitions_rate | temp1 | transitions/s |
nvme.device_thermal_mgmt_temp2_transitions_rate | temp2 | transitions/s |
nvme.device_thermal_mgmt_temp1_time | temp1 | seconds |
nvme.device_thermal_mgmt_temp2_time | temp2 | seconds |
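Once the collector is running, one quick way to confirm these metrics are being collected (assuming the agent listens on the default `localhost:19999`) is to query the agent's `allmetrics` endpoint and filter for the NVMe charts:

```bash
# Print currently collected NVMe metrics in Prometheus text format (default port assumed)
curl -s 'http://localhost:19999/api/v1/allmetrics?format=prometheus' | grep -i nvme
```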
Alerts
The following alerts are available:
Alert name | On metric | Description |
---|---|---|
nvme_device_critical_warnings_state | nvme.device_critical_warnings_state | NVMe device ${label:device} has critical warnings |
Setup
Prerequisites
Install nvme-cli
See Distro Support. Install `nvme-cli` using your distribution's package manager.
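For example, on many common distributions the package is named `nvme-cli` (package name and manager may differ on your system):

```bash
# Debian / Ubuntu
sudo apt install nvme-cli

# Fedora / RHEL
sudo dnf install nvme-cli
```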
For Netdata running in a Docker container: grant NVMe device access
Your NVMe devices need to be accessible within the Docker container for Netdata to monitor them.
Include the following option in your `docker run` command or add the device mapping in your `docker-compose.yml` file:

`docker run`:

```bash
--device '/dev/nvme0n1:/dev/nvme0n1'
```

`docker-compose.yml`:

```yaml
services:
  netdata:
    devices:
      - "/dev/nvme0n1:/dev/nvme0n1"
```

Note: Replace `/dev/nvme0n1` with your actual NVMe device name.
Configuration
File
The configuration file name for this integration is `go.d/nvme.conf`.

You can edit the configuration file using the `edit-config` script from the Netdata config directory.

```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/nvme.conf
```
Options
The following options can be defined globally: update_every, autodetection_retry.
Config options
Name | Description | Default | Required |
---|---|---|---|
update_every | Data collection frequency. | 10 | no |
autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |
timeout | nvme binary execution timeout in seconds. | 2 | no |
Examples
Custom update_every
Allows you to override the default data collection interval.
Config
```yaml
jobs:
  - name: nvme
    update_every: 5  # Collect NVMe metrics every 5 seconds
```
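As a further, purely illustrative example, you can combine a custom collection interval with a higher `timeout` if `nvme` command execution is slow on your system; the values below are placeholders, not recommendations:

```yaml
jobs:
  - name: nvme
    update_every: 5   # collect every 5 seconds
    timeout: 10       # allow the nvme binary up to 10 seconds to respond
```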
Troubleshooting
Debug Mode
Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.
To troubleshoot issues with the `nvme` collector, run the `go.d.plugin` with the debug option enabled. The output should give you clues as to why the collector isn't working.

1. Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

   ```bash
   cd /usr/libexec/netdata/plugins.d/
   ```

2. Switch to the `netdata` user.

   ```bash
   sudo -u netdata -s
   ```

3. Run the `go.d.plugin` to debug the collector:

   ```bash
   ./go.d.plugin -d -m nvme
   ```
Getting Logs
If you're encountering problems with the `nvme` collector, follow these steps to retrieve logs and identify potential issues:
- Run the command specific to your system (systemd, non-systemd, or Docker container).
- Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.
System with systemd
Use the following command to view logs generated since the last Netdata service restart:
```bash
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep nvme
```
System without systemd
Locate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for the collector's name:

```bash
grep nvme /var/log/netdata/collector.log
```
Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.
Docker Container
If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:
```bash
docker logs netdata 2>&1 | grep nvme
```