
Nvidia GPU collector

Monitors performance metrics (memory usage, fan speed, PCIe bandwidth utilization, temperature, and more) using the nvidia-smi CLI tool.

Warning: this collector does not work when the Netdata Agent is running in a container.
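Under the hood the collector periodically shells out to nvidia-smi and parses its query output. You can inspect the kind of data it works with by running an XML query yourself. This is a sketch: the -q and -x flags are standard nvidia-smi options, and the guard is only there for machines where the tool is missing.

```shell
# Print the first lines of nvidia-smi's XML query output,
# or a notice if the tool is not installed on this machine.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -q -x | head -n 20
else
    echo "nvidia-smi not installed"
fi
```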

Requirements and Notes

  • You must have the nvidia-smi tool installed, and your NVIDIA GPU(s) must support it. Support is mostly limited to newer, higher-end models used for AI/ML and crypto workloads, and to the professional range; see the nvidia-smi documentation for details.

  • You must enable this plugin, as it's disabled by default due to minor performance issues:

    cd /etc/netdata   # Replace this path with your Netdata config directory, if different
    sudo ./edit-config python.d.conf

    Remove the '#' before nvidia_smi so it reads: nvidia_smi: yes.

  • On some systems, the nvidia-smi tool unloads when the GPU is idle, which adds latency the next time it is queried. If your GPUs run under constant workload, this is unlikely to be an issue.

  • Currently the nvidia-smi tool is queried via the CLI. Updating the plugin to use the NVIDIA C/C++ API directly should resolve this issue.

  • Contributions are welcome.

  • Make sure the netdata user can execute /usr/bin/nvidia-smi (or wherever your binary is installed).

  • If the nvidia-smi process is not killed after a Netdata restart, set loop_mode to no.

  • poll_seconds is the interval, in seconds (an integer), at which the tool is polled.
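The requirements above can be verified with a short pre-flight script before enabling the plugin. This is a sketch: the check_tool helper is hypothetical, and the netdata user name is the default assumed elsewhere on this page; adjust both for your system.

```shell
# check_tool: report whether a binary is available in PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found at $(command -v "$1")"
    else
        echo "$1: not found in PATH"
        return 1
    fi
}

check_tool nvidia-smi || echo "install the NVIDIA driver utilities first"

# To confirm the netdata user can execute it, list the GPUs as that user:
#   sudo -u netdata nvidia-smi -L
```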


It produces the following charts:

  • PCI Express Bandwidth Utilization in KiB/s
  • Fan Speed in percentage
  • GPU Utilization in percentage
  • Memory Bandwidth Utilization in percentage
  • Encoder/Decoder Utilization in percentage
  • Memory Usage in MiB
  • Temperature in Celsius
  • Clock Frequencies in MHz
  • Power Utilization in Watts
  • Memory Used by Each Process in MiB
  • Memory Used by Each User in MiB
  • Number of Users on GPU in num


Configuration

Edit the python.d/nvidia_smi.conf configuration file using edit-config from the Netdata config directory, which is typically at /etc/netdata.

cd /etc/netdata   # Replace this path with your Netdata config directory, if different
sudo ./edit-config python.d/nvidia_smi.conf


loop_mode    : yes    # keep nvidia-smi running between polls to avoid reload latency
poll_seconds : 1      # polling interval in seconds (integer)
exclude_zero_memory_users : yes    # skip per-user charts for users with no GPU memory in use


Troubleshooting

To troubleshoot issues with the nvidia_smi module, run the python.d.plugin with the debug option enabled. The output will show the data collected by the job, or error messages explaining why the collector isn't working.

First, navigate to your plugins directory, usually located at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins directory setting. Once you're in the plugins directory, switch to the netdata user.

cd /usr/libexec/netdata/plugins.d/
sudo su -s /bin/bash netdata

Now you can manually run the nvidia_smi module in debug mode:

./python.d.plugin nvidia_smi debug trace
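If you suspect the loop-mode issue described above (an nvidia-smi process surviving a Netdata restart), a quick way to check for stray processes is the following sketch; the ps column options shown are the common procps ones.

```shell
# List any nvidia-smi processes still running; the [n] in the pattern
# prevents grep from matching its own command line.
ps -eo pid,comm | grep '[n]vidia-smi' || echo "no stray nvidia-smi process"
```

If a stray process shows up after every restart, set loop_mode: no in python.d/nvidia_smi.conf.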
