Dell PowerStore
Plugin: go.d.plugin Module: powerstore
Overview
This collector monitors Dell PowerStore storage arrays via the PowerStore REST API.
It collects metrics for the following components:
- Cluster Capacity — physical and logical space usage, efficiency ratios (data reduction, snapshot savings, thin provisioning)
- Appliance Performance — IOPS, bandwidth, latency, CPU utilization, and space usage per appliance
- Volume Performance — IOPS, bandwidth, latency, provisioned/used space, and thin savings per volume
- Node Performance — IOPS, bandwidth, latency, and current login count per node
- FC Port Performance — IOPS, bandwidth, latency, and link status per Fibre Channel port
- Ethernet Port Performance — bytes/packets rate, error rates, and link status per Ethernet port
- File System Performance — IOPS, bandwidth, and latency per file system
- Hardware Health — component-level status counts (fans, PSUs, drives, batteries, nodes) by health state (OK, degraded, failed, unknown)
- Active Alerts — count of currently active alerts by severity (critical, major, minor, info)
- Drive Wear — SSD endurance remaining percentage per drive
- NAS Server Status — operational status counts (started, stopped, degraded, unknown)
- Replication — data remaining, data transferred, and transfer rate across appliances
The collector uses the PowerStore REST API (/api/rest/* endpoints) to fetch data.
Authentication uses HTTP Basic Auth — the collector calls /api/rest/login_session to obtain
a session cookie and a DELL-EMC-TOKEN CSRF token, which are reused for subsequent requests.
Sessions are automatically refreshed on HTTP 403 (token expiry).
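For illustration, here is a minimal Go sketch of that login handshake. It uses only the standard library; the function names, URL, and credentials are placeholders, not the collector's actual code.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
)

// login performs the handshake described above: a GET to /api/rest/login_session
// with HTTP Basic Auth. The session cookie lands in the client's cookie jar;
// the CSRF token comes back in the DELL-EMC-TOKEN response header and must be
// echoed as a request header on subsequent calls.
func login(client *http.Client, baseURL, user, pass string) (string, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/api/rest/login_session", nil)
	if err != nil {
		return "", err
	}
	req.SetBasicAuth(user, pass)

	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("login failed: HTTP %d", resp.StatusCode)
	}
	return resp.Header.Get("DELL-EMC-TOKEN"), nil
}

func main() {
	jar, _ := cookiejar.New(nil) // holds the session cookie between requests
	client := &http.Client{Jar: jar}

	// On HTTP 403 from a later request, the collector repeats this login
	// to obtain a fresh session and token.
	token, err := login(client, "https://10.0.0.1", "monitor", "secret")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CSRF token:", token)
}
```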
Discovery runs on startup and then every 5 collection cycles (every 150 seconds at the default 30-second interval) to refresh the hardware inventory, appliances, volumes, nodes, ports, file systems, and NAS servers.
Performance and space metrics are collected via the /api/rest/metrics/generate endpoint using
the PowerStore metrics generation API. The collector requests Five_Mins interval data for
performance metrics and One_Day for cluster space metrics, taking the most recent data point.
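As a rough illustration, a metrics request could be built like the sketch below. The entity name, entity ID value, and JSON field names are assumptions for illustration, not verified against the collector's source.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// metricsQuery models the POST body for /api/rest/metrics/generate
// as described above. Field names are assumed, not a guaranteed contract.
type metricsQuery struct {
	Entity   string `json:"entity"`    // e.g. "performance_metrics_by_appliance" (assumed)
	EntityID string `json:"entity_id"` // appliance/volume/node/port ID from discovery
	Interval string `json:"interval"`  // "Five_Mins" for performance, "One_Day" for space
}

func main() {
	body, _ := json.Marshal(metricsQuery{
		Entity:   "performance_metrics_by_appliance",
		EntityID: "A1",
		Interval: "Five_Mins",
	})
	req, _ := http.NewRequest(http.MethodPost,
		"https://10.0.0.1/api/rest/metrics/generate", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	// The session cookie and DELL-EMC-TOKEN header from login would be attached here.
	fmt.Println(req.URL, string(body))
}
```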
Discovery API calls use pagination (2000 items per page, HTTP 206 for partial results). Performance metrics for different entities are collected concurrently (up to 10 parallel API calls).
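The paging loop might look like the following sketch. It assumes limit/offset query parameters and treats HTTP 206 Partial Content as "more pages remain"; the endpoint shown is illustrative, and the real collector additionally fans these calls out concurrently.

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchAll pages through a discovery endpoint 2000 items at a time,
// continuing while the array answers 206 Partial Content.
func fetchAll(client *http.Client, base string) error {
	const pageSize = 2000
	for offset := 0; ; offset += pageSize {
		url := fmt.Sprintf("%s/api/rest/volume?limit=%d&offset=%d", base, pageSize, offset)
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		// ... decode and accumulate resp.Body here ...
		resp.Body.Close()

		if resp.StatusCode != http.StatusPartialContent { // 206 means more pages follow
			return nil // a 200 response marks the last (or only) page
		}
	}
}

func main() {
	_ = fetchAll(http.DefaultClient, "https://10.0.0.1")
}
```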
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
A user account on the PowerStore array is required. Any role with API read access
is sufficient (e.g., the built-in Operator role). The account does not need write permissions.
Default Behavior
Auto-Detection
The collector does not auto-detect PowerStore arrays. You must configure the management IP address, username, and password explicitly.
Limits
The collector makes multiple API calls per collection cycle: discovery calls (once every 5 cycles) plus per-entity metrics calls (appliances, volumes, nodes, ports, file systems, drives, and replication — one API call per entity). With many volumes or ports, the number of API calls scales linearly (monitoring 200 volumes, for example, adds roughly 200 metrics calls per cycle). The default 30-second collection interval provides ample headroom.
Performance Impact
The PowerStore REST API is lightweight and read-only. The collector uses the metrics generation API which returns pre-aggregated data, imposing minimal load on the array. Session tokens and CSRF tokens are cached across collection cycles.
Metrics
Metrics grouped by scope.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
Per Dell PowerStore instance
These metrics refer to the entire monitored PowerStore cluster.
This scope has no labels.
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.cluster_space_usage | used, total | bytes |
| powerstore.cluster_space_logical | provisioned, used, data_physical, shared | bytes |
| powerstore.cluster_space_efficiency | efficiency, data_reduction, snapshot_savings, thin_savings | ratio |
| powerstore.hardware_health_fan | ok, degraded, failed, unknown | fans |
| powerstore.hardware_health_psu | ok, degraded, failed, unknown | PSUs |
| powerstore.hardware_health_drive | ok, degraded, failed, unknown | drives |
| powerstore.hardware_health_battery | ok, degraded, failed, unknown | batteries |
| powerstore.hardware_health_node | ok, degraded, failed, unknown | nodes |
| powerstore.alerts_active | critical, major, minor, info | alerts |
| powerstore.nas_server_status | started, stopped, degraded, unknown | servers |
| powerstore.copy_data | remaining, transferred | bytes |
| powerstore.copy_transfer_rate | rate | bytes/s |
Per appliance
These metrics refer to individual PowerStore appliances.
Labels:
| Label | Description |
|---|---|
| appliance | Appliance name (e.g., Appliance-WX-D8031). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.appliance_iops | read, write | ops/s |
| powerstore.appliance_bandwidth | read, write | bytes/s |
| powerstore.appliance_latency | read, write, avg | microseconds |
| powerstore.appliance_cpu | utilization | percentage |
| powerstore.appliance_space | used, total | bytes |
| powerstore.appliance_space_logical | provisioned, used, data_physical, shared | bytes |
| powerstore.appliance_space_efficiency | efficiency, data_reduction, snapshot_savings, thin_savings | ratio |
Per volume
These metrics refer to individual PowerStore volumes. Volumes can be filtered using the volume_selector configuration option.
Labels:
| Label | Description |
|---|---|
| volume | Volume name (e.g., prod-db-01). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.volume_iops | read, write | ops/s |
| powerstore.volume_bandwidth | read, write | bytes/s |
| powerstore.volume_latency | read, write, avg | microseconds |
| powerstore.volume_space | provisioned, used | bytes |
| powerstore.volume_space_savings | thin_savings | ratio |
Per node
These metrics refer to individual PowerStore nodes within an appliance.
Labels:
| Label | Description |
|---|---|
| node | Node name (e.g., Appliance-WX-D8031-node-A). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.node_iops | read, write | ops/s |
| powerstore.node_bandwidth | read, write | bytes/s |
| powerstore.node_latency | read, write, avg | microseconds |
| powerstore.node_logins | logins | logins |
Per fc port
These metrics refer to individual Fibre Channel ports used for host connectivity.
Labels:
| Label | Description |
|---|---|
| fc_port | FC port name (e.g., Appliance-WX-D8031-node-A-Port-0). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.fc_port_iops | read, write | ops/s |
| powerstore.fc_port_bandwidth | read, write | bytes/s |
| powerstore.fc_port_latency | read, write, avg | microseconds |
| powerstore.fc_port_link_status | up | status |
Per ethernet port
These metrics refer to individual Ethernet ports (management and iSCSI connectivity).
Labels:
| Label | Description |
|---|---|
| eth_port | Ethernet port name (e.g., Appliance-WX-D8031-node-A-Port-eth0). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.eth_port_bytes | received, sent | bytes/s |
| powerstore.eth_port_packets | received, sent | packets/s |
| powerstore.eth_port_errors | rx_crc, rx_no_buffer, tx_error | errors/s |
| powerstore.eth_port_link_status | up | status |
Per file system
These metrics refer to individual file systems (NFS/SMB shares).
Labels:
| Label | Description |
|---|---|
| filesystem | File system name (e.g., nfs-share-01). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.filesystem_iops | read, write | ops/s |
| powerstore.filesystem_bandwidth | read, write | bytes/s |
| powerstore.filesystem_latency | read, write, avg | microseconds |
Per drive
These metrics refer to individual physical drives (SSDs).
Labels:
| Label | Description |
|---|---|
| drive | Drive name (e.g., Drive_0_0_25). |
Metrics:
| Metric | Dimensions | Unit |
|---|---|---|
| powerstore.drive_endurance | remaining | percentage |
Alerts
There are no alerts configured by default for this integration.
Setup
You can configure the powerstore collector in two ways:
| Method | Best for | How to |
|---|---|---|
| UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for powerstore, then click + to add a job. |
| File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/powerstore.conf and add a job. |
UI configuration requires a paid Netdata Cloud plan.
Prerequisites
PowerStore API access
- A user account on the PowerStore array with at least read-only access. The built-in `Operator` role is sufficient.
- Network connectivity from the Netdata Agent to the PowerStore management IP (HTTPS, port 443).
- The PowerStore management REST API must be accessible (enabled by default).
TLS certificate
PowerStore arrays typically use self-signed TLS certificates. You will likely need to set `tls_skip_verify: yes` in the configuration, or provide the array's CA certificate via `tls_ca`.
Configuration
Options
The following options can be defined globally: update_every, autodetection_retry.
Config options
| Group | Option | Description | Default | Required |
|---|---|---|---|---|
| Collection | update_every | Data collection interval (seconds). | 30 | no |
| | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no |
| Target | url | PowerStore management URL. | https://127.0.0.1 | yes |
| | timeout | HTTP request timeout (seconds). | 30 | no |
| HTTP Auth | username | Username for PowerStore API authentication. | | yes |
| | password | Password for PowerStore API authentication. | | yes |
| Filtering | volume_selector | Space-separated simple patterns to filter which volumes to monitor. Use * for wildcards, ! prefix to exclude. An empty value or * monitors all volumes. | * | no |
| TLS | tls_skip_verify | Skip TLS certificate and hostname verification (insecure, but often needed for self-signed certs on storage arrays). | no | no |
| | tls_ca | Path to CA bundle used to validate the server certificate. | | no |
| | tls_cert | Path to client TLS certificate (for mTLS). | | no |
| | tls_key | Path to client TLS private key (for mTLS). | | no |
| Proxy | proxy_url | HTTP proxy URL. | | no |
| | proxy_username | Username for proxy Basic HTTP authentication. | | no |
| | proxy_password | Password for proxy Basic HTTP authentication. | | no |
| Request | headers | Additional HTTP headers (one per line as key: value). | | no |
| | not_follow_redirects | Do not follow HTTP redirects. | no | no |
| Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no |
via UI
Configure the powerstore collector from the Netdata web interface:
- Go to Nodes.
- Select the node where you want the powerstore data-collection job to run and click the ⚙ (Configure this node). That node will run the data collection.
- The Collectors → Jobs view opens by default.
- In the Search box, type powerstore (or scroll the list) to locate the powerstore collector.
- Click the + next to the powerstore collector to add a new job.
- Fill in the job fields, then click Test to verify the configuration and Submit to save.
- Test runs the job with the provided settings and shows whether data can be collected.
- If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.
via File
The configuration file name for this integration is go.d/powerstore.conf.
The file format is YAML. Generally, the structure is:
```yaml
update_every: 1
autodetection_retry: 0

jobs:
  - name: some_name1
  - name: some_name2
```
You can edit the configuration file using the edit-config script from the
Netdata config directory.
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/powerstore.conf
```
Examples
Basic
A minimal configuration connecting to a PowerStore array with a self-signed certificate.
Config
```yaml
jobs:
  - name: ps-lab
    url: https://10.0.0.1
    username: monitor
    password: monitor123
    tls_skip_verify: yes
```
With volume filtering
Monitor only production volumes, excluding temporary and test volumes.
Config
```yaml
jobs:
  - name: ps-prod
    url: https://10.0.0.1
    username: monitor
    password: monitor123
    tls_skip_verify: yes
    volume_selector: "prod-* !*-temp !*-test"
```
Multi-instance
Note: When you define multiple jobs, their names must be unique.
Monitoring multiple PowerStore arrays.
Config
```yaml
jobs:
  - name: ps-site-a
    url: https://10.0.0.1
    username: monitor
    password: monitor123
    tls_skip_verify: yes

  - name: ps-site-b
    url: https://10.0.0.2
    username: monitor
    password: monitor456
    tls_skip_verify: yes
```
Troubleshooting
Debug Mode
Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.
To troubleshoot issues with the powerstore collector, run the go.d.plugin with the debug option enabled. The output
should give you clues as to why the collector isn't working.
- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `go.d.plugin` to debug the collector:

  ```bash
  ./go.d.plugin -d -m powerstore
  ```

  To debug a specific job:

  ```bash
  ./go.d.plugin -d -m powerstore -j jobName
  ```
Getting Logs
If you're encountering problems with the powerstore collector, follow these steps to retrieve logs and identify potential issues:
- Run the command specific to your system (systemd, non-systemd, or Docker container).
- Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.
System with systemd
Use the following command to view logs generated since the last Netdata service restart:
```bash
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep powerstore
```
System without systemd
Locate the collector log file, typically at `/var/log/netdata/collector.log`, and use grep to filter for the collector's name:
```bash
grep powerstore /var/log/netdata/collector.log
```
Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.
Docker Container
If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:
```bash
docker logs netdata 2>&1 | grep powerstore
```
Authentication failure
If the collector fails with `login failed`:
- Verify the username and password are correct.
- Ensure the user account has at least `Operator` role privileges.
- Check that the account is not locked or expired on the array.
Connection refused or timeout
If the collector fails with connection errors:
- Verify the `url` points to the management IP of the PowerStore array.
- Ensure HTTPS port 443 is reachable from the Netdata Agent host.
- If using a firewall, allow outbound HTTPS to the array.
- Try increasing `timeout` if the array is on a high-latency link.
TLS certificate errors
PowerStore arrays ship with self-signed certificates. If you see TLS errors:
- Set `tls_skip_verify: yes` in the configuration (most common solution).
- Alternatively, export the array's certificate and provide it via `tls_ca`.
HTTP 403 errors
PowerStore returns HTTP 403 when the session/CSRF token is stale. The collector automatically re-authenticates on 403, but if you see persistent 403 errors:
- Verify the user account still has API access.
- Check if the array's security policy has been changed to restrict API access.
- Ensure no IP-based access controls are blocking the Netdata Agent host.