VM Templates and Clones

danger

Destructive Operations - Data Loss Warning

The commands in this guide permanently delete:

  • The node's Machine GUID (its identity)
  • Cloud connection credentials (the cloud.d/ directory)
  • Node metadata databases
  • All historical metrics

This is irreversible. There is no undo.

Only run these commands on VMs you intend to convert to templates. Running these on a production system will destroy your monitoring data.

tip

What You'll Learn

How to prepare a VM template so each clone gets a unique Netdata identity and automatically connects to Netdata Cloud.

Prerequisites

  • Read first: Node Identities - understand what you're deleting
  • Netdata installed on a VM
  • Hypervisor that supports templates or golden images
  • (Optional) /etc/netdata/claim.conf configured for auto-claiming to Cloud

Overview

To prepare a VM template:

  1. Stop Netdata - Prevent file regeneration
  2. Delete identity and data files - Force new identity on clone boot
  3. Keep claim.conf - Enable auto-claiming (optional)
  4. Convert to template - Without starting Netdata

Files to Delete

danger

Verify you are on the correct VM before running these commands.

| Category | Files | What's Lost |
|---|---|---|
| Agent Identity | GUID file, status backups | Node identity |
| ACLK Auth | cloud.d/ directory | Cloud connection, must re-claim |
| Node Metadata | netdata-meta.db*, context-meta.db* | Node metadata, metric mappings |
| Metrics | dbengine* directories (all tiers) | All historical metrics |

Keep: /etc/netdata/claim.conf - enables auto-claiming on clones
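
Before deleting anything, you can list what these categories map to on disk. This is a quick sanity check, not part of the cleanup itself (paths match the commands in Step 2; missing entries are normal on some installs):

# List the files the cleanup will remove; errors for absent paths are suppressed
sudo ls -ld /var/lib/netdata/registry/netdata.public.unique.id \
            /var/lib/netdata/cloud.d \
            /var/cache/netdata/netdata-meta.db* \
            /var/cache/netdata/context-meta.db* \
            /var/cache/netdata/dbengine* 2>/dev/null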

Step-by-Step

1. Stop Netdata

sudo systemctl stop netdata
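
Optionally, confirm the service has actually stopped before deleting files:

# Should print "inactive" once the service is fully stopped
sudo systemctl is-active netdata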

2. Delete All Identity and Data Files

danger

Point of No Return

The following commands permanently delete Netdata data. Verify you are on the template VM.

# Machine GUID (Agent Self Identity)
sudo rm -f /var/lib/netdata/registry/netdata.public.unique.id

# Status file backups (GUID recovery locations)
sudo rm -f /var/lib/netdata/status-netdata.json
sudo rm -f /var/cache/netdata/status-netdata.json
sudo rm -f /tmp/status-netdata.json
sudo rm -f /run/status-netdata.json
sudo rm -f /var/run/status-netdata.json

# ACLK authentication (Claimed ID, RSA keys)
sudo rm -rf /var/lib/netdata/cloud.d/

# Databases and metrics (metadata, all dbengine tiers)
sudo rm -f /var/cache/netdata/netdata-meta.db*
sudo rm -f /var/cache/netdata/context-meta.db*
sudo rm -rf /var/cache/netdata/dbengine*
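
To confirm nothing was missed, a minimal check that the key paths are gone (this is a spot check, not an exhaustive list):

# Prints a warning for any identity file that still exists
for f in /var/lib/netdata/registry/netdata.public.unique.id \
         /var/lib/netdata/status-netdata.json \
         /var/cache/netdata/status-netdata.json \
         /var/lib/netdata/cloud.d; do
  sudo test -e "$f" && echo "STILL PRESENT: $f"
done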

3. Configure Auto-Claiming (Optional)

To have clones automatically claim to Netdata Cloud on first boot, ensure /etc/netdata/claim.conf exists:

cat /etc/netdata/claim.conf

Should contain:

[global]
url = https://app.netdata.cloud
token = YOUR_SPACE_TOKEN
rooms = ROOM_ID
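
If the file doesn't exist yet, one way to create it is with a heredoc (a sketch; replace YOUR_SPACE_TOKEN and ROOM_ID with your own values before templating):

# Write claim.conf; the quoted EOF prevents any variable expansion
sudo tee /etc/netdata/claim.conf >/dev/null <<'EOF'
[global]
url = https://app.netdata.cloud
token = YOUR_SPACE_TOKEN
rooms = ROOM_ID
EOF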

4. Convert to Template

Do not start Netdata. Convert the VM to a template using your hypervisor.

When Clones Boot

  1. Netdata starts, no GUID found, generates new unique identity
  2. If claim.conf exists, auto-claims to Cloud
  3. Cloud assigns Node ID, new node appears in your Space

Each clone is a unique, independent node.
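
To spot-check a freshly booted clone, confirm it generated its own GUID (the same file referenced in Troubleshooting below):

# Each clone should print a different UUID
sudo cat /var/lib/netdata/registry/netdata.public.unique.id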

Hypervisor Notes

The Netdata cleanup commands are the same for all hypervisors. The difference is when and how to run them.

| Hypervisor | Template Support | When to Clean | Automation |
|---|---|---|---|
| Proxmox | Convert to Template | Before conversion | cloud-init scripts |
| VMware/vSphere | VM Templates | Before conversion | Guest customization |
| libvirt/KVM | virt-sysprep | During sysprep | --delete flags |
| AWS | AMI | Before image creation | user-data scripts |
| Azure | Managed Image | Before capture | cloud-init |
| GCP | Machine Image | Before creation | startup scripts |
| Vagrant | Box packaging | Before vagrant package | Vagrantfile provisioner |
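
For example, on Proxmox the final conversion can be done from the host CLI once the in-guest cleanup is finished (a sketch; 100 is an assumed VM ID):

# Shut the VM down without starting Netdata again, then convert it
qm shutdown 100
qm template 100
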
libvirt/KVM: virt-sysprep example

virt-sysprep -a myvm.qcow2 \
--delete /var/lib/netdata/registry/netdata.public.unique.id \
--delete /var/lib/netdata/status-netdata.json \
--delete /var/cache/netdata/status-netdata.json \
--delete /tmp/status-netdata.json \
--delete /run/status-netdata.json \
--delete /var/run/status-netdata.json \
--delete /var/lib/netdata/cloud.d \
--delete '/var/cache/netdata/netdata-meta.db*' \
--delete '/var/cache/netdata/context-meta.db*' \
--delete '/var/cache/netdata/dbengine*'
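
Note that virt-sysprep edits the disk image offline, so the deletions happen without booting the guest; just make sure the VM itself is shut down before running it.
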
Cloud-init: Fresh install approach

Alternative: Install Netdata on first boot instead of templating:

#cloud-config
runcmd:
  - curl -fsSL https://get.netdata.cloud/kickstart.sh -o /tmp/kickstart.sh
  - bash /tmp/kickstart.sh --claim-token TOKEN --claim-rooms ROOM_ID

Each instance installs fresh with unique identity.

Troubleshooting

Clones share the same identity

Cause: GUID recovered from status backup. Netdata checks multiple backup locations before generating a new GUID.

Solution: Delete all status file locations, not just the primary GUID file. See the cleanup commands in Step 2.
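
To see which backup locations still exist on an affected clone, list them all (missing files are fine; any file that remains can resurrect the old GUID):

# Check every status backup location from Step 2
sudo ls -l /var/lib/netdata/status-netdata.json \
           /var/cache/netdata/status-netdata.json \
           /tmp/status-netdata.json \
           /run/status-netdata.json \
           /var/run/status-netdata.json 2>/dev/null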

Clones don't connect to Parent

Cause: Either clones share the same Machine GUID (only one can connect at a time), or stream.conf wasn't configured in the template.

Solution:

  • Verify each clone has a unique GUID: cat /var/lib/netdata/registry/netdata.public.unique.id (to compare many clones at once, see the sketch after this list)
  • Verify stream.conf exists and has the correct Parent destination and API key
  • If GUIDs are duplicated, run the cleanup on each clone (loses metrics)
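
One way to spot duplicates across a fleet is to collect GUIDs over SSH and print any repeats (a sketch; clone1 and clone2 are hypothetical hostnames):

# Any UUID printed here exists on more than one host
for host in clone1 clone2; do
  ssh "$host" sudo cat /var/lib/netdata/registry/netdata.public.unique.id
done | sort | uniq -d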

Stale "template" node appears in Cloud

Cause: Database files kept from the template. The template's node identity persists in the metadata.

Solution: Delete databases on all clones. This loses historical metrics but removes the stale node reference.

Clones using Parent profile unexpectedly

Cause: The template's stream.conf had an enabled API key section, meaning it was configured to receive streams as a Parent.

Solution: Reset stream.conf on clones or delete the API key sections that enable receiving.
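
To see which sections enable receiving, one option is to scan stream.conf for section headers and enabled flags (a sketch; section names and layout vary per deployment):

# Line-numbered view of sections and their enabled settings
sudo grep -nE '^\[|enabled' /etc/netdata/stream.conf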

Unstable Cloud connections (flapping)

Cause: Two agents have the same Machine GUID. Cloud kicks the older connection offline when the second connects.

Solution: Each agent needs a unique GUID. Run the cleanup procedure on affected clones.

Clone doesn't auto-claim to Cloud

Cause: Missing claim.conf or environment variables not set.

Solution: Create /etc/netdata/claim.conf with your Space token (see Step 3 above for the expected format).

Fixing Already-Deployed Clones

If clones were deployed with identity files:

# On each affected clone
sudo systemctl stop netdata

# Machine GUID
sudo rm -f /var/lib/netdata/registry/netdata.public.unique.id

# Status file backups (all locations)
sudo rm -f /var/lib/netdata/status-netdata.json
sudo rm -f /var/cache/netdata/status-netdata.json
sudo rm -f /tmp/status-netdata.json
sudo rm -f /run/status-netdata.json
sudo rm -f /var/run/status-netdata.json

# ACLK authentication (if re-claiming to Cloud)
sudo rm -rf /var/lib/netdata/cloud.d/

# Databases and metrics
sudo rm -f /var/cache/netdata/netdata-meta.db*
sudo rm -f /var/cache/netdata/context-meta.db*
sudo rm -rf /var/cache/netdata/dbengine*

sudo systemctl start netdata

warning

This deletes all historical metrics on the clone. If you skip deleting cloud.d/, you must re-claim to Cloud manually.

FAQ

What if I reboot a clone?

Identity persists. Netdata only generates a new GUID when the file AND all backups are missing.

Can multiple clones use the same claim token?

Yes. Each clone gets a unique Machine GUID and Claimed ID. They authenticate with the same token but appear as separate nodes.

Do containers need this?

No. Containers start with empty volumes, so each gets a unique identity automatically.

Is my claim token secure in the template?

The token only allows claiming nodes into your Space. It cannot read data or modify other nodes. Treat it like an API key: don't expose it publicly, but it's safe to keep in private templates.

