Running the Netdata Agent in a container works best for an internal network or to quickly analyze a host. Docker helps you get set up quickly, and doesn't install anything permanent on the system, which makes uninstalling the Agent easy.
See our full list of Docker images at Docker Hub.
Starting with v1.12, Netdata collects anonymous usage information by default and sends it to Google Analytics. Read about the information collected, and learn how to opt out, on our anonymous statistics page.
The usage statistics are vital for us, as we use them to discover bugs and prioritize new features. We thank you for actively contributing to Netdata's future.
Limitations running the Agent in Docker
For monitoring the whole host, running the Agent in a container can limit its capabilities. Some data, like the host OS performance or status, is not accessible or not as detailed in a container as when running the Agent directly on the host.
A way around this is to provide special mounts to the Docker container, so that the Agent can get visibility into the host OS `/proc` folders, or even the `/etc/group` and shadow files.
Also, we now ship Docker images using an `ENTRYPOINT` directive, not a `CMD` directive. Please adapt your execution scripts accordingly. You can find more information about `ENTRYPOINT` vs `CMD` in the Docker documentation.
Package scrambling at runtime (x86_64 only)
Our x86_64 Docker images support Polyverse's Polymorphic Linux package scrambling to protect against buffer overflow errors. To activate this, set the environment variable `RESCRAMBLE=true` when starting Netdata with a Docker container.
Run the Agent with the Docker command
Quickly start a new Agent with the `docker run` command. You can then access the dashboard at `http://localhost:19999`.
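As a sketch, assuming the official `netdata/netdata` image and the default dashboard port, a minimal `docker run` invocation looks like this:

```shell
# Start a Netdata Agent container in the background; the mounts give the
# Agent read-only visibility into host metrics and user/group names.
# Adjust mounts and ports to your needs.
docker run -d --name=netdata \
  -p 19999:19999 \
  -v netdataconfig:/etc/netdata \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /etc/group:/host/etc/group:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --restart unless-stopped \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata
```

The named volumes (`netdataconfig`, `netdatalib`, `netdatacache`) persist configuration and metrics across container restarts.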
Run the Agent with Docker Compose
The above can be converted to a `docker-compose.yml` file for use with Docker Compose. Run `docker-compose up -d` in the same directory as the `docker-compose.yml` file to start the container.
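For illustration, a minimal `docker-compose.yml` under the same assumptions (official image, default port) might look like:

```yaml
version: '3'
services:
  netdata:
    image: netdata/netdata
    container_name: netdata
    ports:
      - 19999:19999
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    volumes:
      - netdataconfig:/etc/netdata
      - netdatalib:/var/lib/netdata
      - netdatacache:/var/cache/netdata
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /etc/os-release:/host/etc/os-release:ro

volumes:
  netdataconfig:
  netdatalib:
  netdatacache:
```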
Our Docker image provides integrated support for health checks through the standard Docker interfaces.
You can control how the health checks run by using the environment variable
NETDATA_HEALTHCHECK_TARGET as follows:
- If left unset, the health check will attempt to access the `/api/v1/info` endpoint of the agent.
- If set to the exact value `cli`, the health check script will use `netdatacli ping` to determine if the agent is running correctly or not. This is sufficient to ensure that Netdata did not hang during startup, but does not provide a rigorous verification that the daemon is collecting data or is otherwise usable.
- If set to anything else, the health check will treat the value as a URL to check for a 200 status code. In most cases, this should be `http://localhost:19999/` to check the agent running in the container.
In most cases, the default behavior of checking the `/api/v1/info` endpoint will be sufficient. If you are using a configuration which disables the web server or restricts access to certain APIs, you will need to use a non-default configuration for health checks to work.
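For example, to switch the container to the lighter `netdatacli ping` probe, set the variable when starting it (a sketch, reusing the `docker run` form from the examples above):

```shell
# Use the netdatacli ping probe instead of the default /api/v1/info check.
docker run -d --name=netdata \
  -e NETDATA_HEALTHCHECK_TARGET=cli \
  -p 19999:19999 \
  netdata/netdata
```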
Configure Agent containers
You may need to adjust the above `docker run ...` and `docker-compose` commands based on your needs. You should consult the `docker run` and Docker Compose documentation for details, but we'll cover a few recommended configurations below, as well as those that are unique to Netdata Agent containers.
Add or remove other volumes
Some of the volumes are optional depending on how you use Netdata:
- If you don't want to use the `apps.plugin` functionality, you can remove the mounts of `/etc/group` and related files (they are used to get proper user and group names for the monitored host) to get slightly better security.
- Most modern Linux distros supply `/etc/os-release`, although some older distros only supply `/etc/lsb-release`. If this is the case, you can change the line above to mount `/etc/lsb-release` inside the container instead.
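Assuming the `os-release` mount shown in the examples above, the replacement volume line would look something like:

```shell
# Mount /etc/lsb-release instead of /etc/os-release on older distros.
  -v /etc/lsb-release:/host/etc/lsb-release:ro \
```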
- If your host is virtualized, then Netdata cannot detect it from inside the container and will output the wrong metadata (e.g. on `/api/v1/info` queries). You can fix this by setting a variable that overrides the detection, e.g. `--env VIRTUALIZATION=$(systemd-detect-virt -v)`. If you are using a `docker-compose.yml` file, this allows the same information to be passed in through its `environment:` section.
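A sketch of that Compose variant (assuming `VIRTUALIZATION` is exported on the host, e.g. via `export VIRTUALIZATION=$(systemd-detect-virt -v)`, before running `docker-compose up -d`):

```yaml
services:
  netdata:
    image: netdata/netdata
    environment:
      # Pass the host's virtualization type into the container.
      - VIRTUALIZATION=${VIRTUALIZATION}
```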
Docker container name resolution
There are a few options for resolving container names within Netdata. Some methods of doing so will allow root access to your machine from within the container. Please read the following carefully.
Docker socket proxy (safest option)
Deploy a Docker socket proxy that accepts and filters requests, using something like HAProxy, so that it restricts connections to read-only access of the `CONTAINERS` endpoint.
It's safer to expose the socket to a proxy because Netdata has a TCP port exposed outside the Docker network, while access to the proxy container is limited to within that network.
Below is an example repository (and image) that provides a proxy to the socket. You run the Docker socket proxy in its own Docker Compose file and leave it on a private network that you can add to other services that require access. Replace `2375` with the port of your proxy.
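As an illustration, using the community `tecnativa/docker-socket-proxy` image (an assumption here; any equivalent read-only proxy works), a Compose sketch could look like:

```yaml
services:
  netdata:
    image: netdata/netdata
    environment:
      # Point Netdata at the proxy instead of the raw Docker socket.
      - DOCKER_HOST=proxy:2375
  proxy:
    image: tecnativa/docker-socket-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # Allow read-only access to the containers endpoint only.
      - CONTAINERS=1
```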
Giving group access to the Docker socket (less safe)
Important Note: You should seriously consider the necessity of activating this option, as it grants the `netdata` user access to the privileged socket connection of the Docker service, and therefore to your whole machine.
If you want to have your container names resolved by Netdata, make the `netdata` user part of the group that owns the Docker socket.
To achieve that, just add the environment variable `PGID=[GROUP NUMBER]` to the Netdata container, where `[GROUP NUMBER]` is the group ID of the group assigned to the Docker socket on your host.
This group number can be found by running the following (assuming the socket's group ownership is `docker`):
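A quick way to look up that group ID on the host (assuming the socket is owned by the `docker` group):

```shell
# Print the numeric group ID of the "docker" group from /etc/group.
grep docker /etc/group | cut -d ':' -f 3
```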
Running as root (unsafe)
You should seriously consider the necessity of activating this option, as it grants the `netdata` user access to the privileged socket connection of the Docker service, and therefore to your whole machine.
Pass command line options to Netdata
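Because the image uses an `ENTRYPOINT` directive, anything appended after the image name is passed to the `netdata` daemon as command line options. A sketch:

```shell
# Append daemon options after the image name; here, -i binds the
# Netdata web server to 127.0.0.1 only.
docker run -d --name=netdata -p 19999:19999 netdata/netdata -i 127.0.0.1
```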
Install the Agent using Docker Compose with an SSL/TLS-enabled HTTP proxy
For a permanent installation on a public server, you should secure the Netdata instance. This section contains an example of how to install Netdata with an SSL reverse proxy and basic authentication.
You can use the following `docker-compose.yml` and `Caddyfile` files to run Netdata with Docker. Replace the domains and email address for Let's Encrypt before starting.
This file needs to be placed in `/opt` with the name `Caddyfile`. Here you customize your domain, and you need to provide your email address to obtain a Let's Encrypt certificate. Certificate renewal will happen automatically and will be executed internally by the Caddy server.
After setting up the `Caddyfile`, run `docker-compose up -d` to have a fully functioning Netdata setup behind an HTTP reverse proxy.
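A minimal `Caddyfile` sketch, assuming Caddy v2 syntax, the placeholder domain `netdata.example.org`, and a Compose service named `netdata`:

```
netdata.example.org {
    # Obtain and auto-renew a Let's Encrypt certificate for this email.
    tls admin@example.org
    # Forward requests to the Netdata container on the Compose network.
    reverse_proxy netdata:19999
}
```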
Restrict access with basic auth
You can restrict access by following the official Caddy guide and adding lines to your `Caddyfile`.
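For example, with Caddy v2 you can add a `basicauth` block (the username and hash below are placeholders; generate a real bcrypt hash with `caddy hash-password`):

```
netdata.example.org {
    basicauth {
        # Placeholder user and bcrypt hash -- replace with your own.
        netdata <bcrypt-hash-from-caddy-hash-password>
    }
    reverse_proxy netdata:19999
}
```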
Publish a test image to your own repository
At Netdata, we provide multiple ways of testing your Docker images using your own repositories. You may either use the command line tools available or take advantage of our Travis CI infrastructure.
Inside Netdata organization, using Travis CI
To enable Travis CI integration on your own repositories (Docker and GitHub), you need to be part of the Netdata organization.
Once you have contacted the Netdata owners to set you up on GitHub and Travis, execute the following steps:
- Have Netdata forked on your personal GitHub account
- Get a GitHub token: Go to GitHub settings -> Developer Settings -> Personal access tokens, and generate a new token with full access to `repo_hook` and read-only access to the `user:email` setting enabled. This will be your `GITHUB_TOKEN`, which is described later in the instructions, so keep it somewhere safe.
- Contact the Netdata team and request permissions on `https://scan.coverity.com`, should you require Travis to be able to push your forked code to Coverity for analysis and reporting. Once you are set up, you should have the email you used in Coverity and a token from them. These will be your `COVERITY_SCAN_TOKEN`, which we will refer to later.
- Have a valid Docker Hub account; the credentials from this account will be your Docker Hub variables (such as `DOCKER_PWD`) described later.
Setting up Travis CI for your own fork (detailed instructions provided by the Travis team here):
- Log in to Travis with your own GitHub credentials (there is OAuth access).
- Go to your profile settings, under the repositories section, and set up your Netdata fork to be built by Travis CI.
- Once the repository has been set up, go to the repository settings within Travis CI (`NETDATA_DEVELOPER` in the settings URL is your GitHub handle), and select your desired settings.
While in Travis settings, under the Netdata repository settings in the Environment Variables section, you need to add the following:
- The `DOCKER_PWD` variables, so that Travis can log in to your Docker Hub account and publish Docker images there.
- `NETDATA_DEVELOPER`, which is your GitHub handle again.
- The `GITHUB_TOKEN` variable with the token generated in the preparation step, for Travis workflows to function properly.
- The `COVERITY_SCAN_TOKEN` variables, to enable Travis to submit your code for analysis to Coverity.
Having followed these instructions, your forked repository should be all set up for integration with Travis CI. Happy testing!