Log files have been a critical resource for developers and system administrators who want to understand the health and performance of their web servers, and Netdata is taking important steps to make them even more valuable.
By parsing web server log files with Netdata, and seeing the volume of redirects, requests, or server errors over time, you can better understand what's happening on your infrastructure. Too many bad requests? Maybe a recent deploy missed a few small SVG icons. Too many requests? Time to batten down the hatches—it's a DDoS.
Netdata has been capable of monitoring web log files for quite some time, thanks to the web_log python.d module, but we recently refactored this module in Go, and that effort comes with a ton of improvements.
You can now use the LTSV log format, track TLS and cipher usage, and the whole parser is faster than ever. In one test on a system with SSD storage, the collector consistently parsed the logs for 200,000 requests in 200ms, using ~30% of a single core. To learn more about these improvements, see our v1.19 release post.
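For reference, LTSV (labeled tab-separated values) logs are just tab-separated label:value pairs, which makes them cheap to split and parse. Here is a minimal Go sketch of parsing one LTSV line — this is an illustration of the format, not the collector's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLTSV splits one LTSV-formatted log line (tab-separated
// "label:value" pairs) into a map of labels to values.
func parseLTSV(line string) map[string]string {
	fields := map[string]string{}
	for _, pair := range strings.Split(line, "\t") {
		// Cut at the first colon: the label may not contain one,
		// but the value (e.g. a timestamp) may.
		if label, value, ok := strings.Cut(pair, ":"); ok {
			fields[label] = value
		}
	}
	return fields
}

func main() {
	line := "time:2020-01-01T00:00:00Z\tstatus:200\tmethod:GET"
	fields := parseLTSV(line)
	fmt.Println(fields["status"], fields["method"])
}
```

Because every field carries its own label, the parser never has to guess column positions, which is part of why the LTSV path is fast.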
This guide will walk you through using the new Go-based web log collector to turn the logs your web servers constantly write into real-time insights into your infrastructure.
As with all data sources, Netdata can auto-detect Nginx or Apache servers if you installed them using their standard installation procedures.
Almost all web server installations will need no configuration to start collecting metrics. As long as your web server writes a readable access log file, you can configure the web log plugin to access and parse it.
To use the Go version of this plugin, you need to explicitly enable it, and disable the deprecated Python version.
First, open the python.d.conf file for editing. Find the web_log line, uncomment it, and set it to web_log: no. Next, open the go.d.conf file for editing. Find the web_log line again, uncomment it, and set it to web_log: yes. Finally, restart Netdata with service netdata restart, or the appropriate method for your system. You should see web log metrics in your Netdata dashboard!
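From the command line, the sequence of steps looks roughly like this. It assumes a standard install with the configuration directory at /etc/netdata and a service-based init system; adjust both for your setup.

```shell
# Assumes Netdata's config directory is /etc/netdata (adjust for your install).
cd /etc/netdata

# Disable the deprecated Python collector: set `web_log: no` in python.d.conf.
sudo ./edit-config python.d.conf

# Enable the Go collector: set `web_log: yes` in go.d.conf.
sudo ./edit-config go.d.conf

# Restart Netdata to pick up the changes.
sudo service netdata restart   # or: sudo systemctl restart netdata
```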
If you don't see web log charts, or web log nginx/web log apache menus on the right-hand side of your dashboard, continue reading for other configuration options.
The web log collector's default configuration comes with a few example jobs that should cover most Linux distributions and their default locations for log files:
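For reference, the shipped defaults look roughly like the sketch below. The exact job list and paths vary by Netdata version and distribution, so treat this as illustrative:

```yaml
# Example jobs in go.d/web_log.conf (paths are illustrative defaults)
jobs:
  - name: nginx
    path: /var/log/nginx/access.log

  - name: apache
    path: /var/log/apache2/access.log
```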
However, if your log files were not auto-detected, it might be because they are in a non-default location. In that case, create a custom configuration: set the path parameter to point to your web server's access log file. You can give the job a name as well.
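A custom job might look like the following sketch in go.d/web_log.conf — the job name and log path here are hypothetical, so substitute your own:

```yaml
jobs:
  - name: my_site                         # hypothetical job name
    path: /srv/my_site/logs/access.log    # point this at your real access log
```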
Restart Netdata with
service netdata restart or the appropriate method for your system. Netdata should pick up your
web server's access log and begin showing real-time charts!
The web log collector is capable of parsing custom Nginx and Apache log formats and presenting them as charts, but we'll leave that topic for a separate guide.
We do have extensive documentation on how to build custom parsing for Nginx and Apache logs.
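As a quick taste, a custom-format job can describe the fields of a non-standard Nginx log_format to the collector's csv parser. The format string and job name below are illustrative; check the go.d web_log configuration reference for the exact options your version supports:

```yaml
jobs:
  - name: custom_format                   # hypothetical job name
    path: /var/log/nginx/access.log
    log_type: csv
    csv_config:
      format: '$remote_addr - - [$time_local] "$request" $status $body_bytes_sent'
```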
Over time, we've created some default alarms for web log monitoring. These alarms are designed to work only when your web server is receiving more than 120 requests per minute. Otherwise, there's simply not enough data to make conclusions about what is "too few" or "too many."
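For context, web log alarms use Netdata's standard health entity syntax (template, on, lookup, warn, and so on). The entity below is a sketch to show the shape — its name, chart ID, dimension, and threshold are all illustrative, not the shipped defaults:

```
 template: web_log_1m_server_errors       # illustrative name
       on: web_log.requests_by_type       # chart ID is an assumption
   lookup: sum -1m unaligned of error     # sum of error responses over the last minute
    every: 10s
     warn: $this > 10
     info: number of error responses in the last minute
```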
You can also edit these alarm definitions directly with Netdata's edit-config script.
For more information about editing the defaults or writing new alarm entities, see our health monitoring documentation.
Now that you have web log collection up and running, we recommend you take a look at the web log collector's documentation for some ideas of how you can turn these rather "boring" logs into powerful real-time tools for keeping your servers happy.
Don't forget to give GitHub user Wing924 a big 👍 for his hard work in starting up the Go refactoring effort.