Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem. It is the only system directly supported by Kubernetes and the de facto standard across the cloud native ecosystem. See the overview.
See the comparison page.
The main Prometheus server runs standalone as a single monolithic binary and has no external dependencies.
Cloud native is a flexible operating model, breaking up old service boundaries to allow for more flexible and scalable deployments.
Prometheus's service discovery integrates with most tools and clouds. Its dimensional data model and its ability to scale into the tens of millions of active series allow it to monitor large cloud-native deployments. There are always trade-offs to make when running services, and Prometheus values reliably getting alerts out to humans above all else.
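As a hypothetical illustration of the dimensional data model, a single PromQL query can aggregate a labeled series across dimensions (the metric name and labels below are illustrative, not from this page):

```promql
# Per-second HTTP request rate over the last 5 minutes, summed per instance.
sum by (instance) (rate(http_requests_total{job="api"}[5m]))
```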
Yes, run identical Prometheus servers on two or more separate machines. Identical alerts will be deduplicated by the Alertmanager.
This is often more of a marketing claim than anything else.
A single instance of Prometheus can be more performant than some systems positioning themselves as long-term storage solutions for Prometheus. You can run Prometheus reliably with tens of millions of active series.
If you need more than that, there are several options. Scaling and Federating Prometheus on the Robust Perception blog is a good starting point, as are the long storage systems listed on our integrations page.
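As one illustration of federation, a higher-level Prometheus server can scrape selected series from another via the /federate endpoint. A minimal sketch of such a scrape configuration (the job name and target are hypothetical):

```yaml
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'   # only pull series matching this selector
    static_configs:
      - targets: ['source-prometheus.example.org:9090']  # hypothetical source server
```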
Most Prometheus components are written in Go. Some are also written in Java, Python, and Ruby.
All repositories in the Prometheus GitHub organization that have reached version 1.0.0 broadly follow semantic versioning. Breaking changes are indicated by increments of the major version. Exceptions are possible for experimental components, which are clearly marked as such in announcements.
Even repositories that have not yet reached version 1.0.0 are, in general, quite stable. We aim for a proper release process and an eventual 1.0.0 release for each repository. In any case, breaking changes will be pointed out in release notes (marked by [CHANGE]) or communicated clearly for components that do not have formal releases yet.
Pulling over HTTP offers a number of advantages: you can run your monitoring on your laptop when developing changes, you can more easily tell if a target is down, and you can manually go to a target and inspect its health with a web browser.
Overall, we believe that pulling is slightly better than pushing, but it should not be considered a major point when considering a monitoring system.
For cases where you must push, we offer the Pushgateway.
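For example, a short-lived batch job could push a single metric to a Pushgateway before exiting. A minimal sketch, assuming a Pushgateway listening on its default port 9091 (the host and metric name are hypothetical):

```shell
# Push one sample, grouped under the job label "some_job".
echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job
```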
Prometheus is a system to collect and process metrics, not an event logging system. The Grafana blog post Logs and Metrics and Graphs, Oh My! provides more details about the differences between logs and metrics.
If you want to extract Prometheus metrics from application logs, Grafana Loki is designed for just that. See Loki's metric queries documentation.
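As a hypothetical example, a LogQL metric query in Loki can derive a Prometheus-style rate from matching log lines (the job label and filter string are illustrative):

```logql
# Per-second rate of log lines containing "error" over the last 5 minutes.
rate({job="myapp"} |= "error" [5m])
```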
Prometheus is released under the Apache 2.0 license.
After extensive research, it has been determined that the correct plural of 'Prometheus' is 'Prometheis'.
If you cannot remember this, "Prometheus instances" is a good workaround.
Sending a SIGHUP to the Prometheus process or an HTTP POST request to the /-/reload endpoint will reload and apply the configuration file. The various components attempt to handle failing changes gracefully.
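For instance, assuming Prometheus runs locally on its default port 9090 and was started with the --web.enable-lifecycle flag (required for the reload endpoint), either of the following triggers a reload:

```shell
# Option 1: send SIGHUP to the running process.
kill -HUP "$(pidof prometheus)"

# Option 2: POST to the reload endpoint (requires --web.enable-lifecycle).
curl -X POST http://localhost:9090/-/reload
```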
Yes, with the Alertmanager.
To avoid any kind of timezone confusion, especially when the so-called daylight saving time is involved, we decided to exclusively use Unix time internally and UTC for display purposes in all components of Prometheus. A carefully done timezone selection could be introduced into the UI. Contributions are welcome. See issue #500 for the current state of this effort.
There are a number of client libraries for instrumenting your services with Prometheus metrics. See the client libraries documentation for details.
If you are interested in contributing a client library for a new language, see the exposition formats.
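To illustrate what a client library ultimately produces, here is a minimal stdlib-only sketch of the Prometheus text exposition format. The helper function, metric name, and labels are hypothetical and not part of any official library:

```python
def render_metric(name, help_text, metric_type, samples):
    """Render one metric family in the Prometheus text exposition format.

    samples is a list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples:
        if labels:
            # Label pairs are rendered as key="value", sorted for stable output.
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Example: a counter with two label sets.
print(render_metric(
    "http_requests_total",
    "Total HTTP requests.",
    "counter",
    [({"method": "get", "code": "200"}, 1027),
     ({"method": "post", "code": "200"}, 3)],
))
```

A real client library additionally handles escaping, concurrency, and serving this text over HTTP, which is what the exposition formats documentation specifies.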
Yes, the Node Exporter exposes an extensive set of machine-level metrics on Linux and other Unix systems such as CPU usage, memory, disk utilization, filesystem fullness, and network bandwidth.
Yes, for applications that you cannot instrument directly with the Java client, you can use the JMX Exporter either standalone or as a Java Agent.
Performance across client libraries and languages may vary. For Java, benchmarks indicate that incrementing a counter/gauge with the Java client will take 12-17ns, depending on contention. This is negligible for all but the most latency-critical code.
We restrained ourselves to 64-bit floats to simplify the design. The IEEE 754 double-precision binary floating-point format supports integer precision for values up to 2^53. Supporting native 64-bit integers would (only) help if you need integer precision above 2^53 but below 2^63. In principle, support for different sample value types (including some kind of big integer, supporting even more than 64 bits) could be implemented, but it is not a priority right now. A counter, even if incremented one million times per second, will only run into precision issues after over 285 years.
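The claims above can be checked with a few lines of stdlib-only Python: 64-bit floats lose exact integer precision past 2^53, and dividing 2^53 by one million increments per second gives the quoted lifetime:

```python
# 64-bit floats represent integers exactly only up to 2**53.
assert float(2**53) == 2**53
assert float(2**53 + 1) == float(2**53)  # 2**53 + 1 is not representable

# A counter incremented one million times per second stays exact for:
seconds_exact = 2**53 / 1_000_000
years_exact = seconds_exact / (365 * 24 * 3600)
print(years_exact)  # a little over 285 years
```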
This documentation is open-source. Please help improve it by filing issues or pull requests.