Creating a Prometheus exporter can be complicated, but it doesn't have to be. In this article, you'll learn the basics of Prometheus and follow two step-by-step guides showing how to implement a Python-based exporter. The first guide covers a standalone, third-party exporter that publishes metrics about an application you want to monitor; the second covers exporters that expose metrics embedded in the application itself.
Prometheus has been the leading monitoring tool for time series metrics since its introduction in 2012, applying its own distinctive concepts. In particular, Prometheus's pull approach to data collection, along with its exporters and flexible visualizations, makes it stand out against other popular monitoring tools such as Graphite and InfluxDB.
The pull approach to data collection consists of the server component (the Prometheus server) periodically retrieving metrics from client components. This pull is commonly referred to as a "scrape" in the Prometheus world. Through scraping, the client component is only responsible for generating metrics and making them available for scraping.
Graphite, InfluxDB, and many other tools use a push approach, in which the client component generates metrics and pushes them to the server component. The client therefore decides when to push the data, regardless of whether the server needs it or is ready to collect it.
Prometheus's pull approach is innovative: because scraping is initiated by the server rather than the client, metrics are collected only when the server is up and the data is ready. This approach requires each client component to enable a specific feature called a Prometheus exporter.
The exporter is an integral part of the Prometheus monitoring environment. Each program acting as a Prometheus client holds an exporter at its core. An exporter consists of software features that generate metric data and an HTTP server that makes the generated metrics available through a given endpoint. Metrics are exposed according to a specific format that the Prometheus server can read and ingest (scrape). Later in this article, we'll discuss how to create metrics, their format, and how to make them available for scraping.
Once metrics are retrieved and stored by the Prometheus server, there are many ways to visualize them. The easiest is the Prometheus Expression Browser (https://prometheus.io/docs/visualization/browser/). However, because it has only basic visualization capabilities, the Expression Browser is mainly used for debugging purposes (checking the availability or last value of a particular metric). For better and more advanced visualization, users often choose other tools such as Grafana (https://grafana.com/). In some contexts, users also query the Prometheus API directly to retrieve the metrics to be visualized, for example to feed a custom-made visualization system.
The figure below shows the basic architecture of a Prometheus environment with a server component, two client components, and an external visualization system.
From an application perspective, there are two situations in which you can implement the Prometheus exporter: exporting built-in application metrics and exporting metrics from standalone or third-party tools.
This is usually the case when the system or application natively exposes key metrics. The most interesting example is when your application is built from scratch, because all the requirements needed to act as a Prometheus client can be studied and integrated at design time. You may instead need to integrate the exporter into an existing application, which requires updating the code (and sometimes the design) to add the functionality needed to act as a Prometheus client. Integrating into an existing application can be risky, since careless changes may cause regressions in the application's core functionality. If you need to do this, test thoroughly to avoid introducing regressions (for example, bugs or performance overhead due to code or design changes).
The required metrics may be collected or calculated externally. An example of this is when an application provides an API or log from which metric data can be retrieved. You can use this data as is, but you may need to do more to generate the metric (this MySQL exporter is an example).
You may also need an external exporter when metrics must be computed through an aggregation process against a dedicated system. As an example, consider a Kubernetes cluster that needs a metric showing the CPU resources used by a set of pods grouped by label. Such an exporter can rely on the Kubernetes API and work as follows:
--Get the current CPU usage and individual pod labels
--Sum the usage based on the pod label
--Make the results available for scraping
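The aggregation steps above can be sketched as follows. This is a minimal illustration, not a production exporter: the `pods` data structure and the metric name are made up for the example, and a real exporter would fetch the per-pod usage from the Kubernetes metrics API instead.

```python
from collections import defaultdict

from prometheus_client import Gauge

# Hypothetical gauge: CPU usage summed over pods sharing a label value
POD_GROUP_CPU = Gauge("pod_group_cpu_usage",
                      "CPU usage summed over pods sharing an 'app' label value",
                      ["app_label"])


def aggregate_cpu_by_label(pods):
    """Sum per-pod CPU usage by the value of each pod's 'app' label.

    `pods` is a list of (labels_dict, cpu_usage) tuples, a stand-in for
    data a real exporter would retrieve from the Kubernetes API.
    """
    totals = defaultdict(float)
    for labels, cpu in pods:
        totals[labels.get("app", "unknown")] += cpu
    return dict(totals)


def update_gauge(pods):
    # Steps 2 and 3: aggregate, then publish the results for scraping
    for app, cpu in aggregate_cpu_by_label(pods).items():
        POD_GROUP_CPU.labels(app).set(cpu)
```

The exporter would call `update_gauge` periodically (or on each scrape) while serving the gauge over HTTP, as shown in the standalone example later in this article.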
This section provides step-by-step instructions on how to implement the Prometheus exporter using Python. Here are two examples that cover the following metric types:
--Counter: Represents a metric whose value can only increase over time; it resets to zero when the process restarts. Such a metric can be used to export a system's uptime (the time elapsed since the last restart).
--Gauge: Represents a metric whose value can increase or decrease arbitrarily over time. It can be used to expose memory and CPU usage over time.
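As a minimal illustration of the two types using the official prometheus_client library (the metric names here are invented for the example):

```python
from prometheus_client import Counter, Gauge

# Counter: monotonically increasing, e.g. seconds since the last restart
UPTIME = Counter("service_uptime_seconds", "Seconds since the service started")

# Gauge: can go up and down, e.g. current memory usage
MEMORY = Gauge("memory_usage_percent", "Current memory usage in percent")

UPTIME.inc()       # counters may only increase
MEMORY.set(42.0)   # gauges can be set...
MEMORY.dec(2.0)    # ...and decreased
```

Note that the client library appends the conventional `_total` suffix to counter names in the exposition output.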
Consider two scenarios. The first is a standalone exporter exposing your system's CPU and memory usage. The second is a Flask web application that exposes its request response time and uptime.
This scenario shows a dedicated Python exporter that periodically collects and publishes system CPU and memory usage.
This program requires you to install the Prometheus client library for Python.
$ pip install prometheus_client
You also need to install psutil, a powerful library for extracting system resource consumption.
$ pip install psutil
The final exporter code looks like this (see Source Summary):
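In outline, the exporter is along these lines. This is a sketch based on the walkthrough below (a gauge named system_usage with a resource_type label, served on port 9999, sampled with psutil), not necessarily the exact downloadable source:

```python
import time

import psutil
from prometheus_client import Gauge, start_http_server

UPDATE_PERIOD = 1  # seconds between samples

# A single gauge; the resource_type label distinguishes CPU from memory
SYSTEM_USAGE = Gauge("system_usage",
                     "Current system resource usage (percent)",
                     ["resource_type"])


def update_metrics():
    """Sample CPU and memory usage via psutil and store them in the gauge."""
    SYSTEM_USAGE.labels("CPU").set(psutil.cpu_percent())
    SYSTEM_USAGE.labels("Memory").set(psutil.virtual_memory().percent)


if __name__ == "__main__":
    start_http_server(9999)  # expose /metrics on port 9999
    while True:
        update_metrics()
        time.sleep(UPDATE_PERIOD)
```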
You can download the code and save it to a file.
$ curl -o prometheus_exporter_cpu_memory_usage.py \
-s -L https://git.io/Jesvq
You can start the exporter with the following command:
$ python ./prometheus_exporter_cpu_memory_usage.py
You can check the published metrics in a local browser at http://127.0.0.1:9999. Among the other built-in metrics enabled by the Prometheus library, you should find the following metrics provided by the exporter (values may vary depending on the computer's load):
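The exporter's own lines in that output look roughly like this (the values, and the exact HELP text, are illustrative):

```
# HELP system_usage Current system resource usage (percent)
# TYPE system_usage gauge
system_usage{resource_type="CPU"} 3.4
system_usage{resource_type="Memory"} 73.0
```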
It's that simple, thanks to the magic of the Prometheus client libraries officially available for Golang, Java, Python, and Ruby. They hide the boilerplate and make exporter implementation easy. The essentials of our exporter can be summarized in the following points:
--Import the Prometheus client Python library (line 1).
--Instantiate an HTTP server exposing the metrics on port 9999 (line 10).
--Declare a gauge metric named system_usage (line 6).
--Set the metric values (lines 13 and 14).
--Metrics are declared with a label (resource_type, line 6), applying the concept of a multidimensional data model. This lets you keep a single metric name and use the label to distinguish between CPU and memory metrics. Instead of using a label, you could also declare two separate metrics. Either way, we strongly recommend reading the best practices for metric names and labels.
This scenario shows a Prometheus exporter for a Flask web application. Unlike the standalone case, the Flask web application exporter uses a WSGI dispatching application that acts as a gateway, routing requests to either Flask or the Prometheus client. This is necessary because the HTTP server serving the Flask application cannot consistently act as the Prometheus client at the same time, and the HTTP server enabled by the Prometheus client library cannot handle Flask requests.
To allow this integration, the Prometheus client library provides a specific method, make_wsgi_app, that creates a WSGI application (https://wsgi.readthedocs.io/en/latest/) exposing the metrics.
The following example (see Source Summary), a Flask hello-world application slightly modified to serve requests with random response times, shows a Prometheus exporter working alongside a Flask application (see the hello method on line 18). The Flask application is accessible through the root context (the / endpoint), while the Prometheus exporter is enabled through the /metrics endpoint (see line 23, where the WSGI dispatching application is created). The exporter publishes two metrics:
--Last request response time: This is a gauge (line 10). Instead of using the set method as in the previous example, a Prometheus decorator function (line 17) is introduced; it does the same thing while keeping the business code clean.
--Service uptime: This is a counter (line 8) exposing the time elapsed since the application's last start. A dedicated thread (line 33) updates the counter every second.
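In outline, the Flask exporter looks like this. This is a sketch of the approach described above (metric names are invented, and line numbers will not match the references above), using werkzeug's DispatcherMiddleware to route /metrics to the Prometheus WSGI app:

```python
import random
import threading
import time

from flask import Flask
from prometheus_client import Counter, Gauge, make_wsgi_app
from werkzeug.middleware.dispatcher import DispatcherMiddleware

app = Flask(__name__)

# Gauge holding the response time of the last request to '/'
LAST_REQUEST_TIME = Gauge("flask_last_request_seconds",
                          "Response time of the last request")
# Counter exposing seconds elapsed since the application started
UPTIME = Counter("flask_uptime_seconds", "Seconds since application start")


@app.route("/")
@LAST_REQUEST_TIME.time()  # decorator sets the gauge, keeping hello() clean
def hello():
    time.sleep(random.uniform(0.0, 0.3))  # simulate a variable response time
    return "Hello, World!"


def update_uptime():
    while True:
        UPTIME.inc()
        time.sleep(1)


# Route '/metrics' to the Prometheus WSGI app, everything else to Flask
app_dispatch = DispatcherMiddleware(app, {"/metrics": make_wsgi_app()})

threading.Thread(target=update_uptime, daemon=True).start()
```

The dispatcher `app_dispatch` is the callable handed to the WSGI server, which is why the uwsgi command below names it via --callable.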
Install additional dependencies for the program to work.
$ pip install uwsgi
Then download the program code and save it to a file.
$ curl -o prometheus_exporter_flask.py \
-s -L https://git.io/Jesvh
Then start the service as a WSGI application.
$ uwsgi --http 127.0.0.1:9999 \
--wsgi-file prometheus_exporter_flask.py \
--callable app_dispatch
The --wsgi-file option must point to the Python program file, while the value of the --callable option must match the name of the WSGI dispatching application declared in the program (line 23).
Again, you can check the published metrics in a local browser at http://127.0.0.1:9999/metrics. Among the other built-in metrics published by the Prometheus library, you should find the following metrics published by the exporter (values may vary depending on the computer's load):
The various exporters are now ready to be scraped by the Prometheus server. Learn more about this here.
This article first explained the basic concepts of Prometheus exporters, and then walked through two documented implementation examples using Python. These examples follow Prometheus best practices and can serve as a starting point for building your own exporters to meet the needs of your particular application. We didn't cover integration with the Prometheus server, or the visualizations that can be handled by tools such as Grafana. If you are interested in these topics, please check here (https://qiita.com/MetricFire/items/cc9fe9741288048f4588).
If you want to try Prometheus without worrying about setup and maintenance struggles, please give MetricFire's Hosted Prometheus [Free Trial](https://www.hostedgraphite.com/accounts/signup-metricfire/?signup=japan&utm_source=blog&utm_medium=Qiita&utm_campaign=Japan&utm_content=First%20Contact%20with%20Prometheus%20Exporters) a try. You can also book a demo (https://calendly.com/metricfire-chatwithus/chat?utm_source=blog&utm_medium=Qiita&utm_campaign=Japan&utm_content=First%20Contact%20with%20Prometheus%20Exporters) to inquire directly about your Prometheus monitoring solution.
See you in another article!