
January 7, 2020 ~ 4 min read

Enabling Dockerized Prometheus Access to Docker Swarm Metrics


Recently, I was tasked with configuring a Prometheus server to monitor some docker containers that are not running in swarm mode. This means the docker containers are using neither docker-compose nor docker stack. The applications running inside the docker containers don't yet have Prometheus exporters; adding an exporter to the dockerized applications will be done in the future.

One thing I did find out is that docker swarm already has a built-in metrics endpoint: https://docs.docker.com/config/thirdparty/prometheus/. This is better than nothing because at least we can keep an eye on the docker service itself. This got me all excited, so I set about figuring out what sort of damage I could do with it.

Setup

The first thing one needs to do is add the following to /etc/docker/daemon.json:

{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}

Then run systemctl daemon-reload && systemctl restart docker to reload the daemon.json config. You will notice that the address is set to 0.0.0.0, which means the metrics will be reachable on every network interface of the server. This is necessary to allow a dockerized Prometheus application to query it. You can test that the metrics service is running with curl localhost:9323/metrics, and even test it with an external request via curl <server ip address>:9323/metrics, since it is open to the public internet.
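
As a quick sanity check, the endpoint serves plain-text Prometheus metrics under the /metrics path. Something like this should print the first few metric lines (the head is just there to keep the output short):

$ curl -s localhost:9323/metrics | head -n 5 # local check
$ curl -s <server ip address>:9323/metrics | head -n 5 # external check: works for now, we lock this down next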

Blocking external requests with UFW

If you are using a publicly facing server, you can block all external requests with UFW.

$ ufw default allow incoming # I need this because I'm running several other services on this server
$ ufw allow ssh # Allow ssh so I don't get locked out of the server while changing rules
$ ufw deny in on eth0 to any port 9323 # This is the rule that blocks all external queries
$ ufw enable # Only needed once

The above rules block all external requests on interface eth0 on port 9323 but still allow requests originating from other interfaces, such as lo (loopback) or, the interface we care about, docker0. The eth0 network interface is where the public IP address is bound on my server. I had two public IP addresses, so I just decided to block the whole eth0 interface. You can now curl for the metrics from inside the server, but if you curl the public IP of the server, the request won't get through.
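
To double-check that the rules took effect, ufw status lists the active rules, and a curl via loopback should still succeed from inside the server (a quick sketch; output will vary with your setup):

$ ufw status verbose # the deny rule on eth0 should be listed
$ curl -s localhost:9323/metrics | head -n 5 # still works via the loopback interface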

Dockerized Prometheus

And now, we need to run the dockerized Prometheus. I have a simple Prometheus config file that gets it working.

# prometheus.yml
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['172.17.0.1:9323']

Notice the target's IP address. This is the IP address of the server on the docker0 bridge network interface. The reason this works is the 0.0.0.0 value we used in /etc/docker/daemon.json. You might be asking... why can't we use 172.17.0.1 as the metrics-addr value? The reason is that the 172.17.0.1 IP address doesn't exist until the docker daemon starts. So if you set 172.17.0.1 as the value, docker can't bind to it because it doesn't exist yet, and your docker service will crash.
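
If the bridge IP on your machine differs from 172.17.0.1 (it usually doesn't, but it can), you can look it up before writing the config. Both of these are standard ways to do it:

$ docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'
172.17.0.1
$ ip -4 addr show docker0 # alternative: read the address off the interface directly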

Now, to start Prometheus, here is what I did:

docker service create --name prometheus \
    --mount type=bind,source=/tmp/prometheus.yml,destination=/etc/prometheus/prometheus.yml \
    --publish published=9090,target=9090,protocol=tcp \
    prom/prometheus
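
Once the service is up, you can confirm that Prometheus registered the target without opening a browser by hitting its HTTP API (this assumes jq is installed; the targets endpoint is part of Prometheus' stable v1 API). You should see something like:

$ curl -s localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'
{
  "job": "docker",
  "health": "up"
}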

Now we can go to the public address of the server on port 9090 and access the Prometheus UI at http://<SERVER IP ADDRESS>:9090/targets, where you can see the docker swarm metrics target.
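
As a first query, the docker daemon exports a gauge with container counts broken down by state. Typing this into the UI's expression box should return the number of running containers (metric name taken from the docker engine's metric set; treat it as a starting point):

engine_daemon_container_states_containers{state="running"}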

I still need to lock down the Prometheus UI and I will update this post when I have learned how to do that.

Thanks for reading!


Sebastian Bolaños

Hi, I'm Sebastian. I'm a software developer from Costa Rica. You can follow me on Twitter. I enjoy working on distributed systems.