
Application (Docker/Kubernetes) containers and STDOUT logging

Published on January 15th 2019


In our Docker container environment (on-premise, using Rancher) I configured the Docker daemon to forward the containers' STDOUT logs to a central Logstash instance using GELF.
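For reference, the daemon-side part of such a setup looks roughly like the following in /etc/docker/daemon.json. This is a minimal sketch assuming the default GELF UDP port 12201 and a placeholder Logstash hostname (the real address differs):

{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logstash.example.com:12201"
  }
}

After changing this file the Docker daemon needs to be restarted, and only newly created containers pick up the new default log driver.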

For applications that log to STDOUT by default, this works out of the box. But for some hand-written applications it might require additional work.

In this particular example the application simply logged to a local file on the AUFS filesystem (/tmp/application.log). But of course none of these log messages ever arrived in the ELK stack, because they were not written to STDOUT but into a file.

The developer then adjusted the Dockerfile so that, instead of creating the log file, it creates a symlink:

# forward logs to docker log collector
RUN ln -sf /dev/stdout /tmp/application.log

To be honest, I thought this would do the trick. But once the new container image was deployed, the application logs didn't arrive in our ELK stack. Why?

I went into the container and tested it myself:

root@af8e2147f8ba:/app# cd /tmp/

root@af8e2147f8ba:/tmp# ls -la
total 12
drwxrwxrwt  3 root root 4096 Jan 15 12:55 .
drwxr-xr-x 54 root root 4096 Jan 15 12:57 ..
lrwxrwxrwx  1 root root   11 Jan 15 12:52 application.log -> /dev/stdout
drwxr-xr-x  3 root root 4096 Jan 15 12:52 npm-6-d456bc8a

Yes, there is the application log file, which is a symlink to /dev/stdout. Should work, right? Let's try this:

root@af8e2147f8ba:/tmp# echo "test test test" > application.log
test test test

Although I saw "test test test" appear in the terminal, this message never made it into the ELK stack. While researching why, I came across a very good explanation by user "phemmer" in this GitHub issue:

"The reason this doesn't work is because /dev/stdout is a link to STDOUT of the process accessing it. So by doing foo > /dev/stdout, you're saying "redirect my STDOUT to my STDOUT". Kinda doesn't do anything :-).
And since /var/log/test.log is a symlink to it, the same thing applies. What you want is to redirect output to STDOUT of PID 1. PID 1 is the process launched by docker, and its STDOUT will be what docker picks up."
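You can verify that per-process indirection yourself inside the container. On most Linux systems /dev/stdout is merely a symlink into /proc/self, so it always resolves to the STDOUT of the process that opens it, not to the container's main process (output below is illustrative):

ls -l /dev/stdout
# typically shows: /dev/stdout -> /proc/self/fd/1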

So to sum this up: we need to use the STDOUT of PID 1 (the container's main process), otherwise the message won't be picked up by the Docker daemon.

Let's try this inside the still running container:

root@af8e2147f8ba:/tmp# rm application.log
root@af8e2147f8ba:/tmp# ln -sf /proc/1/fd/1 /tmp/application.log
root@af8e2147f8ba:/tmp# echo 1 2 3 > application.log

And hey, my 1 2 3 appeared in Kibana!

[Screenshot: Docker container STDOUT logs showing up in Kibana]
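Inside the container you can also check where PID 1's STDOUT actually goes. Without a TTY allocated, it typically points to a pipe that the Docker daemon reads from and feeds into the configured log driver (the exact target varies by runtime and container settings):

ls -l /proc/1/fd/1
# typically shows something like: /proc/1/fd/1 -> 'pipe:[123456]'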

I slightly modified the Dockerfile with that new knowledge:

# forward logs to docker log collector
RUN ln -sf /proc/1/fd/1 /tmp/application.log

Note: /proc/1 is, of course, PID 1. fd/1 is its STDOUT; file descriptor 1 is the same number you might know from shell redirections in typical cron jobs, e.g. */5 * * * * myscript.sh 2>&1. fd/2 would be STDERR, by the way.
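Since fd/2 is STDERR, the same trick also works if an application keeps a separate error log. A sketch for the Dockerfile (the error log path is an assumption, not taken from the actual application):

# forward logs to docker log collector (PID 1's stdout and stderr)
RUN ln -sf /proc/1/fd/1 /tmp/application.log \
 && ln -sf /proc/1/fd/2 /tmp/application-error.log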

After the new container image was built, deployed and started, the application logs now arrive in the ELK stack:

[Screenshot: container logs appearing in the ELK stack]