Logging in Docker
November 30, 2019
Introduction
In a recent move to Docker where existing applications were containerized, I was tasked with investigating how to handle logging in Docker. In my case, I was dealing with ASP.NET applications that use Serilog as the logging library to write logs to files. There are several options for handling logging in Docker, each with its pros and cons. To get an overview, here is the list, with a more detailed discussion of each option to follow:
- Continue logging to a file in a container
- Log to a file mounted via a Docker volume
- Log to an external service or database
- Use Docker logging drivers
- Use a dedicated logging service
Logging to a file in a container
The simplest option is to continue logging to a file located in the container, but unfortunately this simplicity comes with a long list of potential issues. The biggest one is caused by Docker’s greatest benefit: containers are meant to be transient. When a new version of an application is released, a new Docker image is built and a container based on that image is started. In turn, the container running the old version of the application is deleted, including the logging data stored inside it. I assume you most likely do care, or should care, about not losing logging data. Consider a scenario where you have just deployed a new version of your application, but you get a bug report for the old version and really need to check the logs to track down the issue.
Another shortcoming appears when you run more than one container with your application. If you want to access the log files, you need to extract them from every container. There is also a potential for data loss if your application is scaled down in the meantime, e.g. from 5 containers to 3 by deleting 2 containers.
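To illustrate the extraction pain, here is a minimal sketch of copying the log directory out of several running instances with docker cp; the container names and the /app/logs path are just assumptions for this example:

```sh
# Hypothetical container names and log path; adjust to your setup
for c in myapp_1 myapp_2 myapp_3; do
  # Copy the application's log directory out of each running container
  docker cp "$c":/app/logs "./logs-$c"
done
```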
I find this option completely unviable even for simple scenarios, since the next option, using Docker volumes, alleviates the most burning problems with practically no overhead.
Logging to a file mounted via a Docker volume
Logging to a Docker volume makes the log files persist even when you redeploy your containers, so classical application logging can still be achieved. But this setup has shortcomings that make it less than ideal when running more than one instance of your application. Sharing a log file between several application instances makes it hard to trace which instance was responsible for a log entry, and it can lead to torn log files when two instances write to the same file concurrently. You also lose information about things happening within your container but outside your application, e.g. a script that prepares something and then starts your application.
In summary, if you need simple logging, this is the first viable option for scenarios where a single instance of your application is running in Docker.
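For illustration, here is a minimal sketch of such a setup; the image name myapp and the /app/logs log directory are assumptions for this example:

```sh
# Create a named volume once; it outlives any container that uses it
docker volume create app-logs

# Mount the volume over the application's log directory
docker run -d --name myapp -v app-logs:/app/logs myapp:latest
```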
Logging to an external service or database
Most logging frameworks allow redirecting your logging output to an external service (e.g. Logstash) or a database, making this a viable option. The drawback, again, is that you lose information about things happening in a container outside your application, and in a multi-container scenario you don’t know which container is the source of a log entry. Another disadvantage is that information about your logging infrastructure leaks into your application: if you change the logging service address, for example, you need to update your application configuration and most likely also restart the application for the change to take effect.
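To illustrate how the logging infrastructure leaks into the application, here is a hypothetical sketch that passes an Elasticsearch address to a Serilog sink via an environment variable; the exact configuration key depends on the sink you use and on how your application reads its configuration:

```sh
# Hypothetical: the logging endpoint is baked into the application's own
# configuration, so changing it means reconfiguring and restarting the app
docker run -d --name myapp \
  -e Serilog__WriteTo__0__Args__nodeUris=http://elasticsearch:9200 \
  myapp:latest
```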
Use Docker logging drivers
Docker offers logging functionality that takes the standard output (STDOUT) and standard error (STDERR) of your container and allows redirecting it. By default, the output is redirected to files in JSON format, but there are also options to write to external services. The benefits of using logging drivers are:
- you know which container was the source of a log entry
- everything that writes to the container’s STDOUT or STDERR is logged
- since the application only writes to STDOUT or STDERR, it is not aware of the logging infrastructure
Using the logging drivers also comes with drawbacks. One that is shared across the board at the time of writing is that there is no multiline support. If your application writes a multiline log entry, the logging driver stores it as separate entries. This means that log entries either need to be stitched back together before being stored, or your application needs to output each log entry on a single line, e.g. as JSON. There is also a shortcoming specific to the JSON file logging driver: the log file for a container is stored next to the container and is removed when the container is removed. This sounds very similar to the major disadvantage of logging inside the container, but notice that the log file is now outside of the container. By itself this means little, but in combination with a service that monitors and extracts container logs, it can prove a powerful combination.
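For reference, here is a minimal sketch of selecting and configuring the json-file logging driver for a single container (the image name is an assumption); docker logs then reads whatever the container wrote to STDOUT/STDERR:

```sh
# Use the json-file driver and cap how much log data Docker keeps per container
docker run -d --name myapp \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:latest

# Inspect what the container wrote to STDOUT/STDERR
docker logs myapp
```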
Use a dedicated logging service
As hinted above, there is the option of using the JSON file logging driver together with a dedicated logging service. In my case, I ended up using Filebeat running in a container. I could have also installed Filebeat directly on the server running Docker, but since using containers was an option, I opted for that. I will discuss Filebeat and how to set up the Elastic stack in more detail in the upcoming post. The main benefit of using such an external service is that your application is completely unaware of your logging infrastructure. Since how and where to store log entries is the responsibility of the dedicated logging service, any change in the logging infrastructure only affects the respective components of that infrastructure. Depending on what the selected logging service supports, it can take on additional tasks such as stitching multiline log entries back together, appending container metadata to log entries (e.g. container name, image version), etc.
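As a rough sketch of this setup, Filebeat can run as a container with read-only access to the JSON log files that the json-file driver writes on the host; the image tag and the contents of filebeat.yml are assumptions here, and I will cover the actual configuration in the upcoming post:

```sh
# Give Filebeat read-only access to the host's container log files and to the
# Docker socket, so it can enrich log entries with container metadata
docker run -d --name filebeat \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v "$(pwd)/filebeat.yml":/usr/share/filebeat/filebeat.yml:ro \
  docker.elastic.co/beats/filebeat:7.4.2
```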
Conclusion
As described, there are many ways to implement logging in a Docker environment. One that, in my opinion, should always be avoided is logging to a file within the container, especially since setting up a Docker volume is trivial. Which of the remaining options to choose in the end depends on your requirements.
For completeness’ sake, I should also mention the sidecar logging setup, where every application container has a dedicated logging container. I found this setup too complex with no tangible benefits, so I didn’t investigate it further.
As already mentioned, in an upcoming post I plan to write about the Elastic stack and how to set it up, so stay tuned. 🙂