For years, I’ve been in charge of critical infrastructure at various startups. Usually, I set up pretty simple stuff: a vanilla Ubuntu server with ufw (firewall) and Docker + docker-compose. I close all ports to incoming connections except 22, 80 and 443, then put up a docker-compose.yml with the various containers needed for the business applications. It’s been working great so far.
Recently, I was confronted with a particular challenge: I had to connect to an external API using a third-party wrapper (written neither by the API provider nor by us), which would thus have access to our credentials for this API. If someone ever injected malicious code into that wrapper, how could I prevent it from sniffing our credentials and sending them away? I had always blocked all incoming connections on my servers (except for a few ports), and I thought it was time to put restrictions on outbound traffic too. The idea is to add an additional layer of security: should malicious software breach the other layers of defence and somehow get running on my server, it would still be prevented from sending anything away.
The first step was obviously to use ufw and run “sudo ufw default deny outgoing”. Unfortunately, it was not enough:
- Docker, in order to achieve all its networking black magic, manipulates iptables directly and bypasses your ufw rules
- restricting all outbound traffic is easy, but letting legitimate outbound connections through (for instance, connections to the external API we need) is not, since most firewalls work with IP addresses, not DNS domain names. To whitelist outbound connections by domain, you would need to maintain an up-to-date list of the IP addresses matching your allowed domains and refresh your firewall configuration periodically
How could I enforce domain-based restrictions on outbound traffic coming from Docker containers?
After a lot of searching, I was able to hack together a solution. The core ideas:
- block all outbound connections on the server with your firewall (ufw). This will not be enforced inside Docker containers but it’s still useful on the host.
- in your docker-compose.yml, put the docker containers in an internal restricted network, so that they have no access to the internet
- for each allowed domain you want to reach from inside a Docker container, add an nginx container to your docker-compose.yml that will act as a proxy for this specific domain. Put this container both inside the internal restricted Docker network AND in a Docker network with internet access, make it reachable from the other containers under the domain name in question, and configure nginx so that it forwards everything to that domain only
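For the first step, the host-level ufw rules look something like this (a sketch matching my usual setup; the exact outbound exceptions depend on what your host itself needs, e.g. DNS, package updates, NTP):

```shell
# Deny everything by default, in both directions
sudo ufw default deny incoming
sudo ufw default deny outgoing

# Incoming: only SSH, HTTP and HTTPS
sudo ufw allow in 22/tcp
sudo ufw allow in 80/tcp
sudo ufw allow in 443/tcp

# Outgoing exceptions the host still needs (adjust to your case)
sudo ufw allow out 53        # DNS
sudo ufw allow out 80/tcp    # apt over HTTP
sudo ufw allow out 443/tcp   # apt/curl over HTTPS
sudo ufw allow out 123/udp   # NTP

sudo ufw enable
```

Remember that, as noted above, these rules will not be enforced for traffic coming from Docker containers; that is what the next two steps are for.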
Here is the docker-compose.yml:
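A minimal sketch of what it contains (the service and network names are my own placeholders; the original setup used container links, and the network aliases below achieve the same thing in a more modern way):

```yaml
version: "3.7"

services:
  # This container sits only on the internal network: no internet access
  protected-container:
    image: ubuntu
    command: sleep infinity
    networks:
      - no-internet

  # One proxy per allowed domain. The aliases make "google.com" resolve
  # to this proxy container from inside the internal network.
  nginx-proxy-google:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      no-internet:
        aliases:
          - google.com
          - www.google.com
      internet:

networks:
  # "internal: true" is what cuts containers on this network off
  # from the outside world
  no-internet:
    internal: true
  internet:
```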
Here is the nginx proxy configuration:
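In essence, it is a TCP pass-through using nginx’s “stream” module (a sketch; TLS is never terminated here, just relayed, which is why no certificate is needed):

```nginx
events {}

stream {
    # Relay HTTPS traffic as-is to the real google.com
    server {
        listen 443;
        proxy_pass google.com:443;
    }

    # Relay plain HTTP as well
    server {
        listen 80;
        proxy_pass google.com:80;
    }
}
```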
If you go inside protected-container, you won’t have access to the internet: no curl command will work, except “curl google.com”. protected-container is linked to the nginx-proxy-google container, which has access to the internet. Nginx listens inside that container and proxies everything to google.com.
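You can check this yourself (container names here match the hypothetical compose file sketched earlier):

```shell
# Open a shell inside the restricted container
docker compose exec protected-container bash

# Inside it:
curl https://example.com   # fails: the internal network has no route out
curl https://google.com    # works: resolves to the nginx proxy, which relays the TCP stream
```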
An important point: if you want to allow multiple domains, you will need one nginx proxy container per domain. This is because you can’t really proxy outbound SSL traffic: terminating it would require having a local SSL certificate for the external domain, which you don’t own. So we use nginx’s “stream” block, which performs a pass-through at the TCP level; since the traffic is never decrypted, nginx can’t inspect it to learn the destination domain, hence one dedicated proxy per allowed domain.
With this setup, no outbound connection will be possible on the host, and inside docker containers, only traffic to the allowed domains will be possible.
Here is a working git repository: