
The Problem: Domain Routing and Security
Exposing a Docker container's port directly is fine for local development, but what if we want to serve multiple services on the same port?
And what about security? Serving traffic over plain HTTP is no longer acceptable, and there are plenty of ready-made solutions for free certificate generation.
In short, we need a way to quickly set up secure environments that serve dynamic content.
The Solution: Reverse Proxy
Instead of binding each container to port 80 directly, we can hide the containers in an isolated network behind an nginx reverse proxy that routes incoming traffic to the right container.
The Recipe
If you already have containers bound to port 80, stop them before attempting to run the reverse proxy.
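If you're not sure which containers are holding the port, something like this should find and stop them (the container name is whatever docker ps reports on your machine):
# List containers that publish port 80 on the host
docker ps --filter "publish=80"
# Stop the offending container (substitute the actual name or ID)
docker stop <container_name>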
Step 1: Create the Network
You only need to run this once in the beginning:
docker network create nginx-proxy
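As a quick sanity check (not part of the setup itself), you can confirm the network exists:
docker network ls --filter "name=nginx-proxy"
docker network inspect nginx-proxy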
Step 2: Set up nginx
Clone this project on your server: https://github.com/jaretburkett/easy-multidomain-docker-server
Then cd into nginx-proxy and create the container with:
docker-compose up -d
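Once it's up, a quick way to confirm the proxy is running and watch it react to new containers (assuming the compose service is named nginx-proxy, as in the cloned project):
docker-compose ps
docker-compose logs -f nginx-proxy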
Optional: Increase client_max_body_size
nginx ships with a default client_max_body_size of 1 MB, which is reasonable for most sites but usually not enough for a web app that accepts uploads. The way around this is to override nginx.conf:
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile             on;
    client_max_body_size 200M;
    #tcp_nopush          on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
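For the override to take effect, the custom nginx.conf has to be mounted into the proxy container. The cloned project's docker-compose.yml may already wire this up; if not, a volume mapping along these lines should work (the nginx-proxy service name and file paths here are assumptions, adjust them to match the actual compose file):
services:
  nginx-proxy:
    volumes:
      # Mount the overridden config read-only over the default one
      - ./nginx.conf:/etc/nginx/nginx.conf:ro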
Step 3: Configure Services
Add this to the end of your docker-compose.yml:
networks:
  default:
    external:
      name: nginx-proxy
And configure these environment variables on the target service:
services:
  service_name:
    environment:
      VIRTUAL_HOST: example.com, second-domain.com
      VIRTUAL_PORT: 80                     # the port your application exposes
      LETSENCRYPT_HOST: example.com        # the domain name to issue the cert to, same as VIRTUAL_HOST
      LETSENCRYPT_EMAIL: [email protected]   # this email can be any of your emails, not domain specific
As you may have noticed, you can route multiple domains to the same container by separating them with commas in the VIRTUAL_HOST variable.
Finally, expose the port set in VIRTUAL_PORT:
services:
  service_name:
    expose:
      - 80 # the port your application exposes
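Putting the pieces together, a target service's docker-compose.yml ends up looking roughly like this (the service name, image, and domains are placeholders for your own values):
version: "3"

services:
  webapp:
    image: my-org/webapp:latest            # placeholder image
    expose:
      - 80                                 # the port your application exposes
    environment:
      VIRTUAL_HOST: example.com, second-domain.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: example.com
      LETSENCRYPT_EMAIL: [email protected]

networks:
  default:
    external:
      name: nginx-proxy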
Step 4: Activate Websites
Run this in your project’s directory:
docker-compose up -d
nginx-proxy will detect it on the network, set it up, issue an SSL certificate, auto-renew it, and activate the site.
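To verify that routing and the certificate work, a quick request from any machine should return your app over HTTPS (substitute your own domain):
curl -I https://example.com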
To stop the service, cd into the project's folder and run:
docker-compose down
The service will stop, nginx-proxy will detect this, and its configuration will be removed.
Additionally: Configure Cloudflare
If you're using Cloudflare, you will have to switch to Full (strict) SSL mode to avoid "Too many redirects" errors.
Just go to SSL/TLS -> Overview and select “Full (strict)”:

Conclusion
Using Docker to automate deployments, with automatically issued certificates protecting the traffic, is easy when we leverage existing solutions.
Cloudflare can provide additional security on top and is highly recommended.