Lee_Ars wrote:
Have been banging on converting my install to Docker all day, and I'm nearly done. However, I'm at kind of a configuration quandary with how to best arrange the layer cake of web servers.
Givens for this install:
The (real) server runs varnish + nginx with multiple production web sites. Varnish is on port 80 reverse-proxying to nginx, and nginx itself listens on 443 for HTTPS traffic (since varnish doesn't do that—well, that's not quite true, but that's a whole other book).
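For reference, that front-of-stack arrangement looks something like this in VCL (a minimal sketch; the backend port is illustrative, assuming nginx serves the production vhosts over plain HTTP on a local port like 8080):

```vcl
# /etc/varnish/default.vcl (sketch)
# Varnish listens on :80 and hands everything to the local nginx,
# which serves the production vhosts over plain HTTP on :8080.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```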
The server has a single public static IP, and nginx decides which web site to serve based wholly on the HTTP request hostname (in other words, each nginx virtual host definition has its own unique server_name parameter).

Varnish must remain listening on port 80, and nginx must remain listening on port 443. Yes, I know Discourse's preferred option is to use a CDN instead of Varnish, but varnish is required for other stuff, and even bypassing it entirely with pipe in its VCL file still has it playing a reverse-proxy role. So, it's in the mix even if it's "off."

With those givens, how best to pass traffic to Docker?
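For context, "bypassing it with pipe" means short-circuiting Varnish's cache in vcl_recv, roughly like this (a sketch; the hostname is a placeholder):

```vcl
sub vcl_recv {
    # Don't cache this site at all -- just shovel bytes through.
    # Varnish still sits in the request path; it just stops caching.
    if (req.http.host == "discourse.example.com") {
        return (pipe);
    }
}
```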
There appear to be two options:
A) Leave nginx active inside of docker, with a multi-step reverse proxy. The flow would be requester -> varnish -> nginx (real) -> nginx (docker) -> unicorn for HTTP and requester -> nginx (real) -> nginx (docker) -> unicorn for HTTPS.
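Under option A, the host-side nginx vhost becomes a thin proxy in front of the container's nginx, along these lines (a sketch, assuming the container's nginx is published on host port 8888; the port, hostname, and cert paths are placeholders):

```nginx
# Host-side nginx vhost for option A (sketch).
server {
    listen 443 ssl;
    server_name discourse.example.com;

    ssl_certificate     /etc/nginx/ssl/discourse.crt;
    ssl_certificate_key /etc/nginx/ssl/discourse.key;

    location / {
        # Hand everything to the nginx inside the container.
        proxy_pass http://127.0.0.1:8888;
        # Preserve the client's hostname and address across the hop.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```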
B) Don't load nginx inside of docker—only run unicorn and open up port 3000 on docker. The flow then would be requester -> varnish -> nginx (real) -> unicorn for HTTP and requester -> nginx (real) -> unicorn for HTTPS.
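Option B would mean publishing unicorn's port from the container and pointing the host nginx straight at it (a sketch; the image name and ports are illustrative):

```nginx
# Host-side nginx for option B (sketch). The container publishes
# unicorn on the host loopback, e.g.:
#   docker run -d -p 127.0.0.1:3000:3000 --name discourse_app my/discourse
# so only the local nginx can reach it directly, and there's no
# second nginx inside the container.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```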
Pros of A are that it's simpler to install and configure and maintain (and, as @sam has noted, it lets the Discourse devs fold in updated nginx configurations as part of the normal upgrade cycle).
Cons of A are the multiple reverse proxies. I'm not terribly concerned about performance, but I am a little concerned about having that many layers in the stack and having all the client request metadata survive the trip through. I don't know if it's bad or not, and that's a little scary.
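On the metadata question: the usual pattern is for each proxy hop to append to X-Forwarded-For rather than overwrite it, so the client address survives no matter how many layers there are (a sketch; the addresses are illustrative):

```nginx
# At each nginx hop, $proxy_add_x_forwarded_for takes the incoming
# X-Forwarded-For header (if any) and appends $remote_addr, so after
# two hops the app sees something like "203.0.113.9, 127.0.0.1".
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# The original scheme has to be stamped at the outermost TLS
# terminator, since the inner hops only ever see plain HTTP.
proxy_set_header X-Forwarded-Proto $scheme;
```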
Pros of B are that it makes client-side administration easier (for me, at least), and more importantly hacks a layer out of the cake.
Cons of B are that it gets...complicated. All of the locations referenced point at paths inside the docker container, which...er, actually, come to think of it, this isn't even going to work, is it? Because the public directory and everything in it lives inside the docker container, where the host nginx can't reach it, right?

Could use some advice here. How are you guys typically rolling out docker in production? Do you just do the double-reverse-proxy thing? I mean, I can actually use varnish to push HTTP traffic bound for the right hostname past the production nginx and directly to the docker nginx, but that won't help for HTTPS.
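That last idea -- varnish routing HTTP for one hostname straight to the docker nginx -- would look roughly like this in VCL 3 syntax (a sketch; the backend name, hostname, and port are placeholders, assuming the docker nginx is published on host port 8888):

```vcl
backend discourse_docker {
    .host = "127.0.0.1";
    .port = "8888";
}

sub vcl_recv {
    # Route the forum's hostname straight to the container's nginx,
    # skipping the production nginx entirely. HTTP only -- HTTPS
    # never touches varnish, so it still lands on the host nginx.
    if (req.http.host == "discourse.example.com") {
        set req.backend = discourse_docker;
    }
}
```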
Posts: 4
Participants: 2