Adding Varnish Cache to New VPS

Before adding a Web server, my new VPS needs Varnish cache.

Varnish Cache

My current setup for vps2 is nginx (vps2) in front of varnish (vps2) in front of varnish (home) in front of lighttpd (home). Web.JohnCook.UK also has nginx (Cloudflare) in front of nginx (vps2).

I am going to use the same setup for vps3 so that all processing to build dynamic pages will continue to be performed by my home server, with vps3 doing minimal processing to glue parts (ESI includes) together with the rest of the content of the requested page.
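The "glue" step on vps3 would be ESI processing in varnish. As a rough sketch (Varnish 4.0 syntax; the Surrogate-Control convention is the one suggested by the Varnish documentation, and is an assumption about my actual config), the backend marks pages containing `<esi:include src="…"/>` tags and the VCL tells varnish to parse them:

```
# Hypothetical excerpt from user.vcl - not my real configuration.
sub vcl_backend_response {
    # Only run the ESI parser on responses the backend has flagged,
    # so static assets are not needlessly scanned for ESI tags.
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}
```

With that in place, vps3 only stitches fragments together; the fragments themselves are still rendered by the home server.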

One potential issue is going to be testing varnish without installing nginx, and without either creating new sub-domains to test with or changing DNS.

I can, however, simply replace varnish (vps2) in the above flow with varnish (vps3) as my ULA network is working across the three servers.

It will treble the latency, however: instead of Maidenhead to Watford and back, the backend round trip becomes Maidenhead to Watford to Maidenhead to Watford and back, because vps3 does not have a direct CJDNS link to vps2 (by design).

Increased latency for a week or two is (for me) preferred over downtime while trying to diagnose problems.

Installation

As with vps2, I am going to add the varnish repository rather than using Ubuntu's.

sudo nano /etc/apt/sources.list
# Varnish Cache
deb https://repo.varnish-cache.org/ubuntu trusty varnish-4.0
sudo apt-get install apt-transport-https
curl https://repo.varnish-cache.org/GPG-key.txt | sudo apt-key add -
sudo apt-get update
sudo apt-get install varnish

Configuration

Configuring varnish is going to (hopefully) be pretty simple. All I should need to do is copy the existing configuration from vps2 to vps3 and then change the IP address varnish binds to.

scp -P 8043 thejc@vps2.thejc.me.uk:/etc/varnish/user.vcl ~/user.vcl
sudo mv ~/user.vcl /etc/varnish/
scp -P 8043 thejc@vps2.thejc.me.uk:/etc/default/varnish ~/varnish
sudo mv ~/varnish /etc/default/
sudo ip -6 addr add fdd7:5938:e2e6:6c8d::6081:c dev lo
sudo nano /etc/default/varnish
VARNISH_LISTEN_ADDRESS=[fdd7:5938:e2e6:6c8d::6081:c]
sudo service varnish restart
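For reference, this is roughly how that variable fits into /etc/default/varnish, assuming the file copied from vps2 follows the stock Ubuntu 14.04 layout (the storage size and admin port here are the package defaults, not necessarily my values):

```
# Excerpt from /etc/default/varnish (stock Ubuntu layout assumed).
VARNISH_LISTEN_ADDRESS=[fdd7:5938:e2e6:6c8d::6081:c]
VARNISH_LISTEN_PORT=6081

DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f /etc/varnish/user.vcl \
             -T localhost:6082 \
             -S /etc/varnish/secret \
             -s malloc,256m"
```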

A test from Chrome on my laptop shows that http://[fdd7:5938:e2e6:6c8d::6081:c]:6081 is functioning. The only thing left to do is to automatically add the IP address at boot.
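The square brackets around the address, both in VARNISH_LISTEN_ADDRESS and in the test URL, are not optional: a bare IPv6 address cannot be told apart from a host:port separator, because it contains colons itself. A trivial illustration:

```shell
#!/bin/sh
# Build the bracketed forms used above; the address is the ULA added earlier.
addr='fdd7:5938:e2e6:6c8d::6081:c'

# The address:port form varnishd's -a option ultimately sees:
printf '%s\n' "[${addr}]:6081"          # -> [fdd7:5938:e2e6:6c8d::6081:c]:6081

# The form a browser needs:
printf 'http://[%s]:6081/\n' "${addr}"  # -> http://[fdd7:5938:e2e6:6c8d::6081:c]:6081/
```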

sudo nano /etc/init/ipv6-lo.conf

Inside the pre-start script block, add the line /sbin/ip -6 addr add fdd7:5938:e2e6:6c8d::6081:c dev lo, and add an identical line in the pre-stop script block, replacing add with del.
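After that edit, /etc/init/ipv6-lo.conf presumably ends up looking something like this. This is a sketch based on the description above: the description line and the start/stop conditions are assumptions, since the existing job file is not reproduced here.

```
# /etc/init/ipv6-lo.conf - add ULA addresses to the loopback interface (sketch)
description "Add ULA IPv6 addresses to lo"

start on (local-filesystems and net-device-up IFACE=lo)
stop on runlevel [!2345]

pre-start script
    /sbin/ip -6 addr add fdd7:5938:e2e6:6c8d::6081:c dev lo
end script

pre-stop script
    /sbin/ip -6 addr del fdd7:5938:e2e6:6c8d::6081:c dev lo
end script
```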

Reboot vps3 and then check everything has started correctly. The problem with mixing Upstart and SysVinit scripts is that there is the potential for race conditions where a program starts before the IP address has been added to an interface.

Sick Backends

Unfortunately, my "by design" rules make reliable communication between vps2 and vps3 impossible.

When I added johncook_vps3_varnish as a backend to varnish on vps2, the health probe reported it as Sick (0/5). Untangling my firewall rules (currently a big mess I can't make sense of) far enough to get the probe passing would be too much hassle.
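For context, the backend definition looks roughly like this (Varnish 4.0 syntax; the probe parameters shown are typical defaults-style values, not necessarily my exact config). The probe's window is what the "0/5" refers to: none of the last five probe requests got a good response.

```
# Hypothetical backend definition on vps2 - parameters are illustrative.
backend johncook_vps3_varnish {
    .host = "fdd7:5938:e2e6:6c8d::6081:c";  # vps3 varnish ULA address
    .port = "6081";
    .probe = {
        .url = "/";        # probe request path (assumed)
        .timeout = 1s;
        .interval = 5s;
        .window = 5;       # judge health on the last 5 probes -> the "0/5"
        .threshold = 3;    # 3 of the last 5 must succeed to be Healthy
    }
}
```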

That means vps2 can't reliably talk with vps3 so I am going to have to install a Web server and potentially have downtime during the site moves if something goes wrong.

On the upside, vps3 being unable to talk to vps2 over the ULA network means nothing on vps3 can accidentally come to rely on a connection to vps2, making problems less likely when vps2 is decommissioned.