Backups
Before doing anything, I need to back up. First, make my MySQL databases read-only and back them up to a file:
mysql -u root -p
flush tables with read lock;
set global read_only = ON;
mysqldump -u root -p -h localhost --all-databases > /home/thejc/backup_All_Databases_2015-01-24.sql
Check I am able to login to my home server from my VPS:
ssh -p 22 -i ~/.ssh/vps2 thejc@home.thejc.me.uk
mkdir vps-backup-2015-01-24
exit
Having aborted my first backup attempt because compression was limiting throughput due to CPU usage on both my VPS and home server, I am going to skip the z option in the following rsync command.
sudo su
rsync -avrpPlog --progress --exclude=/proc --exclude=/sys --exclude=/dev -e 'ssh -p 22 -i ~/.ssh/vps2' / thejc@home.thejc.me.uk:/home/thejc/vps-backup-2015-01-24/
After 30-60 minutes, the VPS is fully backed up. At this point, I made the MySQL database read-write again just in case the upgrade process needs to do something to the databases:
mysql -u root -p
set global read_only = OFF;
unlock tables;
If everything goes wrong, I will need to restore my MySQL databases from backup. Doing so will delete any changes.
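If it does come to that, restoring should just be a case of feeding the dump back in (a sketch, with anything that writes to the databases stopped first):
mysql -u root -p < /home/thejc/backup_All_Databases_2015-01-24.sql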
Note to self: Do not do anything with the databases for a couple of days.
Upgrade Current Packages
Before running a distribution upgrade, I need to update and upgrade anything that needs upgrading on the current Ubuntu release:
sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
After agreeing to install the updates, it is time to do the scary part: upgrading the release.
Upgrading from Ubuntu 12.04 to Ubuntu 14.04
sudo do-release-upgrade -a
sudo do-release-upgrade
Do not use the -a option with do-release-upgrade unless you want to install "experimental" (likely to bork your system) stuff.
Yes I want to continue, and yes I will be opening up my firewall for that second SSH port you are going to let me use. In another ssh window connected to my VPS:
sudo iptables -I INPUT -p tcp --dport 1022 -s `dig +short home.thejc.me.uk` -j ACCEPT
sudo iptables -L -n
Back in the upgrade shell, hit enter to start the additional sshd. In a new shell window:
ssh -p 1022 vps2.thejc.me.uk
Back in the upgrade shell I waited, and eventually the changes were listed. I hit d for the details and didn't see anything major being removed and not replaced, so hit q to go back to the prompt and then hit y at the point of no return.
This process may take several hours.
Note to self: Do not lean on keyboard. Do not reboot (or logout of) my home server. UPS is at full charge.
Unlike what I usually do when I get prompts about changes to configuration files, this time I wisely hit d to look at the details (and make a note of the changes) before agreeing to use the package maintainer's version.
The reason for this is that although I could continue to use the existing configuration files, a major release upgrade may not only fix bugs in the default configuration files, but may also change the format of the configuration files.
Because of that, I decided that it would be simpler to install the package maintainer's version after noting what changes were going to be made that might break things, so I can go back later and reinsert the changes.
Take, for instance, /etc/sysctl.conf:
-net.ipv6.conf.all.forwarding=1
-net.ipv6.conf.default.forwarding=1
+#net.ipv6.conf.all.forwarding=1
Rather than keeping my version of sysctl.conf, I accepted the package maintainer's version and made a note of changes that would likely impact me. I also know that I need to reverse this change for some IPv6 stuff to function after the upgrade.
It should be noted at this point that the upgrade did not go according to plan. By following the instructions of the first Google hit, and not reading the comments, I unknowingly used the experimental option for the upgrade command. After rebooting, nothing. Hard power off, power on, nothing. I broke my VPS.
Starting From Absolute Scratch
Left with only one choice after breaking my VPS, I reinstalled Ubuntu 12.04 using my VPS provider's control panel option OS Reload (format).
At this point I realised that despite backing up my e-mails I had left my mail server running during the failed upgrade process and therefore may have lost a number of newer e-mails. Fortunately, as soon as things broke, e-mail went down and I had 4-ish days before mail servers started bouncing e-mail addressed to me. I shall now call that 3 days, or until Midnight GMT 2015-01-29.
On the plus side, I may just bring my e-mail in-house since the only copy of my e-mail is now in-house.
So, with a fresh installation, and logged in using my VPS provider's Recovery Console, I set about getting some things working again.
apt-get update
apt-get upgrade
apt-get dist-upgrade
man adduser
apt-get install man
man adduser
adduser --home /home/thejc --shell /bin/bash --uid 1000 \
--ingroup sudo thejc
nano /etc/ssh/sshd_config
apt-get install nano
nano /etc/ssh/sshd_config
...
Port 8043
...
ListenAddress 149.255.99.49
ListenAddress 2001:470:1f09:38d::a:2
ListenAddress 2a03:ca80:8000:7673::18
service ssh reload
Using a fresh shell, check I can ssh to my VPS:
ssh -p 8043 thejc@vps2.thejc.me.uk
As I can, it is time to move on. But first, a couple of notes for later:
- IPv6 is not working.
- ULA IPv6 is not working.
- I have not yet setup ssh logins using keys.
The moment of truth: I reboot and wait. After a few minutes I am able to successfully login over ssh using my VPS's hostname rather than my VPS Provider's Recovery Console IP address. sudo is also working properly so I can move on to setting things up.
One further note:
- Some commands result in an error such as the following:
Error reading /home/thejc/.nano_history: Permission denied
Press Enter to continue starting nano
That error is going to become annoying.
At present, I have no DNS. Although some of my domains are using Hurricane Electric for backup DNS, two of them are not. Changing my DNS setup has been on my mind for a while anyway, as I am thinking about using DNSSEC as soon as dns.he.net supports it.
sudo apt-get install automake make gcc libssl-dev dnsutils
My attempt at installing Yadifa did not go so well. After finally meeting the dependencies, I tried using a zone (AXFR Dump) file copied from dns.he.net and Yadifa would not use it. After numerous attempts at changing the file, I found out (on page 41 of the PDF manual) that neither SPF nor SRV DNS record types (among others) are recognised by Yadifa and will result in an error.
Onwards to another DNS Server then. After looking at Wikipedia, and a bit of Googling, I decided to give NSD a go.
My reasons for using NSD:
- It supports IPv6 (AAAA).
- It supports DNSSEC.
- It appears to support offline DNSSEC zone signing (needs further investigation).
- It seems like it has good performance.
- Some of the root nameservers use it.
- It is an authoritative-only, non-recursive DNS server.
- It uses a BSD license (i.e. it is open-source).
sudo apt-get install nsd
sudo nsd-control-setup
sudo cp /usr/share/doc/nsd/examples/nsd.conf.gz /etc/nsd/
sudo gunzip /etc/nsd/nsd.conf.gz
sudo nano /etc/nsd/nsd.conf
Note: in the following code block I am modifying the commented version of nsd.conf. Empty lines represent commented code and/or empty lines.
server:
server-count: 2
ip-address: 127.0.0.1
ip-address: 127.1.0.1
#ip-address: 149.255.99.49
#ip-address: 149.255.99.50
#ip-address: 2001:470:1f09:38d::3
#ip-address: 2001:470:1f09:38d::4
do-ip4: yes
do-ip6: yes
username: nsd
zonesdir: "/etc/nsd/zones"
pattern:
name: "myzones"
zonefile: "%s.zone"
pattern:
name: "henetslaves"
include-pattern: "myzones"
#notify: 216.218.130.2 NOKEY
#provide-xfr: 216.218.133.2 NOKEY
#provide-xfr: 2001:470:600::2 NOKEY
#allow-axfr-fallback: yes
#outgoing-interface: 149.255.99.49
zone:
name: "thejc.me.uk"
include-pattern: "henetslaves"
Basically, I am telling NSD to use two localhost IP addresses, that everything using include-pattern myzones (including zones that use pattern henetslaves) will find its zone file at /etc/nsd/zones/name.zone, and that zones using pattern henetslaves have (at present) the same functionality as zones using pattern myzones.
The reason I am using two different patterns is to reduce code duplication. All zones that are setup in dns.he.net as slave zones need to allow AXFR requests. AXFR requests come from slave.dns.he.net's IP address(es), and NOTIFYs are sent to ns1.he.net's IP address.
Next I visited dns.he.net, logged in, clicked on zone thejc.me.uk, scrolled down and clicked on Raw Zone, and copied the contents (minus the first line, which should really be commented) to /etc/nsd/zones/thejc.me.uk.zone.
I restarted nsd and tested:
dig @127.0.0.1 thejc.me.uk
It worked. I then did the same for the remaining zones, adding them to /etc/nsd/nsd.conf, and testing. I then opened up, from backup, my tinydns zone file and created zone files for watfordjc.com and another zone that doesn't use dns.he.net as backup, giving them an include-pattern of "myzones".
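Each of those additions is just another zone stanza in nsd.conf; as a sketch, one of the master-only zones looks something like this:
zone:
name: "watfordjc.com"
include-pattern: "myzones"
followed by a restart of nsd and a quick test that it answers (assuming the init script is simply called nsd):
sudo service nsd restart
dig @127.0.0.1 watfordjc.com SOA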
After further testing, I uncommented all of the above with the exception of the IPv6 ip-address lines.
Firewall, IPv6, and Upstart Scripts
My box is not very secure - it doesn't have a firewall. Time to fix that.
sudo apt-get install iptables
sudo nano /etc/init/iptables.conf
# iptables - save and restore iptables rules
#
description "save and restore iptables rules"
start on (filesystem and net-device-up IFACE=lo)
# stop script probably doesn't run
stop on (stopped ipv4-lo)
pre-start script
if [ -f /etc/iptables.save ]; then
/sbin/iptables-restore -c < /etc/iptables.save
fi
end script
post-stop script
if [ -f /etc/iptables.save ]; then
/sbin/iptables-save -c > /etc/iptables.save
fi
end script
sudo nano /etc/init/ip6tables.conf
# ip6tables - save and restore ip6tables rules
#
description "save and restore ip6tables rules"
start on (filesystem and net-device-up IFACE=lo)
# stop script probably doesn't run
stop on (stopping he-ipv6-tunnel)
pre-start script
if [ -f /etc/ip6tables.save ]; then
/sbin/ip6tables-restore < /etc/ip6tables.save
fi
end script
pre-stop script
if [ -f /etc/ip6tables.save ]; then
/sbin/ip6tables-save -c > /etc/ip6tables.save
fi
end script
sudo nano /etc/init/he-ipv6-tunnel.conf
# he-ipv6-tunnel
#
description "Set-up IPv6 tunnel and IPs"
start on (filesystem and net-device-up IFACE=lo)
# Stop script probably doesn't run
stop on (net-device-down IFACE=he-ipv6)
emits ipv6-tunnel-up
emits ipv6-tunnel-down
pre-start script
ip tunnel add he-ipv6 mode sit remote 216.66.80.26 local 149.255.99.49 ttl 255
ip link set he-ipv6 up
ip addr add 2001:470:1f08:38d::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
ip -f inet6 addr
initctl emit ipv6-tunnel-up IFACE=he-ipv6
end script
pre-stop script
ip route del ::/0 dev he-ipv6
ip addr del 2001:470:1f08:38d::2/64 dev he-ipv6
ip link set he-ipv6 down
ip tunnel del he-ipv6
end script
post-stop script
initctl emit ipv6-tunnel-down IFACE=he-ipv6
end script
sudo nano /etc/init/ipv6-lo.conf
# ipv6-lo
#
description "Set-up IPv6 loopback IPs"
start on (filesystem and net-device-up IFACE=lo)
# Stop script probably doesn't run
stop on (net-device-down IFACE=lo)
emits ipv6-tunnel-lo-ips
pre-start script
/bin/ip addr add fdd7:5938:e2e6:9660::80:c dev lo
...
initctl emit ipv6-tunnel-lo-ips
end script
pre-stop script
/bin/ip addr del fdd7:5938:e2e6:9660::80:c dev lo
...
end script
sudo nano /etc/init/ipv6-eth0.conf
# ipv6-eth0
#
description "Set-up IPv6 ethernet IPs"
start on (filesystem and ipv6-tunnel-up IFACE=he-ipv6)
# Stop script probably doesn't run
stop on (net-device-down IFACE=eth0)
emits ipv6-tunnel-eth0-ips
pre-start script
# Create /64 for system (routing)
/bin/ip -6 addr add 2001:470:1f09:38d::2/64 dev eth0
# Add additional IPv6 addresses to eth0
/bin/ip -6 addr add 2001:470:1f09:38d::3/128 dev eth0
/bin/ip -6 addr add 2001:470:1f09:38d::4/128 dev eth0
...
# Create Unique Local IPv6 address for system
/bin/ip -6 addr add fdd7:5938:e2e6:9660::2/64 dev eth0
...
initctl emit ipv6-tunnel-eth0-ips
end script
pre-stop script
# Delete the eth0 IPv6 IPs
/bin/ip -6 addr del 2001:470:1f09:38d::2/64 dev eth0
/bin/ip -6 addr del 2001:470:1f09:38d::3/128 dev eth0
/bin/ip -6 addr del 2001:470:1f09:38d::4/128 dev eth0
...
/bin/ip -6 addr del fdd7:5938:e2e6:9660::2/64 dev eth0
...
end script
sudo nano /etc/init/ula-ipv6-tunnel.conf
# ula-ipv6-tunnel
#
description "Set-up ULA Ipv6 tunnel and IPs"
start on (ipv6-tunnel-up IFACE=he-ipv6 and filesystem and net-device-up IFACE=eth0) or (net-device-up IFACE=ula-net)
# Stop script probably doesn't run
stop on (net-device-down IFACE=ula-net or ipv6-tunnel-down IFACE=he-ipv6)
emits ipv6-tunnel-up
emits ipv6-tunnel-down
pre-start script
# Set-up the tunnel
ip -6 tunnel add ula-net mode ip6ip6 remote 2001:470:1f09:1aab::2 local 2001:470:1f09:38d::2 ttl 255
ip -6 link set ula-net up
ip -6 addr add fdd7:5938:e2e6:3::9660/64 dev ula-net
ip -6 route add fdd7:5938:e2e6:1::/64 dev ula-net
initctl emit ipv6-tunnel-up IFACE=ula-net
end script
pre-stop script
ip -6 route del fdd7:5938:e2e6:1::/64 dev ula-net
ip -6 addr del fdd7:5938:e2e6:3::9660/64 dev ula-net
ip -6 link set ula-net down
ip -6 tunnel del ula-net
end script
post-stop script
initctl emit ipv6-tunnel-down IFACE=ula-net
end script
These scripts, combined with /etc/iptables.save and /etc/ip6tables.save, are all that is needed to set up my IPv6 tunnels, IP addresses, and firewall.
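For completeness, the two save files can be (re)created at any time with something along these lines (a sketch; run via sh -c because of the redirect):
sudo sh -c 'iptables-save -c > /etc/iptables.save'
sudo sh -c 'ip6tables-save -c > /etc/ip6tables.save'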
One of the advantages of moving to upstart previously is that by just copying these scripts and related files I needed to make zero changes to other files to get up and running again.
Well, I say zero changes. I did have to comment some lines in /etc/iptables.save and /etc/ip6tables.save that redirected packets to/from port 53 as I previously had dnscurve in front of my authoritative DNS servers. A bit of testing and a reboot later, and my IPv6 IPs and ULA IPv6 IPs are working again.
SSH Login Using Keys
I have previously modified how I do things so each device/user has its own key. On my home server I am also using a symlink from user home directories to a sub-folder in /etc/ssh/ so that all keys are in the same place.
sudo nano /etc/ssh/sshd_config
LogLevel VERBOSE
AuthorizedKeysFile /etc/ssh/%u/authorized_keys
sudo service ssh reload
sudo mkdir /etc/ssh/thejc
sudo chown thejc:root /etc/ssh/thejc
sudo chmod 700 /etc/ssh/thejc
nano /etc/ssh/thejc/authorized_keys
From my home server, copy and paste the contents of John.Alienware-JC.pub, thejc.PC1-JC.pub, thejc.PC2-JC.pub, and thejc.raspberrypi.pub, into the nano window. Also, from the backup of my VPS, copy the public key of my key stored on an SD card, and the one I use on my iPad. Restore my key thejc.vps2 and thejc.vps2.pub, and make sure the permissions are sane:
sudo chmod 600 thejc.vps2
ln -s /etc/ssh/thejc /home/thejc/.ssh
nano ~/.ssh/config
Host home.thejc.me.uk
Hostname home.thejc.me.uk
Port 8043
User thejc
IdentityFile ~/.ssh/thejc.vps2
At this point, I check that I can log in to vps2 from home without the use of a password, and that I can log in to home from vps2. I can, so I can now turn off password logins.
sudo nano /etc/ssh/sshd_config
PasswordAuthentication no
sudo service ssh reload
ssh -p 8043 vps2.thejc.me.uk
... Permission denied (publickey).
Yet logging in from my home server using a valid key works, so I have got things working how I want them to as far as SSH goes.
There is one tweak that is on my list of configuration diff changes that I want to implement now:
sudo nano /etc/cron.daily/apt
Search for nice ionice and insert -n 19 so that the block looks like the following:
if [ -x /usr/sbin/update-apt-xapian-index ]; then
nice -n 19 ionice -c3 update-apt-xapian-index -q -u
fi
I don't know if Trusty Tahr still has the problem, but this one thing previously brought my VPS to a standstill while update-apt-xapian-index was running.
As opposed to completely disabling it, I added in a niceness value (default is 10) of 19 (the maximum) so that it is "least favourable to the process" or in simple English: if anything else wants to do something, let it.
Niceness values run from -20 (most favourable to the process, or "real time") to 19 (least favourable to the process, or "stop hogging the CPU when something else needs to do something").
Web Server (HTTP/HTTPS Termination)
On my previous set-up, my Web server configuration files were an absolute mess. As I was previously using the mainline version of nginx, I made a bit of a mistake so the commands I ended up using were as follows:
sudo apt-get install nginx
sudo apt-get remove nginx
sudo apt-get autoremove
sudo aptitude purge ?config-files
sudo apt-key add -
[copy/paste the public key for the nginx PPA]
^D
sudo apt-get install python-software-properties
sudo nano /etc/apt/sources.list
# nginx
deb http://ppa.launchpad.net/nginx/development/ubuntu trusty main
deb-src http://ppa.launchpad.net/nginx/development/ubuntu trusty main
sudo apt-get update
sudo apt-get install nginx
sudo rm /etc/nginx/sites-enabled/default
sudo service nginx restart
The reason for the last two commands is that I do not want Googlebot coming along and seeing a 200 response code and thinking the default nginx page contains the content of the requested page. By removing that symlink, nginx will not listen on any IP addresses on any ports.
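A quick way to confirm that nothing is being listened on (a sketch; ss -tlnp would do the same job as netstat):
sudo netstat -tlnp | grep nginx
If that returns nothing, nginx is not listening on any IP:port combination.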
As I previously mentioned, using default configuration files without modifying them is the optimal way of doing things, especially when it comes to upgrades.
Using nginx version 1.7.9, the default configuration file includes, within the http { } section, all files inside /etc/nginx/conf.d with a .conf extension, and all files within /etc/nginx/sites-enabled.
Thus I opened up my backup of nginx.conf, and created two files within /etc/nginx/conf.d/ - one for SPDY, and one for upstream connections.
sudo nano /etc/nginx/conf.d/spdy.conf
map $spdy $spdy_ae {
default $http_accept_encoding;
# default "gzip";
2 "gzip, deflate";
3 "gzip, deflate";
3.1 "gzip, deflate";
}
map $spdy $spdy_connection {
default "0";
2 "1";
3 "1";
3.1 "1";
}
sudo nano /etc/nginx/conf.d/upstream-servers.conf
upstream johncook.varnish {
server [fdd7:5938:e2e6:9660::6081:c]:6081;
}
upstream johncook.lighttpd {
server [fdd7:5938:e2e6:9660::80:c]:80;
}
upstream johncook.varnish-lighttpd {
server [fdd7:5938:e2e6:9660::6081:c]:6081;
server [fdd7:5938:e2e6:9660::80:c]:80 backup;
server [fdd7:5938:e2e6:1::80:c]:80 backup;
}
...
The third server line in the johncook.varnish-lighttpd upstream is my lighttpd installation on my home server. The reason I have added it is that I want to try to move my Web stuff in-house, and just have my VPS acting as a reverse proxy.
Whereas I was previously modifying nginx/varnish/lighttpd on my VPS and hoping I didn't break things, I now want to utilise varnish more, so caching and ESIs are becoming a bigger part of my Web backend.
As with e-mail, if I treat the mess made during the upgrade as an opportunity to bring as much in-house as possible, there is at least one advantage to having to start from scratch: I can do things how I want to without having to gradually change how I'm doing things.
As an example, my current VPS disk usage is 2.1 GB out of 30 GB. The backup of my before-upgrade VPS installation was 16 GB. Here is an excellent example of just how fast things could be:
ping -c4 google.com
64 bytes from ... time=1.76 ms
64 bytes from ... time=1.81 ms
64 bytes from ... time=1.77 ms
64 bytes from ... time=1.75 ms
ping6 -I 2001:470:1f09:38d::2 -c4 google.com
64 bytes from ... time=2.39 ms
64 bytes from ... time=2.43 ms
64 bytes from ... time=2.45 ms
64 bytes from ... time=2.36 ms
ping6 -I 2a03:ca80:8000:7673::18 -c4 google.com
64 bytes from ... time=2.21 ms
64 bytes from ... time=2.06 ms
64 bytes from ... time=2.06 ms
64 bytes from ... time=2.25 ms
Although these are ping responses to google.com, rather than to a Googlebot IP address, this does give an example of just how fast things could be when Googlebot accesses one of my sites in ideal circumstances.
If Googlebot can get a 304 response in under 5 milliseconds, it might benefit my sites in the search results. Of course, these tests were conducted at 06:00 UTC on a Monday so are not representative of the busy times of the Net, and they are only for ping.
I have just made a modification to my /etc/ip6tables.save rules and reloaded them, so that all the -j LOG rules have been commented out, reducing disk accesses and writes. Logging is one of those things I will have to think about optimising if I am to have "superfast" Web loading times, possibly using a RAM disk for server logs and outputting to disk every few minutes.
johncook.uk
As previously mentioned, my Web server configuration files are a mess. In order to clean things up, I am going to move some of the configuration out of the configuration files and into separate includes files.
sudo mkdir /etc/nginx/includes
sudo nano nginx.ssl-ciphers
# Scott Helme, Squeezing a little more out of your Qualys score
# Source: https://scotthelme.co.uk/squeezing-a-little-more-out-of-your-qualys-score/
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
ssl_dhparam /etc/ssl/dhparam.pem;
That last line is for stronger DH params, and uses the command from the commented URL to generate 4096-bit DH parameters (openssl dhparam -out dhparam.pem 4096).
As I had previously run that command, however, I just copied the dhparam.pem file across from my backup.
sudo nano nginx.ssl-settings
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_buffer_size 2k;
ssl_session_cache shared:SSL:1m; means that the session cache called SSL will be shared between worker processes, and has a size of 1m (1 MB). 1 MB can store around 4,000 sessions according to the documentation.
ssl_session_timeout 5m; means that a client can reuse the session parameters stored in the cache for 5 minutes.
By combining the two above settings, you can see that things will need tweaking based on visitor numbers. Let x be shared session cache size in MB, and let y be the number of minutes session parameters can be reused. If 4000x/y is greater than peak visitors per minute, the cache size is big enough.
4000/5 = 800 new connections per minute, until there are around 4,000 concurrent users. Since I am not going to have 4,000 concurrent visitors any time soon, and 800 new connections per minute sounds more than sufficient, I can increase the session timeout in line with the cache size.
Thus, if I were to increase the session timeout to 30 minutes (multiply by 6) I can increase cache size to 6m to compensate. 24 hours / 5 minutes = 288m, which is too much for the amount of RAM I have. 6 hours / 5 minutes = 72m.
72 MB would allow 72(4000) = 288,000 TLS client sessions to be reused if those clients come back to the site within 6 hours. I could double the time to 12 hours whilst keeping the cache size the same if I expect half that number of clients.
Obviously, dedicating 72 MB of RAM just to TLS session resumption is a large chunk of my RAM. For now I will use this number and revisit the numbers after I have added MySQL, PHP, Lighttpd and others to my VPS setup.
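If I do settle on the 6 hour / 72 MB option, the two changed lines in nginx.ssl-settings would look something like this (a sketch):
ssl_session_cache shared:SSL:72m;
ssl_session_timeout 6h;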
sudo nano nginx.ssl-stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver [fdf4:90db:f24c:72ca:df4d:b9ee:be0b:c37d] [2001:470:1f09:38d::fcf3:53] [2001:470:20::2] 74.82.42.42 8.8.8.8;
sudo nano nginx.ssl-stapling-startssl
include /etc/nginx/includes/nginx.ssl-stapling;
ssl_trusted_certificate /etc/ssl/StartCom/ca.pem;
sudo nano proxy.static
proxy_set_header Accept-Encoding $spdy_ae;
include /etc/nginx/proxy_params;
proxy_set_header X-Is-Spdy $spdy_connection;
proxy_pass_request_headers on;
sudo nano web.johncook.uk-ssl
ssl_certificate /etc/ssl/web.johncook.uk/201501/web_johncook_uk.chained-no-root.crt;
ssl_certificate_key /etc/ssl/web.johncook.uk/201501/web_johncook_uk.key;
include /etc/nginx/includes/nginx.ssl-settings;
include /etc/nginx/includes/nginx.ssl-ciphers;
include /etc/nginx/includes/nginx.ssl-stapling-startssl;
ssl_stapling_file /etc/ssl/web.johncook.uk/ocsp_cache.resp;
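ssl_stapling_file expects a pre-fetched, DER-encoded OCSP response at that path. One way of generating it is with openssl ocsp; the following is only a sketch (the non-chained certificate path and the use of the StartCom ca.pem as the issuer are assumptions):
OCSP_URI=$(openssl x509 -noout -ocsp_uri -in /etc/ssl/web.johncook.uk/201501/web_johncook_uk.crt)
sudo openssl ocsp -no_nonce \
-issuer /etc/ssl/StartCom/ca.pem \
-cert /etc/ssl/web.johncook.uk/201501/web_johncook_uk.crt \
-url "$OCSP_URI" \
-respout /etc/ssl/web.johncook.uk/ocsp_cache.resp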
sudo nano web.johncook.uk-ips-https
listen [2001:470:1f09:38d::80:c]:443 ssl spdy;
listen [fdd7:5938:e2e6:9660::80:c]:443 ssl spdy;
listen 149.255.97.82:443 ssl spdy;
sudo nano web.johncook.uk-ips-http
listen [2001:470:1f09:38d::80:c]:80;
listen 149.255.97.82:80;
cd ../sites-available
sudo nano johncook.uk
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name web.johncook.uk;
include /etc/nginx/includes/web.johncook.uk-ssl;
location ~* \.(gif|jpg|jpeg|png)$ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location / {
proxy_pass http://johncook.varnish-lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /img/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /js/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /css/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
}
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name johncook.uk;
include /etc/nginx/includes/web.johncook.uk-ssl;
return 307 https://web.johncook.uk$request_uri;
}
cd ../includes
sudo cp web.johncook.uk-ssl web.johncook.co.uk-ssl
sudo nano web.johncook.co.uk-ssl
ssl_certificate /etc/ssl/web.johncook.co.uk/201404/web_johncook_co_uk.chained-no-root.crt;
ssl_certificate_key /etc/ssl/web.johncook.co.uk/201404/web_johncook_co_uk.key;
include /etc/nginx/includes/nginx.ssl-settings;
include /etc/nginx/includes/nginx.ssl-ciphers;
include /etc/nginx/includes/nginx.ssl-stapling-startssl;
ssl_stapling_file /etc/ssl/web.johncook.co.uk/ocsp_cache.resp;
cd ../sites-available
sudo cp johncook.uk johncook.co.uk
sudo nano johncook.co.uk
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name web.johncook.co.uk;
include /etc/nginx/includes/web.johncook.co.uk-ssl;
location ~* \.(gif|jpg|jpeg|png)$ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location / {
proxy_pass http://johncook.varnish-lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /img/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /js/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location /css/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
}
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name johncook.co.uk;
include /etc/nginx/includes/web.johncook.co.uk-ssl;
return 307 https://web.johncook.uk$request_uri;
}
The configuration for watfordjc.uk is likewise just a copy/paste of some files, a bit of modification on top of what was done above for JohnCook.co.uk (e.g. WatfordJC.uk has the same IP addresses as JohnCook.UK, plus one more), and a restart of nginx and it is done.
Now there is a further modification I could do that would really condense down the files, and that is creating a new sub-domain of johncook.co.uk for static files that doesn't use cookies. That is what I will probably do at a later time anyway, but this is fine for the moment.
Varnish Cache
With HTTP/HTTPS termination in place, the next thing to do is to add in my Varnish cache.
My current planned set-up for Web sites is to use nginx for http/https termination, lighttpd for static content and as a backup for dynamic content, and varnish for dynamic content.
nginx will use several backends:
- Varnish on my VPS.
- Varnish on my home server.
- Lighttpd on my home server.
- Lighttpd on my VPS.
Varnish on my VPS will also use several backends:
- Varnish on my home server.
- Lighttpd on my home server.
- Lighttpd on my VPS.
The directory structure and files for lighttpd on both my home server and VPS will be identical.
The reason I will be having the same things behind varnish as are backups for nginx is that if varnish stops working then there is something else there to take over. It will add some latency, but things won't suddenly stop working as there will be some redundancy.
Admittedly it will make nginx on my VPS a single-point-of-failure, but that is what my HTTP/HTTPS termination server should be in this setup.
In my previous varnish configuration I mentioned that varnish does not (yet?) support If-Modified-Since on ESI includes. Support was added in version 4.0, which is not yet in Ubuntu Trusty.
sudo apt-get install apt-transport-https
sudo apt-key add -
[paste content of https://repo.varnish-cache.org/ubuntu/GPG-key.txt here]
^D
sudo nano /etc/apt/sources.list
# Varnish Cache
deb https://repo.varnish-cache.org/ubuntu trusty varnish-4.0
sudo apt-get update
sudo apt-get upgrade && sudo apt-get dist-upgrade
sudo apt-get install varnish
With varnish installed, visiting https://web.johncook.uk/ now results in a varnish Error 503 Backend fetch failed error page, rather than an nginx 502 Bad Gateway error page. On the plus side, HTTPS termination is working and I am not getting an invalid certificate error.
At this moment, I want to comment out all of the IP addresses in my nginx upstream-servers.conf and add server [fdd7:5938:e2e6:9660:bad:bad:bad:bad]:6081; to the top of each upstream section, so that I am back to an nginx 502 Bad Gateway error page.
The reason for doing this is that [fdd7:5938:e2e6:9660:bad:bad:bad:bad] is never going to be answering, because fdd7:5938:e2e6:9660:bad::/80 is reserved for IP addresses that are never going to be given to a service. I don't want nginx or varnish returning a 200 response while I am testing things.
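As a sketch, the johncook.varnish-lighttpd upstream from earlier ends up looking like this while testing:
upstream johncook.varnish-lighttpd {
server [fdd7:5938:e2e6:9660:bad:bad:bad:bad]:6081;
#server [fdd7:5938:e2e6:9660::6081:c]:6081;
#server [fdd7:5938:e2e6:9660::80:c]:80 backup;
#server [fdd7:5938:e2e6:1::80:c]:80 backup;
}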
After running sudo service nginx reload
and getting a successful response to the command, I can now move on to configuring Varnish 4.0.
sudo nano /etc/default/varnish
Comment out the Alternative 2 block, and make changes to Alternative 3 block:
VARNISH_VCL_CONF=/etc/varnish/user.vcl
VARNISH_LISTEN_ADDRESS=[fdd7:5938:e2e6:9660::6081:c]
VARNISH_LISTEN_PORT=6081
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_MIN_THREADS=1
VARNISH_MAX_THREADS=1000
VARNISH_THREAD_TIMEOUT=120
VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin
VARNISH_STORAGE_SIZE=512M
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-t ${VARNISH_TTL} \
-w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}"
Something worth noting here is that VARNISH_STORAGE_SIZE=512M, VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin, and VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" mean that Varnish will store the cache in a file with a maximum size of 512 MiB. My VPS has 512 MB of RAM, so this basically means that I am telling Varnish it can cache slightly more than the total RAM in the system.
By using file,..., Varnish relies on the operating system doing its thing and caching in memory that which is accessed a lot. In theory an ESI included on every page of a site will be cached by the OS, assuming the OS isn't being used for a lot more than just serving Web pages.
For comparison, on my home server varnish is competing with a lot of things, such as Web browsers, video software, mail software, and more. I have decided to try VARNISH_STORAGE_SIZE=1G and VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" on my home server.
By using malloc,..., Varnish relies on using the RAM directly for its cache, rather than using file storage and relying on the OS to cache the most used objects.
Given the price of upgrading my VPS package from 512MB to 1024 MB (£8 per month, or £96 per year) compared to fitting another 8 GiB DDR DIMM in the unused slot in my home server (current cost of around £60), a 1 GiB (or 8 GiB) malloc varnish cache that is only a 20-30 millisecond round trip away from my VPS might be a wiser use of resources.
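On the home server, that amounts to just a few changed lines in /etc/default/varnish (a sketch; the listen address is an assumption based on the backend definitions below):
VARNISH_LISTEN_ADDRESS=[fdd7:5938:e2e6:1::80:c]
VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"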
sudo nano /etc/varnish/user.vcl
vcl 4.0;
# Customised by John Cook
# Template from http://www.mediawiki.org/wiki/Manual:Varnish_caching
# set default backend if no server cluster specified
backend default {
.host = "127.0.0.1";
.port = "8080";
# .port = "80" led to issues with competing for the port with apache.
}
import directors;
import std;
probe healthcheck {
.url = "/inc/esi/site_name";
.expected_response = 200;
.timeout = 100ms;
.interval = 5s;
.window = 5;
.threshold = 3;
}
backend johncook_home_varnish {
.host = "[fdd7:5938:e2e6:1::80:c]";
.port = "6081";
.probe = healthcheck;
}
backend johncook_home_lighttpd {
.host = "[fdd7:5938:e2e6:1::80:c]";
.port = "80";
.probe = healthcheck;
}
backend johncook_vps2_lighttpd {
.host = "[fdd7:5938:e2e6:9660::80:c]";
.port = "80";
.probe = healthcheck;
}
sub vcl_init {
# Create a fallback director,
new fallback = directors.fallback();
fallback.add_backend(johncook_home_varnish);
fallback.add_backend(johncook_home_lighttpd);
fallback.add_backend(johncook_vps2_lighttpd);
}
# access control list for "purge": open to only localhost and other local nodes
acl purge {
"127.0.0.1";
}
# vcl_recv is called whenever a request is received
sub vcl_recv {
#set req.http.X-Forwarded-For = client.ip;
# Use fallback director "fallback", so request goes to best backend, 2nd best, or 3rd best, depending on backend health.
set req.backend_hint = fallback.backend();
# This uses the ACL action called "purge". Basically if a request to
# PURGE the cache comes from anywhere other than localhost, ignore it.
if (req.method == "PURGE")
{if (!client.ip ~ purge)
{return(synth(405,"Not allowed."));}
return(hash);}
# Pass any requests that Varnish does not understand straight to the backend.
if (req.method != "GET" && req.method != "HEAD" &&
req.method != "PUT" && req.method != "POST" &&
req.method != "TRACE" && req.method != "OPTIONS" &&
req.method != "DELETE")
{return(pipe);} /* Non-RFC2616 or CONNECT which is weird. */
# Pass anything other than GET and HEAD directly.
if (req.method != "GET" && req.method != "HEAD")
{return(pass);} /* We only deal with GET and HEAD by default */
# Pass requests from logged-in users directly.
#if (req.http.Authorization || req.http.Cookie)
# {return(pass);} /* Not cacheable by default */
# Pass any requests with the "If-None-Match" header directly.
if (req.http.If-None-Match)
{return(pass);}
# If an image is requested, do not cache it nor do anything special.
if (req.url ~ "\.(?i)(jpg|jpeg|png|bmp)$") {
return (pipe);
}
# If HTTPS, add a header (TODO: modify Web site source code so this is redundant)
if (req.http.X-Forwarded-Proto == "https") {
set req.http.X-IsHTTPs = 1;
} else {
unset req.http.X-IsHTTPs;
}
# Force lookup if the request is a no-cache request from the client.
if (req.http.Cache-Control ~ "no-cache")
{ban(req.url);}
# normalize Accept-Encoding to reduce vary
if (req.http.Accept-Encoding) {
if (req.url ~ "\.") {
if (req.url !~ "\.(txt)$") {
unset req.http.Accept-Encoding;
}
}
if (req.http.User-Agent ~ "MSIE 6") {
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
# } elsif (req.http.Accept-Encoding ~ "deflate") {
# set req.http.Accept-Encoding = "deflate";
} else {
unset req.http.Accept-Encoding;
}
}
# Normalise User-Agent.
# * MSIE 6-10 currently gets an extra HTTP header.
# * All IE browsers do not support cookie max-age
# ** Workaround implemented in JavaScript
# * Ancient browser detection to display warnings about old browsers.
# * Most common user-agent strings should be closer to the top, so the other
# parts of the user-agent block don't have to be checked.
if (req.http.User-Agent) {
# Ignore current browser version first,
# so other else ifs aren't checked.
if (req.http.User-Agent ~ "(Trident/7\.|Opera/2[5-9]\.|Iceweasel/3[1-9]\.|Firefox/3[3-9]\.|Chrome/3[8-9]\.)") {
# IE 11 in compatability mode gets extra tag
if (req.http.User-Agent ~ "MSIE") {
set req.http.User-Agent = "MSIE 7-10";
# Major browsers normalised to generic empty User-Agent string.
} else {
unset req.http.User-Agent;
}
# All search engines normalised to generic Crawler UA string.
} elsif (req.http.User-Agent ~ "(?i)(msn|google|bing|yandex|youdau|exa|mj12|omgili|flr-|ahrefs|blekko)bot" || "(?i)(magpie|madiapartners|sogou|baiduspider|nutch|yahoo.*slurp|genieo)") {
set req.http.User-Agent = "Crawler";
# Ignore mobile browsers for the time being.
} elsif (req.http.User-Agent ~ "(Fennec/|Opera Mobi/|Opera Mini|IEMobile)") {
unset req.http.User-Agent;
# Versions of IE 11 are not yet old.
# } elsif (req.http.User-Agent ~ "Trident/[67]\.0") {
# set req.http.User-Agent = "MSIE 7-11";
# set req.http.X-IE-Old = 1;
# Versions of IE < 11 are old.
} elsif (req.http.User-Agent ~ "Trident/6\.0") {
set req.http.User-Agent = "MSIE 7-10";
set req.http.X-IE-Old = 1;
} elsif (req.http.User-Agent ~ "MSIE [789]\.0") {
# Ignore IE 11 in compatability mode
if (req.http.User-Agent !~ "Trident/7\.0") {
set req.http.User-Agent = "MSIE 7-10";
set req.http.X-IE-Old = 1;
}
# Versions of Chrome < 38 are old.
} elsif (req.http.User-Agent ~ "Chrome/3[0-7]\.") {
set req.http.User-Agent = "Chrome 0-37";
set req.http.X-Chrome-Old = 1;
} elsif (req.http.User-Agent ~ "Chrome/([12])?[0-9]\.") {
set req.http.User-Agent = "Chrome 0-37";
set req.http.X-Chrome-Old = 1;
# Versions of Opera < 25 are old.
} elsif (req.http.User-Agent ~ "Opera/2[0-4]\.") {
set req.http.User-Agent = "Opera 0-24";
set req.http.X-Opera-Old = 1;
} elsif (req.http.User-Agent ~ "Opera/([1])?[0-9]\.") {
set req.http.User-Agent = "Opera 0-24";
set req.http.X-Opera-Old = 1;
# Versions of Iceweasel < 31 are old.
} elsif (req.http.User-Agent ~ "Iceweasel/[3][0]\.") {
set req.http.User-Agent = "Iceweasel 0-30";
set req.http.X-Iceweasel-Old = 1;
} elsif (req.http.User-Agent ~ "Iceweasel/([12])?[0-9]\.") {
set req.http.User-Agent = "Iceweasel 0-30";
set req.http.X-Iceweasel-Old = 1;
# Unknown versions of Iceweasel are not Firefox.
} elsif (req.http.User-Agent ~ "Iceweasel") {
unset req.http.User-Agent;
# Versions of Firefox < 33 are old.
} elsif (req.http.User-Agent ~ "Firefox/3[0-2]\.") {
set req.http.User-Agent = "Firefox 0-32";
set req.http.X-Firefox-Old = 1;
} elsif (req.http.User-Agent ~ "Firefox/([12])?[0-9]\.") {
set req.http.User-Agent = "Firefox 0-32";
set req.http.X-Firefox-Old = 1;
# IE 6 is very low on this list, because it shouldn't make it to server
} elsif (req.http.User-Agent ~ "MSIE 6") {
set req.http.User-Agent = "MSIE 6";
set req.http.X-IE-Old = 1;
unset req.http.Accept-Encoding;
# If no UA string has matched, normalise to the generic empty UA string
} else {
unset req.http.User-Agent;
}
}
# Normalise Cookies
if (req.http.Cookie ~ "fonts=1") {
unset req.http.X-Has-Fonts;
set req.http.X-Has-Fonts = 1;
unset req.http.Cookie;
} else {
unset req.http.Cookie;
}
# Normalise Domain
if (req.http.host) {
if (req.http.host ~ "^(?i)web\.johncook\.uk$") {
set req.http.host = "web.johncook.uk";
} elsif (req.http.host ~ "^(?i)web\.watfordjc\.uk$") {
set req.http.host = "web.watfordjc.uk";
} elsif (req.http.host ~ "^(?i)web\.johncook\.co\.uk$") {
set req.http.host = "web.johncook.co.uk";
} elsif (req.http.host ~ "^(?i)johncook\.co\.uk$") {
set req.http.host = "johncook.co.uk";
} elsif (req.http.host ~ "^(i)watfordjc\.uk") {
set req.http.host = "watfordjc.uk";
} else {
set req.http.host = "web.watfordjc.uk";
}
} else {
set req.http.host = "web.watfordjc.uk";
}
# Normalise robots.txt requests
if (req.url ~ "^/robots.txt") {
// Normalise URL, removing any extraneous parameters:
set req.url = "/robots.txt";
// Remove User Agent:
unset req.http.User-Agent;
# Normalise /inc/copyright.php
} elsif (req.url == "/inc/copyright") {
unset req.http.User-Agent;
unset req.http.Cookie;
unset req.http.X-Has-Fonts;
unset req.http.X-Forwarded-Proto;
}
# If Surrogate Capability in client, pass it in an X-Surrogate-Capability header (for vcl_backend_receive)
if (req.http.Surrogate-Capability) {
set req.http.X-Surrogate-Capability = req.http.Surrogate-Capability;
}
# Tell backend we have ESI processing capability:
set req.http.Surrogate-Capability = "varnish=ESI/1.0";
return(hash);
}
sub vcl_pipe {
# Note that only the first request to the backend will have
# X-Forwarded-For set. If you use X-Forwarded-For and want to
# have it set for all requests, make sure to have:
# set req.http.connection = "close";
# This is otherwise not necessary if you do not do any request rewriting.
set req.http.connection = "close";
}
# Called if the cache has a copy of the page.
sub vcl_hit {
if (req.method == "PURGE")
{ban(req.url);
return(synth(200,"Purged"));}
if (!obj.ttl > 0s)
{return(pass);}
}
# Called if the cache does not have a copy of the page.
sub vcl_miss {
if (req.method == "PURGE")
{return(synth(200,"Not in cache"));}
}
sub vcl_hash {
hash_data(req.url);
if (req.http.X-Has-Fonts == "1") {
hash_data("has-fonts");
}
if (req.http.Cookie ~ "fonts=1") {
set req.http.Cookie = "fonts=1";
}
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
return (lookup);
}
# Called after a document has been successfully retrieved from the backend.
sub vcl_backend_response {
# Set a grace period of 7 days, so if backend goes down things still work.
set beresp.grace = 7d;
// req.* not available in vcl_backend_response, using custom X-Surrogate-Capability instead.
# If the client request did not include Surrogate Capability, then parse ESI tags if suitable.
# This should result in ESI tags not being parsed if a downstream proxy supports ESI.
if (beresp.http.Content-Type ~ "^text/.+" || beresp.http.Content-Type ~ "xml") {
if (!bereq.http.X-Surrogate-Capability) {
set beresp.do_esi = true;
set beresp.do_gzip = true;
} elsif (bereq.http.X-Surrogate-Capability !~ "1.0") {
set beresp.do_esi = true;
set beresp.do_gzip = true;
} else {
set beresp.do_gzip = true;
}
if (beresp.http.Vary) {
if (beresp.http.Vary !~ "Accept-Encoding") {
set beresp.http.Vary = beresp.http.Vary + ", Accept-Encoding";
}
} else {
set beresp.http.Vary = "Accept-Encoding";
}
}
# If request is for robots.txt, set a 2 hour grace time in case backend goes down
if (bereq.url == "/robots.txt") {
set beresp.grace = 2h;
}
# Cache for 2m if TTL equals zero, otherwise cache for 2m if no cache time.
if (beresp.ttl <= 0s || beresp.http.Vary == "*") {
set beresp.uncacheable = true;
set beresp.ttl = 120s;
} elseif (beresp.ttl <= 120s) {
set beresp.ttl = 120s;
}
if (bereq.http.Authorization && beresp.http.Cache-Control !~ "public") {
set beresp.uncacheable = true;
return (deliver);
}
unset beresp.http.Set-Cookie;
return (deliver);
}
sub vcl_pass {
return (fetch);
}
After restarting varnish numerous times and fixing errors, I visited http://[fdd7:5938:e2e6:9660::6081:c]:6081 from Iceweasel on my home server and got the lovely Error 503 Backend fetch failed error message. Without a working backend, there is no way to know if my varnish configuration file is correct yet.
Backend Lighttpd Server
The first backend HTTP server I am going to set-up is lighttpd on my home server. Although lighttpd is already installed, I need to move the files from my VPS backup to my lighttpd folder structure. As I am only (currently) dealing with the one Web site, this should be rather simple:
sudo su
mkdir /home/www/var/www/johncook_co_uk
chown thejc:www-data /home/www/var/www/johncook_co_uk
exit
rsync -rtplogP --progress --delete /home/thejc/vps-backup-2015-01-24/webroot/var/www/johncook_co_uk/ /home/www/var/www/johncook_co_uk/
ls /home/www/var/www/johncook_co_uk/
A copying over of my tidied up configuration file later... and I have a lot of errors:
... WARNING: unknown config-key: url.rewrite-once (ignored)
... WARNING: unknown config-key: url.rewrite-if-not-file (ignored)
... WARNING: unknown config-key: server.error-handler-410 (ignored)
... WARNING: unknown config-key: setenv.add-response-header (ignored)
I need to fix these, and I think it is just a matter of enabling some modules.
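Most of those config keys point at modules that are not enabled by default: url.rewrite-once and url.rewrite-if-not-file come from mod_rewrite, and setenv.add-response-header comes from mod_setenv; the includes further down also want mod_expire and mod_extforward. A sketch of the addition to lighttpd.conf (the ordering is the part I am unsure about, as noted below):
server.modules += (
"mod_extforward",
"mod_setenv",
"mod_rewrite",
"mod_expire"
)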
After enabling those modules (I hope I have the order correct - TODO: check lighttpd module order) everything seems to be functioning, albeit without a site_name (a quick modification to the user.vcl to add a default site, and all is working).
After uncommenting the lines in user.vcl for the lighttpd IP address on my home server for these sites and a reload of varnish, some of my Web sites are back online. Not the fastest loading times, but that is because static content is being loaded via my home server and not being cached by varnish.
Also, although this lighttpd installation is not intended for static content, I do want to re-enable my modifications to how lighttpd serves static content.
sudo nano /etc/lighttpd/includes/caching-hours-6.all
$HTTP["url"] =~ "^(.*)\.(?i)(gif|jpg|jpeg|png|ico)$" {
setenv.add-response-header = (
"Pragrma" => "Private",
"Cache-Control" => "private"
)
expire.url = (
"" => "access 6 hours"
)
}
$HTTP["url"] =~ "^(.*)\.(?i)(css|js|svg|woff|ttf|otf|eot)$" {
setenv.add-response-header = (
"Pragma" => "Private",
"Cache-Control" => "private",
"Access-Control-Allow-Origin" => "*",
"X-Content-Type-Options" => "nosniff"
)
expire.url = (
"" => "access 6 hours"
)
}
sudo nano /etc/lighttpd/includes/caching-months-1.all
$HTTP["url"] =~ "^(.*)\.(?i)(gif|jpg|jpeg|png|ico)$" {
setenv.add-response-header = (
"Pragrma" => "Private",
"Cache-Control" => "private"
)
expire.url = ( "" => "access 1 months" )
}
$HTTP["url"] =~ "^(.*)\.(?i)(css|js|svg|woff|ttf|otf|eot)$" {
setenv.add-response-header = (
"Pragma" => "Private",
"Cache-Control" => "private",
"Access-Control-Allow-Origin" => "*",
"X-Content-Type-Options" => "nosniff"
)
expire.url = ( "" => "access 1 months" )
}
sudo nano /etc/lighttpd/includes/caching-none.johncook.co.uk
$HTTP["url"] =~ "^/domains/(.*)\.(gif|jpg|jpeg|png|ico|css|js|svg|woff|ttf|otf|eot)$" {
setenv.add-response-header = (
"Pragrma" => "no-cache",
"Cache-Control" => "no-cache",
"Expires" => "-1"
)
}
The last of these 3 includes is for a sub-directory used by the previous owner of johncook.co.uk, basically saying "don't cache this resource" (not that a resource should exist - a 410 Gone should be returned).
The other two are for differing levels of caching of static content. If I am planning on making changes I switch the include from "1 months" to "6 hours" so new visitors won't be left with old content in their cache for a month, and when something like a JavaScript or CSS file changes I create a new symlink to it and change the references to the new file name.
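The symlink part of that is nothing fancy, just something along these lines (hypothetical file names):
cd /home/www/var/www/johncook_co_uk/css
ln -s main.css main-20150126.css
# ...then update the references in the pages to main-20150126.css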
My lighttpd configuration file for this vhost is not the cleanest it could be, so I'll make some modifications now.
sudo nano /etc/lighttpd/vhosts.d/ula.johncook.uk
$SERVER["socket"] == "[fdd7:5938:e2e6:1::80:c]:80" {
include "includes/johncook.uk-1"
include "includes/ssl-hsts-seconds-0.all"
$HTTP["host"] == "www.johncook.co.uk" {
server.indexfiles = ( "" )
url.rewrite-once = ( "^.*$" => "$0" )
url.redirect-code = 307
url.redirect = ( "^(.*)$" => "https://web.johncook.co.uk$0" )
}
url.redirect = (
"^/favicon.ico$" => "https://web.johncook.uk/img/favicon.ico",
"^/img/favicon-4.ico$" => "https://web.johncook.uk/img/favicon.ico"
)
include "includes/johncook.uk-2"
}
sudo nano /etc/lighttpd/includes/ssl-hsts-seconds-0.all
setenv.add-response-header = (
"Strict-Transport-Security" => "max-age=0"
)
sudo nano /etc/lighttpd/includes/johncook.uk-1
var.servername = "/johncook_co_uk"
server.document-root = var.basedir + var.servername
include "includes/compress.file-types"
include "includes/trusted-proxies.all"
sudo nano /etc/lighttpd/includes/compress.filetypes
$HTTP["url"] !~ "(/|\.php)(\?.*)?$" {
compress.filetype = (
"application/x-javascript",
"application/x-javascript; charset=utf-8",
"application/javascript",
"application/javascript; charset=utf-8",
"text/javascript",
"text/javascript; charset=utf-8",
"text/x-js",
"text/x-js; charset=utf-8",
"text/css",
"text/css; charset=utf-8",
"text/xml",
"text/xml; charset=utf-8",
"text/html",
"text/html; charset=utf-8",
"text/plain",
"text/plain; charset=utf-8",
"image/svg+xml",
"image/svg+xml; charset=utf-8"
)
setenv.add-response-header = (
"X-Content-Type-Options" => "nosniff"
)
}
sudo nano /etc/lighttpd/includes/trusted-proxies.all
extforward.forwarder = (
"fdd7:5938:e2e6:1::6081:c" => "trust",
"fdd7:5938:e2e6:9660::6081:c" => "trust",
"fdd7:5938:e2e6:9660::80:c" => "trust"
)
sudo chown -R www-data:www-data /home/www/var/cache/lighttpd
sudo nano /etc/lighttpd/includes/johncook.uk-2
url.rewrite-once = (
# Annoying GET requests for /; were filling up logs
"^/;" => "/402.php",
# Annoying GET requests for a/ were filling up logs
"^a/" => "/503.php",
# 410 Gone for URLs previous owner used
"^/domains/css/styles.css" = "410.php",
# ... more 410 rewrites ...
# /robots.txt = /robots.php
"^/robots.txt$" => "/robots.php",
# GET "" - rewrite empty path as /
"^$" => "/",
# Everything that isn't a / ($1) followed by index/index.php/index.html, with/without get parameters, is invalid.
"^([^/]*)/index(\.php|\.html)?(\?.*)?$" => "/404.php",
# Everything that isn't a dot ($1) followed by .php, followed by nothing or something, is invalid.
"^([^\.]*)\.php(.*)?$" => "/404.php"
# *.php = *, *.php?blah = *?blah
"^([^?]*)\.php(\?.*)?$" => "$1$2",
# /404, /403, /503, are reserved for error pages, return a 404 on attempted access.
"^\d{3}(\?.*)?$" => /404.php
# ^$ = /index.php, ^?blah=meh$ = /index.php?blah=meh
"^(\?.*)?$" => "/index.php?$1"
# /blah/ = /blah/index.php
"/(.*)/$" => "/$1/index.php",
)
url.rewrite-if-not-file = (
# Everything that isn't a dot ($1), followed by .php, followed by optional get parameters ($2), might mean /$1.php?$2
"^([^\.*)\.php/(\?.*)?$" => "/$1.php?$2",
# Inside an infinite number of sub-directories, if there is no index.php in the deepest one, return a 404.
"^/(.*)/index.php" => "/404.php",
# /, with optional get parameters ($1), probably means /index.php?$1
"/(\?.*)?$" => "/index.php?$1",
# Everything that isn't a question mark ($1) followed by a question mark and everything after it ($2),
# probably means /$1.php$2
"^([^?]*)(\?.*)?$" => /$1.php$2",
)
compress.cache-dir = compress.cache-dir + var.servername
include "includes/caching-months-1.all"
include "includes/caching-none-johncook.co.uk"
$HTTP["url"] =~ "^/img/(.*)\.(?i)(gif|jpg|jpeg|png|ico)$" {
static-file.etags = "disable"
}
server.error-handler-404 = "/404.php"
server.indexfiles = ( "" )
server.dir-listing = "disable"
There are a lot more optimisations I could do, and if lighttpd supported includes within () I could split some of this file to separate include files. Unfortunately, that is not the case, so as long as /domains/* keeps getting requests I cannot split the content out.
On the other hand, some of this content could be split out. If I move everything that is about "clean URLs" out, I end up with the following:
sudo nano /etc/lighttpd/includes/johncook.uk-2
include "includes/pretty-urls-1.johncook.uk"
include "includes/pretty-urls-2.all"
compress.cache-dir = compress.cache-dir + var.servername
include "includes/caching-months-1.all"
include "includes/caching-none-johncook.co.uk"
$HTTP["url"] =~ "^/img/(.*)\.(?i)(gif|jpg|jpeg|png|ico)$" {
static-file.etags = "disable"
}
server.error-handler-404 = "/404.php"
server.dir-listing = "disable"
sudo nano /etc/lighttpd/includes/pretty-urls-1.johncook.uk
url.rewrite-once = (
# Annoying GET requests for /; were filling up logs
"^/;" => "/402.php",
# Annoying GET requests for a/ were filling up logs
"^a/" => "/503.php",
# 410 Gone for URLs previous owner used
"^/domains/css/styles.css" = "410.php",
# ... more 410 rewrites ...
# /robots.txt = /robots.php
"^/robots.txt$" => "/robots.php",
# GET "" - rewrite empty path as /
"^$" => "/",
# index, index.php, index.html, with/without leading /, with/without get parameters,
# is invalid if rewriting is working
"^([^/]*)/index(\.php|\.html)?(\?.*)?$" => "/404.php"
# *.php, without a double extension, with/without get parameters, is invalid
"^([^\.]*)\.php(.*)?$" => "/404.php"
# *.php = *, *.php?blah = *?blah
"^([^?]*)\.php(\?.*)?$" => "$1$2",
# /404, /403, /503, are reserved for error pages, return a 404 on attempted access.
"^\d{3}(\?.*)?$" => /404.php
# ^$ = /index.php, ^?blah=meh$ = /index.php?blah=meh
"^(\?.*)?$" => "/index.php?$1"
# /blah/ = /blah/index.php
"/(.*)/$" => "/$1/index.php",
)
sudo nano /etc/lighttpd/includes/pretty-urls-2.all
url.rewrite-if-not-file = (
# Everything that isn't a dot ($1), followed by .php, followed by optional get parameters ($2), might mean /$1.php?$2
"^([^\.*)\.php/(\?.*)?$" => "/$1.php?$2",
# Inside an infinite number of sub-directories, if there is no index.php in the deepest one, return a 404.
"^/(.*)/index.php" => "/404.php",
# /, with optional get parameters ($1), probably means /index.php?$1
"/(\?.*)?$" => "/index.php?$1",
# Everything that isn't a question mark ($1) followed by a question mark and everything after it ($2),
# probably means /$1.php$2
"^([^?]*)(\?.*)?$" => /$1.php$2",
)
server.indexfiles = ( "" )
With this configuration as it is, it shouldn't be that difficult to port it to my VPS. Other than checking the modules are correct there is probably very little modification needed of a standard lighttpd configuration file.
Varnish Cache on Home Server
The installation instructions for Varnish 4.0 on Debian are pretty similar to those for Ubuntu - the only difference is the URL for sources.list.
After installation, I copied and pasted the contents of /etc/default/varnish on my VPS into the same file on my home server, replacing the IP address with the one for varnish on my home server.
I then copied and pasted user.vcl over, and (because of how I coded it) all I had to do was remove the references to its own IP address (i.e. the johncook_home_varnish block and the fallback line).
A restart later, and upon firing up varnishadm on my VPS, typing backend.list listed both johncook_home_varnish and johncook_home_lighttpd as Healthy 5/5.
Although I could do more testing, I can't pause and spend time coding my Web site. I am still unable to receive e-mail, and that should be the next priority.
New E-mail Setup
My previous e-mail configuration wasn't that bad. What I want, however, is to have my VPS just be a proxy to/from my home server. Thus, it will be like the following:
- VPS
- SMTP incoming (store and forward to home server).
- SMTP outgoing (receive and forward from home server).
- Webmail (via Home Server).
- Home Server
- SMTP inbound (receive and store from VPS).
- SMTP outbound (store and forward to VPS).
- IMAP.
- Webmail.
As with my Web server and Googlebot, I need to ensure that no SMTP connections are possible until everything is configured correctly, otherwise some mail might end up erroneously bouncing.
I need my mail server to do the following:
- Domainkeys
- DKIM
- SPF
- DNSBL lookups
- DMARC
In order to prevent erroneous bounces while I am setting things up, I have modified my iptables rules so incoming connections to ports 25, 587, and 993 are rejected.
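Those rules are nothing more complicated than something like this (a sketch of the shell equivalent; my actual rules live in the iptables.save/ip6tables.save files):
sudo iptables -I INPUT -p tcp -m multiport --dports 25,587,993 -j REJECT
sudo ip6tables -I INPUT -p tcp -m multiport --dports 25,587,993 -j REJECT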
As I am going to be using MySQL for the domains and mailboxes database, I want to set it up in a way so that what is on my VPS is mirrored to my home server.
MySQL Master/Slave Setup
What I am wanting to do is have the same mail database on my home server and my VPS. It looks like master/slave replication is what I want to do.
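I have not configured the replication itself yet, but the usual starting point on the master side of my.cnf is a server-id plus binary logging, something like the following (an illustrative sketch only; the database name is hypothetical):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = mail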
The first issue is going to be that of IP addresses. At the moment my home server is using IP address 127.0.0.1, and the last time I looked IPv6 support in MySQL was lacking.
mysql --version reports I have version 5.5.41 installed on my home server (IPv6 address support was added in version 5.5.3). ping6 -c2 ::1 confirms IPv6 is configured.
To enable IPv6 support in MySQL, section 5.1.9 of the MySQL 5.5 manual says I need to add additional --bind-address options.
sudo nano /etc/mysql/my.cnf
bind-address = 127.0.0.1
bind-address = ::1
sudo service mysql restart
Testing the connection to MySQL using the command mysql -h ::1 -u smsd -p worked, so I can now move on to adding a ULA IPv6 IP address.
I am going to use IPv6 IP addresses fdd7:5938:e2e6:1::3306:1 for server 1 (home server) and fdd7:5938:e2e6:9660::3306:2 for server 2 (VPS).
But first, I know that my IPv6 ULA tunnel is not currently encrypted. I therefore want to use SSL connections in mysql.
SSL Encryption
mysql -u root -p
show variables like 'have_ssl';
quit
The Variable_name have_ssl has a Value of DISABLED. According to the MySQL manual 6.3.6.2, this means SSL support is compiled in, but the server wasn't started with the relevant --ssl-xxx options.
sudo nano /etc/mysql/my.cnf
The relevant options are:
ssl-ca=
ssl-cert=
ssl-key=
As I do not currently have e-mail, I cannot generate a new certificate through StartSSL. I am therefore going to use an existing certificate on my server, for calendar.thejc.me.uk.
ssl-ca=/etc/ssl/calendar.thejc.me.uk/sub.class1.server.sha2.ca.pem
ssl-cert=/etc/ssl/calendar.thejc.me.uk/calendar_thejc_me_uk.crt
ssl-key=/etc/ssl/calendar.thejc.me.uk/calendar_thejc_me_uk.key
mysql -h ::1 --ssl-ca=/etc/ssl/calendar.thejc.me.uk/sub.class1.server.sha2.ca.pem -u root -p
show variables like 'have_ssl';
show status like 'ssl_cipher';
quit
The Value of have_ssl is now YES, and that connection was encrypted using Ssl_cipher DHE-RSA-AES256-SHA.
IPv6 Addresses
So far so good, now to add IP address fdd7:5938:e2e6:1::3306:1 to eth0, mysql, and iptables. Although I am adding the IP using my ipv6-addresses init.d script, and to ip6tables by editing my ip6tables.bak file and using ip6tables-restore, I will put the commands here as if they were typed in a shell.
sudo ip -6 addr add fdd7:5938:e2e6:1::3306:1/128 dev eth0
sudo ip6tables -A in-new -d fdd7:5938:e2e6:1::3306:1 -s fdd7:5938:e2e6:9660::3306:2 -p tcp -m tcp --dport 3306 -j ACCEPT
sudo nano /etc/mysql/my.cnf
bind-address = fdd7:5938:e2e6:1::3306:1
sudo service mysql restart
And now MySQL is only listening on that single IP address. A revert of changes later, and I am now wondering what to do.
It turns out that MySQL can only listen to "one" socket, or put another way, only one IP address. Since 127.0.0.1 and ::1 are equivalent, that is considered a single IP address (::ffff:127.0.0.1 is also the same IP address). Time to think in a different way.
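A quick way to confirm what mysqld is actually bound to (a minimal check, assuming net-tools or iproute2 is installed) is to look at its listening sockets:
sudo netstat -plnt | grep mysqld
sudo ss -lntp | grep mysqld
With a single bind-address in effect there should only be one listening TCP socket for mysqld.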
I had a similar issue with a recursive DNS server. What I did with that was to use iptables to block access. Therefore, if I am going to open up MySQL to ::1, I need a similar way of doing things.
Luckily, my ip6tables.bak file includes the stuff from my DNS filter:
*filter
...
:in-new -
:permitted-dns -
...
-A INPUT -m state --state NEW -j in-new
...
-A in-new -d 2001:470:1f09:1aab::b:53/128 -p tcp -m tcp --dport 53 -j permitted-dns
-A in-new -d 2001:470:1f09:1aab::b:53/128 -p udp -m udp --dport 53 -j permitted-dns
-A in-new -d fdd7:5938:e2e6:1::b:53/128 -p tcp -m tcp --dport 53 -j permitted-dns
...
-A permitted-dns -s 2001:470:1f09:1aab::/64 -j ACCEPT
-A permitted-dns -s fdd7:5938:e2e6:1::/64 -j ACCEPT
...
-A permitted-dns -j REJECT
Thus, what I need is something pretty similar for MySQL connections.
*filter
...
:permitted-mysql -
...
-A in-new -d ::1/128 -p tcp -m tcp --dport 3306 -j permitted-mysql
-A in-new -d fdd7:5938:e2e6:1::3306:1/128 -p tcp -m tcp --dport 3306 -j permitted-mysql
-A in-new -d ::/0 -p tcp -m tcp --dport 3306 -j permitted-mysql
...
-A permitted-mysql -s ::1/128 -j ACCEPT
-A permitted-mysql -s fdd7:5938:e2e6:1::3306:1/128 -j ACCEPT
-A permitted-mysql -s fdd7:5938:e2e6:9660::3306:2/128 -j ACCEPT
-A permitted-mysql -j REJECT
Likewise, for my iptables.bak:
*filter
...
# Enable MySQL from only localhost
-A INPUT -s 127.0.0.1 -p tcp -m tcp --dport 3306 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3306 -j REJECT
It is hard to test because, locally, iptables has no real effect. Using lynx to try to connect from my VPS, however, results in a connection refused error (i.e. the -j REJECT is working), and lynx from my home server results in a MySQL error that my IP address "is not allowed to connect to this MySQL server".
Based on this, I think my iptables and ip6tables rules are working. SSL is working also. A grc.com Shields Up! test on port 3306 shows it is closed, so all looks good.
A switch from -j REJECT to -j DROP, a reload of the iptables and ip6tables rules, and another Shields Up! test shows the port as stealthed.
The reason I want the port to be "stealthed" is that showing as closed would indicate that MySQL is installed on my IP addresses - it should only show as "listening" on the IP addresses configured in my firewall. Since MySQL doesn't allow listening on an arbitrary set of IP addresses (it either listens on all of them or on just one), I have used my firewall rules to simulate MySQL only listening on particular IP addresses. Onwards to replication.
Replication Master
My home server is going to be acting as master in this setup. I do not want all databases to be replicated, however, so I will need to specify the databases (none specified is assumed to mean all databases).
First, I am going to login to mysql as the root user, and using my full database backup recreate the mail database. The aliases records were too long for the MySQL CLI, so I split them up into a dozen or so INSERTs. With the mail database fully restored, I can now move on to granting permissions on it.
I am going to need to create a user, as follows:
mysql -u root -p
grant replication slave on *.* to 'replicator'@'%' identified by 'randompassword' require ssl;
grant select on mail.* to 'mail'@'localhost' identified by 'mail';
flush privileges;
In order to ensure the replication slave doesn't replicate all databases, I need to make some final changes to /etc/mysql/my.cnf:
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = mail
A restart of MySQL later, and there are just a couple more changes needed.
mysql -u root -p
use mail;
flush tables with read lock;
show master status;
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
| mysql-bin.000001 | 182      | mail         |                  |
We need the File and Position later.
MySQL Replication Slave
As I have a nearly fresh installation of Ubuntu Server 14.04 LTS, I don't yet have MySQL installed.
sudo apt-get install mysql-server mysql-client
After entering a password for the database user 'root', a little while later and mysql is installed.
sudo nano /etc/mysql/my.cnf
[client]
...
ssl-ca = /etc/ssl/StartCom/ca-bundle.pem
...
[mysqld]
...
bind-address = 127.0.0.1
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
relay-log = /var/log/mysql/mysql-relay-bin.log
binlog_do_db = mail
After restarting MySQL, login using mysql -u root -p and set up replication:
change master to MASTER_HOST='fdd7:5938:e2e6:1::3306:1',MASTER_USER='replicator',MASTER_PASSWORD='randompassword',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=182,MASTER_SSL=1,MASTER_SSL_CA='/etc/ssl/StartCom/ca-bundle.pem';
grant select on mail.* to 'mail'@'localhost' identified by 'mail';
flush privileges;
quit;
Another restart of MySQL, another login to MySQL, and a show slave status\G command shows the output and a Slave_IO_State of Waiting for master to send event.
At this point, back on the master MySQL server, enter the command unlock tables; to make the mail database writeable again.
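Back on the slave, a quick check that replication is actually running is worthwhile (a small sketch; the grep just trims show slave status down to the interesting fields):
mysql -u root -p -e "show slave status\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Master_SSL_Allowed'
Slave_IO_Running and Slave_SQL_Running should both say Yes, and Master_SSL_Allowed should say Yes given the require ssl grant on the master.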
It should be noted that security is not ideal. The replicator user can login from any IP address, but that had to be done because the mysql server doesn't have an option for an outgoing source address, making IP address matching difficult. I am not sure if MySQL yet has the ability to match IPv6 IP addresses by netmask... it does not appear to.
For time reference, I have done a bit more Googling and it looks like mail might start bouncing if a mail server is down for 72 hours. That means I have 5 hours to get e-mail up and running again.
With the databases synchronised and iptables/ip6tables blocking incoming SMTP/Submission/IMAP connections, it is time to work out how to set up e-mail using MySQL and Postfix, with my VPS acting as a forwarding server.
Postfix Installation
sudo apt-get install postfix postfix-mysql postfix-policyd-spf-python opendkim opendmarc
For Postfix Configuration, I have chosen Internet Site as it is the default. The System mail name is likewise the default, vps2.thejc.me.uk.
After installation, Postfix has automatically started running. With my iptables and ip6tables rules changed from ACCEPT to REJECT, all connections to the IP addresses postfix used to accept mail on now result in connection refused errors when tested with telnet. It does, however, mean that IP addresses that were not used previously can now be used for testing purposes.
sudo nano /etc/postfix/main.cf
myhostname = mail3.thejc.me.uk
mydestination = mail3.thejc.me.uk, mail2.thejc.me.uk, vps2.thejc.me.uk, localhost
relayhost =
mynetworks = 127.0.0.1/8 [::ffff:127.0.0.1]/104 [::1]/128
inet_interfaces = all
relay_domains = proxy:mysql:/etc/postfix/mysql/relay_domains.conf
relay_recipient_maps = proxy:mysql:/etc/postfix/mysql/relay_recipient_maps.conf
relay_transport = smtp:[fdd7:5938:e2e6:1::25:1]
sudo mkdir /etc/postfix/mysql/
sudo nano /etc/postfix/mysql/relay_domains.conf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = domain
select_field = domain
where_field = domain
additional_conditions = and active = '1'
#query = select domain from domain where domain='%s' and active='1'
sudo nano /etc/postfix/mysql/relay_recipient_maps.conf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = alias
select_field = address
where_field = address
additional_conditions = and active = '1'
#query = select address from alias where address='%s' and active='1'
cd /etc/postfix/mysql/
sudo postmap relay_domains.conf
sudo postmap relay_recipient_maps.conf
sudo postconf
sudo service postfix restart
At this point, testing in telnet shows that valid addresses are accepted and invalid addresses result in a 550 error. There is still some work to do, but this is an ideal result - if 550 errors are returned for valid e-mail addresses mail would end up bouncing, whereas a 451 error for another configuration issue will result in mail being deferred.
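The MySQL maps can also be queried directly with postmap, which makes it easier to separate a map problem from an SMTP restriction problem (a quick sketch; the domain and address are placeholders, and the map paths should match whichever filenames main.cf points at):
postmap -q example.org mysql:/etc/postfix/mysql/relay_domains.conf
postmap -q someone@example.org mysql:/etc/postfix/mysql/relay_recipient_maps.conf
A match prints the stored value; no output (and a non-zero exit status) means the lookup found nothing.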
sudo nano /etc/postfix/main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
readme_directory = no
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mail3.thejc.me.uk
myorigin = /etc/mailname
smtp_bind_address = 149.255.99.50
smtp_bind_address6 = 2a03:ca80:8000:7673::19
mydestination = mail3.thejc.me.uk, mail2.thejc.me.uk, vps2.thejc.me.uk, localhost
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
relayhost =
mynetworks = [2a03:ca80:8000:7673::18]/127 [2a01:d0:8214::]/48 [2001:470:1f09:38d::]/64 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_protocols = all
inet_interfaces = 149.255.99.50,[2001:470:1f09:38d::25:1],[2a03:ca80:8000:7673::19]
relay_domains = mysql:/etc/postfix/mysql/relay_domains.cf
relay_recipient_maps = mysql:/etc/postfix/mysql/relay_recipient_maps.cf
relay_transport = smtp:[fdd7:5938:e2e6:1::25:1]
# the maximum permitted queue lifetime in postfix is 100d
maximal_queue_lifetime = 100d
# SSL - Server
smtpd_tls_cert_file = /etc/ssl/mail/mail3-startssl-cert.pem
smtpd_tls_key_file = /etc/ssl/mail/mail3-startssl-key.pem
smtpd_tls_CAfile = /etc/ssl/StartCom/ca-sha2.pem
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_security_level = may
smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
tls_random_source = dev:/dev/urandom
# SSL - Client
smtp_use_tls = yes
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# Server - Restrictions
smtpd_helo_required = yes
# TODO: helo check
smtpd_helo_restrictions = permit_mynetworks, permit
smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
smtpd_sender_restrictions = reject_unknown_sender_domain
smtpd_client_restrictions = reject_rbl_client zen.spamhaus.org=127.0.0.[2..8]
This is enough to get my VPS working as an incoming mail relay (store and forward). This is not the most secure (or spam-resistant) configuration, but I was pressed for time and didn't make any notes after the 67 hours down mark as at that point I found out some mail servers are configured to only retry delivery for 72 hours.
The above configuration is what I was using at the time of writing, with mail entering vps2. and being forwarded on to home. so I know it works. Some things were throwing errors so with just 15 minutes to go before the 72 hour mark I decided to make it less secure and come back to it in a few days once all that deferred mail backlog has come through.
I should point out that I am not using a separate database for my home server and my VPS, which is why on my VPS I am not testing whether a mailbox/alias has the backup mx flag set (my VPS does not store mail itself, other than for onwards delivery), nor am I using the location value for which mail server is the next hop (the location for all domains is the same - my home server).
The Milters
SPF, DKIM, and DMARC checks are something that I have decided to add 24 hours later, after looking at RAM usage.
SPF Checks
I am going to use the same configuration I previously used on my VPS.
sudo nano /etc/postfix-policyd-spf-python/policyd-spf.conf
debugLevel = 1
defaultSeedOnly = 1
HELO_reject = SPF_Not_Pass
Mail_From_reject = Fail
PermError_reject = True
TempError_Defer = False
skip_addresses = 149.255.99.50,2a03:ca80:8000:7673::18/127,2001:470:1f09:38d::/64,127.0.0.0/8,::ffff:127.0.0.0/104,::1/128
Domain_Whitelist = microsoft.com
sudo nano /etc/postfix/main.cf
smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, check_policy_service unix:private/policy-spf
# SPF
policy-spf_time_limit = 3600s
sudo nano /etc/postfix/master.cf
policy-spf unix - n n - - spawn
user=nobody argv=/usr/bin/policyd-spf
postconf
sudo service postfix reload
DomainKeys Identified Mail (DKIM)
sudo nano /etc/opendkim.conf
AuthservID mail3.thejc.me.uk
Syslog yes
UMask 002
ADSPAction reject
AlwaysAddARHeader 1
AuthservIDWithJobID 1
BaseDirectory /var/opendkim/
KeyTable /var/opendkim/key-table.tbl
SigningTable refile:/var/opendkim/signing-table.tbl
# InsecureKey = UnprotectedKey (DNSSEC related)
UnprotectedKey none
# InsecurePolicy = UnprotectedPolicy (DNSSEC related)
UnprotectedPolicy apply
LogWhy 1
Mode sv
RemoveOldSignatures 0
Socket inet6:8891@[fdd7:5938:e2e6:9660:7f00:1:b:8891]
SubDomains 0
sudo mkdir /var/opendkim/
sudo su
rsync -avrplogP --progress -e 'ssh -p 8043 -i /home/thejc/.ssh/thejc.vps2' thejc@home.thejc.me.uk:~/vps-backup-2015-01-24/var/opendkim/ /var/opendkim/
chown -R opendkim:opendkim .
exit
sudo service opendkim restart
sudo nano /etc/postfix/main.cf
# DKIM
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8891]:8891
non_smtpd_milters = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8891]:8891
DMARC
sudo nano /etc/opendmarc.conf
AuthservID mail3.thejc.me.uk
ForensicReports true
PidFile /var/run/opendmarc.pid
RejectFailures true
Socket inet6:8893@[fdd7:5938:e2e6:9660:7f00:1:b:8893]
UMask 0002
UserID opendmarc
AuthservIDWithJobID true
ForensicReportsSentBy dmarc@johncook.co.uk
PublicSuffixList /home/thejc/Scripts/output/effective_tld_names.dat
In order to use the PublicSuffixList option, we need a public suffix list. Mozilla update what is considered to be the globally recognised public suffix list at publicsuffix.org, but with such a large file potentially being requested by a billion machines per day (assuming each was running software that checks the list directly every 24 hours for updates) I do not want to waste their bandwidth.
For that reason, the following is more involved than just fetching the file using cron once a day (publicsuffix.org requests we don't check more often than that) but the result is that it will cut down bandwidth use considerably.
First we install curl, and then update the OS certificates list - when the certificates screen pops up, it should be fine to just leave things as default, tab to OK, and press enter.
Next we download the public suffix list:
- binding to my ethernet interface (--interface eth0:0) with my mail server IP address, so a reverse DNS lookup by Mozilla would show who downloaded it rather than every request using a random IP on my server;
- following any redirects (--location);
- using the OS certificate list (--cacert /etc/ssl/certs/ca-certificates.crt), because curl has trouble with the --capath option on my system;
- using the last modified time of a file (-z);
- creating directories if necessary (--create-dirs);
- requesting a compressed gzip copy of the file and decompressing it (--compressed) to the output file (-o);
- without any progress meter or error messages (--silent), but with verbose output so we can see the Last-Modified header among other things (--verbose).
The reason we need --verbose is because curl does not seem to set the modification time of the file to that in the Last-Modified header.
mkdir -p ~/Scripts/output
cd ~/Scripts
sudo apt-get install curl
sudo dpkg-reconfigure ca-certificates
curl --compressed --interface eth0:0 --location --cacert /etc/ssl/certs/ca-certificates.crt -z /home/thejc/Scripts/output/effective_tld_names.dat --create-dirs -o /home/thejc/Scripts/output/effective_tld_names.dat --verbose --silent https://publicsuffix.org/list/effective_tld_names.dat
The Last-Modified date is "Tue, 20 Jan 2015 04:26:09 GMT".
touch -t `date -d "Tue, 20 Jan 2015 04:26:09 GMT" +%Y%m%d%H%M.%S` /home/thejc/Scripts/output/effective_tld_names.dat
The Expires date is "Wed, 04 Feb 2015 19:25:33 GMT". I am not sure yet how I am going to do this programmatically, but in a script [ `date -d "$expires" +%s` -lt `date +%s` ] will return 1 (false) if the Expires date is in the future.
The reason we need to set the modification time of the file with touch after a successful (200) GET is because otherwise curl uses the current date, and some server software like Varnish does not compute If-Modified-Since < Last-Modified, rather it computes If-Modified-Since != Last-Modified.
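Putting those pieces together, here is a minimal sketch of the sort of update script I have in mind (dumping the response headers to a temporary file with -D takes the place of --verbose, and curl's -R/--remote-time option may make the touch unnecessary, but I have kept the touch to match the approach above; the Expires check still needs adding):
#!/bin/sh
OUT=/home/thejc/Scripts/output/effective_tld_names.dat
URL=https://publicsuffix.org/list/effective_tld_names.dat
HDRS=$(mktemp)
curl --compressed --interface eth0:0 --location \
  --cacert /etc/ssl/certs/ca-certificates.crt \
  -z "$OUT" --create-dirs -o "$OUT" --silent --dump-header "$HDRS" "$URL"
# On a 304 the file is left alone; after a 200, copy the Last-Modified header
# onto the file's modification time so future -z requests use it.
# (With --location there may be several header blocks, so take the last one.)
LASTMOD=$(grep -i '^Last-Modified:' "$HDRS" | tail -n 1 | cut -d' ' -f2- | tr -d '\r')
if [ -n "$LASTMOD" ]; then
  touch -d "$LASTMOD" "$OUT"
fi
rm -f "$HDRS"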
sudo nano /etc/postfix/main.cf
smtpd_milters = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8891]:8891 inet:[fdd7:5938:e2e6:9660:7f00:1:b:8893]:8893
non_smtpd_milters = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8891]:8891 inet:[fdd7:5938:e2e6:9660:7f00:1:b:8893]:8893
postconf
sudo service opendmarc restart
sudo service postfix reload
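It is also worth confirming that opendkim and opendmarc are actually listening on the sockets postfix has just been pointed at (a quick check, assuming the ss tool from iproute2 is available):
sudo ss -lntp | grep -E ':8891|:8893'
Both ULA addresses should show up, one with an opendkim process attached and one with opendmarc.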
Home Mail Server
Unlike with my VPS, I have a lot of software installed on my home server already, so it is a bit difficult to determine just what would need to be installed on a fresh Debian Wheezy installation. This is what I needed to install, anyway:
sudo apt-get install postfix postfix-mysql postfix-policyd-spf-python dkimproxy dovecot-core dovecot-mysql dovecot-sieve telnet
For Postfix Configuration this time I am choosing Local only so that postfix won't come up and start using my ISP IP address. For local domain, I am using home.thejc.me.uk.
Quick Dovecot Configuration
sudo su
cd /etc/dovecot
cp dovecot-sql.conf.ext dovecot-sql.conf
cd conf.d
cp auth-sql.conf.ext auth-sql.conf
exit
cd /etc/dovecot/
sudo nano dovecot.conf
...
listen = 127.0.0.1
sudo nano dovecot-sql.conf
...
driver = mysql
connect = host=127.0.0.1 dbname=mail user=mail password=mail
default_pass_scheme = MD5-CRYPT
password_query = SELECT username as user, password, '/home/vmail/%d/%n' as userdb_home, 'maildir:/home/vmail/%d/%n/mail' as userdb_mail, 5000 as userdb_uid, 5000 as userdb_gid FROM mailbox WHERE username = '%u' AND active = '1';
user_query = SELECT '/home/vmail/%d/%n' as home, 'maildir:/home/vmail/%d/%n/mail' as mail, 5000 as uid, 5000 as gid, CONCAT('dirsize:storage=', quota) AS quota FROM mailbox WHERE username = '%u' AND active = '1';
cd conf.d
sudo nano auth-sql.conf
...
userdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf
}
sudo nano 90-sieve.conf
...
plugin {
...
sieve = ~/.dovecot.sieve
...
sieve_global_dir = /var/lib/dovecot/sieve/global
...
sieve_quota_max_storage = 8M
}
sudo nano 15-lda.conf
protocol lda {
...
mail_plugins = $mail_plugins sieve
}
sudo nano 10-master.conf
...
service auth {
...
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
...
}
...
sudo nano 10-mail.conf
...
mail_location = maildir:/home/vmail/%d/%n/mail
...
sudo nano 10-auth.conf
...
auth_mechanisms = plain
...
Quick Postfix Configuration
cd /etc/postfix
sudo nano master.cf
...
dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient} -a ${original_recipient}
sudo mkdir mysql
cd mysql
sudo nano virtual_alias_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = alias
select_field = goto
where_field = address
additional_conditions = and active = '1'
#query = SELECT goto FROM alias WHERE address='%s' AND active = '1';
sudo nano virtual_domains_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = domain
select_field = domain
where_field = domain
additional_conditions = and backupmx = '0' and active = '1'
#query = SELECT domain FROM domain WHERE domain='%s' AND backupmx = '0' AND active = '1';
sudo nano virtual_mailbox_limit_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = mailbox
select_field = quota
where_field = username
additional_conditions = and active = '1'
#query = SELECT quota FROM mailbox WHERE username='%s' AND active = '1';
sudo nano virtual_mailbox_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = mailbox
select_field = CONCAT(domain,'/',maildir)
where_field = username
additional_conditions = and active = '1'
#query = SELECT CONCAT(domain,'/',maildir) FROM mailbox WHERE username='%s' AND active = '1';
sudo nano virtual_sender_login_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = mailbox
select_field = username
where_field = username
additional_conditions = and active = '1'
#query = SELECT username FROM mailbox WHERE username='%s' AND active = '1';
sudo postmap *.cf
cd ..
sudo nano main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
myhostname = home.thejc.me.uk
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = home.thejc.me.uk, PC2-JC.thejc.local, localhost.thejc.local, localhost
relayhost =
mynetworks = [2001:470:1f09:1aab::]/64 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
queue_directory = /var/spool/postfix
smtpd_sasl_auth_enable = yes
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
default_transport = error
relay_transport = error
inet_interfaces = [fdd7:5938:e2e6:1::25:1]
virtual_mailbox_domains = mysql:$config_directory/mysql/virtual_domains_maps.cf
virtual_mailbox_base = /home/vmail
virtual_mailbox_maps = mysql:$config_directory/mysql/virtual_mailbox_maps.cf
virtual_alias_maps = mysql:$config_directory/mysql/virtual_alias_maps.cf
virtual_minimum_uid = 5000
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_transport = dovecot
dovecot_destination_recipient_limit = 1
smtpd_sasl_type = dovecot
sudo groupadd -g 5000 vmail
sudo useradd -m -u 5000 -g 5000 -s /bin/bash -d /home/vmail vmail
sudo su
rsync -rtplogP --progress /home/thejc/vps-backup-2015-01-24/home/vmail/ /home/vmail
sudo chown -R vmail:vmail /home/vmail/
sudo service postfix restart
sudo service dovecot restart
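A rough end-to-end check at this point is to send a message to one of the relayed addresses from an external account and then watch it arrive (a sketch; /var/log/mail.log is the standard Debian location, and the find is just looking for recently written maildir files):
sudo tail -n 50 /var/log/mail.log
sudo find /home/vmail -type f -mmin -10 | tail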
I believe I have covered all the configuration file changes I had to make to get things to start working properly.
There are still numerous things that need to be done before my mail system is fully functional:
- Outgoing mail.
- DKIM/Domainkey signing of outgoing mail.
- DKIM/Domainkey checking of incoming mail.
- SPF checking of incoming mail (policyd-spf.conf).
- Configure my VPS to recognise my home server for outgoing mail.
- DKIM/Domainkey/SPF/DMARC/other checks on outgoing mail.
- IMAP server (home server) configuration.
- Webmail server (VPS via home server) setup.
- DMARC and ADSP Discard rejection on incoming mail.
- Header checks on incoming mail.
- helo_access.cf checks
- Sieve/Managesieve Dovecot configuration (do I need to install something that automatically creates the symlinks?)
- Mail Delivery Stack (Dovecot)?
- Correct SSL/TLS configuration.
Something not on this list, but which is needed to reduce latencies in my IPv6 ULA network, is to switch from going over my IPv6 tunnels to tunnelling between IPv4 IP addresses instead.
I also need to add encryption, as connections between my ULA IPv6 IPs are not currently secured. That is the reason I have been sure to use TLS in my postfix configuration on both my VPS and home server.
Backups are going to be very important now as the only place my e-mails are stored is in-house. I am not yet sure how I am going to approach backups, but I think I will want something more than just a copy on my server backup hard disk.
Encryption is another subject I am not sure how to approach. Although my server uses an encrypted filesystem, do I want individual e-mails to be encrypted? Will that make reading them using e-mail clients a lot more difficult or impossible?
I have made one change to the main.cf configuration file for postfix on my home server:
smtpd_tls_cert_file = /etc/ssl/calendar.thejc.me.uk/calendar_thejc_me_uk.pem
smtpd_tls_key_file = /etc/ssl/calendar.thejc.me.uk/calendar_thejc_me_uk.key
smtpd_tls_CAfile = /etc/ssl/calendar.thejc.me.uk/sub.class1.server.sha2.ca.pem
I will need to change the MySQL passwords for both replication and select access. The passwords will need to be changed as follows:
- Replication
- Master MySQL Server (user replicator)
- Slave MySQL Server (change master to)
- Select
- VPS
- Postfix - mysql/*.conf
- MySQL (user mail)
- Home Server
- Postfix - mysql/*.cf
- Dovecot - dovecot-sql.conf
- MySQL (user mail)
I won't be doing that now, however, as I want to leave things for 72-96 hours before doing anything with my mail servers just in case an e-mail server decides to try its final attempt at delivering a deferred e-mail during a time I break the configuration.
A further 3-4 days would make it 6-7 days after my mail server went down, so the odds of mail that is currently in a deferred/try-later state being bounced as undeliverable will be insignificant (assuming no more messages highlight more configuration issues).
Static Content Web Server
At the moment, my sites that are up on my VPS are having dynamic content cached temporarily in varnish, and static content is being piped (streamed?) via my home server.
Now that I have dealt with a time-sensitive configuration (e-mail servers) I can now look at static content.
As all of my (new) sites use the same CSS, JavaScript, and favicon.ico, there is the question of whether using a cookieless domain for static content will provide benefits over using SPDY.
When it comes to caching, however, it would possibly be more beneficial that static content uses the same domain over the three sites so that switching sites will not result in duplicate content in the cache from different domains.
As an example, let's take a look at the different "unique" domains where the CSS for this page can be found:
- https://web.johncook.uk
- http://web.johncook.uk
- https://johncook.uk
- http://johncook.uk
- https://web.watfordjc.uk
- http://web.watfordjc.uk
- https://watfordjc.uk
- http://watfordjc.uk
- https://web.johncook.co.uk
- http://web.johncook.co.uk
- https://johncook.co.uk
- http://johncook.co.uk
That is a potential 12 copies of the same CSS and JavaScript files stored in your browser cache.
An alternative to using a cookieless domain for static content would be to use a single domain for static content.
Since https://web.johncook.uk is the canonical domain for everything but the content I flag as NSFW, using that for static content would mean there is a good chance it is the current domain anyway.
If I also move favicon.ico to the same domain, and 301 redirect /favicon.ico to https://web.johncook.uk/img/favicon.ico, for browsers that support SPDY there will be a chance a connection to https://web.johncook.uk is already open when it finds the linked stylesheet.
Going by my server logs, some browsers make a request for /favicon.ico automatically, even though it is not referenced on any pages (nor has it ever existed).
Of course the major factor in page loading times, and how responsive the page seems on first load on a cold cache, is (a) how quickly the page content loads, and (b) how quickly the render-blocking elements load.
Having moved most of the JavaScript to the bottom of the <body> element, that leaves the favicon.ico and the CSS.
If the favicon.ico covers a DNS lookup and HTTPS connection (assuming the page being loaded is not on the same domain and protocol), then as long as it isn't too big it might help the CSS GET request go out over an already open connection. Speaking of the size of the favicon.ico, I have just made some modifications to mine, reducing the size from 5,430 bytes to 326 bytes (a 94% size reduction).
I have decided to redirect /favicon.ico and /img/favicon-4.ico (the latter being a test file that has since been renamed to /img/favicon.ico) to https://web.johncook.uk/img/favicon.ico. I have also decided to use an absolute URI to reference it inside the <head> tag.
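A minimal sketch of how those redirects might look in nginx (the exact location blocks are my guess at how I will end up doing it, placed in the per-site configuration for each non-canonical domain):
location = /favicon.ico {
return 301 https://web.johncook.uk/img/favicon.ico;
}
location = /img/favicon-4.ico {
return 301 https://web.johncook.uk/img/favicon.ico;
}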
I have likewise switched to using an absolute URI for the stylesheets and JavaScript. Although it means there will be the HTTPS negotiation overhead, it does mean that the URI for fonts and jquery is now identical across my sites.
What Web Server?
The first thing I need to do is install lighttpd. But do I want to use lighttpd? The reason I am using it is mainly for PHP processing and static files. Also, most of my redirects and rewrites are done within lighttpd.
But if I am using nginx anyway, can I just use nginx for PHP? I have previously tried to get PHP (fastcgi as well as fpm) to work with nginx but in the end just gave up. But, now is as good a time as any to investigate things.
I need Web servers for two things: static content like images and text like JavaScript and CSS, and dynamic content - PHP.
As I am using nginx for HTTPS and HTTP termination, how does it perform when it comes to serving static files directly from disk? Here is my current nginx configuration for static and dynamic content:
location ~* \.(gif|jpg|jpeg|png)$ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
location / {
proxy_pass http://johncook.varnish-lighttpd;
include /etc/nginx/includes/proxy.static;
location /img/ {
proxy_pass http://johncook.lighttpd;
include /etc/nginx/includes/proxy.static;
}
The sections for /js/ and /css/ are identical to that for /img/. proxy.static contains (with the included proxy_params include expanded in place):
proxy_set_header Accept-Encoding $spdy_ae;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Is-Spdy $spdy_connection;
proxy_pass_request_headers on;
Now the nginx serving static content guide gives details about the root directive, the try_files directive, and also that a backend can be added to the try_files directive.
With a bit of trial and error, my nginx configuration file johncook.uk now looks like the following:
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name web.johncook.uk;
include /etc/nginx/includes/web.johncook.uk-ssl;
root /home/www/var/www/johncook_co_uk;
etag off;
autoindex off;
location ~* \.(gif|jpg|jpeg|png|ico)$ {
try_files $uri @static;
add_header Pragma "Private";
add_header Cache-Control "private";
expires 1M;
sendfile on;
tcp_nodelay on;
tcp_nopush on;
}
location ~* \.(css|js|svg|ttf|otf)$ {
try_files $uri @static;
# add_header Vary "Accept-Encoding";
add_header Pragma "Private";
add_header Cache-Control "private";
add_header Access-Control-Allow-Origin "*";
add_header X-Content-Type-Options "nosniff";
expires 1M;
sendfile on;
tcp_nodelay on;
tcp_nopush on;
}
location ~* \.(woff|eot)$ {
try_files $uri @static;
add_header Pragma "Private";
add_header Cache-Control "private";
add_header Access-Control-Allow-Origin "*";
add_header X-Content-Type-Options "nosniff";
expires 1M;
sendfile on;
tcp_nodelay on;
tcp_nopush on;
}
location / {
try_files @dynamic @dynamic;
}
location @dynamic {
proxy_pass http://johncook.dynamic;
include /etc/nginx/includes/proxy.static;
}
location @static {
proxy_pass http://johncook.static;
include /etc/nginx/includes/proxy.static;
}
server {
include /etc/nginx/includes/web.johncook.uk-ips-https;
include /etc/nginx/includes/web.johncook.uk-ips-http;
server_name johncook.uk;
include /etc/nginx/includes/web.johncook.uk-ssl;
return 307 https://web.johncook.uk$request_uri;
}
And my upstream servers file now includes the following:
upstream johncook.static {
server [fdd7:5938:e2e6:1::80:c]:80;
}
upstream johncook.dynamic {
server [fdd7:5938:e2e6:9660::6081:c]:6081;
server [fdd7:5938:e2e6:1::80:c]:6081 backup;
server [fdd7:5938:e2e6:1::80:c]:80 backup;
}
This is the closest I can get my nginx configuration to my lighttpd configuration for static files. There are several issues with nginx.
No matter what setting I used, I could not see nginx sending a header to vary by Accept-Encoding. Using curl, however, it turns out nginx does send the Vary: Accept-Encoding header, so I have commented out that line in the above configuration. Connections over SPDY are not sent the Accept-Encoding Vary header, because all connections over SPDY must support gzip.
The etag directive can only be set to on or off, so unlike with lighttpd I cannot set it to be based on just modification time and size.
Because nginx uses file streaming when compressing with gzip, Content-Length headers are not sent for files compressed on-the-fly. For static files that don't change that often and can be compressed, I have set gzip_static on; in the http { } section of nginx.conf.
By running gzip -k css/combined.min.css and gzip -k js/combined.min.js, and then creating symlinks to those files with the .gz extension appended to the symlink name (e.g. combined.2015-01-01R001.min.js.gz -> combined.min.js.gz), Content-Length headers are sent by nginx because it doesn't have to compress on-the-fly.
What files should I make a statically compressed .gz version of? All of the ones that I am telling nginx to compress, which are listed in the gzip_types directive in nginx.conf. Those are, in short, .txt, .css, .json, .js, .xml, .rss, and .svg. I have already gzipped my .css and .js files, but there are a few more files that can be compressed:
- /css/foundation-icons.svg
- /css/foundation-icons.ttf
- /css/svgs/*
- /img/by-sa.svg
I should probably add .ttf's MIME type to the gzip_types directive, plus other missing text files at some point.
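A small sketch of how pre-compressing the files in the list above could be scripted from the webroot (gzip -k needs gzip 1.6 or later, which is an assumption about the system, and the webroot path is the root directive from the nginx configuration above):
cd /home/www/var/www/johncook_co_uk
for f in css/foundation-icons.svg css/foundation-icons.ttf css/svgs/* img/by-sa.svg; do
  # keep the original and write a .gz copy next to it for gzip_static to pick up
  gzip -k -f "$f"
done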
Nginx Pitfalls includes a number of suggestions for rewrite rule best practices, which I will revisit later.
Dynamic Content Web Server
Until now I have been using two instances of varnish in front of lighttpd on my home server. As I have now switched over to nginx for static content, can I do the same for dynamic content?
Obviously, I am going to be using varnish in front of whatever processes PHP most of the time, but on those occasions when I break varnish (or my home connection goes down) I need a fallback method. For this, I have decided to use php-fpm.
Before doing that, however, I need to modify all my ESI includes. At the moment, my ESI includes look like this:
<esi:include src="/inc/esi/nav_head"/>
The problem is, that doesn't deal with those times when the file is being accessed directly, rather than via varnish. Thus, I need to test for the Surrogate-Capability HTTP header:
<?php
if (isset($_SERVER['HTTP_SURROGATE_CAPABILITY']) && strstr($_SERVER['HTTP_SURROGATE_CAPABILITY'], "ESI/1.0")) {
  echo '<esi:include src="/inc/esi/nav_head"/>';
} else {
  include $_SERVER['DOCUMENT_ROOT']."/inc/esi/nav_head.php";
}
?>
It isn't pretty, but it does the job. Having glanced at the documentation for the Surrogate-Capability header, varnish on my VPS and varnish on my home server should, really, each have their own name rather than "varnish" in the string varnish=ESI/1.0.
That would probably make things easier when it comes to determining in varnish whether to process the ESI includes or not. But I'll leave that to another time.
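For reference, a minimal sketch of what that might look like inside user.vcl on the VPS (the vps2varnish name is made up for illustration; the copy on my home server would use a different name, and VCL 4.0 syntax is assumed):
sub vcl_recv {
  # announce this cache's ESI support under its own name instead of the generic "varnish"
  set req.http.Surrogate-Capability = "vps2varnish=ESI/1.0";
}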
Anyway, with a bit of shifting code around I believe my site now works with and without downstream ESI support, so I can now look at adding PHP support on my VPS.
Having corrected a typo in my varnish user.vcl (the sample code in this page has also been corrected), it is probably a good time to look at my caching policy for my ESI includes. I have gone with a max-age of 7200 seconds (2 hours) and an s-maxage of 86400 seconds (24 hours). This is because the code for my site navigation now looks stable and unlikely to change.
In fact, if the code for the site navigation were to change, I would likely test it directly using the IP address for lighttpd and then use ban req.url ~ /inc/esi/ in varnishadm when I am happy with the changes, so the updated code gets pulled into the varnish caches on the next page load.
Looking around, I have decided to use the Zend OpCache in PHP 5.5 on my home server, which required some work:
wget http://www.dotdeb.org/dotdeb.gpg -O- | sudo apt-key add -
sudo nano /etc/apt/sources.list
# PHP 5.5
deb http://packages.dotdeb.org wheezy-php55 all
deb-src http://packages.dotdeb.org wheezy-php55 all
sudo apt-get update
sudo apt-get install php5-cgi php5-cli php5-fpm
sudo nano /etc/php5/cgi/php.ini
date.timezone = "Etc/UTC"
cgi.fix_pathinfo=0
opcache.enable=1
opcache.memory_consumption=64
opcache.interned_strings_buffer=4
opcache.max_accelerated_files=2000
opcache.use_cwd=1
opcache.load_comments=0
sudo ~/Scripts/sync-webroot-after-updates.sh
sudo service lighttpd restart
sudo service php5-fpm restart
Although I haven't started using php5-fpm yet, I thought I might as well install it at the same time.
I don't yet know whether the opcode caching is working, but I can now move on to installing PHP 5.5 on my VPS.
As Trusty Tahr has PHP 5.5 in the repositories, it is as simple as the following:
sudo apt-get update
sudo apt-get install php5-cli php5-fpm php5-gd
sudo nano /etc/php5/fpm/php.ini
date.timezone = "Etc/UTC"
cgi.fix_pathinfo=0
opcache.enable=1
opcache.memory_consumption=64
opcache.interned_strings_buffer=4
opcache.max_accelerated_files=2000
opcache.use_cwd=1
opcache.load_comments=0
sudo service php5-fpm restart
At this point I have PHP5-FPM installed, using the same (Zend) opcache options as on my home server. Next up is to try and modify nginx so that PHP files can be processed without using varnish.
sudo nano /etc/nginx/sites-available/johncook.uk
location / {
try_files = @dynamic;
}
location @dynamic {
try_files $uri.php @varnish;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location @varnish {
proxy_pass http://johncook.varnish;
include /etc/nginx/includes/proxy.static;
sendfile on;
tcp_nopush off;
tcp_nodelay on;
keepalive_requests 500;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
As I am using clean URLs, none of the files that reach the @dynamic location should have a .php ending, which is why I am not going to be using a regex to test for a .php extension.
I am going to be using PHP5 FPM with TCP sockets rather than UNIX sockets, as I will be moving it to a ULA IPv6 IP address after testing.
sudo nano /etc/php5/fpm/pool.d/www.conf
;listen = /var/run/php5-fpm.sock
listen = 127.0.0.1:9000
access.log = /var/log/$pool.access.log
catch_workers_output = yes
php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_value[log_errors] = on
sudo killall php5-fpm
sudo service php5-fpm start
sudo service nginx restart
One final tweak is to make caching possible when PHP5-FPM is being used. To do that, I need to make a bit more of a modification to /etc/nginx/sites-available/johncook.uk:
fastcgi_cache_path /var/cache/nginx/johncook_uk levels=1:2 keys_zone=johncook_uk:10m max_size=20m inactive=1h;
server {
...
location @dynamic {
try_files $uri.php @varnish;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_cache johncook_uk;
fastcgi_cache_key "$scheme$host$request_uri$request_method";
fastcgi_cache_valid 200 301 302 30s;
fastcgi_cache_use_stale updating error timeout invalid_header http_500;
fastcgi_pass_header Set-Cookie;
fastcgi_pass_header Cookie;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
include fastcgi_params;
}
...
}
sudo mkdir -p /var/cache/nginx/johncook_uk
sudo chown www-data:www-data /var/cache/nginx/johncook_uk
sudo service nginx restart
It should be noted that I am not testing for cookies (or anything else) with fastcgi_cache_bypass and fastcgi_no_cache for this site, because there is no login capability on this site.
The fastcgi_cache_path directive should be inside the http { } block, which is why I have placed it at the top of the file above the start of the server { } block. The max_size=20m parameter says that a maximum of 20 MiB should be used for the cache zone johncook_uk stored in /var/cache/nginx/johncook_uk.
Although I could have placed fastcgi_cache_key inside nginx.conf or outside the server { } block, it cannot be defined twice in the same block - putting it where it is makes the code more portable, and adds the possibility of shifting the fastcgi_cache_* lines to an include.
There are two things worth noting. The first is that fastcgi_cache_valid says any response from PHP5-FPM with a response code of 200, 301, or 302 should be cached for 30 seconds. The second is that fastcgi_cache_use_stale says that if communicating with PHP5-FPM results in an error or timeout, or if it returns an invalid or empty header (invalid_header) or a 500 error (http_500), then nginx should serve a stale copy of the page if it has previously been requested and is still in the cache. It should also serve a stale copy while it is in the process of updating the copy in the cache (i.e. stale-while-revalidate).
After successful testing, I switch the location / { } block so that try_files = @varnish. At the moment @varnish does not fall back to @dynamic, but if varnish were to have trouble it would be a simple matter of manually switching / back to @dynamic and reloading nginx. Given that the varnish cache is larger than the 20 MiB FastCGI cache, and that my code is going to be optimised for ESI rather than SSI, switching to @dynamic should only be something I do when I need to restart varnish.
Having said that, by writing the code the way I have, I can copy the configuration file for this domain to a new installation of nginx on my home server with minimal code changes.
Something I have noticed since upgrading Ubuntu is that my iptables and ip6tables upstart scripts save the running rules on stopping. This means that if I need to add an IP address to an interface, modifying the ip(6)tables save file by hand will not survive a reboot of the VPS (or a restart of the relevant "service") - something I will need to keep in mind as I add more ULA IPv6 IP addresses.
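The least error-prone way I can see of dealing with that (a sketch; the address, rule, and save-file path are placeholders for whatever I actually need to add and wherever my upstart job actually writes to) is to add new rules to the running tables first and then regenerate the save file from the live rules, so that whatever gets saved on stop is what I want kept:
# example only: a new ULA address and a matching firewall rule
sudo ip -6 addr add fdd7:5938:e2e6:9660::80:d/128 dev eth0
sudo ip6tables -A in-new -d fdd7:5938:e2e6:9660::80:d -p tcp -m tcp --dport 80 -j ACCEPT
# regenerate the save file from the running rules (placeholder path)
sudo sh -c 'ip6tables-save > /etc/iptables/ip6tables.bak'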
Webmail
It has been a week since I last had working webmail. Although I have got incoming mail back up and running, I do not yet have any way to read my mail other than using the less command.
I still have the content of my Webmail Web site within my VPS backup, but I am going to switch to using nginx instead of lighttpd, as well as switching from hosting it on my VPS to hosting it on my home server (since that is now where the mail resides).
One advantage of this is going to be that when I am at home, I am not going to be wasting bandwidth when checking my e-mail as I will be connecting to a server within the LAN. As long as I use adequate encryption this will reduce the surface area of attacks as I will not be logging in over the Net whilst at home.
Something I also want to do is reduce potential mobile phone data usage by getting rid of the multiple e-mail accounts on my phone so I am only checking one account over 3G. The problem with the iPhone 4S Mail app is that it doesn't allow customised from addresses, so an account needs to be created for every address I want to send mail from. As I rarely send e-mail from my mobile phone, this shouldn't be much of an issue; hopefully there will eventually be a free Webmail interface that is mobile friendly.
I am going to stick with Roundcube for webmail as I am not only used to it but it has plugins for 2FA (using Google Authenticator) and custom from addresses (so I can reply from the alias address an e-mail was sent to, or compose a new message from an alias without needing to create a new identity).
As the only customisations I made to the original roundcube installation were (a) upgrades, (b) plugins, and (c) the database configuration, I think it will be best to install the latest release from scratch, configure the database settings, and add the plugins/extensions.
So, back on my home server:
sudo apt-get update
sudo apt-get dist-upgrade
sudo mkdir -p /home/www/var/www/webmail.thejc.me.uk
sudo chown www-data:www-data /home/www/var/www/webmail.thejc.me.uk
cd /home/www/var/www/webmail.thejc.me.uk
sudo su
Download the "complete" stable version using Iceweasel, saving it to my Downloads folder. Then, back in the root shell:
mv /home/thejc/Downloads/roundcubemail-1.0.5.tar.gz .
tar -zxvf roundcubemail-1.0.5.tar.gz
chown -R www-data:www-data roundcubemail-1.0.5
cd /etc/nginx
mkdir sites-available sites-enabled
nano nginx.conf
...
http {
...
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
nano sites-available/webmail.thejc.me.uk
server {
listen 127.0.0.2:80;
root /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5;
index index.php;
location ~*\.php {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
}
Double-check /etc/php5/fpm/pool.d/www.conf to make sure the listening socket is the same as we have just configured.
Visit http://127.0.0.2/installer and check that the first 3 sections are all OK. I have Mcrypt and Intl listed as NOT AVAILABLE, so back in the shell sudo apt-get install php5-mcrypt php5-intl
and then refresh the page.
With everything else showing a green OK (apart from some databases, but I have php5-mysql installed), click on next.
- product_name
- John Cook Webmail
- support_url
- https://twitter.com/JohnCookUK
- identities_level
- many identities with possibility to edit all params
- db_dsnw
- Modify settings - see next paragraph.
- default_host
- ssl://localhost
- default_port
- 993
- smtp_server
- ssl://mail.thejc.me.uk
- smtp_port
- 587
- smtp_user/smtp_pass - Use the current IMAP username and password for SMTP authentication
- checked
- language
- en_GB
- prefer_html
- unchecked
mysql -u root -p
create database roundcubemail;
grant all on roundcubemail.* to 'webmail'@'127.0.0.1' identified by 'strongpassword';
flush privileges;
Create Config, and continue.
Initialise database, continue.
At this point, roundcube is configured properly, albeit with some settings that are temporary. At the moment, however, dovecot on my home server is not configured for IMAP login and postfix on my VPS is not configured for sending mail.
Given that reading mail is currently the most urgent need, it is time to set-up IMAPS login in dovecot.
Configuring Dovecot for Logins
At the moment, dovecot is working away in the background, verifying e-mail addresses are valid and storing messages in the correct folder. Logging in, however, is not currently configured.
sudo mkdir /etc/ssl/mail
sudo rsync -avrpPlog --progress /home/thejc/vps-backup-2015-01-24/etc/ssl/mail/ /etc/ssl/mail
sudo apt-get install dovecot-imapd
sudo nano /etc/dovecot/conf.d/10-ssl.conf
ssl = yes
ssl_cert = </etc/ssl/mail/mail3-startssl-cert.pem
ssl_key = </etc/ssl/mail/mail3-startssl-key.pem
ssl_protocols = !SSLv2 !SSLv3
ssl_cipher_list = ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
sudo nano /etc/dovecot/conf.d/10-master.conf
...
service imap-login {
inet_listener imap {
port = 143
address = localhost
}
inet_listener imaps {
port = 993
ssl = yes
}
...
}
...
sudo nano /etc/dovecot/conf.d/20-imap.conf
...
protocol imap {
...
imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
}
sudo service dovecot restart
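A quick manual check of the IMAPS listener (a sketch; substitute a real mailbox username and password from the mail database) can be done with openssl s_client:
openssl s_client -connect localhost:993 -quiet
# then type, one command per line:
# a1 LOGIN someone@example.org password
# a2 LIST "" "*"
# a3 LOGOUT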
At this point reading e-mail is working again, but sending is still not possible. Although listing messages, changing folders, and reading messages is possible, it is extremely slow. The reason for this is that roundcube (due to PHP) has to create a new IMAP connection for every action performed.
This is not ideal, and with roundcube set to check for new mail every minute the mail log will get bloated with logins. A bit of Googling later, and a roundcube suggestion is to use imapproxy.
sudo apt-get install imapproxy
sudo nano /etc/imapproxy.conf
...
server_hostname = localhost
...
listen_port = 1143
...
listen_address = 127.0.0.1
...
server_port = 143
This configuration makes imapproxy listen on 127.0.0.1:1143 and proxy the connections to localhost:143. With this done, I just need to modify roundcube.
sudo nano /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5/config/config.inc.php
...
$config['default_host'] = '127.0.0.1';
...
$config['default_port'] = 1143;
sudo service dovecot restart
sudo service imapproxy restart
sudo service mysql restart
sudo service nginx restart
Although I probably don't need to restart those 4 services, I thought it would be the best way to make sure everything is OK. Reloading 127.0.0.2 in my Web browser and logging in shows things are much improved.
I still can't send e-mails though, and I still haven't installed those roundcube plugins. Plugins first, and then SMTP, and then making things work non-locally.
Roundcube Plugins
My old roundcube configuration had the following line in config.inc.php:
$config['plugins'] = array('custom_from', 'twofactor_gauthenticator');
sudo mkdir /usr/local/src/roundcube-plugins
cd /usr/local/src/roundcube-plugins
sudo chown thejc:www-data .
git clone https://github.com/r3c/CustomFrom.git
sudo chown thejc:www-data -R CustomFrom
ln -s /usr/local/src/roundcube-plugins/CustomFrom/custom_from /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5/plugins/custom_from
git clone https://github.com/alexandregz/twofactor_gauthenticator.git
sudo chown thejc:www-data -R twofactor_gauthenticator
ln -s /usr/local/src/roundcube-plugins/twofactor_gauthenticator /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5/plugins/twofactor_gauthenticator
sudo nano /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5/config/config.inc.php
$config['plugins'] = array('custom_from', 'twofactor_gauthenticator');
Rather than generating a new key for 2FA, I just searched the mysqldump backup for 'twofactor', found what looked like the key, copied and pasted it into the settings for that mail account in roundcube, and then used the code test function with the 6 digit number displaying on my phone for that account.
With the code confirmed as valid, I activated 2FA, saved the changes, logged out, logged back in, entered the 6 digit code, and was logged in successfully.
As for the rest of my previous roundcube settings, I probably don't need them any more. All my identities/aliases were created before I had found the custom_from plugin.
I will come back to configuring roundcube later so I can login from the LAN (and WAN).
Outgoing Mail (SMTP)
At present my postfix installation on my VPS is only able to deal with incoming mail. To enable outgoing mail through SMTP I need to configure submission (port 587).
Most of this configuration is just going to be a copy of my previous configuration. Given that my mail server has been using the same IPv4 IP address for several years, and in that time it has never made it onto a blacklist (nor have I received any abuse reports), I shall tentatively state that it has been secure enough to prevent spammers from using it as a relay.
As the DNS settings are already in place for my previous VPS installation, I am going to keep my existing SMTP server hostname, rDNS, and TLS certificate. My previous Postfix master.cf submission section was as follows:
submission inet n - - - - smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_security_level=encrypt
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_security_options=noanonymous
-o milter_macro_daemon_name=ORIGINATING
I also need dovecot-core and dovecot-mysql as I will be using dovecot for SASL authentication using the existing mail database.
sudo apt-get install dovecot-core dovecot-mysql
When asked if I wanted to create a self-signed SSL certificate I chose No, because I have an existing key and certificate I will be using (not that I will be using TLS for dovecot, as I'll be using UNIX sockets).
What I need to do is configure Dovecot for just SASL authentication. The easiest way of doing that is to set port = 0 for all of the inet_listener sections in dovecot's 10-master.conf:
sudo nano /etc/dovecot/conf.d/10-master.conf
...
service imap-login {
inet_listener imap {
port = 0
}
inet_listener imaps {
port = 0
}
...
}
service pop3-login {
inet_listener pop3 {
port = 0
}
inet_listener pop3s {
port = 0
}
}
...
service auth {
unix_listener auth-userdb {
mode = 0660
user = postfix
group = postfix
}
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
}
...
I also need to configure mysql:
sudo cp /etc/dovecot/dovecot-sql.conf.ext /etc/dovecot/dovecot-sql.conf
sudo cp /etc/dovecot/conf.d/auth-sql.conf.ext /etc/dovecot/conf.d/auth-sql.conf
sudo nano /etc/dovecot/conf.d/auth-sql.conf
passdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf
}
userdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf
}
sudo nano /etc/dovecot/dovecot-sql.conf
...
driver = mysql
connect = host=127.0.0.1 dbname=mail user=mail password=mail
default_pass_scheme = MD5-CRYPT
password_query = SELECT username as user, password, '/home/vmail/%d/%n' as userdb_home, 'maildir:/home/vmail/%d/%n/mail' as userdb_mail, 5000 as userdb_uid, 5000 as userdb_gid FROM mailbox WHERE username = '%u' AND active = '1';
user_query = SELECT '/home/vmail/%d/%n' as home, 'maildir:/home/vmail/%d/%n/mail' as mail, 5000 as uid, 5000 as gid, CONCAT('dirsize:storage=', quota) AS quota FROM mailbox WHERE username = '%u' AND active = '1';
Finally, the most difficult (for me) part was to get postfix to recognise the database and to get outgoing and incoming mail working properly. I did create user/group vmail (5000) although it is unlikely to ever be used on this system - I didn't create /home/vmail.
sudo nano /etc/postfix/mysql/virtual_sender_login_maps.cf
user = mail
password = mail
hosts = 127.0.0.1
dbname = mail
table = alias
select_field = goto
where_field = address
additional_conditions = and active = '1'
#query = SELECT goto FROM alias WHERE address='%s' AND active = '1';
sudo postmap /etc/postfix/mysql/virtual_sender_login_maps.cf
The virtual sender login maps MySQL query is going to be used to lookup who owns an alias. That is to say, if someonerandom@ is an alias of mailbox mailbox1@ then this query will return mailbox1@ if the MAIL FROM: is someonerandom@.
If the SASL authenticated user is not mailbox1@, then with the postfix smtpd_sender_restrictions option reject_sender_login_mismatch the message will not be permitted to be sent. Although there are also the options reject_authenticated_sender_login_mismatch and reject_known_sender_login_mismatch, the option I have chosen is the broadest.
Reject the request when $smtpd_sender_login_maps specifies an owner for the MAIL FROM address, but the client is not (SASL) logged in as that MAIL FROM address owner; or when the client is (SASL) logged in, but the client login name doesn't own the MAIL FROM address according to $smtpd_sender_login_maps.
www.postfix.org, smtpd_sender_restrictions, reject_sender_login_mismatch
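For completeness, a sketch of how I would expect this to be wired up in main.cf given the map file above (the exact position within the restriction list is a judgement call rather than something prescribed):
smtpd_sender_login_maps = mysql:/etc/postfix/mysql/virtual_sender_login_maps.cf
smtpd_sender_restrictions = reject_unknown_sender_domain, reject_sender_login_mismatch
The map itself can be checked in isolation with postmap -q against an address that exists in the alias table (postmap -q someonerandom@example.org mysql:/etc/postfix/mysql/virtual_sender_login_maps.cf).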
The part I found most confusing was getting settings mixed up with each other. inet_interfaces, for example, are the IP addresses postfix listens to for incoming mail. smtp_bind_address and smtp_bind_address6, on the other hand, are the IP addresses postfix uses for outgoing mail. For outgoing mail, you can only use one IPv4 and one IPv6 IP address - using :: for IPv6 thinking it is for incoming mail will result in postfix using a seemingly random IP address that will probably not be one permitted to send mail by SPF.
With SPF and DMARC it is important to get the SMTP bind addresses correct as a DMARC (or ADSP) rejection is not the most helpful error message. It is also important to make sure that outgoing mail is not checked for DMARC compliance as SASL authenticated users will most likely not be using an IP address that passes SPF checks.
There is another issue, and this one relates to my MySQL lookups for aliases. It is possible for an alias to point to another alias rather than to a mailbox. Although that is not a problem for incoming mail, it does make outgoing mail using aliases a bit troublesome. The error message, however, is descriptive enough to show that the problem is the alias used as the from address not belonging to the mailbox that is logged in, so the question is: should an alias be permitted to be an alias of an alias?
My thoughts on this are that recursion is not efficient, and the longer the chain the more likely there is a typo or some outdated data somewhere. For incoming mail this is not an important issue because the mail will simply bounce, but for outgoing mail the alias should point at the mailbox that 'owns' it so that a reply goes to the correct place.
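A quick way to spot alias-to-alias chains in the database is a self-join on the alias table. This assumes the column names used in the lookup map above (address, goto, active) and that goto holds a single destination address:
# lists aliases whose destination is itself another alias
SELECT a1.address, a1.goto
FROM alias AS a1
JOIN alias AS a2 ON a1.goto = a2.address
WHERE a1.active = '1' AND a2.active = '1';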
Changing a mailbox from one address to another is another potential issue: if not well planned out, it could result in a race condition with mail ending up in the old mailbox. With my plan to retire two old domains, one of which I use for my mailbox login and the other of which has a number of aliases, this scenario is quite likely to occur at some point.
Anyway, the last thing to do is to tell Postfix how to authenticate users using SASL authentication.
sudo nano /etc/postfix/main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
myhostname = mail3.thejc.me.uk
myorigin = /etc/mailname
# IP addresses to bind to for outgoing mail.
smtp_bind_address = 149.255.99.50
# Use 6in4 tunnel rather than native, as PTR record on native IP (rDNS) not currently working properly.
smtp_bind_address6 = 2001:470:1f09:38d::25:1
mydestination = mail3.thejc.me.uk, mail2.thejc.me.uk, vps2.thejc.me.uk, localhost
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
relayhost =
mynetworks = [2a03:ca80:8000:7673::18]/127 [2a01:d0:8214::]/48 [2001:470:1f09:38d::]/64 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
# IP addresses to bind to for incoming mail.
inet_protocols = all
inet_interfaces = 149.255.99.50,[2001:470:1f09:38d::25:1],[2a03:ca80:8000:7673::19]
relay_domains = mysql:/etc/postfix/mysql/relay_domains.cf
relay_recipient_maps = mysql:/etc/postfix/mysql/relay_recipient_maps.cf
relay_transport = smtp:[fdd7:5938:e2e6:1::25:1]
# the maximum permitted queue liftetime in postfix is 100d
maximal_queue_lifetime = 100d
# SSL/TLS Parameters
tls_random_source = dev:/dev/urandom
tls_high_cipher_list = ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
tls_preempt_cipherlist = yes
# SSL - Server (inbound connections)
smtpd_tls_cert_file = /etc/ssl/mail/mail3-startssl-cert.pem
smtpd_tls_key_file = /etc/ssl/mail/mail3-startssl-key.pem
smtpd_tls_CAfile = /etc/ssl/StartCom/ca-sha2.pem
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_security_level = may
smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
# SSL - Client (outbound connections)
smtp_use_tls = yes
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_tls_ciphers = high
# Server - Restrictions
## Restrictions on client IP addresses and hostnames
smtpd_client_restrictions = permit_sasl_authenticated, reject_rbl_client zen.spamhaus.org=127.0.0.[2..8]
## Restrictions on HELO/EHLO
smtpd_helo_required = yes
# TODO: helo check
smtpd_helo_restrictions = permit_mynetworks, permit
## Restrictions on MAIL FROM
smtpd_sender_restrictions = reject_unknown_sender_domain, reject_sender_login_mismatch
## Restrictions on RCPT TO
smtpd_recipient_restrictions = reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, check_policy_service unix:private/policy-spf
## Restrictions on message headers
## Restrictions on message body
## SASL Authentication
smtpd_sasl_auth_enable = yes
smtpd_sasl_exceptions_networks = $mynetworks
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_local_domain = $myhostname
smtpd_sasl_authenticated_header = yes
smtpd_tls_auth_only = yes
smtpd_sender_login_maps = mysql:/etc/postfix/mysql/virtual_sender_login_maps.cf
smtpd_tls_ciphers = high
home_mailbox = Maildir/
## SPF Checks
policy-spf_time_limit = 3600s
# Milters
milter_default_action = accept
milter_protocol = 6
## DKIM
milter_opendkim = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8891]:8891
## DMARC
milter_opendmarc = inet:[fdd7:5938:e2e6:9660:7f00:1:b:8893]:8893
## Milters for SMTP
smtpd_milters = $milter_opendkim $milter_opendmarc
## Milters for non-SMTP (e.g. Sendmail)
non_smtpd_milters = $milter_opendkim $milter_opendmarc
There is one more issue, and that is Ubuntu Trusty only has opendmarc version 1.2.0. Version 1.3.0 includes the boolean option IgnoreAuthenticatedClients. Until then, we need to modify master.cf so that submission does not do opendmarc checks:
sudo nano /etc/postfix/master.cf
submission inet n - - - - smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_security_level=encrypt
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_security_options=noanonymous
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_milters=$milter_opendkim
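Postfix 2.11 (the version in Trusty) can print the per-service parameter overrides from master.cf, which gives a quick way of confirming that submission really is skipping the opendmarc milter:
sudo postconf -P | grep ^submission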
Might as well add the vmail user and group, not that these permissions should ever need to be used.
sudo groupadd -g 5000 vmail
sudo useradd -m -u 5000 -g 5000 -s /bin/bash -d /home/vmail vmail
sudo postconf
sudo service postfix restart
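A quick smoke test from elsewhere (my home server, say) is to speak SMTP over STARTTLS and check what is advertised. With smtpd_tls_auth_only = yes, AUTH should only appear once TLS is up, and it will not be advertised at all to hosts covered by smtpd_sasl_exceptions_networks:
openssl s_client -connect mail3.thejc.me.uk:25 -starttls smtp
# once connected, type:
# EHLO test.example.com
# and look for an AUTH line among the 250- responses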
A bit more testing between Roundcube and Gmail, and things look like they are working properly. It is hard to know until I start to receive some other mail because googlemail.com does not have a DMARC record in DNS. The DMARC authentication results, however, are showing in the inbound mail headers from my gmail account even if they are saying dmarc=none.
The only thing left to do for e-mail is to set up Roundcube and Dovecot to listen on a publicly accessible IP, and to add TLS encryption.
There are also some more restrictions that need adding to Postfix (helo checks, header checks, body checks) but that can wait until later. As for putting postfix in a chroot jail on both servers, that is something else I will have to investigate later.
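When I do get around to the header and body checks, the plumbing will look something like this: a pcre map referenced from main.cf (the rule below is a placeholder rather than one I am actually deploying, and I believe the pcre map type needs the postfix-pcre package on Ubuntu):
# main.cf
header_checks = pcre:/etc/postfix/header_checks
# /etc/postfix/header_checks (placeholder rule)
/^Subject:.*\bmake money fast\b/ REJECT Spam subject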
Publicly Accessible IMAP and Webmail
My Webmail was previously available at https://webmail.thejc.me.uk, and IMAP was previously at ssl://mail3.thejc.me.uk.
The old convention of using smtp.example.com and imap.example.com is not something I have used until now because I have simply incremented the digit after mail (i.e. mail, mail2, mail3) whenever I have needed to "move" the mail server to a new IP address after switching VPS provider (or after the VPS provider moved data centre).
The other convention of numbered mx servers is also something I have avoided, and can probably continue to do so as mail3 is still the MX server and (outbound) SMTP server. On the subject of IMAP and SMTP server names, the configuration profiles on my iPhone and iPad are a bit of a mess with some of them pointing at really old (mail2) servers.
Anyway, Webmail is going to continue to live at https://webmail.thejc.me.uk, and IMAP is going to move to ssl://imap.thejc.me.uk. In order to get it to function when I am not at home, I will need to use dynamic DNS. I will also need to create a certificate for imap.thejc.me.uk and enable
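Generating the key and certificate signing request for imap.thejc.me.uk will look something like the following; the key size and paths are my assumption here, and StartSSL would then sign the CSR as with the existing mail3 certificate:
cd /etc/ssl/mail
sudo openssl genrsa -out imap.thejc.me.uk.key 2048
sudo openssl req -new -key imap.thejc.me.uk.key -out imap.thejc.me.uk.csr -subj "/CN=imap.thejc.me.uk"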
Node.js, npm, and Grunt
My CSS files use SASS, and my stylesheets for JohnCook.co.uk, JohnCook.UK, and WatfordJC.UK are combined. I needed to modify the padding and margin of the padlock icon on secure pages because, on WatfordJC.UK, the site navigation bar links were a few pixels too wide, causing the Articles, Blogs, etc. links to wrap onto a second row. Shrinking the padlock "glow" by a few pixels was enough to fix the problem, but I didn't have grunt installed so couldn't roll out the modification.
On my home server, I ran the following:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install nodejs
sudo npm install -g npm
sudo npm install -g grunt-cli
cd /home/www/var/www/johncook_co_uk/scss/
grunt
cd ../css/
ln -s combined.min.css combined.2015-02-06R001.min.css
ssh thejc@vps2.thejc.me.uk
sudo chown thejc:www-data -R /home/www/var/www/johncook_co_uk/
exit
rsync -avrpPlog --progress /home/www/var/www/johncook_co_uk/ thejc@vps2.thejc.me.uk:/home/www/var/www/johncook_co_uk
ssh thejc@vps2.thejc.me.uk
cd /home/www/var/www/johncook_co_uk/css/
rm combined.min.css.gz
gzip -k combined.min.css
ln -s combined.min.css.gz combined.2015-02-06R001.min.css.gz
exit
Then it was a simple case of modifying header.php to point at the new filename, and then running rsync again to upload it to my VPS.
My /css/ directory is starting to get bloated with a lot of symlinks, so something I could consider at a later date is using 301 redirects from old CSS filenames to the current CSS filename. I will have to look at how that would impact caching.
If a browser has in its cache newcssfile.css and an expired oldcssfile.css, and it accesses a Google Cache page that references oldcssfile.css, when it gets a 301 back pointing at newcssfile.css does it perform another If-Modified-Since request for newcssfile.css? If so, does it use the Last-Modified date/time of newcssfile.css or oldcssfile.css?
A look in Chrome at a GET request for /img/favicon-4.ico suggests the browser (Chrome at least) will follow the redirect to /img/favicon.ico and, because it already has a copy of that in the cache that hasn't expired, serve it straight from the cache. A potential problem, however, is that 301 redirects can be cached by browsers.
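If I do go down that route, it would be a small location block per retired filename, something like this (the old filename here is made up for illustration; the target is the current one from above):
location = /css/combined.2015-01-01R001.min.css {
    return 301 /css/combined.2015-02-06R001.min.css;
}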
The reason I am looking into this now is that caching is not only about storing things so they can be accessed more quickly; it is also about optimising the disk usage of cached objects. My current way of doing things through symlinks may mean a browser has multiple copies of a file in its cache that have the same content. There is not really any way to have long expiry times, always serve the latest version of a resource, and minimise disk cache usage all at the same time.
I suppose one can only hope that older versions of the CSS files will drop out of the cache when they haven't been accessed in a while. That does appear to be how cache eviction works, but a glance at Chrome on my laptop just now, and a bit of maths later, says that Chrome is currently only using about 200 MiB of disk space for its cache. While that is OK on my laptop, given how little free space I have there, on my home server I have loads of space.
So, on my home server, where I use Iceweasel as my primary browser, my cache size is also set as the default. In Iceweasel Preferences, Advanced, Network tab, it states my "content cache is currently using 171 MB of disk space". In about:config, browser.cache.disk.capacity is set to 358,400 (and it is bold and says it is user set). If that is in KiB, then that equals 350 MiB of disk space for the Web browser cache.
On my home server, my /home partition has 1.4 TiB free (1.8 TiB capacity). That means my browser is getting a whopping 0.019% of total disk space in /home for caching files. That is ridiculously small. Firefox (and Iceweasel) have an about:cache page which has some more interesting numbers.
Memory (presumably RAM) cache size is set to a maximum of 32,768 KiB (32 MiB) of which 1,981 things are cached in 4,727 KiB. Disk cache size is set to a maximum of 358,400 KiB (350 MiB) of which 7,112 entries are using 175,142 KiB. Offline cache (also on disk) is set to a maximum size of 512,000 KiB (500 MiB) of which 241 entries are using 4,042 KiB.
OK, it turns out Firefox/Iceweasel are limited to a maximum of 1024 MiB (1 GiB) of disk cache, and the default is 350 MiB for performance reasons. Given my current utilisation of disk cache (probably low because my browser crashes every few days), it is probably fine for the moment.
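If I ever decide to raise it towards that 1 GiB ceiling, it is a single preference, set via about:config or (with the browser closed) in prefs.js; the value is in KiB, so 1 GiB is 1048576:
user_pref("browser.cache.disk.capacity", 1048576);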
Anyway, on to making my home server's webmail publicly available.
SSL Certificate and Key
As I am reusing an existing key and certificate, I don't need to generate new ones.
cd /etc/ssl/
sudo mkdir webmail.thejc.me.uk
sudo rsync -avrpPlog --progress /home/thejc/vps-backup-2015-01-24/etc/ssl/webmail.thejc.me.uk/ /etc/ssl/webmail.thejc.me.uk
sudo nano /etc/nginx/sites-available/webmail.thejc.me.uk
server {
listen 0.0.0.0:443 ssl spdy;
listen [2001:470:1f09:1aab::b:0]:443 ssl spdy;
server_name webmail.thejc.me.uk;
root /home/www/var/www/webmail.thejc.me.uk/roundcubemail-1.0.5;
include /etc/nginx/includes/webmail.thejc.me.uk-ssl;
index index.php;
gzip on;
gzip_types text/plain text/css application/json application/javascript text/javascript text/xml;
location ~* \.php {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
}
The webmail.thejc.me.uk-ssl include contains the ssl_certificate and ssl_certificate_key for the domain, as well as including the nginx.ssl-* files.
Having changed the port for fastcgi, I needed to change the settings for the domain calendar.thejc.me.uk so that they referred to port 9000 instead of port 12345, and then restart all the related services.
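For completeness, that change amounted to swapping the fastcgi_pass port in the calendar.thejc.me.uk server block and restarting the services; the file path and service names below are assumptions based on my usual layout:
sudo nano /etc/nginx/sites-available/calendar.thejc.me.uk
fastcgi_pass 127.0.0.1:9000;
sudo service php5-fpm restart
sudo service nginx restart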