Install the latest version of NginX in Ubuntu

Ubuntu is a great OS, but like many distributions the repositories are often a little behind the latest releases.

In particular I wanted to use the new HTTP/2 features in NginX.  These are available from version 1.9.5 onwards, but the Ubuntu repos only seem to have 1.8.0 (at the time of writing).

It’s pretty easy to install an up-to-date version straight from NginX’s own package repository (HTTP/2 arrived in the 1.9.x mainline branch, which is the repo used below):

$ curl http://nginx.org/keys/nginx_signing.key | apt-key add -
$ echo -e "deb http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx\ndeb-src http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx" > /etc/apt/sources.list.d/nginx.list
$ apt-get update
$ apt-get install nginx


If you already have an older version of NginX installed from the standard repositories, you can upgrade it to the new packages with:

$ apt-get dist-upgrade
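
Once installed (or upgraded), you can confirm which version you ended up with:

$ nginx -v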

Rate limiting with NGINX

How secure is your login form?

Really though, how much effort did you put in to make sure nobody can maliciously access your whizzy new web application?

One area that is often forgotten is rate limiting how quickly users (or more specifically computers) can attempt a login.

Given an 8-character password, a brute-force attempt covering all possible combinations of upper-case, lower-case, numeric and symbol characters (salted and SHA-1 hashed) would take just under 7 years (that’s 1,127,875,251,287,708 password combinations).

Reduce that to just 6 characters and the search can be completed in just over 10 hours (that’s 195,269,260,956 password combinations).

…and that’s the pure brute-force approach, without dictionaries or any other time-reducing trickery.

A simple answer is to limit quite how often a login can be attempted.  If we limit any given IP to no more than 1 login attempt per second, that’s really not going to be an issue for a genuine user who makes a typo, but for a malicious attacker our 6-character password now takes 6192 years!  That’s a bit of an improvement on 10 hours….
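
As a quick sanity check on that 6192-year figure (a rough sketch, assuming one guess per second and a 365-day year):

$ echo "scale=1; 195269260956 / (60*60*24*365)" | bc
6191.9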

There are a number of ways to do this, but if you’re using NginX as your front end of choice, the answer is almost unbelievably simple.

Rate limiting can be defined in terms of “zones”, and you can create as many as you need.  Each zone has its own tracking table stored in memory, and also has its own limits.

The first step is to create a zone in your site’s conf file, at the top level of the config (i.e. in the http context, outside any server block):

limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

Now, for the specific path you want to rate limit, you need to create a location block and apply the zone:

limit_req zone=login burst=5;

To give this a bit more context, here is a simplified complete site config using this rate limiting:

upstream appserver {
  # backend application server (the address shown is just an example)
  server 127.0.0.1:8080;
}

#create login zone for rate limiting
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

server {
  listen 443 ssl http2;
  server_name _;

  ssl on;
  ssl_certificate /var/app/ssl/cert.pem;
  ssl_certificate_key /var/app/ssl/cert.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  location / {
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }

  location /login/ {
    #apply our rate limiting zone
    limit_req zone=login burst=5;

    #proxy on to the app server as usual
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }
}

These particular settings create a 10MB shared-memory zone for tracking clients, which works out to around 160K entries.  That should be enough, but it’s easy to increase if you need more!  We are also limiting to an average of 1 request per second, but in the limit_req directive we allow bursts of up to 5 requests, which should be plenty for any genuine user.

You can do quite a bit with these directives, and there are also options for setting the HTTP return code, logging and so on. For more details, check out the ngx_http_limit_req_module page in the NginX documentation.
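
For example (these directives are part of the same limit_req module; the values shown are just illustrative), you could return a 429 instead of the default 503 and tone the logging down:

limit_req_status 429;
limit_req_log_level warn;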

You can test out the rate limiting using any online testing service, but a simple Bash/cURL command-line loop pointed at your own login URL can verify that it’s working:

for i in {0..20}; do (curl -Is https://your-site/login/ | head -n1 &) 2>/dev/null; done

You should see 200 responses roughly once every second, and 503s when requests are limited. You may get a few extra 200s initially, as we’re allowing a burst of 5 attempts before the limit kicks in.

Fixing Dual Icons in Docky

Docky is a great little taskbar for Linux, and it does a decent job of replicating the OSX bar.

It does, however, have a few problems identifying an application once it’s running, because it expects the X window WM_CLASS value to match the name of the executable in the Exec entry of the *.desktop file…

There is a pretty simple way of fixing this, and Google Chrome is an ideal candidate to demonstrate it with.

To start with, we need to start the offending application.  Go ahead and open Google Chrome, and you’ll get two icons in Docky.  One is the launcher icon, and the other is the running instance.

Next, open a terminal and run the command xprop

At this point your pointer becomes a cross, and you need to click the application in question.  xprop will then return the X window properties of the window you clicked.  For Google Chrome, the important line in the output is:

WM_CLASS(STRING) = "Google-chrome-stable", "Google-chrome-stable"

Notice the capital ‘G’ in Google there?

Next, go to the location of the *.desktop file for that application, which is likely to be:

$ cd /usr/share/applications

and open the *.desktop file:

$ sudo gedit google-chrome.desktop

In the file are three sections: [Desktop Entry], [NewWindow Shortcut Group] and [NewIncognito Shortcut Group].  You’ll notice that each of these sections has an Exec line looking something like:

Exec=/usr/bin/google-chrome-stable

Notice the lower-case ‘g’ in Google?

This is where Docky gets all confused, because it launches google-chrome-stable, but the window is called Google-chrome-stable!

All Docky needs is a little hint that it needs to launch the application, but look out for a window with a slightly different name.  To do this, in each section add:

StartupWMClass=Google-chrome-stable

Save the file, close Google Chrome and open it again, and hey-presto, the launcher icon gets the highlight of an open window!!!
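
For reference, once the hint is added the relevant part of the [Desktop Entry] section looks something like this (trimmed to the interesting keys; the Name and Exec values shown are typical for google-chrome.desktop and may differ slightly on your system):

[Desktop Entry]
Name=Google Chrome
Exec=/usr/bin/google-chrome-stable
StartupWMClass=Google-chrome-stable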

Keeping worker processes going with Supervisord

Supervisord is a great system for monitoring processes and restarting them when they fail.  For a web application, a great use is worker processes which monitor message queues and process jobs asynchronously from the UI.

Supervisord can be installed with:

$ apt-get install supervisor

One nice feature is a web interface which allows you to monitor the processes and manually restart them if necessary.  By default it’s turned off, but you can turn it on by adding the following lines to the /etc/supervisor/supervisord.conf file:

[inet_http_server]
port = *:9001

The web interface will now be available on port 9001.

By default, the configuration for each process we want to monitor is stored in /etc/supervisor/conf.d.

You can have multiple configurations in one file, or keep each one separate.  As an example, here is a file I use to keep a worker process running (the program name is just a label):

[program:worker]
command=/usr/bin/php /usr/share/tock/worker/worker.php
autostart=true
autorestart=true

This automatically starts the process at boot, and also restarts it if it fails!
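
Once the config file is in place you can pull it in and check on the process with supervisorctl (using the example program name from above):

$ supervisorctl reread
$ supervisorctl update
$ supervisorctl status worker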

It’s a pretty configurable system; more details can be found in the Supervisor documentation.

Configuring sendmail to use an external smarthost

Because the world of spam email exists, sending email directly from a server can sometimes be troublesome, especially if you end up in a situation where a large number of messages are being sent.  If you’re running some form of web app, you obviously don’t want your server being accused of spamming, so a smarthost is the only option!

Sendmail is either preinstalled or easy to install on just about every Linux system I have come across, and setting this up is a breeze.

First you need to set the authorization credentials for your smarthost in /etc/mail/access (the hostname here is a placeholder for your provider’s SMTP server):

AuthInfo:smtp.yourprovider.com "U:yourUserName" "P:yourPassword" "M:PLAIN"

Next we need to define the smarthost in /etc/mail/sendmail.mc:

define(`SMART_HOST', `smtp.yourprovider.com')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl

These files are nice human-readable config files, but they need to be compiled:

$ cd /etc/mail
$ m4 sendmail.mc > sendmail.cf
$ makemap hash access < access

Then we need to restart sendmail to make the settings take effect:

$ service sendmail restart
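
To check that mail really is being relayed through the smarthost, you can send a quick test in verbose mode and watch the SMTP conversation (substitute a real destination address):

$ echo "Subject: smarthost test" | sendmail -v someone@example.com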

And we’re done!


Securing SSH with Fail2Ban

Fail2Ban is a simple service you can install to monitor your auth.log file and temporarily ban IPs that are trying to log in to your systems.

It works with a number of protocols, but out of the box it comes preconfigured to monitor and secure SSH.  You can install it on Debian-based Linux with:

$ apt-get install fail2ban

Once installed it will work as-is, but there are two specific things worth configuring.  It’s great to have an email alert when a ban is triggered, so first we need to configure the default action.  There are three options:

[Default] Just go ahead and ban the IP
Ban the IP, but also send an email with a whois report
Ban the IP, send an email with a whois report and also the auth.log lines containing the rogue IP


This needs to be set in /etc/fail2ban/jail.conf.  The default is (line 102):

action = %(action_)s
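
To get the third option (ban, email with whois report and the matching log lines), the stock jail.conf provides an action_mwl interpolation, so the line becomes:

action = %(action_mwl)s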

And finally, we need to configure the email address we will send to.  This is on line 57:

destemail =

Restart the service:

$ service fail2ban restart

And we’re done!  By default, IPs are banned through iptables for a period of 10 minutes.
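
You can check that the jail is active and see any currently banned IPs with fail2ban-client (depending on the version, the SSH jail is named ssh or sshd):

$ fail2ban-client status ssh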

Could not load host key: /etc/ssh/ssh_host_ed25519_key

While checking out /var/log/auth.log on one of my AWS instances, I came across a message that kept showing up:

Feb 03 18:04:11 edrc sshd[13041]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key

If you’re seeing this message, it means the ed25519 HostKey entry is enabled in your sshd_config, but no matching host key has been generated.

The fix is pretty simple.  Just run the following command:

$ ssh-keygen -A
ssh-keygen: generating new host keys: ED25519
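
It’s worth confirming the key files now exist, and since sshd only loads its host keys when it starts, giving the service a restart so the new key is picked up (the service is called ssh on Ubuntu/Debian, sshd on most other distros):

$ ls -l /etc/ssh/ssh_host_ed25519_key*
$ service ssh restart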

Cisco 857 Router Config

I work from home most of the time, which means my ADSL really is a lifeline.  Without it I’d be making a 35-mile trek to the office every day.

The village I live in doesn’t have the greatest ADSL, but it’s not too bad either.  For most stuff it’s perfectly workable; however, I have repeatedly had problems with home routers and their inability to work correctly for extended periods.  From a ton of reading I guess it’s down to memory leaks etc.  A simple power cycle fixes it, but that’s not a great help during a VoIP call when the line keeps breaking up.  Power cycles typically take 2-4 minutes to complete, which is often an issue, followed by a 1 min VPN reconnect….