Rate limiting with NGINX

How secure is your login form?

Really, though: how much effort did you put into making sure nobody could maliciously access your whizzy new web application?

One area that is often forgotten is rate limiting: controlling how quickly users (or, more specifically, computers) can attempt a login.

Given an 8-character password, a brute-force attack trying every combination of upper-case, lower-case, numeric and symbol characters against a salted, SHA-1 hashed store would take just under 7 years (that's 1127875251287708 password combinations).

Reduce that to just 6 characters and it can be completed in just over 10 hours (that's 195269260956 password combinations).

…and that's the pure brute-force approach, without dictionaries or any other time-saving trickery.

A simple answer is to limit how often a login can be attempted.  If we limit any given IP to no more than 1 login attempt per second, that's really not going to be an issue for a genuine user who makes a typo, but for a malicious attacker our 6-character password now takes around 6192 years!  That's quite an improvement on 10 hours…
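The arithmetic here is easy to check: at one attempt per second, the number of combinations is also the number of seconds needed to try them all.

```shell
# 195269260956 attempts at 1 attempt per second, converted to years
# (60 * 60 * 24 * 365 = 31536000 seconds per year)
echo $((195269260956 / 31536000))
# prints 6191, i.e. roughly 6192 years once you round up the remainder
```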

There are a number of ways to do this, but if you're using NGINX as your front end of choice, the answer is almost unbelievably simple, using just two directives:

limit_req_zone

and

limit_req

Rate limiting is defined in terms of “zones”, and you can create as many as you need.  Each zone has its own tracking table stored in memory, and its own limits.

The first step is to create a zone at the top level of your site's conf file (in the http context, outside any server block):

limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

Now, for the specific path you want to rate limit, create a location block and apply the zone:

limit_req zone=login burst=5;

To give this a bit more context, here is a simplified complete site config using this rate limiting:

upstream appserver {
  server 10.0.0.25:3001;
}

#create login zone for rate limiting
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

server {
  listen 443 ssl http2;
  server_name _;

  ssl_certificate /var/app/ssl/cert.pem;
  ssl_certificate_key /var/app/ssl/cert.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  location / {
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }

  location /login/ {
    #apply our rate limiting zone
    limit_req zone=login burst=5;

    #proxy on to the app server as usual
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }
}

These particular settings give the zone a 10MB in-memory table, which works out to around 160K entries (using $binary_remote_addr keeps each entry small). That should be enough, but it's easy to increase if you need more!  We're limiting to an average of 1 request per second, but in the limit_req directive we also allow bursts of up to 5 requests, which should be enough for any genuine users.
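By default, requests within the burst are queued and released at the configured rate. If you'd rather serve the burst immediately and only reject requests beyond it, NGINX also supports a nodelay flag on limit_req. A sketch of that variant of our login location block:

```nginx
location /login/ {
  #serve up to 5 burst requests immediately rather than queueing them;
  #anything beyond the burst is still rejected
  limit_req zone=login burst=5 nodelay;

  #proxy_pass and headers as before
}
```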

You can do quite a bit with these directives, and there are also options for setting the HTTP return code, logging, etc. For more details, check out the NGINX documentation at http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_status
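For example, to return 429 Too Many Requests instead of the default 503, and to log rejections at a lower severity, the docs above describe the limit_req_status and limit_req_log_level directives:

```nginx
location /login/ {
  limit_req zone=login burst=5;
  limit_req_status 429;      #HTTP code returned when a request is rejected
  limit_req_log_level warn;  #log rejections at warn instead of the default error

  #proxy_pass and headers as before
}
```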

You can test out the rate limiting using any online testing service, but a simple Bash/Curl command line test can verify that it’s working:

for i in {0..20}; do (curl -Is https://example.com/login/ | head -n1 &) 2>/dev/null; done

You should see 200 responses at roughly one per second, and 503s when requests are limited. You may get a few extra 200s initially, as we allow a burst of 5 attempts before the limit kicks in.