Reverse Proxy Web Access via SSH

I have a server located on a ‘secure’ network; secure in the sense that it has extremely limited access in or out to other networks and, specifically, has no internet access.

Normally, that’s a good thing. It does, however, present some challenges when trying to set things up, apply patches and so on.

One option is to download packages, transfer them by SFTP, and then install by hand – not so easy when there’s a string of dependencies, or you’re trying to do something more exotic!

The answer is to use SSH to create a SOCKS proxy and tunnel traffic over the SSH link.

1 – Start a local SOCKS proxy:

$ ssh -f -N -D 54321 localhost

The switches are:

  • -f : run in the background
  • -N : don’t execute a remote command
  • -D 54321 : create a dynamic (SOCKS) listening port on 54321
  • localhost : connect to localhost

The -D option is the key to creating the SOCKS proxy. According to the man page:

-D [bind_address:]port
Specifies a local “dynamic” application-level port forwarding.

This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file.

IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or `*’ indicates that the port should be available from all interfaces.
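
At this point you can sanity-check the proxy from the local machine. A quick test, assuming curl is installed (example.com standing in for any site you can reach):

$ curl -sI --socks5-hostname 127.0.0.1:54321 https://example.com | head -n1

If the proxy is up, you should get back an HTTP status line.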

2 – Connect to the remote server, and set up reverse port forwarding

$ ssh root@server -R6666:localhost:54321

This creates a normal SSH connection to the remote server, with a tunnel listening on remote port 6666 and forwarded to local port 54321 (the SOCKS proxy we created in the last step).
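
From a shell on the remote server, you can verify the tunnel reaches the internet before going any further (again assuming curl is available on that machine):

$ curl -sI --socks5-hostname 127.0.0.1:6666 https://example.com | head -n1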

3 – Configure the remote server to use the proxy

Install proxychains (using whatever method works).  The easiest way is to manually download the packages, SFTP them to the remote server and install by hand.  Thankfully this only needs to be done once.

Proxychains uses a clever technique (an LD_PRELOAD hook) to redirect traffic via the SOCKS5 server you specify.  It can do a lot of other things, but for our purposes, that’s all we need it to do!

Edit /etc/proxychains.conf, and near the bottom add:

[ProxyList]
socks5 127.0.0.1 6666

4 – Fix Proxychains DNS

Proxychains does one quite annoying thing: it uses dig for DNS lookups, but is hard-coded to use 4.2.2.2 as the DNS server’s IP address.  In my case this won’t work (I’m sat behind a corporate firewall which blocks outgoing DNS requests).

Edit the file /usr/lib/proxychains3/proxyresolv

It should look something like:

#!/bin/sh
# This script is called by proxychains to resolve DNS names
# DNS server used to resolve names
DNS_SERVER=4.2.2.2

if [ $# = 0 ] ; then
  echo " usage:"
  echo " proxyresolv <hostname> "
  exit
fi

export LD_PRELOAD=libproxychains.so.3
dig $1 @$DNS_SERVER +tcp | awk '/A.+[0-9]+\.[0-9]+\.[0-9]/{print $5;}'

Editing the correct DNS server IP into this file should fix the problem.
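
If you prefer a one-liner, something like this should do it (10.0.0.53 here is a hypothetical internal DNS server – substitute your own):

$ sudo sed -i 's/^DNS_SERVER=.*/DNS_SERVER=10.0.0.53/' /usr/lib/proxychains3/proxyresolv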

5 – Profit!

At this point, prefixing any command with proxychains should make it use the SOCKS5 proxy for networking! e.g.:

$ proxychains apt-get update

Install the latest version of NginX in Ubuntu

Ubuntu is a great OS, but like many distributions the repositories often lag a little behind the latest releases.

In particular I wanted to use the latest HTTP/2 features in NginX.  These are available from version 1.9.5 onwards, but the Ubuntu repos only seem to have 1.8.0 (at the time of writing).

It’s pretty easy to install the latest mainline edition from NginX.org (note that these commands need to be run as root):

$ curl http://nginx.org/keys/nginx_signing.key | apt-key add -
$ echo -e "deb http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx\ndeb-src http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx" > /etc/apt/sources.list.d/nginx.list
$ apt-get update
$ apt-get install nginx

Or, if NginX is already installed and you just want it upgraded to the new repo’s version:

$ apt-get dist-upgrade
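
Either way, you can confirm which version you ended up with:

$ nginx -v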

Rate limiting with NGINX

How secure is your login form?

Really though, how much effort did you put in to ensure nobody can maliciously access your new whizzy web application?

One area that is often forgotten is rate limiting how quickly users (or more specifically computers) can attempt a login.

Given an 8-character password, a brute-force attempt covering all possible combinations of upper-case, lower-case, numeric and symbol characters, salted and SHA-1 hashed, would take just under 7 years (that’s 1,127,875,251,287,708 password combinations).

Reducing that to just 6 characters cuts the job to just over 10 hours (195,269,260,956 combinations) – which implies a cracking rate of around five million attempts per second.

…and that’s the brute-force approach, which isn’t using dictionaries or any other time-reducing trickery.

A simple answer is to limit quite how often a login can be attempted.  If we limit any given IP to no more than 1 login attempt per second, that’s really not going to be an issue for a genuine user who makes a typo, but for a malicious attacker our 6-character password now takes 6,192 years (195,269,260,956 combinations at one attempt per second)!  That’s a bit of an improvement on 10 hours…

There are a number of ways to do this, but if you’re using NginX as your front end of choice, the answer is almost unbelievably simple:

limit_req_zone

and

limit_req

Rate limiting is defined in terms of “zones”, and you can create as many as you need.  Each zone has its own tracking table stored in memory, and its own limits.

The first step is to create a zone in your site’s conf file at the top level:

limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

Now, for the specific path you want to rate limit you need to create a location block, and apply the zone:

limit_req zone=login burst=5;

To give this a bit more context, here is a simplified complete site config using this rate limiting:

upstream appserver {
  server 10.0.0.25:3001;
}

#create login zone for rate limiting
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

server {
  listen 443 ssl http2;
  server_name _;

  ssl on;
  ssl_certificate /var/app/ssl/cert.pem;
  ssl_certificate_key /var/app/ssl/cert.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  location / {
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }

  location /login/ {
    #apply our rate limiting zone
    limit_req zone=login burst=5;

    #proxy on to the app server as usual
    proxy_pass http://appserver;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
  }
}

These particular settings create a 10MB table for the zone, which works out to around 160K entries. That should be enough, but it’s easy to increase if you need more!  We’re also limiting to an average of 1 request per second, but in the limit_req directive we allow bursts of up to 5 requests, which should be enough for any genuine users.

You can do quite a bit with these directives, and there are also options for setting the HTTP return code, logging and so on. For more details, check out the NginX documentation at http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_status
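
As a sketch of those extras: if you’d rather reject limited requests with a 429 (“Too Many Requests”) instead of the default 503, and log rejections a little less noisily, these can be added alongside the limit_req directive in the location block:

limit_req_status 429;
limit_req_log_level warn;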

You can test the rate limiting using any online testing service, but a simple Bash/curl command-line test can verify that it’s working:

for i in {0..20}; do (curl -Is https://example.com/login/ | head -n1 &) 2>/dev/null; done

You should see 200 responses coming back roughly one per second, and 503s when requests are limited. You may get a few more 200s up front, as we’re allowing a burst of 5 attempts before the limit kicks in.

Fixing Dual Icons in Docky

Docky is a great little taskbar for Linux, and it does a decent job of replicating the OS X dock.

It does, however, have a few problems identifying an application when it’s running, because it expects the X11 WM_CLASS value to match the name of the executable in the Exec entry of the *.desktop file…

There is a pretty simple way of fixing this, and an ideal candidate to explain is Google Chrome.

To start with, we need to start the offending application.  Go ahead and open Google Chrome, and you get two icons in Docky: one is the launcher icon, and the other is the running instance.

Next, open a terminal and run the command xprop

At this point your pointer becomes a cross, and you need to click the application in question.  xprop will then return the X window properties of the window you clicked.  For Google Chrome you should see something like:

_GTK_HIDE_TITLEBAR_WHEN_MAXIMIZED(CARDINAL) = 1
_NET_WM_ALLOWED_ACTIONS(ATOM) = _NET_WM_ACTION_MOVE, _NET_WM_ACTION_RESIZE, _NET_WM_ACTION_STICK, _NET_WM_ACTION_MINIMIZE, _NET_WM_ACTION_MAXIMIZE_HORZ, _NET_WM_ACTION_MAXIMIZE_VERT, _NET_WM_ACTION_FULLSCREEN, _NET_WM_ACTION_CLOSE, _NET_WM_ACTION_CHANGE_DESKTOP, _NET_WM_ACTION_ABOVE, _NET_WM_ACTION_BELOW
WM_WINDOW_ROLE(STRING) = "browser"
WM_CLASS(STRING) = "Google-chrome-stable", "Google-chrome-stable"
_NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_NORMAL
_NET_WM_PID(CARDINAL) = 26456
WM_LOCALE_NAME(STRING) = "en_GB.UTF-8"
WM_CLIENT_MACHINE(STRING) = "My Computer Name"
WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOW, _NET_WM_PING

The important piece of information in those results is:

WM_CLASS(STRING) = "Google-chrome-stable", "Google-chrome-stable"

Notice the capital ‘G’ in Google there?

Next, go to the location of the *.desktop file for that application, which is likely to be:

$ cd /usr/share/applications

and open the *.desktop file:

sudo gedit google-chrome.desktop

In the file are three sections: [Desktop Entry], [NewWindow Shortcut Group] and [NewIncognito Shortcut Group].  You’ll notice that each of these sections has a line looking something like:

Exec=/usr/bin/google-chrome-stable

Notice the lower-case ‘g’ in Google?

This is where Docky gets all confused, because it launches google-chrome-stable, but the window is called Google-chrome-stable!

All Docky needs is a little hint that it needs to launch the application, but look out for a window with a slightly different name.  To do this, add the following line to each section:

StartupWMClass=Google-chrome-stable

Save the file, close Google Chrome and open it again, and hey presto – the launcher icon gets the highlight of an open window!
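
One optional extra: package updates can overwrite files in /usr/share/applications, so if you’d rather the fix survived upgrades, you can edit a per-user copy instead (desktop entries in your home directory take precedence):

$ cp /usr/share/applications/google-chrome.desktop ~/.local/share/applications/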

Keeping worker process going with Supervisord

Supervisord is a great system for monitoring processes and restarting them when they fail.  For a web application, a great use is worker processes which watch message queues and process jobs asynchronously from the UI.

Supervisord can be installed with:

$ apt-get install supervisor

One nice feature is a web interface which allows you to monitor the processes and manually restart them if necessary.  By default it’s turned off, but you can turn it on by adding the following lines to the /etc/supervisor/supervisord.conf file:

[inet_http_server]
port=9001

The web interface will now be available on port 9001.
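
Note that this enables the interface with no authentication.  A slightly safer variant (the username and password here are placeholders) binds explicitly to localhost and requires a login:

[inet_http_server]
port=127.0.0.1:9001
username=admin
password=changeme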

By default, the configurations for each process we need to monitor are stored in /etc/supervisor/conf.d

You can have multiple configurations in one file, or keep each one separate.  As an example, here is a file I use to keep a worker process running:

[program:worker]
command=/usr/bin/php /usr/share/tock/worker/worker.php
autostart=true
autorestart=true

This automatically starts the process at boot, and also restarts it if it fails!
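
After adding or changing a config file, you can pick the changes up and check the result without restarting supervisord itself:

$ supervisorctl reread
$ supervisorctl update
$ supervisorctl status worker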

It’s a pretty configurable system; more details can be found at:

http://supervisord.org/

Configuring sendmail to use an external smarthost

Because the world of spam email exists, sending emails directly from a server can sometimes be troublesome, especially if you end up in a situation where a large number are being sent.  If you’re running some form of web app, you obviously don’t want your server being accused of spamming, so a smarthost is the only option!

sendmail is either preinstalled, or easy to install on just about every Linux system I have come across, and setting this up is a breeze.

First you need to set the authentication credentials in /etc/mail/access

AuthInfo:smtp.example.com "U:yourUserName" "P:yourPassword" "M:PLAIN"

Next we need to define the smarthost in /etc/mail/sendmail.mc

define(`SMART_HOST', `smtp.example.com')dnl
FEATURE(`access_db')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl

These files are all nicely human-readable, but they need to be compiled:

$ cd /etc/mail
$ m4 sendmail.mc > sendmail.cf
$ makemap hash access < access

Then we need to restart sendmail to make the settings take effect:

$ service sendmail restart
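
A quick way to check the relay is to send a test message with sendmail’s verbose flag and watch the SMTP conversation with the smarthost (substitute a real address for you@example.com):

$ echo -e "Subject: smarthost test\n\nIt works." | sendmail -v you@example.com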

And we’re done!

Fail2ban

Fail2Ban is a simple service you can install to monitor your auth.log file and temporarily ban IPs that are trying to log in to your systems.

It works with a number of protocols, but out of the box it comes preconfigured to monitor and secure SSH.  You can install it on Debian Linux with:

$ apt-get install fail2ban

Once installed it will work as-is, but there are two specific things worth configuring.  It’s great to have an email alert when a ban is triggered, so we need to configure the default action.  There are three options:

  • action_ : [Default] just ban the IP
  • action_mw : ban the IP, and also send an email with a whois report
  • action_mwl : ban the IP, and send an email with a whois report plus the auth.log lines containing the rogue IP

This needs to be set in /etc/fail2ban/jail.conf.  The default (line 102 in the stock file) is:

action = %(action_)s

Change that to action_mw or action_mwl to get the emails.  Finally, we need to configure the email address alerts are sent to.  This is on line 57:

destemail = admin@example.com

Restart the service:

$ service fail2ban restart

And we’re done!  By default, IPs are banned through IPTables for a period of 10 minutes.
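
You can also ask fail2ban-client what’s being monitored and which IPs are currently banned (the SSH jail is named “ssh” on Debian; on some setups it’s “sshd”):

$ fail2ban-client status ssh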

Could not load host key: /etc/ssh/ssh_host_ed25519_key

While checking /var/log/auth.log on one of my AWS instances, I came across a message repeatedly showing up:

Feb 03 18:04:11 edrc sshd[13041]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key

If you’re seeing this message, it means that the ed25519 HostKey setting is enabled in your sshd_config, but no host key has been generated for it.

The fix is pretty simple.  Just run the following command (as root); it generates any of the default host key types that are missing:

$ ssh-keygen -A
ssh-keygen: generating new host keys: ED25519
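
Since sshd only loads its host keys at startup, restart the service so the new key is picked up (the service is named ssh on Debian/Ubuntu, sshd on some other distributions):

$ sudo service ssh restart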