Tuesday 6 September 2016

Linux Ransomware and SSH

I recently came across this article on the FAIRWARE ransomware attacking Linux servers by brute forcing SSH, according to the referenced article here. My thought was: why, in this day and age, is brute forcing SSH from the Internet still working? Surely we are not exposing SSH administrative interfaces to the big bad Internet, let alone in an insecure way (rhetorical)?

So, whilst by no means a complete How-To, I thought I would go through a few areas that I hope may improve the security of systems running SSH. Though, as we all know, what you actually do depends on your risk appetite among other things.

In summary this is really about defence in depth. Not only make good security decisions, but don't rely on just a few of them. My view is that if there is a shortcoming, no matter how minor, deal with it: assess it, then either fix it or accept the risk (with possible mitigations).

For SSH, in no particular order.

1. Don't bind to all interfaces and certainly not your Internet facing one

In today's world of virtualisation, there really should be no excuse for not re-configuring a management service such as SSH to only listen on a management interface. Such a management network must not be directly accessible via the Internet; you should need to go via a VPN, Citrix or other strong security gate as a minimum to access such a network.

If your management services are only listening on your management interface, then even if your firewalls, or other Internet facing protections are breached, it is still going to be a bit of a challenge for an adversary to get shell access via that service.

This can be achieved using the ListenAddress directive in sshd_config.
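As a sketch, assuming a management interface with the illustrative RFC 1918 address 10.0.10.5:

```
# /etc/ssh/sshd_config (fragment)
# Listen only on the management interface, never 0.0.0.0.
# 10.0.10.5 is an illustrative management-network address.
ListenAddress 10.0.10.5
```

Restart sshd from a console session (not over SSH) after a change like this, in case the new binding locks you out.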

2. Never, ever, allow direct privileged logons

Being able to directly SSH (login) as root or a privileged user is just asking for trouble. Would you build a safe so that the door was accessible via an outside wall that the public walk past? I hope not; so should we really be doing the same thing with our servers holding our sensitive information assets?

The easiest way to prevent this is to only permit normal users (typically sharing a common group); i.e. we whitelist accounts rather than blacklist them, so if a new account is created it needs to be assessed first.

PermitRootLogin should always be set to no, but we also need to take account of other 'unprivileged' users that may be applications (and thus have access to, or "own", potentially sensitive information) and users with enhanced 'root' rights (such as having capabilities in Linux, or via sudo).
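A minimal sketch of this whitelisting, assuming an illustrative group name of sshusers for vetted interactive accounts:

```
# /etc/ssh/sshd_config (fragment)
# Refuse direct root logins and only permit members of the
# 'sshusers' group (an illustrative name); all other accounts,
# including newly created ones, are denied until assessed.
PermitRootLogin no
AllowGroups sshusers
```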

3. Multi-factor authentication

It is always a good idea to examine a technology to fully understand what it can do. So, in this case, even if you don't have true multi-factor authentication, you may have spotted the RequiredAuthentications or AuthenticationMethods directives in SSH, which state which methods must all succeed to allow access.

So the following on CentOS 7 requires both a publickey and a password to successfully log on (or more precisely the use of the associated private key on the client where the public key is configured on the user's account on the server).

AuthenticationMethods publickey,password

So, even if the public key does not have a passphrase, we see the following, showing that public key authentication alone is not sufficient; you also need the account password.

$ ssh -x localhost
Authenticated with partial success.
auser@localhost's password:
Last login: Tue Sep  6 10:59:49 2016

I hope you would agree, even brute forcing that “poor-man's 2FA” is a bit more challenging.

4. Bar tunnelling over SSH

SSH tunnelling allows you to forward almost any connection, in either direction, via the SSH connection. Indeed the man page describes how to set up a full VPN as well! This allows trivial bypass of firewalls and other security devices, because the traffic rides inside the trusted encrypted SSH connection.

Unless you have very specific requirements, which are documented and limited by configuration (such as PermitOpen, or a directive in a certificate or authorized_keys entry), disable SSH tunnelling via PermitTunnel no. Don't forget the tun/tap devices, if supported.

Even if you have a captive client, the user can press the ~C escape sequence to open an SSH "command line" and dynamically add a tunnel. So, if it isn't disabled at the server side, a user can still add a tunnel in a captive setup. See the man page on ssh.
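Locking this down server side, so that even a ~C addition is refused, might be sketched as follows (tighten or relax to match your documented requirements):

```
# /etc/ssh/sshd_config (fragment)
# Refuse tun/tap VPN tunnels and TCP port forwarding entirely;
# a client-side ~C request will then be denied by the server.
PermitTunnel no
AllowTcpForwarding no
GatewayPorts no
```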

5. Require strong MACs and Ciphers

If both the client and server support it, shouldn't we actively prevent the use of weak MACs and ciphers?

The default configuration on CentOS 7 allows MD5 and SHA1 based MACs, alongside the stronger hmac-sha2-256 and hmac-sha2-512. Perhaps we should specify only the strongest subset and thus put a mandatory bar on MD5 and SHA1, for example.
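One possible sketch, restricting sshd to SHA-2 MACs and CTR-mode AES ciphers (verify the exact list against what your clients actually support before deploying):

```
# /etc/ssh/sshd_config (fragment)
# Offer only SHA-2 based MACs and strong ciphers; MD5 and SHA1
# variants can then no longer be negotiated.
MACs hmac-sha2-512,hmac-sha2-256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
```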

6. Use only SSH version 2

Fortunately most modern configurations only enable version 2 (Protocol 2). Protocol version 1 has known design flaws, so should not be used. See an example from CERT here.

7. Client side

Don't forget the client side configuration as well.

If a client also only allows strong MACs and ciphers, for example, and you suddenly can no longer access a host due to cipher support, does this suggest a problem? Certainly worth investigating.
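The equivalent client-side lockdown in ~/.ssh/config (or /etc/ssh/ssh_config) might be sketched as:

```
# ~/.ssh/config (fragment)
# Only offer strong algorithms; a sudden failure to negotiate
# with a previously working host then becomes a useful signal.
Host *
    Protocol 2
    MACs hmac-sha2-512,hmac-sha2-256
    Ciphers aes256-ctr,aes192-ctr,aes128-ctr
```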


This is just a small sample of the options open to you. There are many other configuration options, including Match blocks for finer-grained configuration, adding options in the authorized_keys file, client IP lockdown, and SELinux tweaks, to name a few.

Oh, and monitoring. An unmonitored log is a useless log.

What would be your top recommendations?

Friday 2 September 2016

A look at an SELinux error message

As a fan of the SELinux security framework that runs as part of Linux, I thought it would be a good idea to improve my skillset in this area, and go beyond the basics.

Part of that is research, subscribing to SELinux mailing lists, playing with test setups and, of course, reading what others are saying via Google searches.

One result, whilst going off at a tangent to my goal, sparked my interest, as none of the posts I saw had the answer. It was this type of error:

libsemanage.validate_handler: MLS range s0-s1:c1,c3 for Unix user XXX exceeds allowed range s0:c1,c3-s1:c1,c3 for SELinux user XXX
libsemanage.validate_handler: seuser mapping [XXX -> (XXX, s0-s1:c1,c3)] is invalid
libsemanage.dbase_llist_iterate: could not iterate over records

For example here, just search for libsemanage.validate_handler.

The answer should be straightforward, and relies on the fact that SELinux enforces mandatory access controls; in other words, it stops you doing stupid or bad things (or at least what the policy states are stupid/bad things), and the tools stop stupid/bad configurations (again, at least what the developers state are such).

We can look at three types of user concept here. First, we have the traditional UNIX user as defined in /etc/passwd or your favourite password backend. Secondly, we have the concept of an SELinux user, which is more akin to a class or UNIX group. Finally, we have mappings which map the UNIX user to the SELinux user, which themselves (can) specify ranges.

SELinux users are managed by semanage user, whilst the mappings are managed by semanage login.

If we map a UNIX user to an SELinux user, we are stating that the UNIX user has a particular range of clearances to access and do stuff on the machine. However, when we set up the mapping we can further constrain the user, but cannot (shouldn't be able to) grant more than the base SELinux user they are mapped to allows.

So, here's an example on one of my test boxes.

The test box is running the mls policy and is using poly-instantiated directories for /tmp, /var/tmp and $HOME based on the context (i.e. we can only see a home area or temp area commensurate with our current level).

First, let's create an SELinux user of devuser_u which has the standard user role of user_r:

[root@centos-7-2-t1 ~]# semanage user -a -R user_r -r s0-s1:c1,c3 devuser_u
[root@centos-7-2-t1 ~]# semanage user -l

                Labelling  MLS/       MLS/                         
SELinux User    Prefix     MCS Level  MCS Range            SELinux Roles

devuser_u       user       s0         s0-s1:c1,c3          user_r

Now let's create a UNIX user mapped to that SELinux user:

[root@centos-7-2-t1 ~]# useradd -m -Z devuser_u dev1
[root@centos-7-2-t1 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          user_u               s0                   *
dev1                 devuser_u            s0-s1:c1,c3          *

Now let's say that we wish the user to be able to access a new category (compartment) of c8. We might try to do this via the mapping, but as this user is mapped to the SELinux user devuser_u, whose range does not include c8, this isn't going to work:

[root@centos-7-2-t1 ~]# semanage login -m -r s0-s1:c1,c3,c8 dev1
libsemanage.validate_handler: MLS range s0-s1:c1,c3,c8 for Unix user dev1 exceeds allowed range s0-s1:c1,c3 for SELinux user devuser_u
libsemanage.validate_handler: seuser mapping [dev1 -> (devuser_u, s0-s1:c1,c3,c8)] is invalid
libsemanage.dbase_llist_iterate: could not iterate over records
ValueError: Could not commit semanage transaction

Instead we either need to create a new SELinux user (and map the UNIX user to it) or modify the existing one. So, let's modify the existing one so that it includes this new category:

[root@centos-7-2-t1 ~]# semanage user -m -r s0-s1:c1,c3,c8 devuser_u
[root@centos-7-2-t1 ~]# semanage user -l

                Labelling  MLS/       MLS/                          
SELinux User    Prefix     MCS Level  MCS Range            SELinux Roles

devuser_u       user       s0         s0-s1:c1,c3,c8       user_r

Cool, that worked. Note that this does not update the individual mappings.

[root@centos-7-2-t1 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          user_u               s0                   *
dev1                 devuser_u            s0-s1:c1,c3          *

So, let's do that.

[root@centos-7-2-t1 ~]# semanage login -m -r s0-s1:c1,c3,c8 dev1
[root@centos-7-2-t1 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          user_u               s0                   *
dev1                 devuser_u            s0-s1:c1,c3,c8       *

So now all is well.

Providing the MLS/MCS range of the SELinux user is a superset of the range an individual user mapping has, you shouldn't have a problem.

A side note: I have found it is occasionally possible to modify the SELinux user so that it is no longer a superset; but then all other modifications against that user break until you fix it (i.e. make it a superset again, clear the categories or sensitivities you wish to remove from the member UNIX user mappings, then update the SELinux user); a bug, perhaps.

Thursday 25 August 2016

A Brief Look at EMail, SPF, DKIM and DMARC

Having recently built my home email server and wanting to be a good MTA, I decided to look at a number of anti-spam mechanisms. Whilst host-based anti-virus solutions and the like offer anti-spam engines, a number of other technologies also exist to help MTAs determine if email is legitimate, and these can have unintended consequences. There is also a brief look at understanding the internals of an organisation without access to it (recon).

Here I'll briefly cover four of them. The first three are SPF, DKIM and DMARC. The fourth is a common practice whereby, if you see email claiming to originate from you (as defined by the From: address) which did not actually originate from your servers, you simply discard it. A good number of organisations do this.

Finally, we will have a look at a couple of examples.

The first three listed aren't always enforced. However, that doesn't mean your email is going to get through. Anti-spam engines can and do use the results to rate the email, so a misconfigured setup can result in mail being rated as spam or discarded, not by SPF/DKIM/DMARC directly, but by the anti-spam mechanism.

Further, as you will see, to have a good all-round solution you will need DNSSEC fully implemented.

There is a wealth of information on this subject, including from Google itself. Just search for it.


At a basic level, Sender Policy Framework (SPF) works by a domain publishing a DNS TXT record in its domain to describe who sends mail on behalf of the domain. Typically this ends with an all mechanism which describes how the receiving MTA should handle unmatched senders. For example:

text = "v=spf1 +mx -all"
text = "v=spf1 -all"
text = "v=spf1 ~all"
text = "v=spf1 ?all" # NB: the main rec is a redirect

A minus preceding the all indicates a 'hard fail', a tilde indicates a 'soft fail', and a question mark indicates it should be treated as if the SPF record didn't exist (i.e. it is not a positive validation; it is 'neutral').
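As a toy illustration of those qualifier semantics only (not in any way a real SPF evaluator, which must process every mechanism in order), a shell sketch classifying the trailing all might look like:

```shell
#!/bin/sh
# Toy classifier for the trailing 'all' qualifier of an SPF record.
# Only inspects the final mechanism; real SPF evaluation is far richer.
spf_all_disposition() {
  case "$1" in
    *-all)  echo "hard fail" ;;
    *~all)  echo "soft fail" ;;
    *\?all) echo "neutral" ;;
    *+all)  echo "pass (allows anyone - likely a misconfiguration)" ;;
    *)      echo "no explicit all" ;;
  esac
}

spf_all_disposition 'v=spf1 +mx -all'   # hard fail
spf_all_disposition 'v=spf1 ~all'       # soft fail
spf_all_disposition 'v=spf1 ?all'       # neutral
```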

Oftentimes, the receiving MTA does not discard a mail, even if it is a hard fail. However, an anti-spam solution may make a determination or adversely rank it. It is up to the policy that the receiver chooses to implement as to how to proceed. My server rejects hard fails but currently allows soft fails (which may get rated as spam by either my mail server or mail client).

As a result of this, and of organisations rejecting email that claims to come from them but arrives from an unknown host, mailing lists will often rewrite the From: address, which is why you see the "on behalf of" statements in your email.


At a basic level, DKIM works by adding a mail header to outgoing messages: a cryptographic signature that protects certain header fields, for example From: and Subject:, from being altered, and asserts that the message was signed by an authorised host.

The linkage to the authorised host is via a selector, which is a hook to a DNS TXT record. For example, an email may have the selector s2048; we prepend this, along with the subdomain _domainkey, to the domain to get the published record, including the public key. The following is truncated for brevity.

text = "k=rsa\; p=MIIBIjANBgkqhkiG.....

With DKIM there is a lot of debate about what mailing lists and forwarders should do. As previously mentioned, for a number of reasons the From: field will get rewritten; however, this then breaks the DKIM signature. So, do you keep the signature and risk a receiving MTA or end-user anti-spam solution treating the mail as spam, or not?


Domain-based Message Authentication, Reporting & Conformance (DMARC) sits on top of SPF and DKIM. It allows a domain to publish a policy on how to handle messages that fail SPF and DKIM tests, and also publish a reporting function.

Like SPF and DKIM the policy is published as a DNS TXT record. For example:

text = "v=DMARC1\; p=reject\; pct=100\;\; aspf=r\; adkim=r\;"
text = "v=DMARC1\; p=none\;"
text = "v=DMARC1\; p=reject\;"
text = "v=DMARC1\; p=reject\; pct=100\;\;"

The p= part is the policy: none means just report on failures, quarantine means mark failed messages as spam, and reject means reject the message at the SMTP layer.

Again, it is up to the receiving MTA to determine if they honour the policy or not.

Example: Sending to (some) addresses

A recent email I sent had a delivery receipt set. The receipt contains an attachment with the header of the original message as sent, which revealed a few interesting things about the path it (most likely) took. It also reveals a bit of information about how the networks are set up and what's running on them. This is useful information you can publicly get hold of; remember that the organisation voluntarily gave it away as part of a standard email exchange.

Here is an edited version containing the Received: headers.

Received: from ( by ( with Microsoft SMTP
 Server id 15.1.557.21
Received: from
 (2a01:111:f400:7e06::204) by
 (2a01:111:e400:7a1b::34) with Microsoft SMTP Server id 15.1.587.13 via Frontend Transport
Authentication-Results: spf=fail (sender IP is;; dkim=fail
 (signature did not verify);; dmarc=fail
 action=oreject;; dkim=fail (signature did not
Received-SPF: Fail ( domain of does not
 designate as permitted sender); client-ip=;;
Received: from ( by ( with Microsoft SMTP
 Server id 15.1.587.6 via Frontend Transport
Received: from ( by ( with Microsoft SMTP Server id
Received: from ( by ( with Microsoft SMTP Server id 8.3.342.0
Received: from ( by
 ( with Microsoft SMTP Server id

We have to read these bottom up to track the flow.

First we note from (or other sources) that a Microsoft SMTP server id of 15.1.x.x is probably Exchange Server 2016; it seems that Microsoft are running a really new version of Microsoft Exchange ;-)

We also note that is Update Rollup 5 for Exchange Server 2010 SP3 and 8.3.342.0 is probably Update Rollup 12 for Exchange Server 2007 Service Pack 3.

Now for the interconnect. is my mail server; the origin of the email. This connected to (ip address

The next line up then states the message was received from ( by a different host. This tells us that the public interface for has an IP address of and an internal interface of (which is also an RFC1918 address; not valid on the Internet; hence internal – or there is a bit of NAT). We can repeat this up the chain to reveal the likely path.

We can also see that is also known internally as (the reverse path with the same IPs and the receiving path).

Finally, the outgoing path goes from a public address to an rfc1918 address. This suggests some DMZ with a B2B link to Microsoft (there are other possibilities).

In some cases, viewing whois records can also offer some intelligence.

I will leave it as an exercise for the reader to produce a diagram of the interconnect.

What is of particular interest is that the email goes into BT then back out again to the Microsoft cloud.

This is where the problem occurs. Earlier on, and further down in the header we see this:

Received-SPF: Pass ( domain of
 designates as permitted sender);

It shows that on its initial incoming path the BT mail servers correctly validated the SPF rule. However, as it went back out to the cloud this test was re-done (see the Received-SPF and Authentication-Results headers in the earlier trace), along with the DKIM signature check and the DMARC policy for my domain. It all failed, because the check was now based on one of the internal BT relays being authorised to generate mail for my domain; some of the other fields had already been munged, which broke DKIM as well, so DMARC then also failed.

Epic fail.


So what could be worse? Perhaps marking any sender with an SPF fail policy as spam, just because you evaluated the SPF policy against your own Internet-facing email relay server?

This is what the Raynet mail forwarding service appears to be doing; let's see if you concur...

The reason I was looking at this one is that none of my emails were getting through when sent to this account when it was set up to forward to my ISP.

First, the original failure was because the forwarder forwarded the email as-is, without changing the From: field. Accordingly my ISP was discarding the email, since the Raynet server isn't one of my ISP's email servers, yet the mail claimed it originated from my ISP.

How do I know? Well, I changed the forwarding to my address and did the same thing. A mail I sent came back (in the logs at least) with two SPF fails: one resulting in it being marked as SPAM by the second Raynet server, and then one at my end, as the From: field asserted it was from my own domain, but my SPF rules stated 'computer says no' (hard fail).

The key headers (cut for brevity) are:

Received: from ( [])
Received: from ( []) by
Received: from ([]:33182) by
  with esmtp (Exim 4.82_1-5b7a7c0-XX)
Subject: [SPAM] test
X-hMailServer-Spam: YES
X-hMailServer-Reason-1: Blocked by SPF () - (Score: 3)
X-hMailServer-Reason-Score: 3

As we can see, there are X-hMailServer headers that state that this email is spam due to an SPF failure.

Now, given that I'm not running hMailServer (a freeware mail server) and Raynet's first mail server is running Exim 4.82, we can reasonably assume that is relaying it to, and it is this second server that is running hMailServer (we can research that online).

Secondly, as I know my SPF records are valid (others such as Google, Yahoo, etc. all confirm they are OK), we can reasonably assume that the second Raynet server is performing the SPF check against the first Raynet server, which will of course always fail where there is an SPF record for the sending domain.

Whois is also interesting with this one.

I'm sure you have seen many other examples of anomalies that are misconfiguration or logic issues rather than a suspect email.