The ransomware problems reported by The Reg over the past few weeks are enough to make you, er, wanna cry. Yet all that’s happened is that known issues with Windows machines – desktop and server – have now come to everyone’s attention and the bandwidth out of Microsoft’s Windows Update servers has likely increased a bit relative to the previous few weeks.
But there’s more to life than Windows XP and the day-to-day computing landscape consists of a rich sediment of accumulated and inherited non-Windows operating systems. And my fiver says that only a tiny minority of you have leapt into action and rushed to update these particular systems in the wake of WannaCry.
What exactly are we talking about? According to netmarketshare.com the non-Windows desktop market share is about 10 per cent – two per cent of which is Linux and 3.6 per cent macOS. In the server world, looking this time at some data from Spiceworks, about 12 per cent of surveyed on-premises servers run non-Windows OSs, with RHEL at 1.2 per cent and various other Linuxes making up 10.5 per cent. The core server Linuxes aside from RHEL are Ubuntu, SUSE, CentOS, Debian and Oracle Linux.
Server vs web farm
But wait, let’s look at the stuff that’s actually accessible directly from the web. Now, by “directly” I mean something that’s publicly accessible – it may or may not be sat behind a load balancer or some such but you don’t need to, say, use a VPN or other remote access connection to get to it from outside. The story’s different here: according to W3Techs, Linux and Unix-like operating systems account for 66 per cent of the world’s web servers, while Windows has 33 per cent.
Of those Unix-ish web-facing servers, 37 per cent can be clearly identified as something running Linux of some description – mainly Ubuntu, Debian and CentOS with a smattering of also-rans.
Linux is a big deal when it comes to threats, then. And don’t trot out the “Ah, but Linux is much less susceptible to viruses than Windows” argument. We’re not talking about viruses in particular, but about vulnerabilities in general. Remember, the damage WannaCry inflicted was down to it exploiting an inadequacy in an old version of Microsoft’s file-sharing protocol. Yes, it mainly got in through a virus (a worm, actually), but an individual able to access the target machine manually could have exploited the same flaw.
New version, new danger
Upgrading your operating system is a non-trivial thing to do, though. When a new version of your chosen operating system – Windows, Linux or whatever – is released, there’s a chance that any apps you’re running – particularly any bespoke or legacy ones – may have some kind of problem if you upgrade the operating system under them. But do you have to?
Let’s look at a couple of the Linuxes I’ve mentioned, starting with Red Hat Enterprise Linux (RHEL). I was a bit surprised when I saw not so long ago that someone’s server was running RHEL 5. After all, RHEL 7 has been on the market for more than three years. But look at the lifecycle and you see that 5.x has only just fallen out of mainstream support, and is under extended support until 2020. Yes, it’s ancient (its last virtual birthday cake had ten candles) but its parents still love it.
As W3Techs cite Ubuntu as the most common Linux, let’s look at that. Ubuntu has two concepts: standard releases (supported for nine months) and “long-term support” releases that are supported for five years. As I write this, the oldest version of Ubuntu Linux still under maintenance is 14.04; 16.04 has been out a year (so can be considered stable by now) and will see updates until early 2021. And CentOS is currently at release 7, but version 6, released in 2011, is still supported until 2020. CentOS 5 has only just fallen out of support as of March 2017.
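Before worrying about support windows, it is worth confirming what each box is actually running. A minimal sketch for the distributions mentioned above – exact tool availability varies by release, and `ubuntu-support-status` only ships on Ubuntu:

```shell
# Debian/Ubuntu: report the distribution and release number
lsb_release -a

# Ubuntu additionally ships a tool that flags packages
# that have fallen out of their support window
ubuntu-support-status

# CentOS/RHEL keep the release string in a plain text file
cat /etc/redhat-release
```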
As these operating systems are still supported, then, with both functional and security patches being produced, there’s just one thing left – to actually do something about it. The patches are available; all you have to do is apply them.
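Applying those patches is usually a one-liner. A sketch for the distributions discussed here; note that the `--security` filter relies on errata metadata that Red Hat publishes but the CentOS repositories do not, so on CentOS a plain update is the norm:

```shell
# Debian/Ubuntu: refresh the package lists, then apply all pending updates
sudo apt-get update && sudo apt-get upgrade -y

# RHEL: apply only packages flagged as security errata
# (needs yum-plugin-security on RHEL 5 and 6)
sudo yum update --security -y

# CentOS repositories carry no errata metadata, so update everything
sudo yum update -y
```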
There’s one slight complication here, and that’s minor versions. We talk about, say, RHEL 5 or CentOS 7, but each of these versions has sub-versions and they do fall out of support over time. Take RHEL 6, for instance – extended support for 6.0 ended in 2012, but 6.7 has more than a year to go. Now, there’s a difference between applying patches for, say, version 6.2 and updating from 6.2 to 6.3: in-version patches will generally not affect applications, but minor version upgrades have a higher risk. Hence, it’s easy to shy away from doing them. And of course once you’ve missed a few minor version upgrades you’re getting closer to being out of support and the security patches no longer being produced.
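On RHEL you can make that in-version/minor-version distinction explicit by pinning a host to a specific minor release, so routine patching keeps flowing without silently dragging the box to the next point release. A sketch, assuming a subscription-managed RHEL 6 host:

```shell
# Pin this host to the 6.7 content set; yum will then only serve 6.7 errata
sudo subscription-manager release --set=6.7

# Confirm the pin
sudo subscription-manager release --show

# When you're ready for the next minor version, release the pin and update
sudo subscription-manager release --unset
sudo yum update -y
```

The pin turns the minor-version upgrade into a deliberate, scheduled event rather than a side effect of a routine `yum update`.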
But is this justified? Is it risk management or complacency? I’d say complacency. Why? Because managing the risk of breaking your applications is a relatively straightforward thing to do in the average organisation. Particularly if you have a virtualised world, because you have so many options to test and/or roll back. You could have a process of cloning the server VM into a test VLAN and testing the update. If you can’t do that then at least snapshot the live server pre-upgrade so that the rollback is a simple shutdown-rightclick-rollback-reboot. And of course this is just what you should be doing anyway when installing in-version patches, as it’s an easy ride back to the working version if you break something.
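The snapshot-before-upgrade routine above can be done from the command line too. A sketch, assuming a KVM/libvirt host and a guest named `legacy-app` (both names are illustrative):

```shell
# Take a snapshot of the guest before touching it
virsh snapshot-create-as legacy-app pre-upgrade \
    --description "Before 6.2 -> 6.3 minor version upgrade"

# ...run the upgrade inside the guest, then test the applications...

# If something breaks, roll the whole guest back in one command
virsh snapshot-revert legacy-app pre-upgrade

# Once the upgrade has bedded in, tidy up the snapshot
virsh snapshot-delete legacy-app pre-upgrade
```

The equivalent buttons exist in vSphere, Hyper-V and the rest; the point is that the rollback path is a couple of commands, not a rebuild.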
And of course
All those of you out there who rushed into a massive Windows patching campaign need to kick yourselves. On one hand it was a good thing to do – the later Petya attack exploited the same SMB flaws and so your patching will have slammed the door. On the other hand, what you should have done was rush into a massive general patching campaign – across all your systems, not just your Windows boxes. Of course this doesn’t apply to those whose Linux, AIX, Cisco, NetGear, Juniper, HP, Dell, IBM and other patches are all appropriately recent, but given that there aren’t many smug people reading this, it probably does apply to you.
If you’re sceptical about this idea that patching may well be more important than upgrading, let’s look at something Reg security reporter John Leyden wrote on June 29. Looking into an analysis of the NHS WannaCry attack, he pointed out that the unsupported operating systems weren’t the primary reason for the attack, and that “post-hack technical analysis revealed that Windows XP systems were more likely to crash than get infected”. Instead, “Windows 7 systems left unpatched… were a much bigger problem”. Unexpected but true.
So, yes, operating system versions that no longer have security patches produced are a problem. But in many cases – including the recent NHS attack – the culprit is the systems that were supported. In the aftermath of WannaCry I castigated risk managers for failing to upgrade their ancient, unsupported operating systems. What I should have done, though, is to kick them even harder for not patching the stuff for which off-the-shelf patches were still being regularly produced.
Because with today’s technology testing and applying patches is often so easy that you may well deserve a P45 for not doing it. ®