WannaCrypt: A test for an organization’s security posture?
A week after the news of the WannaCry (WannaCrypt) ransomware attacks, the security-vendor FUD, and the general misunderstanding of the attack’s root cause, where are we and what exactly have we learned? Other than that I am still ranting about it?
Where are we? Honestly, that’s the question every IT shop should be asking itself. The SMBv1 protocol flaws that underpinned this attack had patches available for well over a month before it hit, and Microsoft even took the unprecedented step of releasing patches for Windows versions that are no longer supported. As I said previously, this event was completely avoidable with base-level care and feeding of a company’s IT systems. If an organization got hit by this ransomware, it was because they did not patch or upgrade their systems. Perhaps they didn’t have the resources, or they just rolled the risk dice and came up snake-eyes, but there is no other way to look at this.
Nearly every auditable standard has requirements for timely application of vendor-supplied patches, which raises the question: how the hell did this happen to organizations that claim compliance with ISO, NIST, PCI-DSS, or other such standards? Oh, I am sure there will be shouts of “scope,” “segmentation,” “compensating controls,” or other nonsense, but at the end of the day, they simply didn’t patch.
Eventually every dice player craps out and the house will always win in the long run.
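To make the “timely patching” point concrete, here is an illustrative sketch (not anything from the post itself) of the kind of compliance check a vulnerability management program could run against a host inventory. The hostnames and inventory structure are hypothetical; the KB numbers are examples of the fixes Microsoft published for MS17-010 in March 2017, though a real check would map each OS version to its applicable updates.

```python
# Hypothetical patch-compliance sketch: flag hosts whose installed-update
# inventory contains none of the MS17-010 (SMBv1) fixes.
# KB IDs below are illustrative examples of the March 2017 MS17-010 patches.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012213", "KB4012598"}

def unpatched_hosts(inventory):
    """inventory: dict mapping hostname -> set of installed KB IDs.
    Returns sorted hostnames with no MS17-010 patch installed."""
    return sorted(host for host, kbs in inventory.items()
                  if not MS17_010_KBS & set(kbs))

# Hypothetical inventory for demonstration.
inventory = {
    "file-srv-01": {"KB4012212", "KB3123479"},  # has an MS17-010 fix
    "legacy-xp-02": {"KB958644"},               # nothing from MS17-010
}
print(unpatched_hosts(inventory))  # ['legacy-xp-02']
```

A real program would feed this from an actual software inventory or scanner export, but the logic is the point: with weeks of lead time, a check this simple would have surfaced every exposed host.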
Doubling down on bad bets?
For larger organizations that carry cyber security insurance, I would argue that underwriters should be asking, “What was the impact of WannaCry on your organization?” If you suffered significant impacts, the only explanation is a failure of your vulnerability management or patching programs. I don’t want to hear about antivirus solutions lacking signatures, IPS/IDS updates, segmentation, or other excuses. Patching systems and not running unsupported operating systems was the real fix here. I am not aware of a single patched system having issues with any of the WannaCry variants – if you are aware of one, please point to the source of the story in a comment 🙂
Underwriters should really be looking long and hard at this. There was a long stretch of time between the release of the patch and the attack. We had weeks to patch our systems, unlike most situations where an attack is launched within hours or days of an exploit’s release. We had plenty of time and warning to avoid this.
I don’t say this lightly, but face it: insurance rates and premiums are based on the aggregate, and yes, those that get hit can affect everyone’s rates. It’s no different from two homes built on a floodplain – hopefully you will get credit for having yours on stilts.
In comes the FUD
My inbox this week exploded with vendors of all sorts selling their cyber snake oil for “protecting your enterprise from WannaCry,” blah blah blah. OK, let’s be serious for a moment. Cyber security is sometimes hard, esoteric, and demanding of specialized skill sets and tools, but this was NOT one of those times. You did not need to know the inner workings of the SMB protocol to download and apply the patch; you needed basic IT skills: how to download a patch, schedule outages, double-click a few things, and so on. I may someday need the latest and greatest security widget, but it was not needed here, and companies trying to hawk their wares using this as a fear tactic are not ones I want to do business with.
Perhaps I’m an old-fashioned, grizzled IT-turned-InfoSec guy, but I believe that responsibility never goes out of style. We are ultimately responsible for ensuring our executives know the risks they are accepting (if they accept them) and for minimizing the impact of the decisions made, to the best of our ability. Maybe I’m just idealistic, but these events should not occur when there is such a long lead time for remediation.