Black Hat 2017 Takeaways: Treating the Root of End-User Risk


Last week, I got to spend time with many other members of the security community at Black Hat USA 2017. Despite being in the infosec space for the past ten years, this was my first time attending the event, and I was impressed with the breadth of topics covered. Not surprisingly, I found a lot of interesting talks in the “Human Factors” track, but it was refreshing to see how broadly this community is looking at security.

Though future blog posts will dig into additional topics that piqued my interest during my time in Las Vegas, I wanted to use this post to highlight the point that most resonated with me during the show, which I heard during the keynote by Alex Stamos, Facebook’s Chief Security Officer. While Stamos offered a number of great insights, the one that stuck with me was his observation that we too often focus on fixing a specific issue or bug and fail to think about the root cause and how we can address it. I found this to be sage advice not just for the security space, but for life in general. That’s not to say we should go philosophical and “meta” with every problem presented to us each day, but when you see similar things happening over and over, it’s worthwhile to take a step back and look at the situation with fresh eyes. Doing so can reveal a fundamental, overlooked issue that has been causing the repeated problems.

Over the past ten years, I’ve spent a lot of my time thinking about non-technical end users and their impact on security. Many of my peers have been thinking about them as well, and asking questions: Are we doomed? Are users unteachable? If not, why do we keep seeing the same mistakes made? My feeling is that we certainly aren’t doomed; rather, in line with what Stamos discussed in his keynote, I think we historically haven’t looked at and addressed the whole problem.

Infosec professionals tend to look at technology solutions as panaceas; many of my peers continue to believe that, eventually, technical advancements will allow us to automate problem solving so effectively that human error can’t be introduced. While there certainly are cases where automation is improving efficiency and effectiveness, technical solutions are essentially augmentations of human processes; technology enhances, rather than replaces, the human component of security. Email filters are a great example: these tools automatically stop a large portion of junk and malicious messages from reaching our inboxes, sparing us from having to weed out every bad message ourselves. As good as those tools are, though, they still miss phishing attacks. And because email is essential to daily business, users — including us — are making decisions about messages every day; we couldn’t do our jobs if we didn’t.

When a mistake is made — like credential compromise or the installation of malware, a virus, or ransomware — the response typically focuses only on the technical side, though. We add a signature for the binary to our block list. We reimage the computer and restore a backup, and send the offending user on his or her way. If it was a big enough or bad enough issue, there might be an organization-wide response, including a corporate communication and/or awareness exercise. And while all of these things are helpful and necessary in the wake of a successful phishing attack, we’re missing part of the problem.

The reality is that users are consistently showing us the root of the problem: there are things about cybersecurity they don’t understand — or that they don’t understand the consequences of. And it’s this lack of knowledge that is leading them to make poor decisions. The question is whether we’re doing enough to remediate that aspect of the issue.

In my opinion, it’s time we take a more holistic view of incident response. If a human is part of the process, you can’t just remediate the device and look to technology to solve the problem; you also have to address the user’s knowledge gaps. In the past ten years, we’ve seen the proliferation of APIs, with almost every piece of software we use providing a programmatic way to connect it to something else. As security professionals, we should be looking for ways to bridge the gap between these systems, and pushing vendors to help, so that we can address our challenges more completely.
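To make that idea concrete, here is a minimal, purely illustrative sketch of the kind of integration I have in mind: when an incident-response workflow remediates a phishing compromise, it also calls a training platform’s API to assign the affected user a relevant module. The endpoint, fields, credential, and module names below are hypothetical placeholders, not any particular vendor’s API; the point is simply that most IR and training tools now expose REST APIs that make this kind of loop-closing feasible.

```python
# Hypothetical sketch: connecting incident response to user education.
# All URLs, fields, and identifiers are illustrative placeholders.

import requests

TRAINING_API = "https://training.example.com/api/v1"   # placeholder URL
API_KEY = "REPLACE_ME"                                  # placeholder credential


def handle_phishing_incident(incident: dict) -> None:
    """Called by the incident-response workflow after a device is remediated."""
    user_email = incident["affected_user"]
    threat_type = incident.get("threat_type", "phishing")

    # In addition to the usual technical remediation (blocking the binary,
    # reimaging the machine), enroll the affected user in a targeted module
    # so the knowledge gap that led to the click is also addressed.
    response = requests.post(
        f"{TRAINING_API}/assignments",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "user": user_email,
            "module": f"recognizing-{threat_type}-attacks",
            "reason": incident.get("ticket_id", "unspecified incident"),
        },
        timeout=10,
    )
    response.raise_for_status()
    print(f"Assigned follow-up training to {user_email}")


if __name__ == "__main__":
    # Example invocation with a mocked incident record.
    handle_phishing_incident({
        "affected_user": "jane.doe@example.com",
        "threat_type": "phishing",
        "ticket_id": "IR-1234",
    })
```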

Ultimately, I walked away from Black Hat encouraged. Sure, there were a lot of exploits discussed, and a lot of scary things that can happen. And I’m certain all of us have security challenges we wish we were tackling better. But even as we all face pressure to minimize downtime and get compromised devices and systems back up and running as quickly as possible after end-user errors, I encourage you to join me in periodically taking some time to think about the bigger picture. While we absolutely need to fix individual problem instances, we also need to think about root causes, particularly for problems (like successful phishing scams) that present themselves with regularity.

Too often we focus on what’s directly in front of us and don’t consider the fundamentals. Just as we’ve used technology to enhance and advance other processes, I believe we can use it to improve end users’ recognition of and response to cybersecurity threats. Fortunately, the technology to share information between systems is already there; we just need to identify when and where it makes sense to use it. This is something I intend to keep thinking about and pushing forward for end users, and I hope to hear at next year’s Black Hat how others have taken this message and applied it.

 
