Identity Threat Defense

Why Deception Gives Cybersecurity Teams the Upper Hand (part 2)


The first part of this guest post series from Kevin Fiscus, SANS instructor and cybersecurity expert, explained the challenges of early threat detection strategies. In part 2, we look at how a deception-focused strategy can confuse attackers, limit lateral movement, and give security teams back the advantage.

In part 1, I discussed the value of recognizing normal activity in an IT environment, along with the challenges of that approach. How, then, can we make detection successful?

The answer is to create environments where normal can be easily defined. This is accomplished by placing resources on the network, such as listening ports, services, files, URLs, credentials, hosts, or even entire networks, that have no legitimate business use. The key is that these resources are never used or accessed in the normal course of business. For these resources, normal means nobody touches them. If anyone or anything attempts to interact with them, that activity is, by definition, abnormal and cause for concern.

Creating these "traps" for attackers significantly accelerates detection, with some studies showing a reduction in detection time from months to days or even hours. Once an attacker interacts with any of the traps, an alert is generated, the SOC team is notified, and incident response measures can be enacted. False positives are possible but extremely uncommon: these resources are invisible to legitimate employees, and alerts are triggered only upon direct interaction.
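To make the idea concrete, here is a minimal sketch of such a trap: a decoy TCP listener that no legitimate user or system ever touches, so any connection at all raises an alert. The function name, the in-memory `alerts` list, and the alert fields are all hypothetical stand-ins for whatever SIEM or ticketing integration a real SOC would use.

```python
import socket
import threading
from datetime import datetime, timezone

def start_decoy_listener(alerts, host="127.0.0.1", port=0):
    """Start a decoy TCP service in a background thread.

    Nothing legitimate ever connects here, so every connection is,
    by definition, abnormal and worth an alert. `alerts` is a plain
    list standing in for a real alerting pipeline (hypothetical).
    Returns (server_socket, bound_port); close the socket to stop.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))        # port=0 lets the OS pick a free port
    server.listen(5)
    bound_port = server.getsockname()[1]

    def accept_loop():
        while True:
            try:
                conn, addr = server.accept()
            except OSError:          # listener socket closed: shut down
                return
            # Any touch is abnormal: record who, from where, and when.
            alerts.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "source_ip": addr[0],
                "source_port": addr[1],
                "decoy_port": bound_port,
            })
            conn.close()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server, bound_port
```

A real deployment would bind a network-facing address and forward alerts to the SOC; the loopback address here just keeps the sketch safely self-contained.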

If, however, the activity does come from an attacker on the network, they can be investigated and handled appropriately.

Interacting with ghosts – influencing attacker behavior without them being aware of it

Making this detection strategy even more interesting, these "notional" resources (those with no other business function) serve multiple purposes. They facilitate detecting the attacker, but they can also influence the attacker's behavior, allowing defenders to direct the attacker's actions without the attacker ever becoming aware of it.

We can cause attackers to spend valuable time interacting with ghosts rather than targeting legitimate resources, distracting them while we gain valuable threat intelligence.

By creating systems that suggest a strong security posture (e.g., fake monitoring tools), we may be able to deter attackers from penetrating further into the environment. By creating the impression of live systems, open ports, or even vulnerabilities where none actually exist, we can delay attackers long enough to take appropriate action.
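One simple way to create that impression of a live (and apparently vulnerable) system is a banner decoy: a listener that answers with a fake service banner and nothing else, so scanners flag it as a target worth probing. The function name and the banner string below are illustrative assumptions, not a prescribed implementation.

```python
import socket
import threading

def start_banner_decoy(banner: bytes, host="127.0.0.1", port=0):
    """Present a fake service banner to make a port look live.

    Hypothetical sketch: the banner imitates a real service so the
    attacker wastes time probing something that does not exist.
    Returns (server_socket, bound_port); close the socket to stop.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(5)
    bound_port = server.getsockname()[1]

    def serve():
        while True:
            try:
                conn, _ = server.accept()
            except OSError:            # listener closed: stop serving
                return
            try:
                conn.sendall(banner)   # look real, offer nothing
            finally:
                conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return server, bound_port
```

For example, `start_banner_decoy(b"SSH-2.0-OpenSSH_5.3\r\n")` advertises what looks like an old SSH build; in practice such a decoy would also feed the alerting pipeline, since any connection to it is abnormal.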

Threat intelligence through deception

By distributing fake resources that, when accessed, actually collect information about the attacker, we can increase our knowledge of attacker techniques and even facilitate attribution. For this to work, however, it must be done carefully: if the attacker detects the deception, they could gain the advantage.

Prior to launching Operation Overlord (the D-Day invasion of France), the Allies launched Operation Bodyguard. The Germans knew that the Allied powers would invade France, but they did not know where or when. One component of Operation Bodyguard was a deception plan designed to convince the Germans that the Allies would cross the English Channel at the Pas de Calais. To accomplish this, the Allies "created" a fake army led by General Patton. They faked communications traffic, used mock military equipment, and fed information to the Germans via double agents.

The plan worked because it used the Germans' preconceptions against them: a landing at the Pas de Calais was entirely reasonable and thus fit German expectations. The plan also had to be very detailed, because any inconsistency could alert the Germans to the deception. Its goal was to get the Germans to reinforce their defenses in one place while the Allies planned to land at another. In other words, the Allies used deception to manipulate the actions of their adversary. Had the Germans detected the ruse, the plan, and possibly the entire invasion, could have failed. These lessons must be applied to the world of cyber.

When implementing a deception strategy, it is critical that our tactics align with what the attacker expects, influence the attacker to take the desired actions, and remain undetected by the attacker. If done correctly, these cyber deception tactics can fundamentally change the information security "game."

It is often said that defenders must be right 100% of the time while attackers need be right only once. Mistakes on the part of the attacker cost them little while mistakes on the part of defenders can represent significant problems.

Cyber deception can flip that script on its head.

By deploying multiple cyber deception resources throughout the network, we create an environment that is decidedly hostile to the attacker. The attacker may avoid the overwhelming majority of these cyber lamps and digital squeaky floorboards, but they only need to interact with one for detection to occur. In other words, a properly implemented cyber deception strategy means that the attacker now needs to be right 100% of the time, while defenders need to be right only once. This is a game changer. Done correctly, we can make attacks against our networks so detectable that it may literally be easier to break in physically than to execute a traditional cyber-attack.

Done correctly, deception can truly give defenders back the advantage and change the world of information security.

Kevin Fiscus is an information security expert with over 27 years of IT experience, over half of which has been focused exclusively on information security. He is the founder and lead consultant for Cyber Defense Advisors where he performs security and risk assessments, vulnerability and penetration testing, security program design, policy development, and security awareness.

Learn more about deception and in-network threat detection
