4 Reasons Data Loss Prevention Tools Aren’t Cutting It

Data loss prevention software is designed to detect data leakage and exfiltration by keeping an eye on sensitive data while it is in use, in motion, and at rest. There’s a reason these tools exist. The data loss problem is a big one, and growing every year. In fact, the first half of 2017 saw more data breaches than the entire previous year.

In 2017, about 3 million records were stolen every day. While only the biggest data loss incidents, like the Equifax breach, make the headlines, it’s a risk that applies to businesses of all sizes, types, and industries.


Data loss prevention tools, often called DLPs for short, are understandably popular given the magnitude of the data loss problem. However, they often fall short of the mark when it comes to actually preventing data loss, and especially when it comes to detecting and investigating incidents as they happen and after the fact.

Let’s take a look at why traditional data loss prevention tools aren’t fitting the needs of security teams anymore and how your organisation can rethink its approach to data loss.

1. DLPs Are Difficult to Deploy and Maintain

Many organisations that invest in DLPs quickly discover that these tools are difficult and time-consuming to deploy. It can take quite a bit of manpower to set them up before it’s possible to use them to their full extent. For this reason, incomplete DLP deployments are quite common: many organisations get partway through the deployment process, grow frustrated, and take a step back.

Even if you do manage to deploy a DLP, these tools are inherently high-maintenance. They require endless fine-tuning of the rules and signatures used to detect potential data loss, and they often force companies to set up and maintain complex data classification schemes. While data classification may be part of your organisation’s compliance requirements to begin with, it’s not ideal to have to translate that scheme into your DLP and maintain it continually.
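To see why this tuning never ends, consider a simplified, hypothetical content-inspection rule of the kind DLPs rely on. The pattern and function names here are illustrative only, not taken from any particular product:

```python
import re

# Hypothetical, simplified DLP-style rule: flag anything that looks like a
# payment card number. Real products ship thousands of such signatures.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def flag_sensitive(text: str) -> bool:
    """Return True if the text appears to contain a payment card number."""
    return bool(CARD_PATTERN.search(text))

# A real card number is caught...
print(flag_sensitive("Card: 4111 1111 1111 1111"))       # True
# ...but so is an innocent order reference: a classic false positive.
print(flag_sensitive("Order ref 1234 5678 9012 345"))    # True
```

Every false positive like the order reference above prompts another exception or rule tweak, which is exactly the endless fine-tuning cycle that makes DLPs so high-maintenance.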

Additionally, DLPs come with heavyweight agents that commonly slow down endpoints and frustrate end users. As a result, many technical users will bypass them, sometimes for benign reasons, such as getting their jobs done, and sometimes so they can exfiltrate data or use it in ways that violate policy.

The bottom line is that DLPs are slow to deploy, complicated to maintain, and resource-draining.

2. DLPs Don’t Actually Stop Data Loss

All of the above might be a worthwhile trade-off if it were certain that your DLP would catch every instance of data loss before it was too late. Unfortunately, that’s not the case. DLPs are notoriously ineffective at stopping data loss caused by insider threats, because they simply weren’t designed for this common and insidious use case.

As we mentioned above, DLPs are often trivial for technical users to bypass, which means that if someone on the inside really wants to exfiltrate data, they will probably find a way to do it. We had one customer, a CISO at a big financial services organisation, tell us, “I haven’t seen an enterprise DLP my team can’t bypass in a matter of seconds.” Yikes.

DLPs are also incomplete in the sense that they do not offer all-in-one detection, deterrence, and mitigation of data exfiltration and insider threats. While they may very well catch some instances of attempted data exfiltration, they can’t do much to help you investigate or respond effectively. They certainly don’t have proactive user education built in to reduce accidental misuse. And, as we mentioned above, they have a real blind spot for insider threats, since they are designed primarily to regulate the exchange of network data.

DLPs often force organisations to respond far too slowly after an incident takes place, because it can take so long to gain context around what happened.

3. DLPs Make Communication With Data Owners Harder

One of the major downsides of DLPs is that they often make it harder, rather than easier, for IT admins to communicate with data owners. Organisations often struggle with communication between these parties to begin with, and the obfuscating nature of DLPs can make this worse.

For example, the security team at your organisation likely has no idea which marketing documents contain sensitive information. They would have to ask the marketing team which files are sensitive, and at the pace data is created today, that would hardly be a one-time request; they’d have to ask constantly. With unstructured data growing exponentially, expecting security teams to keep a DLP up to date against a static classification system is completely impractical.

4. DLPs Slow Down Endpoints and Frustrate End Users

DLP agents are, in almost all cases, kernel-based. This means that they hook deep into the operating system. As a result, they tend to dramatically slow down endpoints, crash systems and apps, and just generally make it difficult for people to get work done. End users can tell when this is happening, and usually realise that the DLP is the reason for it, so these agents quickly develop a bad rap around the organisation.

To add insult to injury, the slow-down creates friction between the security team and the rest of the organisation. Security teams just want to do their jobs and keep the business safe, but when DLPs get in the way of other team members’ work, the security team ends up looking like the bad guys. The net result is a tool that often slows the business down beyond what is reasonable.

Rethinking Your Approach to Data Loss

As you can see, while data loss prevention as a concept is something that all organisations should strive for, it’s time to rethink the approach. Rather than investing in traditional DLP platforms, look for a data loss prevention solution that is lower maintenance, that is designed for today’s real risks (including insider threats), and that facilitates rather than stymies communication at your organisation.

Proofpoint offers insider threat detection, investigation, and prevention that halts data loss in its tracks.

See for yourself how much more effective this approach can be.
