
Understanding the EU AI Act: Implications for Communications Compliance Officers 


The European Union’s Artificial Intelligence Act (EU AI Act) is set to reshape the landscape of AI regulation in Europe, with profound implications for any organization that develops or deploys AI. The European Council and Parliament recently agreed on a deal to harmonize AI rules and will soon bring forward the final text. The Parliament will then pass the EU AI Act into law, and the law is expected to become fully effective in 2026.

The EU AI Act is part of the EU’s digital strategy. When the act goes into effect, it will be the first legislation of its kind. And it is destined to become the “gold standard” for other countries in the same way that the EU’s General Data Protection Regulation (GDPR) became the gold standard for privacy legislation.   

Compliance and IT executives will be responsible for the AI models that their firms develop and deploy. They will need to be clear about the risks these models present, as well as the governance and oversight they will apply once the models are in operation.

In this blog post, we’ll provide an overview of the EU AI Act and how it may impact your communications practices in the future. 

The scope and purpose of the EU AI Act 

The EU AI Act establishes a harmonized framework for the development, deployment and oversight of AI systems across the EU. Any AI that is in use in the EU falls under the scope of the act. The phrase “in use in the EU” does not limit the law to models that are physically executed within the EU. The model and the servers that it operates on could be located anywhere. What matters is where the human who interacts with the AI is located. 

The EU AI Act’s primary goal is to ensure that AI used in the EU market is safe and respects the fundamental rights and values of the EU and its citizens. That includes privacy, transparency and ethical considerations. 

The legislation will use a “risk-based” approach to regulate AI, which considers a given AI system’s ability to cause harm. The higher the risk, the stricter the legislation. For example, certain AI activities, such as profiling, are prohibited. The act also lays out governance expectations, particularly for high-risk or systemic-risk systems. As all machine learning (ML) is a subset of AI, any ML activity will need to be evaluated from a risk perspective as well. 

The EU AI Act also aims to foster AI investment and innovation in the EU by providing unified operational guidance across the EU. There are exemptions for: 

  • Research and innovation purposes 
  • Those using AI for non-professional reasons 
  • Systems whose purpose is linked to national security, military, defense and policing 

The EU AI Act places a strong emphasis on ethical AI development. Companies must consider the societal impacts of their AI systems, including potential discrimination and bias. And their compliance officers will need to satisfy regulators (and themselves) that the AI models have been produced and operate within the Act’s guidelines. 

To achieve this, businesses will need to engage with their technology partners and understand the models those partners have produced. They will also need to confirm that they are satisfied with how those models are created and how they operate. 

What’s more, compliance officers should collaborate with data scientists and developers to implement ethical guidelines in AI development projects within their company. 

Requirements of the EU AI Act 

The EU AI Act categorizes AI systems into four risk levels: 

  • Unacceptable risk 
  • High risk 
  • Limited risk 
  • Minimal risk 

Particular attention must be paid to AI systems that fall into the “high-risk” category. These systems are subject to the most stringent requirements and scrutiny. Some will need to be registered in the EU database for high-risk AI systems as well. Systems that fall into the “unacceptable risk” category will be prohibited. 
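
For a compliance team, a practical first step is to tag every AI system in an internal inventory with one of these tiers. Below is a minimal sketch in Python; the inventory, the system names and the tier annotations are illustrative assumptions, not the act’s legal text.

```python
# A minimal sketch of tagging an internal AI inventory with the act's
# four risk tiers. All systems and mappings here are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictest obligations; may require EU registration
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical internal inventory of AI systems and their assessed tiers
inventory = {
    "credit-scoring-model": RiskTier.HIGH,
    "marketing-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

# Surface the systems that need the most compliance attention
for system, tier in inventory.items():
    if tier is RiskTier.UNACCEPTABLE:
        print(f"{system}: prohibited - must be decommissioned")
    elif tier is RiskTier.HIGH:
        print(f"{system}: high risk - full obligations apply")
```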

In the case of general AI and foundation models, the regulations focus on the transparency of models and the data used and avoiding the introduction of systemic risk. (Systemic risk means that if we are all using the same AI model, then we may all have a common point of failure.)  

The European AI Alliance suggests there are “10 Requirements and Obligations for High-Risk Systems” that firms using such systems will need to adopt. They are shown in the image below.

[Image: The 10 requirements and obligations for high-risk AI systems]

High-risk AI systems will be held to a higher standard  

The EU AI Act identifies several types of AI systems as “high risk.” That includes systems used in critical infrastructure, education, healthcare and financial services.

High-risk AI systems are subject to rigorous compliance obligations like: 

  • Data quality and governance. Firms must ensure the quality and accuracy of the data they use in their AI systems. Unreliable data can lead to biased or unfair outcomes. 
  • Transparency. Compliance officers should focus on ensuring that AI systems provide clear explanations of their decisions and actions. This is especially important when AI is used for processes like automated credit scoring or fraud detection. 
  • Accountability. Companies that deploy high-risk models must maintain records of the AI system’s operations and actions. They must also allow for audits and investigations if needed; a minimal record-keeping sketch follows this list.
  • Human oversight. The Act emphasizes the importance of human oversight in high-risk AI systems. This oversight is especially vital in decision-making processes that can impact the rights of individuals. 
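
One way a compliance team might meet the record-keeping obligation above is with a lightweight decision log. The following is a minimal sketch in Python; the log_decision helper, the log fields and the credit-scoring example are hypothetical illustrations, not something the act prescribes.

```python
# A minimal decision-log sketch. All names and fields here are
# illustrative assumptions, not a schema defined by the EU AI Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision(system_id: str, model_version: str,
                 inputs: dict, decision: str, explanation: str) -> None:
    """Append an auditable record of a single AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system made the decision
        "model_version": model_version,  # exact version that produced it
        "inputs": inputs,                # the data the decision was based on
        "decision": decision,            # the outcome
        "explanation": explanation,      # human-readable rationale (transparency)
    }
    logger.info(json.dumps(record))

# Example: recording an automated credit-scoring decision
log_decision(
    system_id="credit-scoring-eu-01",
    model_version="2.3.1",
    inputs={"income_band": "B", "region": "DE"},
    decision="declined",
    explanation="Debt-to-income ratio above policy threshold",
)
```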

Certain AI practices will be prohibited  

The EU AI Act bans certain types of AI practices. Examples include the use of AI systems that are designed to manipulate human behavior or exploit vulnerabilities. Compliance officers should ensure that their company’s AI systems adhere to these prohibitions. 

Several regulatory bodies will enforce the EU AI Act 

The EU will enforce the new AI rules through several governing bodies. These include a new AI Office within the European Commission, supported by a panel of experts, and a European AI Board made up of representatives appointed by the member states. An advisory forum will draw in industry representatives, civil society, academia, subject matter experts and start-ups. In addition, national competent authorities in each member state will oversee AI compliance on the ground.

Communications compliance officers need to understand the nature and level of risk associated with their AI models. They will therefore need to coordinate with the relevant regulatory bodies and ensure that their AI systems meet the applicable certification requirements.

AI systems will need to align with GDPR  

There is a close relationship between AI and data processing. So, compliance officers must ensure that AI systems comply with the GDPR. This means they will need to do the following (sketched in code after the list):

  • Obtain explicit consent for data usage 
  • Minimize data collection 
  • Implement strong data protection measures 
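
In practice, the first two items can be enforced as a simple gate in front of any AI pipeline. Here is a minimal sketch, assuming a hypothetical consent store; has_explicit_consent, ALLOWED_FIELDS and the sample record are illustrative, not part of any specific library.

```python
# A minimal GDPR-aware preprocessing gate. The consent store and field
# whitelist below are hypothetical stand-ins for real systems.

ALLOWED_FIELDS = {"message_text", "timestamp"}  # minimization: only what the model needs

consent_store = {"user-123": True}  # stand-in for a real consent management system

def has_explicit_consent(user_id: str) -> bool:
    return consent_store.get(user_id, False)

def prepare_for_model(user_id: str, raw_record: dict) -> dict:
    """Return a minimized record, or raise if the user has not consented."""
    if not has_explicit_consent(user_id):
        raise PermissionError(f"No explicit consent on file for {user_id}")
    # Drop everything the model does not strictly need
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

record = prepare_for_model("user-123", {
    "message_text": "Hello",
    "timestamp": "2024-05-01T10:00:00Z",
    "home_address": "Example Street 1",  # stripped before reaching the AI system
})
print(record)  # {'message_text': 'Hello', 'timestamp': '2024-05-01T10:00:00Z'}
```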

There are requirements for reporting and incident response 

Communication compliance officers, in coordination with their IT colleagues, should establish incident reporting mechanisms and response protocols in case of AI-related issues. Transparency is key when addressing AI failures that may affect customers or regulatory compliance. 
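
As one way to operationalize this, an incident record can be defined up front so that every AI failure is captured consistently. The sketch below assumes a hypothetical in-house workflow; the AIIncident fields and report_incident helper are illustrative, not a regulator-mandated schema.

```python
# A minimal AI incident record. Fields and severity levels are
# hypothetical; a real protocol would follow internal policy and the act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system_id: str                 # which AI system failed
    severity: str                  # e.g. "low", "high", "critical"
    description: str               # what happened and who may be affected
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reported_to_regulator: bool = False  # track the transparency obligation

def report_incident(incident: AIIncident) -> None:
    # In practice this would notify compliance, IT and, where required,
    # the competent national authority.
    print(f"[{incident.severity.upper()}] {incident.system_id}: {incident.description}")

report_incident(AIIncident(
    system_id="chatbot-retail-eu",
    severity="high",
    description="Chatbot disclosed another customer's account details",
))
```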

There are penalties and liabilities for noncompliance 

Getting compliance with the EU AI Act right is essential if a company wants to avoid significant fines and legal liabilities. Compliance officers should work to ensure that their AI systems meet all regulatory requirements.

If a business fails to comply with the EU AI Act, it may face severe penalties. The company could see fines of up to 7% of its annual global turnover or €35 million (in cases where prohibited systems are used), whichever is higher. 

This penalty regime is similar to, though more severe than, the one in the GDPR, where the maximum fine is the larger of €20 million or 4% of global revenues. The price of noncompliance with the EU AI Act is therefore a potentially existential risk for many businesses.
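
To make the scale concrete, the two ceilings can be compared directly. The sketch below uses a hypothetical turnover figure; only the percentages and floor amounts come from the text above.

```python
# A minimal sketch of the maximum-fine calculations described above.
# The turnover figure is a hypothetical example.

def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """EU AI Act ceiling for prohibited practices: higher of 7% of turnover or EUR 35M."""
    return max(0.07 * annual_global_turnover_eur, 35_000_000)

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """GDPR ceiling for comparison: higher of 4% of turnover or EUR 20M."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

turnover = 2_000_000_000  # hypothetical EUR 2B annual global turnover
print(f"EU AI Act ceiling: EUR {max_ai_act_fine(turnover):,.0f}")  # EUR 140,000,000
print(f"GDPR ceiling:      EUR {max_gdpr_fine(turnover):,.0f}")    # EUR 80,000,000
```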

It will impact customer communications 

AI-driven customer communications, such as chatbots and virtual assistants, are already widely used across many sectors. The EU AI Act underscores the importance of transparency and informed consent when deploying AI for customer interactions. Banks and organizations in other industries that lead with AI-driven communications must ensure that customers are aware of AI involvement and have the option to engage with human agents if they prefer.
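
One way to satisfy both obligations in a chatbot is to disclose AI involvement in the first message and honor an explicit opt-out to a human agent. The sketch below is illustrative; the greeting text and routing helpers are hypothetical.

```python
# A minimal AI-disclosure and human-handoff gate for a customer chatbot.
# All message text and helper functions here are hypothetical examples.

AI_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "Type 'agent' at any time to speak with a human."
)

def route_to_human() -> str:
    return "Connecting you with a human agent..."

def generate_ai_reply(message: str) -> str:
    return f"(AI) You said: {message}"

def handle_message(message: str, is_first_message: bool) -> str:
    if is_first_message:
        # Transparency: disclose AI involvement up front
        return AI_DISCLOSURE
    if message.strip().lower() == "agent":
        # Informed choice: honor the opt-out to a human agent
        return route_to_human()
    return generate_ai_reply(message)

print(handle_message("", is_first_message=True))
print(handle_message("agent", is_first_message=False))
```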

Conclusion 

The EU AI Act is a landmark regulatory framework that will shape how AI is used in the European Union and beyond. Communications compliance officers are pivotal to ensuring that their respective organizations meet the act’s requirements, particularly regarding high-risk AI systems and customer communications.  

Compliance officers need to stay informed about the act’s provisions. They should also engage in active collaboration with relevant stakeholders. That way, they can be effective in helping their institutions navigate the evolving AI landscape while upholding ethical standards, data privacy and regulatory compliance. 

Proofpoint is a longtime leader in the regulatory technology and cybersecurity technology industries. We have deployed hundreds of AI models in the real world. These models have been tested for effectiveness and efficiency, and they are subject to risk review and assessment.  

AI models from Proofpoint fall under the “minimal risk” category. We will soon reflect this in updates to our Trust site, where we share privacy and security details with customers.