On October 30, 2023, the Biden administration issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI) (the “Order”). The Order establishes a framework for the regulation of AI and is structured around the following guiding principles:
- Ensuring the Safety and Security of AI Technology
- Promoting Innovation and Competition
- Supporting Workers
- Advancing Equity and Civil Rights
- Protecting Consumers, Patients, Passengers, and Students
- Protecting Privacy
- Advancing Federal Government Use of AI
- Strengthening American Leadership Abroad
The Order is far-reaching, directing more than 20 federal agencies to investigate and create policies and initiatives to regulate the development and use of AI-based technologies. Notably, the Order calls for guidance to help detect and label synthetic content generated by AI and for recommendations on copyright and other IP-related risks. The National Institute of Standards and Technology (NIST) is also directed to establish guidelines that promote industry standards for the development and use of trustworthy AI systems, including with respect to red-team safety testing. There are additional sector-specific initiatives, including calls for regulatory protections to address the potential risks of AI in transportation, housing, consumer financial protection, health, and education.
Currently, there are limited requirements for the private sector. The Order invokes the Defense Production Act to require that companies developing dual-use foundation models that meet certain thresholds, as well as companies, individuals, or organizations that acquire, develop, or possess a potential large-scale computing cluster used to train AI (“covered companies”), provide the federal government with ongoing reports on activities related to training, developing, or producing the model; the ownership and possession of the model weights; and the results of any red-team testing. Additionally, American Infrastructure-as-a-Service (IaaS) providers must collect “know your customer” information from any foreign customers using their IaaS offerings to train AI models and report that activity to the federal government. Moreover, by developing standards for AI within the federal government, the Order will help establish best practices for the responsible development and use of AI across the private sector.
Because implementation timelines range from 30 to 365 days, we expect 2024 to be a robust year for AI governance and regulation in the United States. We are monitoring any activity that might impact Proofpoint or our customers. Proofpoint is committed to the responsible development and use of AI and ML. Please visit our Trust site for additional information on how Proofpoint’s products and services responsibly use AI and are secure by design.