Happy new year, and welcome to Eye on AI. In this edition: How cybersecurity training is—and isn’t—keeping up with generative AI’s new threats; OpenAI beefs up its political lobbying; Nvidia to open-source Run:AI software following acquisition; and AI leadership roles triple in two years.
2024 saw brand new types of generative AI-enabled digital fraud make headlines, from a deepfaked video call that cost a company $25 million to new research on how AI copilots being built into enterprise software can be weaponized as “automatic phishing machines”. Even classic phishing attacks are getting worse and getting more personal, the Financial Times reported today, thanks to AI bots’ ability to easily ingest large amounts of data about a company or person’s style and tone and then easily replicate it. They can also scrape data from a person’s online activity to make phishing emails more personal, and thus more convincing.
As generative AI swiftly upends the cybersecurity threat landscape, companies need to ensure employees are aware of the technology, its capabilities, and its risks. To educate employees on how not to fall victim to various types of attacks, most companies turn to cybersecurity training, which typically consists of an informational video or a series of modules and quizzes that employees must complete. So how is this training keeping up with the new threats being posed by generative AI? I checked in with top providers including Huntress, Ninjio, and KnowBe4 and watched their trainings to find out.
Gen AI cybersecurity trainings cover some, but not all, new threats
Across all the training courses I tested, some of the new threats posed by generative AI were covered thoroughly, in particular how the technology can be used to create more convincing phishing emails and the risks involved with inputting sensitive company information into commercial chatbots. Ninjio’s AI-related training was the only one I reviewed that didn’t cover phishing, though the company says it plans to release a new video on this early this year.
Most of the trainings also covered how generative AI can be used to make convincing deepfakes of someone’s voice. Some addressed video deepfakes, but they focused on videos that would be distributed to employees or shared online. The trainings from Huntress, notably, did not discuss deepfakes of any sort. None of the trainings, however, discussed the emergence of deepfakes being used in live video calls—like the aforementioned instance from earlier this year when an employee at an multinational company thought he was on a call with his executive team, but was actually talking to AI-generated deepfakes created by malicious actors, who deceived him into transfering $25 million of the company’s funds.
None of the trainings addressed prompt injection, a new type of attack that could be used against companies deploying AI assistants in enterprise software like Microsoft 365. These attacks exploit the AI assistant’s ability to retrieve documents and can open up companies to data theft and new types of social engineering attacks. For example, a malicious actor could send an email to a victim at a target company that presents the bad actor’s bank information as that of some trusted entity. If the employee uses Copilot to search for that entity’s banking information, Copilot could surface the malicious email, misleading the victim into sending the money to the malicious actor instead. In the same vein, a hacker could send a malicious email to direct someone to a phishing site—all without having to gain access to the employee’s email.
Companies must be proactive about employee education
I began looking into how cybersecurity training is addressing the new threats posed by generative AI after watching a training video assigned to a relative. When they told me they had just been assigned a new video specifically talking about generative AI as part of their annual cybersecurity training, I was excited to see how the increasingly important topic was being covered. After watching the video, however, I was shocked and disappointed. The video was just a few minutes long, barely touched on the new types of threats I had reported on this past year, and depicted generative AI as if it were some far-off sci-fi technology. It wasn’t until I started diving deeper into the offerings from the cybersecurity industry that I realized that video wasn’t the whole story.
That video was from KnowBe4, and on its own, I don’t think it would be sufficient for informing employees about the risks and threats. I soon discovered that it’s just one of many AI-focused cybersecurity videos offered by KnowBe4, which turned out to have the largest catalog of AI-focused videos and some of the most informative content out of everything I viewed. KnowBe4 told me company admins are able to preview and assign trainings based on their company’s needs. Clearly whoever chose the videos to assign for my relative’s company wasn’t as thorough as they should’ve been. There were additional videos on deepfakes, CEO fraud, phishing, and the dangers of AI chatbots that together would’ve been much more comprehensive.
This made clear that cybersecurity and IT leaders inside companies need to take an active role, familiarizing themselves with the new threats and the training content that exists to inform employees about them. More than ever, cybersecurity education needs to be continuous; a brief video once a year isn’t enough.
Cybersecurity is an endless cat-and-mouse cycle, with security professionals and IT teams often playing catch-up to whatever innovations the fraudsters and hackers decide to adopt. Huntress said it’s in the late stages of developing a training on AI hallucinations and plans to create one on deepfakes this year. Ninjio is still developing trainings on generative AI’s impact on phishing and how malicious actors can use AI to automate attacks. KnowBe4 said it’s working to incorporate information on prompt injection attacks into its trainings. (It was the only provider to directly address my questions about this type of emerging threat.) But the training courses won’t do anything unless corporate IT leaders do their job in making sure employees engage in the trainings thoroughly and regularly.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
OpenAI expands its D.C. policy team to beef up lobbying efforts. The company recently tripled the size of its policy team to 12 members, scooping up D.C. insiders from across the political spectrum. While still smaller than many big tech lobbying shops, it signals how OpenAI is increasingly looking to influence tech policy in Washington, particularly as the Trump administration (which has appointed AI figures including Elon Musk and venture capitalist David Sacks to help run the government) prepares to reenter office. Specifically, OpenAI hopes to convince government leaders that the AI industry is a vital part of the economic and security race against China, as well as drum up support for building infrastructure to support AI development. You can read more from Politico.
Nvidia completes its acquisition of Run:AI and says it will open-source the software. The company’s software so far has only supported Nvidia’s GPUs, enabling customers to schedule GPU resources for AI workloads in the cloud. Open-sourcing it was potentially key to avoiding antitrust scrutiny of the deal, which Nvidia is increasingly subjected to due to its enormous growth and dominant role in AI hardware. The final cost of the deal was not disclosed, but prior reports estimated it at $700 million. You can read more in VentureBeat.
State legislatures keep their eyes on AI for 2025. Anticipating more gridlock in Congress, state lawmakers are planning to take on big issues themselves, including tech topics such as AI, PBS reported. 2024 saw state lawmakers struggling with whether to pass legislation on AI, with some state-level efforts stalling after key legislators or state governors argued that regulation of a technology as impactful and sweeping as AI should be set at the federal level.
FORTUNE ON AI
AI-powered mining firm backed by Bill Gates and Jeff Bezos is now worth $2.96 billion as it takes on Chinese rivals —by Eleanor Pringle
Sam Altman says OpenAI’s new o3 ‘reasoning’ models begin the ‘next phase’ of AI. Is this AGI? —by Sharon Goldman
From AI reasoning to green fatigue: Fortune’s bold business predictions for Europe in 2025 —by Fortune Editors
AI CALENDAR
Jan. 7-10: CES, Las Vegas
Jan. 16-18: DLD Conference, Munich
Jan. 20-25: World Economic Forum, Davos, Switzerland
Feb. 10-11: AI Action Summit, Paris, France
March 3-6: MWC, Barcelona
March 7-15: SXSW, Austin
March 10-13: Human [X] conference, Las Vegas
March 17-20: Nvidia GTC, San Jose
April 9-11: Google Cloud Next, Las Vegas
EYE ON AI NUMBERS
$1 billion
That’s how much Nvidia invested in AI startups in 2024, compared to $872 million in 2023, the Financial Times reported. The investment took place across 50 startup funding rounds and several corporate deals. Much of the money has gone towards backing its own customers and other companies supporting its position in the AI race. The mass of spending makes Nvidia one of the most crucial investors of the AI, as well as being the most crucial hardware provider of the generative AI boom.