Peril vs. Promise: Companies, Developers Worry Over Generative AI Risk
The vast majority of developers believe that using generative AI systems will be necessary to increase productivity and keep up with software challenges, but intellectual property issues and security concerns continue to hold back adoption.
CXOs and directors are growing wary of generative AI: Report
Generative AI has emerged as a chief concern for companies across geographies, with three-fifths of global board members believing it poses serious security risks, according to a report by Proofpoint. The report, built from survey responses of 659 board members at organizations with 5,000 or more employees across industries, underlined a general sense of alarm: a majority of respondents fear a material attack in 2023.
Boards are grasping cyber threats, but CISOs still feel underprepared
Proofpoint's second annual Board Perspective report, published Sept. 6, explores three key areas: the cybersecurity threats and risks boardrooms face, their level of preparedness to defend against those threats, and their alignment with CISOs, based on the sentiments uncovered in the company's Voice of the CISO report released earlier this year.
Proofpoint Previews Generative AI Tools to Thwart Social Engineering
At the Proofpoint Protect 2023 conference, Proofpoint revealed it is using generative artificial intelligence (AI), specifically a BERT large language model (LLM) originally created by Google, to thwart social engineering attacks.
Back to "Business as Usual." After a brief respite, CISOs see the threat landscape heating up once again, and have recalibrated their level of concern to match what they felt at the start of the pandemic.
In this study sponsored by Proofpoint, Ponemon surveyed 641 people responsible for security strategies, including setting IT cybersecurity priorities, managing budgets, and selecting vendors and contractors.