AI Threats Raise Demand for Cybersecurity Products That Don’t Exist (Yet)
Artificial intelligence that handles complex tasks with minimal human oversight, known as an agent, is creating a bevy of security holes that require plugging.
The problem: Tools to protect companies against the risks posed by the newfangled AI don’t exist yet, according to cybersecurity sellers, investors and IT executives.
That’s why some corporate security executives say they’ve blocked employees from using new products like OpenAI’s ChatGPT Agent Mode, which takes over customers’ web browsers to send emails or shop online on their behalf.
Investors are trying to fund startups aiming to fix agent-related security problems that have already emerged publicly, and existing cybersecurity firms are racing to build new products as well.
“A lot of enterprises are now realizing they want to buy a solution for these new vulnerabilities, but there isn’t really a solution on the market right now,” said Jim Routh, a former chief information security officer at companies including MassMutual, Aetna and KPMG.
Among the AI security holes that don’t have clear solutions:

- preventing AI agents from taking harmful actions, like deciding to wipe out a company’s code base as a way to fix a small bug;
- giving agents access to third-party applications such as Gmail or Salesforce, so they can send emails or log data, without compromising workers’ passwords;
- ensuring that apps employees create by vibe coding with AI don’t include any harmful code; and
- blocking AI agents from interacting with malicious websites set up by hackers (a minimal sketch of one such guard follows this list).
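None of these problems has an off-the-shelf fix yet, but the last one hints at what a stopgap could look like. The Python sketch below shows a deny-by-default allowlist an agent runtime could consult before navigating anywhere; the host list, the is_navigation_allowed helper, and the policy choices are illustrative assumptions, not any vendor's shipping product.

```python
# Hypothetical sketch: a minimal allowlist gate an agent runtime could check
# before letting a browser-using agent touch a URL. The allowlist contents
# and the helper name are illustrative assumptions.
from urllib.parse import urlparse

# Only hosts the security team has explicitly approved; everything else is denied.
ALLOWED_HOSTS = {"mail.google.com", "login.salesforce.com", "intranet.example.com"}

def is_navigation_allowed(url: str) -> bool:
    """Return True only if the URL is https and its host is on the allowlist."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False  # refuse plain-http and non-web schemes outright
    return parts.hostname in ALLOWED_HOSTS

if __name__ == "__main__":
    for url in ("https://mail.google.com/mail", "http://phish.example.net/login"):
        print(url, "->", "allow" if is_navigation_allowed(url) else "block")
```

A deny-by-default posture matters here: an allowlist fails closed when an agent wanders somewhere new, whereas a blocklist has to anticipate every malicious site in advance.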
The risks are especially severe among cutting-edge agents that aim to carry out complex actions across various applications. The most prominent example is OpenAI’s ChatGPT Agent Mode, which can take over people’s web browsers and log into websites to order food, send emails or move money between bank accounts. (Google and Anthropic have also teased browser-using agents but haven’t made them broadly available.)
For now, some corporate security executives say they’re instructing staff to steer clear of such agents until companies understand how to limit the risks they pose. For instance, payments software firm Plaid has blocked employees from using Agent Mode as well as similar browser-using tools offered by Perplexity, according to Kenneth Moras, who oversees Plaid’s security compliance.
“We’re trying not to create a shadow IT problem,” he said, referring to the possibility that AI agents could use software or browse the web without the company’s supervision. “One of the biggest risks isn’t humans accessing data but machines accessing data, and if just one agent is compromised, the amount of risk that an organization takes on is immense.”
Cybersecurity companies have also criticized OpenAI because using ChatGPT Agent Mode to access third-party applications requires customers to enter their usernames and passwords for those apps on the ChatGPT website—a long-standing security taboo.
OpenAI says ChatGPT doesn’t take screenshots of or store people’s passwords, but the fact that ChatGPT asks for passwords and the agent can remain logged in to people’s accounts is itself a cause for concern.
“It is such a glaring red flag to give ChatGPT unrestricted access to what you are responsible for as an employee. It’s totally unmanageable and unworkable,” said Peter Horadan, CEO of Vouched, a startup that sells software for verifying people’s identities online.
Horadan, who previously worked on payments software for banks at Microsoft, said the industry needs a new open-source standard to ensure the agents have the right permissions to access sensitive data. He said Vouched is “scrambling” to build such a tool for its customers and other companies to use. In the meantime, he says security executives should block all logins from ChatGPT’s agent to their company systems.
Cloud providers like Microsoft and Google have previewed tools that aim to verify that AI agents have permission to access certain apps, but they are limited: each works only for agents that customers have developed using that provider’s own technology.
“We need a more standardized way” for agents to prove they have the right permissions to access data, said Ori Goshen, co-CEO of AI startup AI21 Labs, which has raised over $300 million from backers including Google and Nvidia. Goshen said some of AI21’s customers have been reluctant to use agents that browse the web because of these concerns.
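To make the gap concrete, here is a minimal Python sketch of the kind of check such a standard might enable: the agent presents a short-lived credential that names exactly what it may do, and the application verifies that scope before acting. Every name here (AgentToken, the gmail:send scope string) is a hypothetical assumption; as Goshen notes, no agreed-upon standard exists yet.

```python
# Hypothetical sketch of a scoped-permission check: instead of an employee's
# password, the agent carries a short-lived token naming exactly what it may
# do. Token shape, scope names, and the check itself are assumptions for
# illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentToken:
    agent_id: str          # which agent is acting
    on_behalf_of: str      # which employee delegated it
    scopes: frozenset      # e.g. {"gmail:send"} but not {"gmail:read_all"}
    expires_at: datetime   # short-lived by design

def authorize(token: AgentToken, required_scope: str) -> bool:
    """Allow the action only if the token is unexpired and carries the scope."""
    if datetime.now(timezone.utc) >= token.expires_at:
        return False
    return required_scope in token.scopes

# Example: an agent may send mail on Alice's behalf for 10 minutes, nothing more.
token = AgentToken(
    agent_id="agent-42",
    on_behalf_of="alice@example.com",
    scopes=frozenset({"gmail:send"}),
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=10),
)
assert authorize(token, "gmail:send")
assert not authorize(token, "gmail:read_all")
```

The point of the narrow scope is that a compromised agent can do only what was delegated, for only as long as the token lives, without ever holding the employee's password.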
More broadly, agents have rapidly introduced new vulnerabilities for enterprises without clear solutions.
“We don’t know how to secure agents, to be fully honest—nobody knows how,” said Ami Luttwak, co-founder and chief technology officer at Wiz, a cloud security startup Google is acquiring for $32 billion. “The features that the industry needs right now frankly don’t exist.”
In addition to the threats posed by AI agents accessing other applications and websites, cybersecurity teams are neck-deep in another problem: the proliferation of hastily built, AI-generated applications, part of a phenomenon known as vibe coding.
For instance, there’s no established method to automatically vet new code for security bugs as quickly as AI can generate it, according to Wiz’s Luttwak. He said enterprises are also scrambling to keep track of vibe-coded applications made by employees, including salespeople who don’t have coding backgrounds.
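Existing static analyzers can serve as a partial stopgap even if they can’t keep pace with AI-speed generation. The sketch below is a hypothetical CI gate that runs Bandit, a real open-source Python security scanner, over a directory of generated code and fails the build on medium-or-higher findings; the generated/ directory convention is an illustrative assumption.

```python
# Hypothetical CI gate: run Bandit (a real open-source static analyzer for
# Python) over freshly generated code before it can merge. The "generated/"
# drop-zone convention is an assumption for illustration; this is a stopgap,
# not the AI-speed vetting Luttwak says is still missing.
import subprocess
import sys
from pathlib import Path

GENERATED_DIR = Path("generated")  # assumed location of vibe-coded apps

def vet_generated_code() -> int:
    """Scan generated code recursively; a nonzero exit code blocks the merge."""
    if not GENERATED_DIR.is_dir():
        print("nothing to vet")
        return 0
    # -r scans recursively; -ll reports only medium-severity findings and above.
    # Bandit exits nonzero when it finds issues, so CI fails the build on them.
    result = subprocess.run(["bandit", "-r", str(GENERATED_DIR), "-ll"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(vet_generated_code())
```

Gating on severity keeps the check fast enough to run on every commit, which is the only cadence that has any chance of keeping up with code generated faster than humans can review it.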
Vibe-coding tools can also go off the rails. One engineer, for instance, said a coding agent powered by Replit’s software went rogue and deleted his entire application without permission.
Security Investors See Green
The growing threats are music to the ears of investors seeking to back new security startups.
“AI will redefine [cybersecurity] industries” and highly valued incumbents won’t be able to move as quickly as startups, said Gili Raanan, managing director of early-stage cybersecurity venture firm Cyberstarts.
Another cybersecurity investor, Jay Leek, a managing director at SYN Ventures, said he has also been eyeing new startups looking to solve problems such as automatically patching AI-generated code.
“This is a big concern in the industry and a big opportunity because nobody’s really cracked it,” he said.
In the meantime, Routh said he’s advising chief information security officers to embrace AI agents for use cases that clearly help their business, but only if they address the accompanying security vulnerabilities in tandem.
“Cybersecurity executives have to design new control capabilities that didn’t exist before, and that’s not something that the industry does really well,” Routh said. “It’s a challenge, but it’s better to face the challenge head-on than to bury your head in the sand and try to avoid it.”