Tech companies and open-source teams are facing a deluge of AI-discovered software vulnerabilities. Now we're starting to get a sense of how big a deluge it is.
The Zero Day Initiative, the largest vendor-agnostic bug bounty program in the world, has already seen a 490 percent increase in submissions this month compared to April last year, according to data provided to Mashable. And the month isn't even over yet.
"Organizations that receive bug reports are struggling to keep up with the triage and response process,” Dustin Childs, Head of Threat Awareness at the Zero Day Initiative, told Mashable. “A couple of programs, most notably the Internet Bug Bounty program, completely shutter[ed] their doors rather than try to keep up.”
On March 27, the Internet Bug Bounty program announced it was closing submissions entirely because of the bug submission crisis — which it said was changing the entire "landscape" of bug discovery.
“AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed,” HackerOne, the group that administered the program, said in a statement. “Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals.”
As AI tools improve, they’re also finding much more severe vulnerabilities that require patching. And thanks to Anthropic and other AI companies, the deluge could be just beginning.
The Claude effect
Anthropic recently heralded the arrival of Claude Mythos, claiming it was too dangerous for public release. Claude Mythos “demonstrated a striking leap in cyber capabilities,” the company said, and was capable of autonomously discovering and exploiting so-called "zero-day vulnerabilities" (previously unknown flaws that attackers can exploit before a patch exists) in every major operating system.
Anthropic released Claude Mythos to a closed group of organizations, claiming it wanted to give tech leaders a chance to "secure the world's most critical software." The company said it found too many bugs to report them all at once.
Critics have dismissed this as security theater and a publicity stunt, though Anthropic has pledged to disclose all the vulnerabilities Claude found once they are patched.
Tucked inside its April 7 blog post about Claude Mythos, Anthropic included quite the flex. The company wrote that “fewer than 1% of the potential vulnerabilities we’ve discovered so far have been fully patched by their maintainers.”
That’s because when Anthropic finds new zero-day bugs, it triages them and discloses only the highest-severity bugs first. The company says it does this to avoid flooding other organizations with “an unmanageable amount of new work.”
What’s more, Anthropic estimates this is just “a small fraction” of the bugs it will find in the months ahead. To cope with the volume, Anthropic says it had to hire security contractors just to help with the disclosure process.
The volume and severity of bugs are increasing
Pre-Claude Mythos, cybersecurity researchers warned that AI tools had led to a surge in bug reports, but that those reports were typically very low quality. Now both the quality and the severity of bug reports are rising again, which does little to lighten the load on developers.
“Not every submission ends up being a real bug, but we still have to triage it as if it is,” Childs said.
Daniel Stenberg, a Swedish open-source coding expert and lead developer of cURL, paused the cURL bug bounty program in January because of AI. Stenberg recently said that cURL had received more bug reports in 2025 than in the previous two years combined, and that number is set to double again in 2026.
“The main goal with shutting down the bounty is to remove the incentive for people to submit crap and non-well-researched reports to us. AI-generated or not. The current torrent of submissions put a high load on the curl security team and this is an attempt to reduce the noise,” he wrote on his blog.
However, he told Mashable that the latest deluge of security reports does, in fact, represent genuine security concerns, a stark reversal from last year’s trend. Stenberg wrote this month that he had heard from more than 20 open-source projects “who all confirm this trend: a larger volume of decently high-quality security reports.”
He confirmed in the latest update on his blog that both the volume of new bug reports and the severity of those bugs are increasing in 2026. “The rate of confirmed vulnerabilities is back to and even surpassing the 2024 pre-AI level, meaning somewhere in the 15-16% range.”
Stenberg also worries about the impact on developers. “I can only imagine that projects that are all volunteers, with a larger code base that perhaps has gotten less scrutiny, perhaps because they are younger, they can easily get drowned in quality reports,” he says. “That has to be overloading and take a mental toll on many maintainers.”
So, is this zero-day deluge the Claude Mythos effect in action?
Until Anthropic completes its reporting on the bugs Claude Mythos discovered, it’s hard to know for sure, and neither Childs nor Stenberg said they could attribute the increases to Mythos specifically.
Private companies, too, are showing signs of an increase in AI-discovered bugs. Microsoft announced 165 new bugs patched in its April security update. Childs noted this was "the second largest monthly release in Microsoft's history," citing AI as a likely cause for the increase in his Patch Tuesday blog.
In a statement to The Register, Microsoft denied that AI was to blame for the unusual security update, while also crediting Anthropic researchers for one of the bugs.
No matter the cause, the overall industry trend line is clear — a huge increase in both potential and real bugs that require urgent fixing.
AI and cybersecurity: What comes next
In the Claude Mythos system card, Anthropic said AI tools will provide more benefits to cybersecurity defenders in the long run. In the short term, however, hackers may have the advantage.
Existing AI tools "already provide ‘significant help’ to the relevant threat actors in the sense of increasing their general productivity," the company said.
AI is likely both the problem and the solution for developers, who are turning to AI to triage the bugs discovered by AI.
"We’ve begun using AI to aid in the triage process," Childs says. "It’s the only way we’ll be able keep up with this level of submissions." He allowed that "many entries are AI slop, but we’ve purchased a few of these bug [reports] just to teach our models what AI slop look like so we can avoid them in the future."
If the industry doesn’t adapt to the new reality, Childs added, consumers will suffer the consequences.
"We’ve got to figure out how to scale up our fixes as fast as researchers (and attackers) are scaling up their findings," he said, otherwise users will have “little chance to apply these [fixes] in a timely manner" if they don't want to get hacked.