Google's Threat Intelligence Group (GTIG) stopped a criminal hacking operation that used artificial intelligence to discover and exploit a zero-day vulnerability, the company disclosed Monday. It is the first known case of a non-state group using AI to both find and weaponize a previously unknown software flaw. John Hultquist, head of GTIG, said the incident signals the start of a new era of AI-driven cyberattacks.
The attackers targeted a popular online system administration tool. They found a way to bypass its two-factor authentication. Google did not name the tool or the company that was the ultimate target.
The flaw was a zero-day vulnerability, so called because developers have had zero days to fix it before attackers begin using it. Google described the planned operation as large-scale. The company intervened before the attack could launch, notifying both the victim and law enforcement. "The age of AI-driven vulnerability discovery and exploitation has begun," Hultquist said in a statement released by Google.
He noted that criminal hackers typically work slowly and in secret. AI changes that equation entirely. Hultquist called the technology a "massive advantage" for criminals.
It lets them move at a speed that was previously impossible. Defense against cyberattacks, he said, is once again a race against time.
Google published a blog post detailing the incident but withheld critical specifics.
The company declined to identify the criminal group behind the operation. It said only that the group is a known cybercriminal entity with no apparent ties to a hostile government. The attackers did not use Google's Gemini or Anthropic's Claude Mythos, the company stated.
The specific AI model remains unknown. Google also noted that groups linked to China and North Korea have experimented with similar techniques. This ambiguity frustrates independent security researchers. "It is difficult to independently assess the real risk because AI companies regularly release only a few details in such cases," Der Spiegel reported, citing IT experts.
The reports draw attention to a genuine problem, those experts note, but they also serve as a marketing opportunity for Google to showcase its leadership in AI development.
Anthropic reported a related finding in April. Its model, Mythos, discovered an old security flaw in a widely used operating system.
The company chose not to release the model publicly. It is only available to select enterprises.
The White House has taken notice.
Under President Donald Trump, officials have proposed reviewing particularly powerful AI models before their public release. Dean Ball, a former White House technology policy advisor who helped craft Trump's AI strategy, addressed the need for regulation. "I would prefer things not be regulated. But I think in this case we have to do it," Ball told Der Spiegel.
His statement reflects a growing consensus that voluntary safety measures may be insufficient. The administration's stated instinct is deregulation; incidents like this one are pushing it toward rules anyway.
For years, cybersecurity was a cat-and-mouse game played by humans on both sides. A defender would spot a breach, write a patch, and push it out. The attacker would then look for a new way in.
That cycle could take weeks or months. AI compresses that timeline to hours or minutes. A model can scan millions of lines of code, identify a logic flaw, and generate an exploit.
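In outline, that loop is simple to describe. The sketch below is purely illustrative: ask_model is a hypothetical stand-in for any code-analysis model, it is not the tooling Google described, and a real pipeline would be far more involved.

    from pathlib import Path

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for a call to a code-analysis model."""
        raise NotImplementedError("wire up a real model API here")

    def scan_repository(repo: Path) -> list[str]:
        """Walk a codebase and ask the model to flag suspicious logic."""
        findings = []
        for source_file in sorted(repo.rglob("*.py")):
            code = source_file.read_text(errors="ignore")
            report = ask_model(
                "Review this code for authentication or logic flaws. "
                "Reply NONE if nothing looks exploitable.\n\n" + code
            )
            if report.strip() != "NONE":
                # A human still has to triage and confirm each finding.
                findings.append(f"{source_file}: {report}")
        return findings

The point of the sketch is the loop itself: once flaw-hunting is reduced to iterating a model over source files, speed scales with compute rather than with analyst hours.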
The human role shifts from discovery to deployment. This is what makes the Google case a watershed moment.
The tool targeted in this attack was a system administration platform. These are the unseen backbones of corporate networks, hospitals, and government agencies. If an attacker bypasses two-factor authentication on such a tool, they can move laterally across an entire organization.
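Google has not described the flaw itself, but the class of logic bug that defeats two-factor checks is well documented. A hypothetical Python illustration, not the affected product's code, of how a single mistaken condition can become a bypass:

    import hmac

    def verify_otp_buggy(submitted: str, expected: str | None) -> bool:
        # BUG: if no one-time code was ever issued (expected is None),
        # the condition never fires and the login counts as verified.
        if expected is not None and submitted != expected:
            return False
        return True

    def verify_otp_fixed(submitted: str, expected: str | None) -> bool:
        # A code must exist and must match; compare in constant time
        # so response timing leaks nothing about the expected value.
        if not expected or not submitted:
            return False
        return hmac.compare_digest(submitted, expected)

Bugs of this shape are exactly what a model that reads millions of lines of code can surface in minutes.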
An intruder who gets past the second factor can steal patient records, shut down power grids, or lock city services behind ransomware.
Google's decision to withhold the tool's name leaves system administrators in the dark. They cannot check their own networks for the vulnerability.
They cannot pressure their vendors for a patch. The secrecy protects the victim but leaves every other user of the unnamed tool exposed. This tension is not new.
Security researchers and tech giants have long debated "responsible disclosure." The standard practice is to give a vendor 90 days to patch a flaw before going public. But when a zero-day is discovered in the wild, the calculus changes. Every hour of silence is an hour that other attackers could be using the same hole.
Both sides claim victory. Google says it prevented a major attack and is working with law enforcement. Critics say the company is sitting on information that could protect thousands of other organizations.
The numbers are not public. We do not know how many servers run the vulnerable tool.
AI models are now dual-use tools of the highest order.
The same system that helps a radiologist spot a tumor can help a hacker spot a buffer overflow. Controlling access to the most capable models is the central policy challenge of the moment. Anthropic's decision to restrict Mythos is one approach.
The White House's proposed review process is another. Neither is a perfect solution. A determined criminal group can use open-source models, stolen credentials, or foreign AI services.
The attack surface is global.
Hultquist's warning carries weight. He has spent years tracking nation-state hacking groups.
For him to say that AI gives criminals a "massive advantage" is not marketing hyperbole. It is a threat assessment from one of the world's most experienced defenders.
The criminal group in this case was not state-sponsored.
That detail matters. It means the barrier to entry for advanced cyberattacks has fallen. A well-funded ransomware gang can now do what only intelligence agencies could do a decade ago.
Why It Matters:
A shift from human-discovered to AI-discovered zero-days fundamentally alters the economics of cyber defense. Companies patch known vulnerabilities on a monthly cycle. AI can find new ones faster than humans can fix them.
If this becomes the norm, the current model of periodic security updates collapses. Organizations would need continuous, automated patching systems. Most are not ready.
The cost of a single successful breach against critical infrastructure can run into the billions. The Google case is a warning shot, not the main event.
What Comes Next:
The immediate question is whether Google or the victim company will disclose the vulnerable tool. System administrators worldwide are waiting for a patch they cannot yet request.
Law enforcement's investigation will determine if the criminal group attempts the same technique against a different target. The White House's AI review proposal will face new scrutiny in light of this case. Dean Ball's reluctant endorsement of regulation may signal a shift in Republican tech policy.
Watch for an advisory from the Cybersecurity and Infrastructure Security Agency in the coming days.
Key Takeaways:
— Google confirmed the first criminal use of AI to find and exploit a zero-day vulnerability, stopping a planned large-scale attack.
— The attackers bypassed two-factor authentication on a popular system administration tool, though Google has not named the tool or the criminal group.
— Google's GTIG chief said AI gives criminals a "massive advantage" in speed, turning cyber defense into a race.
— The incident intensifies pressure on the White House to regulate powerful AI models before their public release.
Source: Der Spiegel