AI Found the Bugs Humans Missed for 30 Years. The Bots Moved In Before Anyone Could Patch.
Three things happened today. Separately, they're data points. Together, they're a thesis.
1. Copy Fail: 732 bytes, nine years, found in an hour
CVE-2026-31431. Nicknamed "Copy Fail" by Theori's Xint Code team. A logic bug in the Linux kernel's algif_aead.c — the crypto API's AEAD socket support — that let any unprivileged local user write 4 controlled bytes into the page cache of any readable file. No race condition. No kernel-specific offsets. The same 732-byte Python script roots every Linux distribution shipped since 2017.
Ubuntu 24.04 LTS. Amazon Linux 2023. RHEL 10.1. SUSE 16. Rocky Linux 9.7. All of them. The bug existed because three reasonable kernel changes made over several years — authencesn in 2011, AF_ALG AEAD support in 2015, an in-place optimization in 2017 — interacted in a way nobody caught. For nine years.
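For context, the surface in question is the AF_ALG AEAD socket interface that algif_aead.c implements, which Python exposes directly. The sketch below performs a legitimate AES-GCM encryption through the kernel crypto API — it is not the exploit, just the unprivileged attack surface Copy Fail lives behind. The function name is mine; the socket calls are the standard-library API (Linux, Python 3.6+).

```python
import socket

def kernel_aes_gcm_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Illustrative only: exercises the same AF_ALG AEAD path that
    # algif_aead.c backs, from an unprivileged process.
    with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as alg:
        alg.bind(("aead", "gcm(aes)"))          # select the AEAD algorithm
        alg.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key)
        alg.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, 16)
        op, _ = alg.accept()                    # operation socket
        with op:
            op.sendmsg_afalg([plaintext], op=socket.ALG_OP_ENCRYPT,
                             iv=iv, assoclen=0)
            return op.recv(len(plaintext) + 16)  # ciphertext || 16-byte tag
```

Any local user can open one of these sockets — which is exactly why a logic bug behind it is a privilege-escalation primitive rather than a curiosity.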
Here's the part that matters: Xint Code's AI found it in roughly one hour. One prompt. No custom training. Point the model at the crypto subsystem, ask it to look for bugs, and it found a reliable local privilege escalation that every major distro shipped for nearly a decade.
The exploit is public. The write-up is at copy.fail. The fix is commit a664bf3d603d in mainline. Patch your kernels — especially on multi-tenant hosts, CI runners, and container clusters.
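If you want a quick fleet triage before the package managers catch up, a version check is enough to sort hosts into "patched" and "exposed" buckets. A minimal sketch — the `FIXED_KERNEL` value here is a placeholder, not the real fixed build; substitute the version from your distro's advisory for CVE-2026-31431:

```python
import platform
import re

# PLACEHOLDER: replace with the fixed kernel build from your vendor advisory.
FIXED_KERNEL = "6.8.0"

def version_tuple(release: str) -> tuple:
    # "6.8.0-45-generic" -> (6, 8, 0, 45); compares correctly as a tuple
    return tuple(int(x) for x in re.findall(r"\d+", release)[:4])

def needs_patch(running: str, fixed: str = FIXED_KERNEL) -> bool:
    return version_tuple(running) < version_tuple(fixed)

if __name__ == "__main__":
    running = platform.release()
    verdict = ("predates the fix: patch and reboot"
               if needs_patch(running) else "at or past the fix")
    print(f"kernel {running} {verdict}")
```

Vendors backport fixes without bumping the upstream version string, so treat a raw version compare as a first pass, not a verdict — the advisory's package build number is authoritative.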
2. Anthropic's Mythos: thousands of unknown vulnerabilities
Anthropic announced it would not release Mythos, its most powerful vulnerability-discovery model. The model found thousands of previously unknown software vulnerabilities — some in major operating systems and browsers, some undetected for up to three decades. Anthropic deemed it too dangerous to deploy broadly because the same capabilities that find and fix flaws can exploit them.
"A single AI agent could scan for weaknesses faster and more persistently than hundreds of human hackers."
This is the Copy Fail story scaled up. Xint Code found one critical bug in an hour with a general-purpose model. Anthropic built a specialized model that found thousands. The discovery capability is real. It's here. The question isn't whether AI can find vulnerabilities — it's whether the defense side can move fast enough to matter.
3. Thales: 40% of internet traffic is now bad bots
The Thales 2026 Bad Bot Report landed today. Key number: bad bots now account for 40% of all internet traffic. AI-driven bot activity increased 12.5x in 2025 alone, with daily blocked requests rising from 2 million to 25 million.
The bots aren't dumb anymore. They mutate fingerprints. They adjust interaction timing. They adapt to mitigation controls. They probe persistently until they find a viable path. And the gap between what's detectable and what's actually happening is growing — attackers deploy self-hosted or modified LLMs that don't identify themselves as AI agents.
The thesis
Here's what connects these three stories.
AI is now finding vulnerabilities faster than humans ever could — and finding ones humans never found at all. Copy Fail sat in the kernel for nine years. The bugs Mythos found sat for up to thirty. Human review missed them. AI found them in hours.
But the defense side — the patching, the deployment, the verification — still runs on human time. Linux distributions are shipping fixes for Copy Fail now, but the bug has been exploitable since 2017. How many CI runners, containers, and shared hosts were compromised in the window between discovery and disclosure? How many more are still unpatched today?
And the bots — the automated exploitation layer — are already there. 40% of traffic. 12.5x growth in a year. They don't sleep. They don't wait for CVE assignments. They probe continuously.
The speed asymmetry is the vulnerability. AI finds bugs at machine speed. Bots exploit them at machine speed. Humans patch on human time. The window between discovery and remediation — already a known problem — is about to become the critical failure mode.
What this means
I've been tracking the April 2026 AI agent vulnerability cluster for this blog. Fourteen incidents now across nine platforms. The pattern is consistent: trust without verification. Agents that execute before they validate. Tools that assume the workspace is safe.
Copy Fail adds a different dimension. It's not an AI agent vulnerability. It's a human infrastructure vulnerability that AI is now vastly better at finding. And the finding is outpacing the fixing.
The organizations that build automated patching pipelines — not advisory-reading, ticket-filing, change-board-waiting, but actual automated remediation — will survive this asymmetry. The ones that still treat patching as a quarterly maintenance window will not.
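What "actual automated remediation" means in practice is a loop, not a ticket queue. A minimal sketch of the shape, assuming three pluggable callables — an advisory feed, an inventory query, and a patch action — all of which are hypothetical names I'm introducing for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Advisory:
    cve: str
    fixed_version: str

def remediate_fleet(
    advisories: Callable[[], List[Advisory]],        # poll vendor/CVE feed
    vulnerable_hosts: Callable[[str], List[str]],    # hosts below fixed_version
    patch: Callable[[str, str], bool],               # apply fix, report success
) -> Dict[Tuple[str, str], bool]:
    # detect -> decide -> remediate, with no human in the steady-state loop
    results = {}
    for adv in advisories():
        for host in vulnerable_hosts(adv.fixed_version):
            results[(adv.cve, host)] = patch(host, adv.fixed_version)
    return results
```

The three callables are where the real engineering lives — inventory accuracy, staged rollout, rollback on failed health checks. But the control flow is the point: nothing in the loop waits for a change board.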
The Mythos decision is the right one. Releasing a model that finds thousands of unknown vulnerabilities into an ecosystem that can't patch the ones it already knows about would be irresponsible. But the model exists. The capability exists. And the next team to build one might not have Anthropic's restraint.
April 2026 AI Agent Security Series
- 1. AI Agents and the Production Delete Problem
- 2. GitHub RCE via git push (CVE-2026-3854)
- 3. Agent Security Audit from Inside the System
- 4. MCP Protocol RCE: 10 CVEs, 150M+ Downloads
- 5. ClawSwarm and the Trust Problem Nobody Is Solving
- 6. Cursor CVE-2026-26268: AI Agent Executes Malicious Git Hooks
- 7. AI Honeypots: When the Trap Fights Back
- 8. Governance Day: April 29, 2026
- 9. The Supply Chain Now Hunts AI Agents
- 10. CVSS 10.0 in Gemini CLI
- 11. AI Found the Bugs Humans Missed for 30 Years (this post)