On April 19, 2026, Vercel disclosed a security incident. The headline was about environment variables and customer data. The actual story is about a threat most companies still aren't auditing: the third-party AI tools their employees connect to corporate accounts.
Here's the chain: an AI startup called Context.ai got compromised. An attacker used that access to hijack a Vercel employee's Google Workspace account through Context.ai's OAuth app. From there, they pivoted into Vercel's internal systems and decrypted customer environment variables.
But it gets worse. Context.ai's security certifications were performed by Delve — the same compliance startup that an anonymous whistleblower accused of faking customer data and rubber-stamping audits in March. Delve has since been dropped by Y Combinator. LiteLLM, another Delve customer, had malware planted in its open-source code. Lovable, yet another, inadvertently shared customer chat data publicly and then dismissed the vulnerability reports that warned them about it.
This isn't one breach. It's a cluster. And the connective tissue is AI tooling.
Vercel's own security bulletin lays it out with unusual transparency:
The attack came in through Context.ai's Google OAuth app (client ID 110671459871-...apps.googleusercontent.com; the full identifier is reproduced later in this post). Hundreds of users across many organizations had granted it access to their Google accounts.

Vercel assessed the attacker as "highly sophisticated" based on operational velocity and deep understanding of Vercel's API surface. They engaged Google Mandiant and law enforcement.
This is where it stops being a single-vendor problem and becomes an ecosystem problem.
Context.ai's security certifications were done by Delve. Delve is the compliance startup that:

- was accused by an anonymous whistleblower in March of faking customer data and rubber-stamping audits
- has since been dropped by Y Combinator

Context.ai has since dropped Delve and is being re-certified through Vanta and Insight Assurance. LiteLLM dumped Delve too. Lovable had already left. But the certifications were in place when the breaches happened. The rubber stamp didn't prevent anything.
I wrote recently about prompt injection attacks against AI coding agents and what happens when AI agents get too much access. The Vercel incident adds a third dimension: AI tools your security team never vetted or approved can become the entry point.
The Vercel employee didn't install malware. They didn't click a phishing link. They connected a legitimate AI tool to their corporate Google account — a tool that had been security-certified by a compliance company that appears to have been certifying fiction.
This is the OAuth supply chain problem, but accelerated by AI tool adoption. Every AI startup your team connects to Google Workspace, GitHub, Slack, or AWS is an attack surface. Most of these startups are early-stage, lightly audited, and holding OAuth tokens with broad scopes. One breach at the vendor and the attacker is inside your perimeter.
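What does auditing that surface look like in practice? For Google Workspace, the Admin SDK Directory API exposes every OAuth grant each user has made through its tokens.list method. Below is a minimal sketch, not anything from Vercel's writeup: the broad-scope list is my own heuristic, and GOOGLE_ADMIN_TOKEN stands in for an admin access token (with the directory user and security scopes) you'd mint yourself.

```typescript
// audit-oauth-grants.ts: inventory third-party OAuth grants across a Workspace tenant.
// Requires an admin token with the admin.directory.user.readonly and
// admin.directory.user.security scopes; GOOGLE_ADMIN_TOKEN is a placeholder.

const BASE = "https://admin.googleapis.com/admin/directory/v1";
const TOKEN = process.env.GOOGLE_ADMIN_TOKEN!;

// Scopes broad enough that a breached vendor holding them is a serious problem.
// A heuristic list, not an official classification.
const BROAD_SCOPES = [
  "https://mail.google.com/",
  "https://www.googleapis.com/auth/drive",
  "https://www.googleapis.com/auth/admin.directory.user",
];

async function get(path: string): Promise<any> {
  const res = await fetch(`${BASE}${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`${path}: ${res.status}`);
  return res.json();
}

async function main() {
  let pageToken: string | undefined;
  do {
    // Page through every user in the tenant.
    const page = await get(
      `/users?customer=my_customer&maxResults=500` +
        (pageToken ? `&pageToken=${pageToken}` : ""),
    );
    for (const user of page.users ?? []) {
      // tokens.list returns every OAuth client this user has authorized.
      const tokens = await get(`/users/${user.primaryEmail}/tokens`);
      for (const t of tokens.items ?? []) {
        const broad = (t.scopes ?? []).filter((s: string) =>
          BROAD_SCOPES.some((b) => s.startsWith(b)),
        );
        if (broad.length > 0) {
          console.log(`${user.primaryEmail}\t${t.displayText ?? t.clientId}\t${broad.join(" ")}`);
        }
      }
    }
    pageToken = page.nextPageToken;
  } while (pageToken);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Any row with a mail or Drive scope and a vendor name nobody recognizes is exactly the Context.ai pattern.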
Delve is a cautionary tale about what "SOC 2 certified" actually means. Context.ai had certifications. LiteLLM had certifications. Lovable had certifications. All three were breached or exposed data in the same month.
Compliance is not security. A SOC 2 Type II attestation says an auditor watched your controls operate over a window of time. It doesn't say those controls are good, and it's only as trustworthy as the testing behind it. When your compliance vendor is accused of fabricating audit evidence, every certification they issued becomes suspect.
As someone running an AI operation myself, I'll be direct: the AI tool ecosystem is expanding faster than the security ecosystem can audit it. Every week there's a new AI-powered analytics tool, code review assistant, or agent framework that wants OAuth access to your core systems. Most of them are built by small teams with good intentions and limited security budgets.
If anyone in your organization granted access to that same OAuth client (110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com), assume compromise. Vercel published the identifier specifically for community vetting; a revocation sketch follows the list below.

April 2026 has been a brutal month for AI security. Beyond Vercel:

- LiteLLM had malware planted in its open-source code
- Lovable exposed customer chat data publicly, then brushed off the vulnerability reports that warned about it
- Delve, the auditor that certified all three companies, was dropped by Y Combinator after the whistleblower allegations
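Checking for that grant, and revoking it, is one call per user against the same Directory API as the audit sketch above; tokens.delete removes the authorization. Everything here except the published client ID is assumption, and a hit should trigger full incident response on the account, not just revocation.

```typescript
// check-ioc.ts: look for the published IOC client ID on one account and revoke it.
// Needs the same admin-scoped token as the audit sketch; GOOGLE_ADMIN_TOKEN is a placeholder.

const IOC_CLIENT_ID =
  "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com";
const BASE = "https://admin.googleapis.com/admin/directory/v1";
const TOKEN = process.env.GOOGLE_ADMIN_TOKEN!;

async function revokeIfPresent(email: string): Promise<boolean> {
  // tokens.list: every OAuth client this user has authorized.
  const res = await fetch(`${BASE}/users/${encodeURIComponent(email)}/tokens`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`tokens.list failed for ${email}: ${res.status}`);
  const { items = [] } = await res.json();
  if (!items.some((t: { clientId: string }) => t.clientId === IOC_CLIENT_ID)) {
    return false;
  }
  // tokens.delete: revoke the grant. Revocation is containment, not remediation.
  await fetch(`${BASE}/users/${encodeURIComponent(email)}/tokens/${IOC_CLIENT_ID}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  return true;
}

revokeIfPresent("employee@example.com").then((hit) =>
  console.log(hit ? "IOC grant found and revoked: treat account as compromised" : "clean"),
);
```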
The threat model is shifting. It's not just "attacker exploits vulnerability." It's "AI tool your team connected six months ago gets compromised, and the OAuth token they granted silently becomes a skeleton key." It's "your compliance auditor was certifying fiction while your vendors were getting breached." It's "your own AI agent hallucinates its way into data it shouldn't see."
The Vercel incident is a template. Not because Vercel did something uniquely wrong — their transparency in publishing the full attack path and IOC is a model for incident response. It's a template because the conditions that enabled it exist in every organization using AI tools today: OAuth grants to early-stage AI startups, certification theater from compromised auditors, and environment variables that are easier to decrypt than they should be.
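On that last point: if your environment variables may have been decrypted, the only safe move is rotating all of them, and rotation starts with an inventory. Here's a minimal sketch against Vercel's public REST API; the v9 endpoint and response shape reflect the published docs as I understand them, not anything from the incident bulletin, so verify before leaning on it.

```typescript
// list-env-keys.ts: inventory a project's env vars so rotation has a checklist.
// VERCEL_TOKEN and VERCEL_PROJECT are placeholders; endpoint per public docs (verify).

const VERCEL_TOKEN = process.env.VERCEL_TOKEN!;
const PROJECT = process.env.VERCEL_PROJECT!; // project ID or name

async function main() {
  const res = await fetch(`https://api.vercel.com/v9/projects/${PROJECT}/env`, {
    headers: { Authorization: `Bearer ${VERCEL_TOKEN}` },
  });
  if (!res.ok) throw new Error(`env list failed: ${res.status}`);
  const { envs } = await res.json();
  // Keys and targets only; the values themselves are what you're rotating.
  for (const e of envs ?? []) {
    console.log(`${e.key}\t${(e.target ?? []).join(",")}\t${e.type}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```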
Audit your AI tool stack. Today. Not at next quarter's security review.