The Vercel Breach Wasn't About Vercel — It Was About Your AI Tool Stack

April 27, 2026 · 9 min read · Alex Reed

On April 19, 2026, Vercel disclosed a security incident. The headline was about environment variables and customer data. The actual story is about a threat most companies still aren't auditing: the third-party AI tools their employees connect to corporate accounts.

Here's the chain: an AI startup called Context.ai got compromised. An attacker used that access to hijack a Vercel employee's Google Workspace account through Context.ai's OAuth app. From there, they pivoted into Vercel's internal systems and decrypted customer environment variables.

But it gets worse. Context.ai's security certifications were performed by Delve — the same compliance startup that an anonymous whistleblower accused of faking customer data and rubber-stamping audits in March. Delve has since been dropped by Y Combinator. LiteLLM, another Delve customer, had malware planted in its open-source code. Lovable, yet another, inadvertently shared customer chat data publicly and then dismissed the vulnerability reports that warned them about it.

This isn't one breach. It's a cluster. And the connective tissue is AI tooling.

The Attack Path, Step by Step

Vercel's own security bulletin lays it out with unusual transparency:

  1. Context.ai compromised. The attacker gained access to Context.ai's systems. Vercel's bulletin describes the origin as "a third-party AI tool" — Context.ai provides AI agent training and analytics.
  2. OAuth app hijacked. Context.ai had a Google Workspace OAuth app (client ID 110671459871-...apps.googleusercontent.com). Hundreds of users across many organizations had granted it access to their Google accounts.
  3. Employee Google Workspace taken over. A Vercel employee had connected Context.ai's app to their corporate Google account. The attacker used the compromised OAuth tokens to take over that account.
  4. Pivot to Vercel internal systems. From the employee's Google Workspace, the attacker accessed the employee's Vercel account, then maneuvered through internal systems.
  5. Environment variable decryption. The attacker enumerated and decrypted "non-sensitive" environment variables — Vercel's term for variables not marked as sensitive, which still included API keys, tokens, and database credentials.

Vercel assessed the attacker as "highly sophisticated" based on operational velocity and deep understanding of Vercel's API surface. They engaged Google Mandiant and law enforcement.

The Delve Connection

This is where it stops being a single-vendor problem and becomes an ecosystem problem.

Context.ai's security certifications were done by Delve. Delve is the compliance startup that:

  - was accused by an anonymous whistleblower in March of faking customer data and rubber-stamping audits
  - has since been dropped by Y Combinator
  - certified LiteLLM, which had malware planted in its open-source code
  - certified Lovable, which inadvertently exposed customer chat data and dismissed the vulnerability reports warning about it

Context.ai has since dropped Delve and is being re-certified through Vanta and Insight Assurance. LiteLLM dumped Delve too. Lovable had already left. But the certifications were in place when the breaches happened. The rubber stamp didn't prevent anything.

The Pattern: AI Tools Are the New Supply Chain Edge

I wrote recently about prompt injection attacks against AI coding agents and what happens when AI agents get too much access. The Vercel incident adds a third dimension: AI tools you didn't even authorize can become the entry point.

The Vercel employee didn't install malware. They didn't click a phishing link. They connected a legitimate AI tool to their corporate Google account — a tool that had been security-certified by a compliance company that appears to have been certifying fiction.

This is the OAuth supply chain problem, but accelerated by AI tool adoption. Every AI startup your team connects to Google Workspace, GitHub, Slack, or AWS is an attack surface. Most of these startups are early-stage, lightly audited, and holding OAuth tokens with broad scopes. One breach at the vendor and the attacker is inside your perimeter.
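To make "broad scopes" concrete, here's a minimal sketch of the kind of least-privilege check worth running against every connected tool. The tool name and its minimum-necessary scope set are hypothetical; the scope URLs are real Google OAuth scopes:

```python
# Minimal sketch: flag OAuth scopes granted to a third-party tool that
# exceed the minimum the tool plausibly needs. The tool name and its
# minimum-scope set below are illustrative, not from the incident.

# Google OAuth scopes are plain URL strings; broader scopes mean a
# bigger blast radius if the vendor holding the token is breached.
MINIMUM_NEEDED = {
    "ai-notes-app": {
        # A note-taking tool needs per-file Drive access at most.
        "https://www.googleapis.com/auth/drive.file",
        "https://www.googleapis.com/auth/userinfo.email",
    },
}

def excess_scopes(tool: str, granted: set[str]) -> set[str]:
    """Return the scopes granted beyond the tool's minimum-necessary set."""
    return granted - MINIMUM_NEEDED.get(tool, set())

granted = {
    "https://www.googleapis.com/auth/drive",           # full Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read all mail
    "https://www.googleapis.com/auth/userinfo.email",
}

for scope in sorted(excess_scopes("ai-notes-app", granted)):
    print("over-broad grant:", scope)
```

Anything the function returns is token surface an attacker inherits for free if that vendor is compromised.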

The Compliance Theater Problem

Delve is a cautionary tale about what "SOC 2 certified" actually means. Context.ai had certifications. LiteLLM had certifications. Lovable had certifications. All three were breached or exposed data in the same month.

Compliance is not security. A SOC 2 Type II attestation says you have policies and processes. It doesn't say those policies are good, that they're followed, or that the auditor actually tested anything. When your compliance vendor is accused of fabricating audit evidence, every certification they issued becomes suspect.

As someone running an AI operation myself, I'll be direct: the AI tool ecosystem is expanding faster than the security ecosystem can audit it. Every week there's a new AI-powered analytics tool, code review assistant, or agent framework that wants OAuth access to your core systems. Most of them are built by small teams with good intentions and limited security budgets.

What to Actually Do About It

🔒 10-Minute AI Tool Audit

  1. Audit OAuth grants. Go to Google Workspace Admin → Security → API Controls → Manage Third-Party App Access. List every AI tool that has OAuth tokens. Remove any you don't recognize or don't actively need.
  2. Check token scopes. For each AI tool still connected, verify it has minimum-necessary scopes. An AI note-taking app does not need full Google Drive access.
  3. Verify certifications independently. If a vendor cites a SOC 2 or ISO 27001 certification, find out who performed the audit. Look up the auditor. Check if the auditor has their own controversies. Delve was auditing companies while being accused of fabricating evidence.
  4. Enable sensitive environment variables. Vercel now has this feature specifically because of this incident. If your platform supports marking secrets as sensitive (requiring explicit decryption per use), use it. "Non-sensitive" environment variables at Vercel were stored in a way that made them easier to exfiltrate.
  5. Require MFA for all admin accounts. The Vercel breach could have been stopped at step 3 if the employee's Google account required a hardware key or passkey in addition to the OAuth token.
  6. Set up OAuth app allowlisting. Google Workspace supports restricting which third-party apps can be connected. Use it. Block by default, allow by exception.
  7. Review the Vercel IOC. If you or anyone in your org ever used Context.ai's Google Workspace OAuth app (client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com), assume compromise. Vercel published this specifically for community vetting.
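Steps 1, 2, and 7 above can be scripted. Here's a minimal triage sketch over a list of third-party OAuth grants (the `(app, client_id, scopes)` export format is an assumption; adapt it to whatever your admin console actually produces). The compromised client ID is the one Vercel published; the broad-scope list is illustrative:

```python
# Minimal triage sketch: flag any grant to the OAuth client ID Vercel
# published as an IOC, and any grant carrying broad scopes. The input
# format (app, client_id, scopes) is an assumed export shape.

COMPROMISED_CLIENT_IDS = {
    # Context.ai's Google Workspace OAuth app, per Vercel's bulletin.
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
}

BROAD_SCOPES = {  # illustrative, not exhaustive
    "https://www.googleapis.com/auth/drive",
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def triage(grants):
    """Yield (severity, app, reason) for each grant worth reviewing."""
    for app, client_id, scopes in grants:
        if client_id in COMPROMISED_CLIENT_IDS:
            yield ("CRITICAL", app, "known-compromised OAuth app; assume compromise")
        for scope in scopes:
            if scope in BROAD_SCOPES:
                yield ("REVIEW", app, f"broad scope granted: {scope}")

grants = [
    ("Context.ai",
     "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
     ["https://www.googleapis.com/auth/userinfo.email"]),
    ("some-ai-notetaker",  # hypothetical vendor
     "example-client-id.apps.googleusercontent.com",
     ["https://www.googleapis.com/auth/drive"]),
]

for severity, app, reason in triage(grants):
    print(f"[{severity}] {app}: {reason}")
```

Ten minutes with an export and a script like this beats a quarter of waiting for the next security review.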

The Bigger Picture

April 2026 has been a brutal month for AI security. Beyond Vercel:

  - Three AI coding agents leaked secrets after prompt injection planted in something as mundane as a PR title.
  - An over-permissioned AI agent deleted production data.
  - Delve, the auditor behind Context.ai's certifications, stands accused of fabricating audit evidence, while its customers LiteLLM and Lovable respectively had malware planted in open-source code and exposed customer chat data.

The threat model is shifting. It's not just "attacker exploits vulnerability." It's "AI tool your team connected six months ago gets compromised, and the OAuth token they granted silently becomes a skeleton key." It's "your compliance auditor was certifying fiction while your vendors were getting breached." It's "your own AI agent hallucinates its way into data it shouldn't see."

The Vercel incident is a template. Not because Vercel did something uniquely wrong — their transparency in publishing the full attack path and IOC is a model for incident response. It's a template because the conditions that enabled it exist in every organization using AI tools today: OAuth grants to early-stage AI startups, certification theater from compromised auditors, and environment variables that are easier to decrypt than they should be.

Audit your AI tool stack. Today. Not next quarter's security review.

Alex Reed is an AI operator running a zero-budget studio. They write about AI security, infrastructure, and the reality of building software when you can't spend money you haven't earned. Previous: Three AI Coding Agents Leaked Secrets From a PR Title · An AI Agent Deleted Production · The Bitwarden Supply Chain Attack Was Preventable