We watched Vercel AI 2025 conf so you don’t have to

We hadn’t even finished digesting Next.js 16 when Vercel dropped another bomb. But we’ve already started experimenting, and we have the results.

Jono, Founder

Right after the Next.js 2025 conference, Vercel’s founder and CEO, Guillermo Rauch, walked back on stage for another conference, because apparently, one was not enough. This time, it wasn’t just about a framework update or frontend performance.

“We’re moving from pages to agents.”
If that line made you raise an eyebrow, same here.

Between the Next.js 16 updates and Ship AI 2025, Vercel basically said, “What if your code could think, wait, and pick up right where it left off?”

From durable workflows to AI agents as a service, and Vercel’s plan to put an AI agent on every desk, we went through all the updates and even tried a few to see what they mean for you!

Next.js conf recap

Before Vercel told everyone to stop building pages and start shipping agents, they dropped Next.js 16 with some features still in beta. It was the calm before the storm. The part where Vercel politely cleaned the kitchen before setting it on fire with Ship AI. And honestly, it looked like a great release. We are still verifying it, so don't take out your pitchforks yet!

If you missed our full breakdown of Next.js 16, we covered everything here → (link to Next.js 16 blog). Or just continue reading to get the gist of the release.

And yes, we’re already running it in production. If you’re reading this without visual glitches or cryptic errors, that’s Turbopack doing its thing.

Turbopack is finally the default

You no longer have to worry about setup, flags, or sacrificing goats to fix Webpack. You can just start coding and everything will compile fast with 5–10x faster refreshes, 2–5x faster builds, and filesystem caching that makes dev servers pop open instantly.

Smooth navigation

Next.js finally fixed the “every click feels like a full reload” problem. With smarter prefetching and layout deduplication, shared UI stays put and page transitions don’t flicker.

Cache you can actually control

Next.js stopped guessing what you want to cache. It’s now fully opt-in, so you decide what stays warm, what refreshes, and what gets tossed. So you no longer have to worry, “Why is this page still serving data from last week?” or perform 2 a.m. cache-clearing rituals.
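The opt-in model can be sketched in a few lines. This assumes Next.js 16’s `"use cache"` directive (Cache Components, still in beta); the function name and data below are illustrative, and outside Next.js the directive is just a no-op string.

```typescript
// Hedged sketch: Next.js 16's opt-in caching (Cache Components, beta).
// Nothing is cached unless a function or component asks for it with the
// "use cache" directive. Outside Next.js this directive is a no-op string,
// so the sketch still runs standalone.
async function getProducts() {
  'use cache'; // opt this function's result into the cache
  // In a real app this would hit your data source; illustrative data here.
  return [{ id: 1, name: 'Widget' }];
}

getProducts().then((products) => console.log(products.length)); // logs 1
```

Everything without the directive stays fresh, so the “data from last week” page is always one you explicitly chose to cache.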

React 19.2 feels grown up

React got quieter and smarter. You get cleaner effects with useEffectEvent(), smoother page transitions, and UI that remembers where it left off.

DevTools MCP

This is the sneaky-big feature in Next.js 16. DevTools now speaks MCP, which means AI agents can debug your app with real context instead of generic “have you tried turning it off and on?” advice.

So while Ship AI was the big event, Next.js 16 was the setup with faster builds, faster navigation, better caching, smarter UI… all the groundwork for the agent-powered world Vercel unveiled the next day.


Ship AI Vercel conf takeaways

We finally get to the main event. Vercel talked. We listened. And now we’re going to write about it so your team doesn’t have to pretend they “totally watched the stream.” Or skip all this, call us, and tell us what you want built.


At first, this whole “AI in production” thing felt quite simple. Drop in a model API, send a prompt, get some text back, and tell your CTO you’ve “shipped AI.” That simplicity lasts about 48 hours. Suddenly, you’re switching models every month, juggling each provider’s weird SDK, managing rate limits, reconciling billing systems, and explaining to finance why you blew $800 on tokens because your intern forgot a stop condition.

With Vercel's AI SDK, you can stop duct-taping your backend together with ChatGPT responses. It’s already hitting ~4M downloads per week, which means either everyone is building agents or everyone is equally confused and panicking together. Either way, good news.

From here, every other announcement makes sense: workflows, durability, sandboxed execution, Gateway, and an actual path to put “an AI agent on every desk.”

AI SDK 6 & Gateway

With one SDK, you get all the models. You can switch Claude → GPT → Groq → random stealth model Vercel got early access to, by changing about three lines of code. It is the polar opposite of juggling a different SDK for every provider.


The Gateway gives you:

  • unified billing
  • better rate limits
  • failover when your favorite model has a meltdown
  • and observability.
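The failover idea is easy to sketch. This is plain TypeScript, not the Gateway’s actual API; the provider stand-ins are invented for illustration, but the shape — one call signature, many providers, next one takes over on a meltdown — is the point.

```typescript
// Hedged sketch (plain TypeScript, not the Gateway's real API): one shared
// call signature across providers, with failover when one melts down.
type Provider = (prompt: string) => Promise<string>;

async function withFailover(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await call(prompt); // first healthy provider wins
    } catch (err) {
      lastError = err; // provider down or rate-limited; try the next one
    }
  }
  throw lastError; // every provider failed
}

// Illustrative stand-ins for real model endpoints:
const flaky: Provider = async () => { throw new Error('503: meltdown'); };
const stable: Provider = async (p) => `answer to: ${p}`;

withFailover([flaky, stable], 'hello').then((a) => console.log(a));
// logs "answer to: hello"
```

The Gateway does this (plus billing and observability) for you, so your app code only ever sees one interface.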

Heads up: AI SDK 6 is still in beta. We are already experimenting with it to build agents, and we will tell you how it performs in real-world cases.

Workflows

Ship AI 2025 had one core message: "Your one-prompt agents are cute, but the real world needs grown-ups." And we totally agree. Agents can no longer just respond. They have to:

  • call tools
  • fetch data
  • wait for humans
  • retry on failure
  • pause for a week
  • wake back up like nothing happened

This is where every team quietly cries, because building that has always meant a spaghetti bowl of queues, cron jobs, retries, broken state, and a prayer you don’t lose tokens mid-execution. So Vercel introduced the Workflow DevKit, enabling developers to build long-running, reliable AI agents without manually wiring tons of infrastructure.

You write one TypeScript function, mark steps inside it, and Vercel turns it into a durable workflow. This allows agents to:

  • Call models and tools
  • Respond to webhooks or events
  • Wait for human approval
  • Resume exactly where they left off
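The durability trick behind that list can be sketched in plain TypeScript. This is not the Workflow DevKit’s real API — the `step` helper and checkpoint map below are our own illustration — but it shows why a re-run after a crash or a week-long pause resumes instead of restarting.

```typescript
// Hedged sketch (plain TypeScript, not the Workflow DevKit's real API):
// each step's result is checkpointed before moving on, so re-running the
// workflow skips finished steps and resumes exactly where it left off.
const checkpoints = new Map<string, unknown>(); // a real system persists these

let stepRuns = 0; // only here to show completed steps don't re-execute

async function step<T>(name: string, fn: () => Promise<T>): Promise<T> {
  if (checkpoints.has(name)) return checkpoints.get(name) as T; // already done
  stepRuns++;
  const result = await fn();
  checkpoints.set(name, result); // durable checkpoint
  return result;
}

async function reviewAgent(doc: string) {
  const summary = await step('summarize', async () => `summary of ${doc}`);
  const approved = await step('approve', async () => true); // would await a human
  return approved ? summary : null;
}

// First run executes both steps; the second replays from checkpoints.
reviewAgent('q3-report')
  .then(() => reviewAgent('q3-report'))
  .then((out) => console.log(out, stepRuns)); // "summary of q3-report" 2
```

Swap the in-memory map for durable storage and you have the core of “pause for a week, wake back up like nothing happened.”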

Vercel Sandboxes

LLMs can generate code, and that code gets run and deployed. Sometimes it breaks catastrophically. The Sandbox exists to solve exactly that, and here is how:

  • Every execution runs inside an isolated micro-VM.
  • No access to prod environments.
  • No “oops we just deleted our own database” moment.
  • Perfect for validating agent-generated fixes before a human approves them.
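The “run untrusted output, inspect, then approve” loop looks roughly like this. Node’s `vm` module stands in purely for shape — it is not a real security boundary the way Vercel’s isolated micro-VMs are — and the helper name is ours.

```typescript
// Hedged illustration of the execute-then-inspect flow. Node's vm module is
// NOT a security boundary (unlike Vercel's micro-VMs); it just shows the
// shape: isolated context, no prod access, result or error reported back.
import vm from 'node:vm';

function tryAgentCode(code: string): { ok: boolean; result?: unknown; error?: string } {
  try {
    const context = vm.createContext({}); // no access to our globals or prod env
    const result = vm.runInContext(code, context, { timeout: 1000 });
    return { ok: true, result };
  } catch (err) {
    return { ok: false, error: String(err) };
  }
}

console.log(tryAgentCode('1 + 1'));              // { ok: true, result: 2 }
console.log(tryAgentCode('process.exit(0)').ok); // false: no process in there
```

A human (or another agent) reviews the reported result before anything touches production.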

AI Cloud

This is the foundation of Vercel's "AI agent on every desk" vision, with infrastructure to back it up. We have already tried it for one of our clients, and we are absolutely loving the results.

Vercel isn't asking you to build agents from scratch (you can if you want to); it's saying you don't need to build separate infrastructure for them.

Their AI Cloud is designed to remove infrastructure from the equation.

  • AI Gateway: Pick any LLM and switch it anytime you want.
  • Fluid Compute: Only pay when CPU is actually used.
  • Sandboxes: Execute untrusted code safely.
  • Workflows: Durable and resumable implementations.
  • Observability: See every step, every token, and every failure.
  • Zero config: Deploy and forget.

AI Agents for enterprises

The real problem for enterprises isn’t whether they can build an agent, but where to even start. Which model? Which tools? How do you run this in production without duct-taping Lambdas, cron jobs, and random queues together?

This is exactly where Vercel’s “AI agent on every desk” vision starts to make sense. Enterprises don't need to hire a research lab just to send an email seven days later or validate a document. Vercel’s answer gives companies two paths: install agents, or build them, both without building infrastructure.

First, there’s Agent-as-a-Service, which is exactly what it sounds like. Instead of writing orchestration logic, retries, state management, or approval flows, you can install agents that already run natively on the AI Cloud. Vercel even launched an Agent Marketplace, where teams can grab production-ready agents from companies like Code Rabbit, Sorcery, Corid, Browser Use, Dcope, Kernel, Cubix, Mixat, and others. They’ve even built their own internal agents for code review, anomaly investigation, and debugging in production.

We also implemented a website anomaly investigation agent for one of our customers, and you can see the results below. We are damn proud of this.


And if none of those solve your exact problem, you can still build your own. Or connect with us, and we will help you get started.

Final thoughts

The whole point of their “AI agent on every desk” plan and this release is that enterprises shouldn’t need to learn distributed systems just to automate repetitive work. You either install an agent or ship your own on top of infrastructure that already works.

And yes, we follow the same blueprint internally, because if a platform can run week-long workflows, data lookups, approvals, retries, and sandboxed execution without blowing up billing, we don’t see a reason to reinvent that.

Get in touch

Book a meeting with us to discuss how we can help or fill out a form to get in touch

