Servers Are Back and Nobody Wants to Admit It
The edge was a detour
For about three years, the industry convinced itself that the future of web applications was running JavaScript at the edge. Every CDN node a tiny server. Compute pushed as close to the user as possible. Latency solved forever.
It didn't work out like that.
What the edge actually gives you
Edge computing is genuinely good for a narrow set of things. URL rewrites. Auth token validation. A/B test routing. Geolocation headers. Stuff that doesn't need a database, a cache, or any external service. For that class of work, edge functions are fast, cheap, and great.
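That class of work can be sketched in a few lines. This is a minimal, hypothetical handler modeled loosely on a Workers-style fetch API; the route names are made up, and the geolocation data is passed in as a plain `geo` argument rather than coming from any specific platform's request object:

```javascript
// Stateless edge work: a URL rewrite plus a geolocation header.
// No database, no cache, no external service — just request transformation.
function handleEdgeRequest(request, geo) {
  const url = new URL(request.url);

  // URL rewrite: serve a legacy path from its new location.
  if (url.pathname.startsWith("/old-docs/")) {
    url.pathname = url.pathname.replace("/old-docs/", "/docs/");
  }

  // Geolocation header: the origin or cache downstream can vary on this.
  const headers = new Headers(request.headers);
  headers.set("X-User-Country", geo.country ?? "unknown");

  return new Request(url.toString(), { headers, method: request.method });
}
```

Everything here runs in microseconds with no I/O, which is exactly why it belongs at the edge.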
But that's not what people tried to do with them. People tried to run their entire backend at the edge. Full request handlers, database queries, API calls to third-party services — all crammed into a runtime with limited Node.js compatibility, cold-start penalties, and a 25-second execution limit. Then they were surprised when things broke in weird ways.
The database problem nobody solved
Here's the thing nobody talks about at conferences. Your edge function runs in 30 regions. Your database runs in one. Maybe two if you've set up read replicas. Every request from an edge function still has to cross the internet to reach your database, and that round trip is almost always longer than the latency you saved by running at the edge.
Connection pooling becomes a nightmare. Each edge instance needs its own database connection — or you need a connection proxy in front of your database — or you use an HTTP-based database driver that adds its own overhead. None of these solutions are bad, exactly. But they're all complexity you took on to solve a problem you may not have had.
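The back-of-envelope arithmetic makes the trade-off concrete. The numbers below are illustrative assumptions, not measurements: the edge shaves the user-to-compute hop but pays a long compute-to-database hop on every sequential query, while the regional server is further from the user but sits next to its database:

```javascript
// Simple latency model: one user->compute hop, then one compute->db
// round trip per sequential database query. All figures in milliseconds.
function totalLatency({ userToCompute, computeToDb, dbQueries }) {
  return userToCompute + dbQueries * computeToDb;
}

// Edge: 10ms to the nearest PoP, but ~80ms per query back to the database.
const edge = totalLatency({ userToCompute: 10, computeToDb: 80, dbQueries: 3 }); // 250

// Regional: 40ms across the country, but ~2ms per query to a colocated db.
const regional = totalLatency({ userToCompute: 40, computeToDb: 2, dbQueries: 3 }); // 46
```

Three sequential queries is not an unusual request. The edge version loses by a factor of five despite starting 30ms ahead.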
I ran benchmarks on a real app last year. Same codebase. One version on a single-region server with a colocated database. One version on an edge runtime with a database proxy. The single-region server was faster for every request that touched the database. Every single one.
The serverless middle ground
Serverless functions in a single region — close to your database — give you most of what people actually wanted from the edge. Auto-scaling. No servers to manage. Pay-per-invocation pricing. And your database queries take 2ms instead of 80ms.
There's nothing wrong with picking a region. Most of your users are probably in one or two geographic areas anyway. If you have genuine global traffic and latency-sensitive workloads, you can replicate your database. But start with one region and a benchmark. Don't start with a deployment architecture designed for problems you haven't measured.
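"Start with a benchmark" can be as simple as timing the same request a few dozen times and looking at the median, which is more robust than a single sample. A minimal sketch, where the target URL is a placeholder and the probe method is an assumption you'd adjust for your own endpoints:

```javascript
// Median of a list of numbers — the summary statistic for the timings.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Probe a deployment candidate N times and report the median latency.
async function medianLatencyMs(url, samples = 20) {
  const timings = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(url, { method: "HEAD" }); // cheap probe; use a real request shape in practice
    timings.push(performance.now() - start);
  }
  return median(timings);
}
```

Run it against a single-region deployment and an edge deployment with the same codebase, and you have an answer grounded in your traffic instead of someone else's architecture diagram.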
Why this matters right now
Frameworks are quietly walking this back. Server components run on the server, not the edge. New deployment defaults are shifting back to regional. The conference talks about edge-first architecture have dried up. Nobody's writing blog posts titled "We Moved Everything to the Edge and It Was Amazing" anymore.
That's fine. The industry tries things. Some work, some don't. But we should be honest about what we learned instead of just moving on to the next thing.
Run your code near your data. It's not exciting advice. It's correct.