For many years, the advantages of cloud computing for startups and growth-stage companies have been obvious. It makes infrastructure effortless. It eliminates the need for the grey-bearded sysadmin. It lets small teams deploy software without thinking about hardware.
The assumptions that have led to cloud inevitability are changing. AI is expensive to run in the cloud. Renting GPUs at scale doesn’t make financial sense. Elasticity—the cloud’s biggest selling point—isn’t as critical when workloads are predictable. And perhaps most importantly, the software infrastructure that grew alongside cloud computing—container orchestration, deployment automation, monitoring—has matured to the point where it’s just as valuable for running owned hardware.
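The rent-vs-own claim is ultimately arithmetic. As a rough sketch, with hypothetical placeholder prices (not real quotes for any vendor or accelerator), the break-even point for buying instead of renting falls out of a few lines:

```python
# Illustrative break-even calculation for renting vs. owning an accelerator.
# All prices are hypothetical placeholders, not vendor quotes.

def breakeven_months(purchase_cost, hourly_rent, utilization=1.0):
    """Months of rental spend that equal the purchase price."""
    hours_per_month = 730  # average hours in a month
    monthly_rent = hourly_rent * hours_per_month * utilization
    return purchase_cost / monthly_rent

# Assume a $30,000 accelerator vs. $3/hour on-demand rental,
# running continuously (utilization = 1.0), ignoring power,
# cooling, and ops staff on the ownership side.
months = breakeven_months(30_000, 3.0)
print(round(months, 1))  # 13.7
```

The point is not the specific numbers but the shape of the curve: for steady, near-continuous workloads like inference, rental costs cross the purchase price within a year or two, which is well inside the useful life of the hardware.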
For the first time in years, and partly because of AI, early-stage startups are reconsidering their cloud dependence: vertically integrated, on-prem infrastructure is getting better, cheaper, and easier to manage.

DHH and 37signals have been the loudest voices in this space, moving Basecamp and their other products off AWS in favor of owned infrastructure. Their workloads didn’t justify the cost of cloud tenancy.
Elastic compute was the cloud’s breakthrough. AWS gave businesses the ability to scale up when demand surged and scale down when it faded. Before that, startups had to over-provision to handle peak traffic.
But now, many workloads aren’t as unpredictable as they once were. AI inference is latency-sensitive but has stable, continuous demand. Better observability and forecasting mean startups can plan infrastructure more precisely.
The cloud also changed how developers interact with infrastructure. DevOps blurred the line between development and operations. The dedicated sysadmin faded. Platform tools abstracted the complexity of provisioning, configuring, and managing servers. Developers could focus on writing software, not operating hardware.
These shifts are why on-prem infrastructure is changing. Not to resurrect manual configuration and old-school data center management, but to bring cloud-like developer ergonomics to integrated infrastructure.
Oxide Computer is making rack-scale systems that bring cloud-like automation and observability to owned hardware.
Lambda Labs is offering AI startups an alternative to hyperscaler lock-in, providing both cloud and on-prem options for training and inference.
Groq is designing AI chips with a vertically integrated software stack that eliminates the inefficiencies of general-purpose GPUs.
Tiny Corp is creating AI systems where the hardware and API are a single, cohesive product.
Tenstorrent is building AI accelerators and systems on open RISC-V-based chips that give startups an alternative to Nvidia’s proprietary ecosystem. (I’ll take one Grayskull™, please.)
Lemony is making it as easy to run AI workloads on a single machine as it is to run them in the cloud.
But it’s not just the hardware that’s improved. The developer tools that made cloud appealing are now making on-prem manageable.
Kamal lets teams deploy to their own machines as seamlessly as deploying to AWS.
DuckDB and MotherDuck make local-first analytics viable, reducing the need for cloud-based query engines.
Litestream streams SQLite’s write-ahead log to object storage, giving a single-file database continuous replication and fast disaster recovery.
Dagster is making data pipelines more flexible, bridging local and cloud-based processing.
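The Litestream model is a good illustration of how little these tools demand from the application. The app just uses plain SQLite; Litestream runs alongside it as a separate process (its documented usage is `litestream replicate <db-path> <replica-url>`), tailing the write-ahead log and shipping it off-box. A minimal sketch of the application side, with a hypothetical `app.db` file and `events` table:

```python
import sqlite3

# The application side of a Litestream setup is just plain SQLite.
# Litestream runs as a sidecar process, tailing the WAL and shipping
# it to object storage; the application needs no code changes.
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")  # Litestream replicates the WAL
conn.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, name TEXT)"
)
conn.execute("INSERT INTO events (name) VALUES (?)", ("deploy",))
conn.commit()
rows = conn.execute("SELECT name FROM events").fetchall()
conn.close()
```

That is the whole pitch in miniature: the operational sophistication (replication, point-in-time recovery) lives in a small piece of infrastructure software, not in the database service of a cloud provider.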
The pattern here is the same: better software makes infrastructure choices more flexible. The software stack that once made cloud computing indispensable is now agnostic to where it runs.
This is not a rejection of the cloud, but rather a breaking of the assumption that cloud is the default. The economics of AI, the maturity of developer tooling, and the predictability of modern workloads are making ownership viable again. The cloud isn’t going anywhere, but for startups with steady, resource-intensive workloads, owning compute will be the obvious choice.