There's a classic tension in software engineering: premature optimization is the root of all evil, but underestimating scale is the root of expensive rewrites. We've made both mistakes. Here's what we've learned.
Don't solve problems you don't have yet
The instinct to build for infinite scale on day one is understandable, especially for engineers who've been burned by constraints before. But over-engineering early-stage systems is genuinely harmful. It slows you down, adds cognitive overhead, and often solves problems that never materialize.
Our rule: design for 10x your current scale, not 1000x. That's enough to avoid the sharp cliffs while keeping the system comprehensible.
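The 10x rule can be applied with a back-of-envelope headroom check before any redesign work. A minimal sketch, where all the load and capacity figures are illustrative assumptions, not real measurements:

```python
# Back-of-envelope headroom check: will the current design survive 10x load?
# All numbers below are hypothetical, stand-ins for your own measurements.

def headroom(current_load: float, capacity: float, target_multiple: float = 10.0) -> float:
    """Return the fraction of capacity consumed at target_multiple x current load."""
    return (current_load * target_multiple) / capacity

# Assumed figures for a single database primary handling writes.
writes_per_sec = 120        # measured today (hypothetical)
db_write_capacity = 5_000   # rough ceiling from load testing (hypothetical)

utilization_at_10x = headroom(writes_per_sec, db_write_capacity)
print(f"At 10x load: {utilization_at_10x:.0%} of write capacity")  # → At 10x load: 24% of write capacity
```

If the answer comes back under, say, 50%, the system clears the 10x bar and there's nothing to redesign yet; if it's over 100%, you've found a sharp cliff worth addressing now.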
The decisions that actually matter
Not all architectural decisions are equal. Some are cheap to change later; others are extraordinarily expensive. We invest heavily in getting the expensive ones right: data model design, API contracts, authentication architecture, and how we handle eventual consistency in distributed systems.
We're more relaxed about things that are easier to change: framework choices, internal tooling, deployment configurations. The skill is correctly identifying which category a decision falls into before you start building.
Incremental improvement beats big rewrites
When scale problems do appear, the temptation is always to "do it properly this time" — a complete rewrite with all the lessons learned. Almost always, the better path is incremental improvement with careful instrumentation. Rewrites routinely take longer than expected and reintroduce bugs you forgot you'd already solved.
Understand your system deeply before changing it fundamentally. The boring answer is usually the right one.