There's a mass delusion in software engineering that more tools equals more capability. That if Netflix uses Kubernetes, you should too. That if Facebook built React, your three-page marketing site needs it. That the "modern stack" is modern for a reason, and choosing anything simpler means you're behind.
It's nonsense. Most of the time, the simplest tool that solves your problem is the right tool. Not the most popular one. Not the one with the best conference talks. The one with the fewest moving parts.
Here's how I think about technology choices, layer by layer.
Backend: FastAPI/Flask > Django > Spring
If you're building an API, start with FastAPI or Flask. One file. A few routes. You can read the entire application in ten minutes. There's no ORM you didn't ask for, no admin panel you'll never use, no middleware stack you need to understand before you can return JSON.
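To make "one file" concrete, here's roughly what that service looks like. A minimal sketch; the route names are just illustrative:

```python
# app.py: a complete FastAPI service. Run it with: uvicorn app:app
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    # Plain dicts are serialized to JSON automatically.
    return {"status": "ok"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str | None = None):
    # Path and query parameters are parsed and validated from the type hints.
    return {"item_id": item_id, "q": q}
```

That's the whole application. No settings module, no app registry, no migrations.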
Django is a different tool for a different problem. It's an application framework, not a microframework. It makes decisions for you: how your database models work, how authentication works, how your admin interface looks. If you're building a product with user accounts, permissions, a database-backed admin panel, and you have a team of five or more, Django saves you months. It's opinionated in ways that keep large teams from making inconsistent choices.
The mistake is reaching for Django when you need three endpoints and a database query. You'll spend more time learning what Django wants than building what you want.
Spring is the same idea taken further. Enterprise teams, compliance requirements, complex service meshes. If you're at a bank with 200 engineers and a mandate to use Java, Spring is genuinely good at what it does. If you're two people building a SaaS, it'll bury you.
The rule: Pick the framework whose complexity matches your problem's complexity. If you can describe your backend in one sentence, you probably need a one-file framework.
Frontend: HTML > HTMX/Svelte > React
This is where the industry has lost its mind the most.
Plain HTML with some CSS works for an astonishing number of things. This site you're reading is static HTML. No JavaScript framework. No build step. No node_modules folder with 800 packages. I write Markdown, a Python script generates HTML, and Nginx serves it. It loads in under a second on any connection.
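That pipeline is simple enough to sketch in full. This isn't my actual script, but it's the shape of the thing, assuming the markdown package and a posts/ directory:

```python
# build.py: a toy static site generator. Markdown in, HTML out.
# A sketch of the approach, not the actual script. Assumes `pip install markdown`.
from pathlib import Path
import markdown

TEMPLATE = "<!doctype html><html><body>{body}</body></html>"

for src in Path("posts").glob("*.md"):
    html = markdown.markdown(src.read_text())
    out = Path("public") / src.with_suffix(".html").name
    out.parent.mkdir(exist_ok=True)
    out.write_text(TEMPLATE.format(body=html))
```

Point Nginx at public/ and you're done. That's the entire build step.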
When you need interactivity, HTMX is a revelation. It lets you add dynamic behavior by writing HTML attributes instead of JavaScript. Need a button that loads content without a page refresh? One attribute. Need a form that submits asynchronously? One attribute. You keep writing HTML and your server keeps rendering HTML. No JSON APIs, no client-side state management, no hydration bugs.
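Here's a sketch of what that looks like with Flask rendering the fragments. The endpoint names and markup are illustrative; the hx-get and hx-target attributes are the entire frontend:

```python
# app.py: HTMX interactivity with nothing but HTML attributes.
# A sketch; endpoint names and markup are illustrative.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return """
      <script src="https://unpkg.com/htmx.org"></script>
      <button hx-get="/fragment" hx-target="#result">Load more</button>
      <div id="result"></div>
    """

@app.route("/fragment")
def fragment():
    # The server keeps rendering HTML. No JSON API, no client-side state.
    return "<p>Loaded without a page refresh.</p>"
```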
Svelte sits in a similar sweet spot. It compiles to vanilla JavaScript. No virtual DOM. No runtime library. The output is small and fast. If you need more interactivity than HTMX gives you but don't want the overhead of a full SPA framework, Svelte is excellent.
React makes sense when you're building a genuinely complex, stateful application. Think Figma. Think Google Docs. Think Notion. Applications where the UI has hundreds of independent state changes, where components need to communicate across deep trees, where you need a component ecosystem because you're building something too large for one team to own entirely.
The problem is that people use React for everything. Landing pages. Blogs. Documentation sites. Marketing pages. I've seen teams spend three sprints setting up Next.js for a site that could have been twenty HTML files. The framework overhead, the build pipeline, the SSR complexity, the hydration issues. All of it unnecessary. All of it actively making the product worse by making it slower and harder to maintain.
The hierarchy:
- Static HTML + CSS. Can you get away with this? Try harder. You probably can.
- HTML + HTMX. Need some interactivity? Sprinkle it in. Stay on the server.
- Svelte. Need a reactive UI without the baggage? Compile-time magic.
- React/Vue. Building a complex application with a large team? Now it earns its keep.
Infrastructure: Bare Server > Containers > Kubernetes
I run this site on a Raspberry Pi behind Nginx. The deploy script is 40 lines of bash. rsync copies files, Nginx serves them. If the Pi dies, I buy a new one for $50 and rsync again. Total infrastructure cost: about 5 watts of electricity. Twelve months in, it's still running.
For most small projects and early-stage products, a single server with systemd services is all you need. You SSH in. You know what's running. You can read the logs. When something breaks, you fix it directly. There's one machine, one set of logs, one reality to debug.
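To put that in perspective: a systemd service is about ten lines of config. A minimal sketch, with placeholder paths and names:

```ini
# /etc/systemd/system/myapp.service (paths and names are placeholders)
[Unit]
Description=My app
After=network.target

[Service]
ExecStart=/usr/bin/python3 /srv/myapp/app.py
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```

`systemctl enable --now myapp` starts it and brings it back after a reboot. `journalctl -u myapp` is your logs. That's the whole platform.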
Docker containers make sense when you need reproducible environments. When your app has dependencies that conflict with other things on the same machine. When you need to hand off a project and say "run docker compose up and it works." When you're deploying the same service across staging and production and need them to be identical. Containers solve real problems.
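The honest version of that handoff is a compose file like this one. A minimal sketch; image names, ports, and credentials are placeholders:

```yaml
# docker-compose.yml: one app, one database. Values are placeholders.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```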
But Docker adds a layer. Now you're debugging inside a container that behaves differently from your local machine. Now your logs are in a different place. Now you need to understand volumes, networks, and build caches. Every layer you add is a layer you need to understand when things go wrong.
Kubernetes makes sense at scale. When you have dozens of services, when you need automated scaling, when you need rolling deployments with zero downtime across multiple regions. When you have a platform team whose full-time job is managing infrastructure. Google built Kubernetes because Google has Google-sized problems.
If you're three engineers running a SaaS with a few thousand users, Kubernetes is not solving your problem. It's replacing your problem with a harder problem. You'll spend more time writing YAML manifests and debugging pod scheduling than building your product.
I've watched startups with five engineers spend months setting up Kubernetes clusters. The CTO wanted it on the resume. The engineers wanted to learn it. The product needed a VPS and a Postgres database. They could have shipped in a week and scaled later. Instead they built infrastructure for a million users while they had twelve.
The hierarchy:
- Bare server + systemd. One machine. SSH. You understand everything.
- Docker Compose. Multiple services, reproducible environments, easy handoff.
- Managed containers (ECS, Cloud Run, Fly.io). You want containers without managing the orchestrator.
- Kubernetes. You have a platform team. You have dozens of services. You have the scale to justify the complexity.
Databases: SQLite > Postgres > Distributed
SQLite is the most underrated database in existence. It's a single file. No server process. No connection management. No port configuration. It handles more concurrent reads than most apps will ever need. If your data fits on one machine and you don't need concurrent writes from multiple processes, SQLite is fast, reliable, and effectively zero maintenance.
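The entire setup, end to end, fits in a few lines of Python's standard library (the file name is arbitrary):

```python
# SQLite: the whole database setup. No server process, no port, no credentials.
import sqlite3

conn = sqlite3.connect("app.db")  # creates the file if it doesn't exist
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
conn.commit()
print(conn.execute("SELECT * FROM users").fetchall())
```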
Postgres is the right choice for most production applications. It's battle-tested, well-documented, and handles basically everything. JSON columns, full-text search, geospatial queries, pub/sub. Before you reach for Redis, Elasticsearch, or any other specialized database, check if Postgres can already do what you need. It usually can.
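As one example, here's Postgres standing in for Elasticsearch using its built-in full-text search. A sketch assuming the psycopg 3 driver and a hypothetical docs(title, body) table:

```python
# Full-text search in plain Postgres, often all the search engine you need.
# Assumes psycopg 3 and a hypothetical docs(title, body) table.
import psycopg

with psycopg.connect("dbname=app") as conn:
    rows = conn.execute(
        """
        SELECT title,
               ts_rank(to_tsvector('english', body), query) AS rank
        FROM docs, plainto_tsquery('english', %s) AS query
        WHERE to_tsvector('english', body) @@ query
        ORDER BY rank DESC
        LIMIT 10
        """,
        ("simple tools",),
    ).fetchall()
```

Add a GIN index on to_tsvector('english', body) and this stays fast well past the point where most apps would have reached for a second datastore.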
Distributed databases (CockroachDB, Cassandra, DynamoDB) solve problems that 99% of applications don't have. If you need writes across multiple regions with single-digit millisecond latency, sure. If you need to handle millions of writes per second, sure. If you're building a CRUD app with a few thousand users, a single Postgres instance on a $20/month VPS will serve you for years.
AI Models: Small and Specialized > Large and General
This is the one closest to home because I build tools around this exact principle at NehmeAI Labs.
The default instinct right now is to throw the biggest frontier model at every problem. Summarization? Frontier model. Classification? Frontier model. Extracting a date from an email? Frontier model. It's the Kubernetes of AI: massively capable, massively overkill for most tasks.
A 4B-parameter model trained on a specific task will outperform a 400B general model on that task. Not sometimes. Reliably. FlashCheck is a 4B model that detects hallucinations in RAG pipelines. It scores 91.7% on RAGTruth. It beats Llama 405B on this specific task while using 100x less compute. Not because small models are magic, but because a focused model doesn't waste capacity on things it doesn't need to know.
The same logic applies to inference costs. Most teams route every prompt through their most expensive model, not because they need to, but because they never measured whether a cheaper one could handle it. RightSize exists to answer exactly that question: it evaluates your real prompt traffic and finds where smaller, cheaper models are sufficient. Typical savings are 50-100x. Not because the quality drops. Because most prompts don't need the most powerful model available.
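The measurement itself isn't exotic. Here's the shape of the idea; this is not RightSize's actual implementation, and call_model and judge are hypothetical stand-ins for your provider call and your quality check:

```python
# Find the cheapest model that holds up on real traffic, instead of assuming
# you need the frontier one. A sketch; call_model() and judge() are hypothetical.

def call_model(name: str, prompt: str) -> str:
    """Stand-in for your provider's API call."""
    raise NotImplementedError

def judge(prompt: str, answer: str, reference: str) -> bool:
    """Stand-in for your quality check: exact match, a rubric, or an eval model."""
    raise NotImplementedError

def cheapest_sufficient(prompt: str, models: list[str]) -> str:
    # models is ordered cheapest-first; the biggest model's answer is the reference.
    reference = call_model(models[-1], prompt)
    for name in models[:-1]:
        if judge(prompt, call_model(name, prompt), reference):
            return name  # first cheap model whose answer holds up
    return models[-1]
```

Run that over a sample of production prompts and you have a routing table instead of a guess.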
This maps directly to the philosophy of this entire post. Your inference costs are a design choice, not a fact of life. You don't start with the biggest model and optimize later. You start with the smallest model that works and scale up only where the task demands it.
The hierarchy:
- Rule-based logic. Can you solve it with an if statement or a regex? Do that. No model needed. (There's a sketch of this after the list.)
- Small specialized model (fine-tuned BERT, small LLM). Trained on your specific task. Fast, cheap, accurate.
- Mid-tier general model (GPT-mini, Claude Haiku, Mistral). Good enough for most generative tasks.
- Frontier model (GPT, Claude Sonnet/Opus, Gemini). Complex reasoning, multi-step tasks, ambiguous inputs. Use it when you've proven the smaller ones can't handle it.
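To make the first rung concrete, take the earlier example of extracting a date from an email. A regex covers the common case with zero inference cost; the pattern below is illustrative and only handles ISO-style dates:

```python
# Rule-based first rung: pull a date out of an email. No model involved.
# Illustrative only: this pattern matches YYYY-MM-DD and nothing else.
import re

DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_date(email_body: str) -> str | None:
    match = DATE.search(email_body)
    return match.group(0) if match else None

print(extract_date("The invoice is due on 2025-03-14, please confirm."))
# -> 2025-03-14
```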
The pattern is always the same. Start simple. Measure. Add complexity only where the data tells you to.
When Simple is Wrong
I'm not arguing that simple is always right. I'm arguing that it's the right default. You should start simple and add complexity when you feel the pain, not before.
Here's when you genuinely need the heavier tools:
Large teams. Django's opinions prevent 50 engineers from making 50 different architectural choices. React's component model lets teams work on the same frontend without stepping on each other. Kubernetes lets a platform team provide self-service infrastructure to dozens of product teams. The overhead pays for itself through coordination.
Compliance and enterprise. Banks, healthcare, government. When you need audit trails, role-based access at every layer, and the ability to prove exactly what ran in production six months ago. Spring Boot's security stack and Kubernetes' declarative infrastructure make compliance audits possible at scale.
Genuine scale. When your SQLite file would be terabytes. When your single server can't handle the traffic. When you actually have the users and the load that demand distributed systems. The key word is "actually." Not "projected." Not "hoped for." Actually. And when you do migrate, be honest that it's a rewrite, not a simple upgrade.
Complex client-side applications. If your users spend hours in your app doing complex, stateful work (design tools, collaborative editors, data dashboards), React or similar frameworks earn their complexity budget.
The Cost of Unnecessary Complexity
Every tool you add is a tool you need to:
- Learn
- Debug
- Update
- Recruit for
- Onboard new hires onto
- Monitor
- Pay for
A three-person startup using Kubernetes, React, GraphQL, Redis, Elasticsearch, and Terraform has the operational complexity of a 50-person engineering org without the people to manage it. They'll spend 70% of their time on infrastructure and 30% on the product. That ratio should be inverted.
The best engineers I know aren't the ones who use the most tools. They're the ones who use the fewest tools that solve the problem. They resist the urge to optimize for problems they don't have. They ship with boring technology and save the interesting technology for when it's actually needed.
Simple isn't a limitation. It's a discipline. And the debugging is a lot easier when there are fewer layers to dig through.