19 Apr 2026 – Wrocław
What Is Going On Right Now?
There’s a big cognitive dissonance happening in most engineering teams right now. On one hand, almost everyone is using LLMs to write code faster than ever before. On the other hand, the way we think about structuring that code - our modularity philosophy, our instincts about good design - hasn’t moved. Which means we are applying a mindset developed under one specific set of constraints and economics to a world with a completely different set, effectively trying to fit a square peg into a round hole.
This is an attempt to think through what modularity should actually look like given the tools we have today. It’s grounded in something concrete: what we’ve been building and learning at EverAI in our own mid-sized Rails app, where 15 engineers ship a lot of pull requests every day. Readers familiar with Erlang will recognize where this is going. This is in no way a “final” version. It’s a look back and a look forward, an attempt to develop a new mindset and a new approach - but please remember it’s far from complete.
Why Modularity Has Always Mattered
Before we talk about what should change, it’s worth being clear on why modularity matters in the first place. The answer isn’t “it’s good engineering practice” - that’s circular logic, and “good engineering” is not a first principle nor a property of reality. The real answer comes from cognitive science and economics.
Human working memory holds roughly 4–7 items at a time. That’s it. It was enough for most of our history as a species, when we were dealing with sticks, bananas, hunting and avoiding being hunted. Software is far more complex than anything our unevolved brains were built for. When you’re reading code - or, worst-case scenario, debugging a gnarly piece of it - you can’t hold the mental model of the whole system in your head; you can only hold a small slice of it. Good modularity is what makes that slice manageable. A well-designed module is a black box: you know what goes in, you know what comes out, and you don’t need to care about anything in between. It’s “conceptual compression” - taking something complex and squishing it down to a single mental token. The opposite - a spaghetti of code jumps and calls with unstructured changes to state - is an absolute nightmare to understand, debug, and trace, for those very same cognitive reasons, as every experienced developer can attest.
This is why modularity has always been an economic argument as much as a technical one. Human time and human attention are expensive. Anything that lets an engineer reason about a larger system with less mental overhead has direct business value. Code you can understand quickly is code you can debug quickly, extend quickly, hand off quickly.
That economic logic is what has driven virtually every major advance in programming over the past seventy years. It allowed us to build more complex systems with bigger teams, overcoming the limitation that every single one of us still has the same unevolved brain with its meager seven registers.
A Very Brief History of Modularity
This is going to be a fast tour - not nostalgia, but context. What’s striking is how consistent the underlying motivation has been across every generation of tooling:
- Subroutines and libraries (1950s–60s) - from macro assemblers through FORTRAN and COBOL, we learned to name and call units of logic. The subroutine was the first real “black box.”
- Structured Programming (late 1960s) - Dijkstra’s famous argument against GOTO was fundamentally about readability. Code flow that a human brain can follow is code that a human brain can reason about.
- Information Hiding (1970s) - David Parnas articulated something still underappreciated: the right boundary for a module isn’t what it does, it’s what it hides. A module is defined by what it doesn’t expose.
- Abstract Data Types and Objects (1970s–80s) - Liskov, Smalltalk, C++. We bundled data and behaviour together into cognitive units that map to real-world concepts.
- Interfaces, Patterns, and Package Managers (1990s–2000s) - contracts between pieces of a system, reusable shapes of solutions, and eventually modularity at civilisational scale via open source.
- Microservices and Modern Module Systems (2010s–present) - strict boundary enforcement, explicit exports and imports, and organisational isolation on top of technical isolation.
Look at that list and a common pattern emerges: every single item is an answer to the same two questions. How do we make code cheaper to understand? How do we make code cheaper to write?
That’s what modularity has always optimised for. Human reading. Human writing. Making the complex stuff simpler to reason about, driving down costs of growing software.
Until now.
The Economy Just Flipped
(This is a wonderfully bad pun and you’ll appreciate it later, dear reader)
Here’s the thing: those two things are no longer expensive.
Pattern recognition - reading code and understanding (or at least describing) what it does - is something LLMs are genuinely very good at. Pattern reproduction - writing code that fits a known shape - is something they are genuinely very good at too. And they’d better be: they are basically huge multi-dimensional matrices built for pattern recognition and reproduction.
The two cognitive tasks that have driven seven decades of software engineering innovation are now, effectively, cheap.
So what’s expensive now?
Reviewing. A human still has to look at this code and decide whether it’s safe to ship. That takes time and attention. At the same time, we all know review isn’t bulletproof: a diff doesn’t show the entire context and hides potential clashes with parts of the codebase that aren’t in view. It would help if we could shorten reviews by asking narrower, more targeted questions.
Shipping. Deployment pipelines, coordination, timing, risk management. Any decent end-to-end test suite or QA process takes time that’s hard to cut: even a simple, automated browser flow through a single vertical has to wait for each request to execute and return before the next click can happen.
Reverting. When something goes wrong in production, rolling back is expensive - especially if changes are entangled.
Downtime. This is the big one. Production incidents are expensive. Not just in engineer time, but in user impact, trust, and sometimes revenue.
The constraint has shifted. We used to optimise for the cost of understanding and writing code. Now we need to optimise for the cost of risk and recovery.
That should change how we think about modularity.
Familiarity side note. If you already follow the philosophy of “Decouple Deploy from Release” - that shipping code and rolling out features should be separate concerns - as outlined in Charity Majors’ wonderful blog post, then some of this will feel familiar and less scary. We just need to take this mindset one step further. And if you don’t follow this philosophy yet - now is the time.
A New Modular Philosophy
Here’s the core of what we’ve been moving toward: modules as isolated, replaceable, switchable units - designed not for human comprehension, but for blast radius control.
The old model says: a module is a well-named abstraction that hides complexity and helps developers think.
The new model says: a module is a functional unit with defined inputs, outputs, and side effects, designed so that if it breaks, we can turn it off immediately, and if requirements change, we throw it away and write a new one from scratch. Even if it means code duplication.
Those models are very different things.
The One Non-Negotiable: Runtime Code Path Control
The foundation of the whole approach is this: the decision of which code path to enter - which version of a module to run - happens at runtime, not at deploy time.
Most preferably via feature flags, so the rollout can be gradual and we can observe whether a specific module negatively impacts application behavior. In our case the feature flag library is Flipper, chosen for its simplicity and for doing things right (like its deterministic percentage-of-actors algorithm).
This sounds simple, but it changes everything. If you can turn a module off without a deploy, then shipping that module stops being a scary event. The risk profile of “ship this module” drops dramatically, because the cost of reverting is just flipping a flag. When reverting is cheap, you can ship sooner. When you can ship sooner, you get feedback faster. When you get feedback faster, you learn faster.
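As a minimal sketch of the idea - with hypothetical module names, and a plain in-memory hash standing in for a real flag store like Flipper (where the check would be something like `Flipper.enabled?(:checkout_v2, user)`):

```ruby
# Hypothetical sketch: routing between two versions of a module at runtime.
# A plain hash stands in for the flag store so the example is self-contained.
FLAGS = { checkout_v2: false }

module CheckoutV1
  def self.call(order)
    { total: order[:items].sum, version: 1 }
  end
end

module CheckoutV2
  def self.call(order)
    { total: order[:items].sum, version: 2 }
  end
end

def checkout(order)
  # The decision happens at runtime: flipping the flag requires no deploy.
  handler = FLAGS[:checkout_v2] ? CheckoutV2 : CheckoutV1
  handler.call(order)
end
```

Flipping `FLAGS[:checkout_v2]` switches every subsequent call to the other version instantly - that flip is the entire cost of a revert.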
New Modules, Not Updated Modules
Here’s the instinct that’s hardest to let go of: when requirements change, we want to go into the existing module and update it. Refactor it. Improve it. Generalize the code and pull out some abstractions. That’s the craft. That’s what good engineering looks like.
But those practices grew out of the old motivations and limitations. So let’s try exactly the opposite, leveraging the cheapness we now have. When requirements change significantly, let’s write a new version of the module from scratch. The new version lives alongside the old one. The runtime code path routes users to the new one (gradually, via feature flags - 5% of users, then 20%, then everyone). Once we’re confident, we delete the old one.
The old module is not improved. It is replaced.
Mechanically, this means a few things:
The module has a single entry point gated by a feature flag. Calls into it are routed at runtime. If the flag is off, the old path (or no path at all) takes over.
Two versions coexist in the codebase during a transition. The old one serves the majority of traffic while the new one ramps from 5% to 20% to 100%. Once the new version is serving everyone and is stable, the old one gets deleted.
Modules don’t share internals with each other. Shared helpers across modules recreate coupling: every shared abstraction is a dependency, and when two modules import the same helper, changing that helper changes both. Duplication between modules is the price of independent replaceability - and at today’s cost of writing code, it’s a cheap price.
The review question narrows. Instead of “is this code well-crafted?” it becomes “if this breaks, can it be turned off without touching anything else?”
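The gradual ramp depends on the percentage routing being deterministic: the same user must land on the same version on every request, and raising the percentage must only ever add users to the new path. Here is a self-contained sketch of that idea, in the spirit of (but not identical to) Flipper’s percentage-of-actors algorithm - the feature and actor names are made up:

```ruby
require "zlib"

# Deterministic percentage-of-actors bucketing: hash the feature name plus
# the actor id into one of 100 buckets, and enable the actor if their bucket
# falls below the current rollout percentage. The same actor always lands in
# the same bucket, so ramping 5% -> 20% -> 100% only ever adds actors and
# never flip-flops anyone between module versions.
def enabled_for?(feature, actor_id, percentage)
  bucket = Zlib.crc32("#{feature}:#{actor_id}") % 100
  bucket < percentage
end

def route(feature, actor_id, percentage)
  enabled_for?(feature, actor_id, percentage) ? :new_module : :old_module
end
```

Raising the percentage is a runtime operation against the flag store, not a deploy, which is what makes the 5% → 20% → 100% ramp cheap to drive and cheap to abort.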
The Module Interface (a work in progress)
There’s one thing that does need care: the public interface. The classes and methods that other parts of the application call into need to be stable and well-defined. Because when you write a new version of the module, it must be callable - with the same inputs - from the part of the app that decides which module to call.
In a Rails app, we can use Packwerk for this. It acts as a linter that enforces which constants are publicly accessible from outside a module. If code outside the module tries to call a private class, the linter catches it. The public interface is explicit, checkable, and enforced. It’s public/private all over again, just at the level of an entire namespace of classes.
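For illustration, a package configuration might look roughly like this - the package name is hypothetical, and the exact keys depend on the Packwerk version (privacy checking moved out of core Packwerk into the packwerk-extensions gem in newer versions):

```yaml
# packs/billing/package.yml - illustrative sketch, not our actual config
enforce_dependencies: true
enforce_privacy: true   # only constants in the package's public folder
                        # may be referenced from outside
dependencies:
  - "."                 # the main app; packages don't depend on each other
```

Running `bin/packwerk check` then flags any reference from outside the package to a constant that isn’t part of its public interface.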
I marked this as a work in progress because, while I got my share of Packwerk experience back at Shopify, we are not using it in our application yet. And I’m not sure we will, because we are not yet at a level of complexity that forces us to manage interfaces.
Wait, Isn’t This Just Erlang?
I did not come up with this entirely out of thin air - I’m not that smart. When I started thinking about the idea of “discardable” modules, a part of my brain brought back an old presentation on the Erlang VM and its philosophy, given by an old friend of mine around a decade ago.
Now, what we are dealing with, and how we approach it, is not exactly Erlang - we are working under different constraints and economics. But the philosophies are close enough that it’s worth looking closer and seeing what we can apply for our benefit.
Erlang was designed in the late 1980s at Ericsson for telecom infrastructure. The constraints were completely different from ours - we’re not building phone exchanges - but some of the conclusions overlap in interesting ways.
The Erlang philosophy includes:
- Small, focused modules - not as a purity exercise, but because small things are easier to replace.
- “Let it crash” - don’t write defensive code that tries to recover from every possible failure state. Let things fail, and make the failure cheap. This sounds reckless and is actually extremely pragmatic: the process (in our case, the main app) is designed to handle any crash of a module gracefully.
- Modules are replaceable at runtime - Erlang has hot code loading built into the VM. You can swap a module while the system is running. We’re approximating this with feature flags, but the principle is identical.
- Prefer rewriting over abstracting - this one is perhaps the most Erlang-flavoured principle we’re adopting here. The instinct to generalise and abstract is usually justified by the assumption that you’ll maintain this code for a long time. If you’re not going to maintain it - if you’re going to replace it when it no longer fits - the abstraction is unnecessary overhead.
- The process owns state and errors - in Erlang’s actor model, each process owns its state and handles its own failure. In our Rails app, this role is played by the main application: it owns the database models and the frontend, and it handles errors. Modules don’t own state (in a database-backed web application they still interact with ActiveRecord models, of course, but the schema and validations are global and owned by the app). They receive inputs, produce outputs, and hand errors back up.
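The last two points combine naturally: the main app owns error handling, so a crashing module trips its own kill switch and the old path takes over. A hypothetical sketch of that shape (module names are made up; a plain hash again stands in for the flag store):

```ruby
# Hypothetical sketch: the main app owns errors. If the new module crashes,
# the app disables its flag (the kill switch) and serves the old path, so one
# bad module can't take the whole feature down with it.
FLAG_STORE = { pricing_v2: true }

module PricingV2
  def self.call(order)
    raise "simulated crash" # stands in for a real production failure
  end
end

module PricingV1
  def self.call(order)
    order[:base] # the boring, battle-tested path
  end
end

def price(order)
  if FLAG_STORE[:pricing_v2]
    begin
      return PricingV2.call(order)
    rescue => e
      FLAG_STORE[:pricing_v2] = false # trip the kill switch; no deploy needed
      # a real app would report e to error tracking here
    end
  end
  PricingV1.call(order)
end
```

The first failing request pays the cost of the crash; every request after it goes straight to the old version while the fix ships at leisure.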
Erlang arrived at these conclusions because telecom systems couldn’t afford downtime. We’re arriving at similar conclusions because AI makes rewriting cheap. Different starting points, similar destination. Not a complete analogy, not a perfect analogy, but one with a surprising number of contact points.
The Uncomfortable Conclusions
The mechanics above are easy to describe. Living with them is harder, because they grate against instincts that most of us spent a career developing.
The refactoring instinct becomes an anti-pattern. When you’ve been using a module for a while and you understand it well, the natural thing is to improve it. Rename things. Extract the right abstractions. Make it cleaner. That instinct is well-earned - it’s what separates a senior engineer from a junior one. In this model, it’s usually the wrong move. If it’s working, leave it. When requirements change enough to justify the effort, rewrite it. Telling a good engineer to stop refactoring feels like telling a carpenter to stop sanding.
You’re paying for safety with code you’re not proud of. AI-generated code is often verbose, repetitive, and slightly inelegant. The old instinct - clean it up before shipping - was virtuous when you were going to live with the code for years. It’s waste when you’re going to delete it in a quarter. “Embrace the slop” isn’t a celebration; it’s an honest accounting of where time and risk actually live. Craft spent making module internals beautiful is craft that isn’t being spent on the interface, the tests, or the rollout plan. Spend craft where it reduces risk. Inside the module is not where risk lives.
Modules will break in production. Not occasionally - routinely. That’s not a failure of the approach, it’s the approach working as designed: the flag goes off, the old version takes over, we ship a fix the next day. But it requires a team culture that can separate “a module failed” from “we failed.” If every flag trip triggers a post-mortem and a round of blame, the model collapses under its own friction.
You have to give up a specific kind of pride. The pride of a beautifully factored internal design. The pride of the clever abstraction. What you get in exchange is a different kind of pride: systems that degrade gracefully, ship safely, and recover fast. That trade is worth it, but it is a trade, and pretending otherwise does a disservice to everyone who has to make it.
Closing Words
The underlying shift is this: for seven decades, modularity was about managing what’s expensive for humans - understanding code and writing code. Those things aren’t expensive anymore. What’s expensive is risk, reverting, and downtime. Good modularity in 2026 is modularity that minimises blast radius, maximises replaceability, and treats code as disposable rather than precious.
It’s a strange thing to say that good engineering might mean caring less about the quality of your code. But the quality of your system - how quickly it recovers, how safely you can ship, how little any one piece of it can damage the rest - that’s what good engineering looks like right now.
Ship the slop. Control the blast radius. Delete and rewrite without guilt.
That’s the new modularity.
One honest caveat: we have not yet, as a team, deployed and embraced all of the concepts outlined here. But we have adopted most of them, and the rest seems like a matter of time and complexity. This article is my own work and thoughts, sometimes extrapolations, not an exact description of our workflow at EverAI.