The microservices fallacy - Part 2


This post discusses the widespread fallacy that microservices are needed to tackle scaling issues.

In the previous post we looked at the origins of microservices, how the hype started, and we listed the most widespread fallacies regarding microservices. In this post we take a closer look at the first fallacy – that microservices are needed for scalability.

Scalability

One of the most commonly used justifications for microservices is scalability: “We need microservices to scale our application dynamically.”

If you ask the same people whether a deployment monolith would be sufficient for their scalability needs, you get a firm “No!”

The scalability discussions usually relate to scenarios where a large number of customers get access to a company’s application – most of the time a web or mobile application.

Well, let us do the math:

  • A simple LAMP stack on an average 5.000 EUR server can – configured correctly – easily serve up to 6.000 requests simultaneously, probably even more if you use NGINX instead of Apache, because you do not hit Apache’s thread limit.
  • Now let us assume that a request takes 200ms to complete at the 99th percentile and that each user interacting with your offering sends a request every 10s on average. This means that you can serve up to 300.000 concurrent users – users that interact with your offering at the same time – with a single LAMP stack node (6.000 * (10s / 200ms)).
  • If you get the session handling right (which is pretty simple), you can dynamically add and remove nodes behind a simple load balancer. With 10 LAMP stack nodes and a single load balancer you can serve up to 3.000.000 concurrent users.
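The back-of-envelope math in the bullets above can be sketched as a few lines of Python. The inputs (6.000 simultaneous requests per node, 200ms at the 99th percentile, one request per user every 10s) are the assumptions stated in the text, not measured values:

```python
# Capacity estimate per the assumptions above.
simultaneous_requests = 6_000  # requests one node can hold in flight
request_duration_s = 0.2       # 200 ms per request at the 99th percentile
think_time_s = 10.0            # each user sends one request every 10 s

# Each request "slot" turns over every 200 ms, so during one 10 s think
# time a single slot serves 10 s / 0.2 s = 50 different users.
users_per_node = int(simultaneous_requests * (think_time_s / request_duration_s))
print(users_per_node)          # 300000 concurrent users per node

nodes = 10
print(nodes * users_per_node)  # 3000000 concurrent users behind one load balancer
```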

Note that I wrote “concurrent users”, not “registered users” or the like. You can safely assume that no more than 5% of your registered users are using your application at the same time (typical numbers are quite a bit lower in my experience).

All these numbers are relatively conservative. In practice they should be even better. This means (phrased a bit provocatively):

With 15 LAMP stack servers you can easily serve whole Germany from toddler to dodderer (>80 million users).
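As a sanity check of that claim, we can combine the 300.000-users-per-node figure from the sample calculation with the 5% concurrency rule of thumb. The 80-million figure is from the text; the rest follows by simple arithmetic:

```python
import math

registered_users = 80_000_000  # whole Germany, per the claim above
concurrency_ratio = 0.05       # at most 5% of registered users active at once
users_per_node = 300_000       # from the LAMP stack calculation above

concurrent_users = int(registered_users * concurrency_ratio)
print(concurrent_users)        # 4000000 concurrent users at most

nodes_needed = math.ceil(concurrent_users / users_per_node)
print(nodes_needed)            # 14 nodes, so 15 leaves some headroom
```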

To illustrate this statement with a little real-life example from my project past: In the early 2000s my company developed and maintained the self-service portal of a big German telecommunications company. Its registered user base was a bit less than 10.000.000 users. At peak times up to 100.000 concurrent users were logged in. All those users were handled by a single BEA WebLogic instance (with a cold standby), running on average hardware from around 2000, backed by an Oracle RAC cluster that also served the call center application, all POS systems and a lot more.

And – surprise – we never ran into any serious performance problems. If we encountered a problem, it was usually due to some inattentive development, and after fixing the programming shortcoming the problem was gone.

Coming back to my sample calculation: I am sure that 99,9% of all applications with scalability demands could easily be handled by a single LAMP stack server instance, or a few of them, behind a regular load balancer. A bit of careful application design and development, a few commodity servers and a simple load balancer will cover all their scalability demands.

Reality:

You do not need microservices to satisfy regular enterprise scalability demands.

Unless you are a hyperscaler, an online ad broker or someone else from the 0,01%, there are much simpler ways to satisfy your scalability demands – because microservices are anything but simple.

This brings us to the next fallacy – that solutions become simpler with microservices – which I will discuss in the next blog post. Stay tuned …