Why our team cancelled our move to microservices
steven-lemon182.medium.com

Practically every technology decision has pros and cons.
I think that if you can't think of a good reason not to use some technology, then you don't understand the technology well enough yet, and you shouldn't try to use it in production. It may be what you need, but thinking that "everything must switch to this" is usually a huge warning signal that cargo cult engineering is happening instead of real engineering.
This point is reminiscent of the Chesterton’s Fence concept discussed on HN recently
I completely agree, good point! Historically Chesterton's Fence is that you shouldn't remove something until you understand why it's there, but we should probably broaden the lesson to cover change of any kind. Something like, "Don't remove anything until you understand why it's there, and don't use something new until you understand under what conditions that new thing is better (vs. when it isn't)." I'm not against change, but I do hate the endless cargo cults.
Fashion has H&M
Tech has HN
> We have approximately 12 developers spread across 2 feature teams and a support team.
If I were consulting for this company, I would have told them to stop right there; microservices are probably not for them. Unless you build for microservices from the start on something like AWS Lambda, doing it with such a small team would be really hard.
And as they eventually discovered, a lot of unnecessary overhead for such a small team.
Yeah, honestly this should be the entire content of the article. Not only do they only have two small teams, but they all overlap completely anyway. There is no reason even a significantly larger org (say, 40+ people in 8-10 teams) cannot work effectively in a single repository and a monolithic architecture. Beyond that there are certain growing pains, and if you don't effectively manage those then I could see how you end up going with micro-services.
Beyond scaling a large development org the primary benefits of micro-services accrue to consultants who bill by the hour.
One of the questions I like asking developer pals is what ratio their company has between engineers and services/deployable units. Anybody reading this care to share?
For me, that number says a lot more about the day-to-day life of devs than the microservices vs monolith label does.
I don't remember the hard numbers, but on average each service at Netflix was maintained by about 4 people but there were outliers in both directions. Sometimes there were four or five services maintained by one person, and sometimes there was one service backed by a team of 25+.
The other important number was that about 25% of engineering was dedicated to building the tools to manage the microservices. We didn't work on customer facing software -- the other engineers were our customers. And I found that number to be pretty consistent amongst any company that was fully invested into microservices.
For anyone with a monolith considering microservices, that 25% of the total works out to 33% more work than the monolith needed (the 25% spent on tooling is a third of the remaining 75% spent on the product itself).
But if 33% more work isn't enough, we can write things in languages with undefined behaviour like JavaScript, and also use lots of different languages across the services so our support gets more complex!
Jokes aside, the saving grace is that 25% sounds like an empirical observation, so it's probably an all-up figure that already includes the large amount of bikeshedding that often accompanies a microservices implementation.
Where I currently work, there are 10 services per developer, actively running (if we're counting in the least charitable manner, maybe 4 per dev with most charitable counting). Our ratio is this huge due to large changes in company size over time. Consolidation is very helpful at this stage. If there's one thing I recommend to everyone who encounters this situation, you absolutely must keep a copy of all the source for all the repos on your computer, all at the same time, to make cross-repo grepping easy. For this purpose I recommend ghorg[0], but however you do it, it'll make everything easier.
At my last place, a team of about 4 would usually handle maybe 2 services, possibly a 3rd if it was some kind of lightweight shim. Larger teams, towards about 8, might have 4-5 components. There were some outliers, as ever in an org with hundreds of engineers; typically, teams who had more components struggled to deliver their roadmaps. They'd also often end up with huge blocks of work when a bit of technical debt cropped up, as it was complicated by having more services.
We eventually got to the point where we started redefining everything in terms of "business processes" (i.e. creating a basket, paying for an order, making a complaint, etc.). Teams were moving towards talking about processes rather than services; how we chose to organise the code and deploy it was starting to become more about practicalities. Such as: this collection of endpoints serves an internal tool and scales differently from this other collection, which serves customer-facing clients.
I left not long ago, but my team of around 7 engineers had 2 primary services where 90% of the work happened. The other 10% of the work was on a few edge services, typically small serverless functions, which were mostly set and forget. It felt like we could move at a good pace with this setup. My new job is even more monolith-leaning and it's even faster to get work done and build internal tools.
We are currently at 3-7 services per dev, depending on how you count (~40 services developed in house, ~70 if you include third-party microservices we run).
It would likely be insurmountable if not for Kubernetes. We chose the idiomatic options (GKE/GitHub Actions/Argo/Prometheus/etc.), so starting with microservices wasn't too bad.
Wow, that sounds like an utter nightmare.
I can imagine why you think that, but actually it works really well. I could write a blog post about it if there is some interest.
I'm definitely interested!
My team of 5 has 5 services. 3 of those are tiny function-as-a-service APIs. 2 of those are Azure App Services, where our involvement in the infrastructure is minimal. And we're about to add a 3rd one of those. That sounds like a lot but we wanted the flexibility of making those capabilities independently updatable and deployable. And only one of those has an API used by other teams, so as far as the rest of the company is concerned it's like we have only one service.
At one point we had twice as many teams and almost twice as many services. The team had grown too large for optimal performance. When it was time to divide the team, we were able to categorize our services into two groups by the path of the data (deep back-end data processing for internal customers vs. product services). Then we were able to assign a category to each team so that we had minimal dependencies and no code conflicts between teams.
A two-pizza-sized team per service is the rule of thumb. I'm happy with even somewhat less, as long as the scope narrows accordingly and there's CI/CD/infrastructure support and experience deploying many services.
I don't know if I agree. Deploying servers and calling out to services should be a core competency. I don't think it's as drastic as you make it out to be.
No ownership is a bigger problem than team size imo.
That said, no reason to do it just because it's trendy.
> calling out to services should be a core competency.
What’s the typical failure rate of a method call within a process of a language of your choice?
If it’s not Java2k it will be one or two orders of magnitude lower than any cross-process, host or provider RPC call you’re ever going to make.
At smaller scales, just not having to deal with all the bugs and cleanup work missed error handling brings - and then building proper error handling for all relevant cases - can make a huge difference.
I came looking for this in the comments. Invalidates the rest of the post, really.
The post is very explicitly coming from a perspective of "we didn't know what we were getting ourselves into". The story begins after the decision to move to microservices was made, and the backstory of the decision isn't dwelt on (beyond allusions to top-down pressure from management).
The important lessons here are on how they recognized the mistake before they had fully committed. Having worked on a team with a similar story, it all rings very true.
I can certainly see how this happens. It points to how badly all the chatter about microservices obscures the important considerations. This post itself doesn't highlight strongly enough that 12 developers is a full-stop "no microservices" decision point.
Or to phrase it differently, it's like a horror film where a small/skinny person goes into a dark unknown basement by themselves. Sure there's a story to be had, but the important bits to know come before the decision. I suppose it could be useful to know how best to defend oneself in such a rare situation, but avoiding it altogether is better, if the lesson is that easy to learn.
Microservices shift complexity from code to operations - microservices don't remove complexity - they spread it wider, making the whole more complex.
I agree that microservices shift complexity, but I'm not sure that this inherently results in an overall increase in complexity. For example, using highly automated ops tools like autoscaling and load balancing can enable smooth handling of problems that can be challenging to handle in code.
Instead of shuffling data between things via shared memory/function args, you're doing it over the network. This will always be more complicated.
Here at Google, one of our most popular microservices frameworks enables microservices to be assembled into servers. If one microservice calls another one in the same assembly, it won't touch the network, and is quite optimized. There's no reason that microservices have to all run in separate binaries, but when it's useful it can be done easily enough without having to change your code.
I find myself doing the opposite. I'll often create a Cloud Task that will call the same server that created it, just to make sure that another container will get spun up to handle the load if needed. I use Cloud Run this way quite happily for a mix of OLAP and OLTP workloads.
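For anyone curious what that self-enqueue looks like, here's a rough sketch using the google-cloud-tasks Python client; the project, region, queue, and URL below are placeholders, not the commenter's actual setup:

    import json
    from google.cloud import tasks_v2  # pip install google-cloud-tasks

    client = tasks_v2.CloudTasksClient()
    # Hypothetical project/region/queue, purely for illustration.
    parent = client.queue_path("my-project", "us-central1", "background-work")

    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            # Points back at the same Cloud Run service that enqueued the task,
            # so the work arrives later as a fresh request and can trigger
            # another container to spin up if load demands it.
            "url": "https://my-service-abc123-uc.a.run.app/internal/process",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"job_id": 42}).encode(),
        }
    }

    client.create_task(request={"parent": parent, "task": task})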
Is the framework open source? And what language(s)? It sounds interesting!
Not open source; I've only used the java version, but there's support for c++ and go as well.
Yes, it shifts communication to over the network, but this comes with the benefit of making system boundaries much clearer. It's been my experience that this can allow different parts to evolve separately and make the choices best for their own needs, which can sometimes result in a paradoxical reduction of complexity as one all-in-one system turns into a series of simpler tools. Suddenly the part of the system that does background ETL work and the part of the system that serves up the admin panel can use different tools.
I will agree that networks are inherently complicated, but your system or systems probably use networks anyway. Using them internally often forces you to grapple with the complexity you were already facing. Plus, it's been my experience that networks are easier for most to reason about than shared memory and mutexes.
Let's not forget providing an easy set of dials to turn to scale a given part of the system. That's a point of complexity, but it can be an immensely useful one. Complexity is not always the enemy - a power drill is more complex than a manual screwdriver and this complexity is pretty useful.
Given the trade off of synchronous vs asynchronous, I hope most folks try their damnedest to use the former before having to use the latter.
Microservices probably do increase the overall amount of complexity in a sense, but the idea is to trade a little bit of complexity in order to decouple teams. i.e., each team can own its service from soup to nuts without having to coordinate with a bunch of other teams. To put it differently, there’s a small increase in technical complexity in exchange for a significant decrease in organizational complexity. That said, if you go out of your way to retain that organizational complexity (e.g., by dividing your services in a way that doesn’t resemble your org chart) then you’re going to have a bad time with microservices. Similarly, if you don’t have organizational complexity (e.g., you only have one or two teams) then microservices are probably the wrong way to go.
Examining behavior in the limit (a la physics) is interesting. Imagine a system where every function call goes out to a separate process. Suddenly you have a lot of processes waiting around to be called. Debugging requires special tools. Profiling becomes a nightmare. Even determining if the system is fully running becomes difficult. What have you gained?
> What have you gained?
Separation of responsibilities? Easier to analyze because you only have so many inputs and outputs to a simpler system?
Debugging something that touches a lot of paths in a monolith can be quite nightmarish as well.
Most comments about this assume a poorly designed monolith and a well-designed set of microservices. A microservices architecture can be a rat's nest too.
I guess what it means is that even if you can build a well modularized system, it will only stay well modularized if you use a network call to enforce it. Well, at least for most companies.
Conceptually, there's nothing keeping you from designing your codebase to work as both microservices or direct calls. I've certainly done it before - each service defined a Java interface, and codegen could hook that up as either a direct call or to route over some kind of layer.
Or something that can be spun up as a service or imported as a library
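A minimal sketch of that dual wiring, in Python rather than the Java/codegen setup described above (the OrderService name and endpoint are invented): callers depend only on an interface, and the binding decides whether a call stays in-process or crosses the network.

    import json
    import urllib.request
    from abc import ABC, abstractmethod


    class OrderService(ABC):
        @abstractmethod
        def get_order(self, order_id: int) -> dict: ...


    class LocalOrderService(OrderService):
        """Direct call: used when everything is assembled into one binary."""
        def get_order(self, order_id: int) -> dict:
            return {"id": order_id, "status": "shipped"}


    class RemoteOrderService(OrderService):
        """Same interface, but the call is routed to a separate deployment."""
        def __init__(self, base_url: str):
            self.base_url = base_url

        def get_order(self, order_id: int) -> dict:
            with urllib.request.urlopen(f"{self.base_url}/orders/{order_id}") as resp:
                return json.load(resp)


    def checkout(orders: OrderService, order_id: int) -> str:
        # The caller is indifferent to the deployment topology.
        return orders.get_order(order_id)["status"]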
> you only have so many inputs and outputs to a simpler system?
Doesn't a function only have so many inputs and outputs too? Scope capturing/global variables aside.
> Doesn't a function only have so many inputs and outputs too? Scope capturing/global variables aside.
Sure, and a microservices architecture to me implies a larger movement towards function-esque design, idempotency, and analyzability, and away from global state.
To me, monolithic architecture implies global shared state that is difficult to reason about.
It’s been mentioned elsewhere in this thread, but the lack of microservices doesn’t imply a global shared state. There’s a big difference between a large service that has well isolated modules, and a large service where all the state is contained in one struct, for example. I feel this issue is pitched as a false dichotomy.
If I'm in Java, JavaScript, or Python and there is a code fault, the system provides me a stack trace of the call structure that led to the error. If I catch the error I can output more related data as I deem necessary. This comes, effectively, out of the box.
How do I do a similar stack trace in microservices to understand the path that led to this state? I've used microservices at a couple of companies and their methods were effectively, look at the request id and trace it through this mass of log files for each microservice we are running. It was terrible and could take hours to compile the same information.
What tooling exists to solve this problem with microservices? Genuinely want to know.
> If I'm in Java, JavaScript, or Python and there is a code fault, the system provides me a stack trace of the call structure that led to the error. If I catch the error I can output more related data as I deem necessary. This comes, effectively, out of the box.
Sure, if you are not writing a program using event-driven async style. Every monolith I've worked with has been in async style pretty much.
With global state in the monolith, this can become quite difficult to reason about. By contrast, with microservices, you can analyze the service as performing a small function with a single input and output without global state dependencies. This can be easier to debug.
It seems to me that you are comparing apples and oranges. If your code has no relevant state or coupling (a pure function), sure, you can look at it in isolation. You could do the same in a monolith.
I've worked on plenty of monoliths with async style code and have found the stack traces to be plenty helpful. It's never been an issue. At least in JavaScript you retain all relevant scoped data and can dump it all if you like. Debugging gets much more interesting when there are no obvious such errors, yet error conditions are present.
I'm still not seeing any equivalent microservice tooling.
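For what it's worth, the usual answer here is distributed tracing: every hop propagates a shared trace id and records a span, so a tracing backend can reconstruct the cross-service call path much like a stack trace. A minimal sketch with OpenTelemetry's Python SDK (service and span names are made up; real deployments would use an OTLP exporter and auto-instrumented HTTP clients that forward the trace context in request headers):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up a tracer that prints spans to stdout for demonstration purposes.
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(ConsoleSpanExporter())
    )
    tracer = trace.get_tracer("checkout-service")

    def handle_request():
        # Each unit of work becomes a span; all spans in one request share a
        # trace id, which downstream services continue, so the hops can be
        # stitched back together into a single call tree.
        with tracer.start_as_current_span("handle_request"):
            with tracer.start_as_current_span("call_payment_service"):
                pass  # the outgoing HTTP call would carry the trace context

    handle_request()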
Also, security. Each component has an identity and its own set of permissions, so the custom emoji widget doesn't have access to the payments system or the health data.
This would be true in a monolith as well unless configured otherwise. Functions don’t just start talking to each other; you have to explicitly connect them via a function invocation. I’m not sure how an architecture would change that.
It’s pretty straightforward. In a monolith everything is in the same memory space, so the payment system credentials and routines are available to the whole monolith. In a microservice architecture, one service can’t access the memory or routines of other services.
One function cannot access the memory space of another function any less explicitly than one process can access the memory of another process. Either way, you are just picking different contract enforcement mechanisms. The only way this would be helpful is if your whole goal is to make something (breaking the contract) as hard to do as possible. And in that case, I really have to ask, who are your teammates that you trust them so little as to literally enforce separation this way? No healthy organization should have to resort to this kind of civil war.
> One function cannot access the memory space of another function any less explicitly than one process can access the memory of another process.
You're mistaken. The operating system largely prevents cross-process memory access. Of course, CPU vulnerabilities can circumvent these protections, but even then your micro services can run on distinct machines while monoliths can't.
> The only way this would be helpful is if your whole goal is to make something (breaking the contract) as hard to do as possible
Yeah, that's the idea. "defense in depth".
> And in that case, I really have to ask, who are your teammates that you trust them so little as to literally enforce separation this way? No healthy organization should have to resort to this kind of civil war.
(1) not all attackers are internal
(2) some attackers are internal and many very profitable organizations do restrict employee access to secrets (and in many cases this is required by compliance).
> The operating system largely prevents cross-process memory access.
As do programming languages. Yes, a few languages like C allow you to do anything that's physically possible, but most popular languages including Python and JavaScript make leaking function-internal memory to the outside world either very inconvenient or impossible.
As for memory being shared among processes, it's not only not impossible but quite common in performance-sensitive applications. https://man7.org/linux/man-pages/man7/shm_overview.7.html
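As a small illustration of how explicit (and therefore visible) that cross-process sharing is, here's a sketch using Python's standard library; the block name is arbitrary:

    from multiprocessing import shared_memory

    # Process A: create a named shared block and write into it.
    shm = shared_memory.SharedMemory(create=True, size=16, name="demo_block")
    shm.buf[:5] = b"hello"

    # Process B (could be a separate interpreter): attach by name and read.
    other = shared_memory.SharedMemory(name="demo_block")
    print(bytes(other.buf[:5]))  # b'hello'

    other.close()
    shm.close()
    shm.unlink()  # release the block once every process is done with it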
> many very profitable organizations do restrict employee access to secrets
For one, code is not secret. Secondly, when was this discussion scoped to profitable organizations? A very, very large chunk of HN readers work in startups.
> in many cases this is required by compliance
This argument weakens the entire stance, because it shifts the goalpost from "this thing is good" to "this thing is legally required." Let's stay focused. There are so many implicit assumptions you're taking as granted, I'm starting to doubt this is even an argument in good faith.
> Let's stay focused. There are so many implicit assumptions you're taking as granted, I'm starting to doubt this is even an argument in good faith.
I share your doubts that this is a good faith argument, in particular, you keep responding to arguments that I patently didn't make:
> For one, code is not secret.
I didn't claim code was secret, I claimed that we should secure secrets and prevent attackers from executing routines that they shouldn't have permissions to execute.
> Secondly, when was this discussion scoped to profitable organizations?
I didn't scope the conversation to profitable organizations, I mentioned that profitable organizations often secure their applications. I'm giving you an example and you bizarrely think I'm limiting the conversation only to that example. You do the same thing here as well (and suggest that I'm the one who isn't focused):
> This argument weakens the entire stance, because it shifts the goalpost from "this thing is good" to "this thing is legally required."
Again, I give "in some cases security is even required" and you take it as "we're no longer talking about other reasons for which security is beneficial".
Honestly, the "monolith vs microservice" question is interesting, but it's not interesting to debate someone who is bent on responding to arguments I clearly didn't make. :) Whether you're being obtuse on purpose or by accident, this conversation has become dull, and I'm ducking out of it.
> I claimed that we should secure secrets and prevent attackers from executing routines that they shouldn't have permissions to execute.
While this is obviously a noble goal, you mentioned restricting employee access to secrets in relation to restricting (employee) access to code as well. To me, this is throwing the baby out with the bathwater. We can obviously make things harder for attackers by making things harder for everyone, but to carry it even farther, we may as well make the code fully immutable, since then an attacker wouldn't be able to do anything to it. If you disagree with this, then surely you must agree that there's a balance to be found and it depends on the needs of the organization?
> I mentioned that profitable organizations often secure their applications
Well, I never made the argument against securing applications, but assuming you meant that they use microservices, that’s great, although unfortunately it doesn’t mean anything about the overall appropriateness of microservices. Profitable companies are known for moving so slowly they are unable to adapt to anything even when it means their existence is at stake, and also for blindly following the path laid out for them by laws, shareholders, stakeholders, etc. almost as an automaton without brains. So this supporting point isn’t hugely convincing.
If your point is that in a very specific set of cases, microservices are appropriate, then we are in agreement. However this wasn’t the tone generated by the comments I was replying to.
Minor nit: functions as a service are almost exactly this - and they (from a technical perspective) don’t wait to be called, they pop into existence just in time.
Arguably though it's not special tools, just different tools. You usually can't run a debugger live in production, but with tools like distributed tracing and service meshes you get really close.
It’s about trade offs I think.
Monoliths and microservices are so often presented as "x is better than y", but it should be "which is more applicable for the team size, product, and operational concerns".
Monoliths are a great choice for certain team sizes and applications. Want stricter isolation and a smaller blast radius between different teams and products, and need to scale different things differently? Microservices are probably a better choice.
The operations needed to keep a large monolith buildable when a lot of different teams work on it are also highly complex, and increase in complexity relative to the number of contributors to the codebase. It becomes a massive coordination problem at scale.
Microservices decouple teams, so they can get their work done without stepping on each other's toes.
Eh. Properly segmented monoliths are just fine. And are probably much simpler to deploy.
Until there needs to be a change in the interface. Now you have N other services to deploy and coordinate with.
More people can do operations, and more operations work is parallelizable, so yeah it's a tradeoff but there is upside to that tradeoff.
We're going to need some evidence or at least more development on these ideas.
We? I'm sorry I don't understand.
Monolith vs microservice is not a dichotomy, it's a spectrum (as it is with so many other things). Individual microservices can still become monolith-y and become responsible for doing a Lot of Things (TM).
That being said, the biggest hurdle in a re-architecture project like this is usually in the "n=1 -> n=2" stage, and "n=2 -> n=5" is a lot easier: once you add service #2, you learn how to set up telemetry, permissions, billing/chargeback, alerting, etc. The next few are just repeating the same process.
I always say n=2 -> n=3 is the hardest step, then n=1 -> n=2. With 1 to 2, you can take a lot of shortcuts with the n's communicating directly. With 3, you have to start formalizing message transmission either through a message bus or a cache or whatever. But once you have n=3, n=3+ is pretty easy as it's mostly edge cases to solve for or geometric scaling problems.
It definitely depends on the shortcuts you take! e.g. if you run #1 and #2 with the same identity, you don't have to figure out how to make your ACLs work... but you're also breaking principle of least access :)
The term "Monolith" was devised by people who wanted to brand microservices as newer and superior.
It's almost as if in order to succeed these days you need to discredit and disparage your competition rather than simply having a better product, and that's why I don't buy into buzz words at all.
If it's not broken, don't fix it... Microservices are relatively new and unproven. The way the world has rushed to dive into microservice infrastructure only highlights reckless spending and waste that is characteristic of overpriced goods and high taxes that are constantly in turn thrust upon us, as consumers.
Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases... Thereby locking a customer into platform-specific dependency. Microservices architecture also introduces the ability for providers to charge for each specific service as a utility... Instead of being charged for one single server annually, on microservices you can be charged for many individual components that run your app independently, and when usage skyrockets, it's a sticker shock that you can only stop by going offline.
We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it is indeed a better solution.
We need to stop disparaging traditional (non-cloud) hosting and solutions that aren't obsolete at all in this manner, and focus on what works, what is secure, and what is cost effective in order to stay sustainable into the future.
The more we allow marketing minds to take control of our IT decisions over reasonable technical minds, the more costly it will be to us all over time, no matter what salary we make. Big tech firms will hate me for saying this, but any human in the chain can tell that a reckless drive for weak/vulnerable/costly/over-complex IT solutions cannot be sustained as a viable long-term business sales strategy anyway.
Microservices make sense in some scenarios.
I work at a large retail company with who knows how many developers. We have different teams for payment, promotions, product search, account, shipping and more. All of them working on a single codebase with coordinated deployments would be a nightmare.
Previously, I joined a startup (previous coworkers of mine): a developer and a business guy. The developer "drank the microservices kool-aid" and came up with (in theory) super scalable solutions and like a dozen microservices. It was difficult to keep things in mind, and the tech stack was way too complicated for two developers. It was also less performant and more costly. The added complexity was totally unnecessary, especially because we never got tons of users, nor more developers. The business guy trusted the developer, so the company never worked enough on their product and USP. I guess the developer just didn't want to accept that fancy tech solutions won't bring success.
Yet another time, we were a small team (5-ish devs, product owner, and a designer). We started with a monolith and we paid attention to software design and moved quickly.
Also, for some reason it's often overlooked, that you can make your monolith modular and design it so that when the day comes, you can split it up into smaller services. You don't need to start with microservices, you can start with a monolith and figure out later how to split it up (if necessary).
Microservices and "monoliths" have their place, you just need to know when to use which.
This is agreeably congruent with my original statement.
The main problems I have with microservice marketing:
It is often promoted to clients that do not have applications large or critical enough to warrant leveraging it.
That buyers often aren't properly warned about their inability to easily migrate if they invest in platform-specific microservices too heavily.
And clients are often not aware of the operational costs that can rise over time for each component of the distributed architecture.
"Monolithic" solutions have also not stagnated... They can be run in distributed methods, they can leverage microservices in parts, they can also leverage containers, they are far from obsolescence because they are using the same languages that microservice architectures use, just with less distribution overall.
The term "monolithic" is often used, inaccurately, to suggest that less-distributed solutions are somehow "out of date", "obsolete", and "not innovating", when the real story is that the business case usually dictates which solution will fit best.
You’re confusing “microservices” with dependencies on a particular platform (presumably cloud providers). Microservices aren’t more likely to have these dependencies, as monoliths are often also deployed on cloud providers and assume a particular database, etc.
Anyway, if you’re dealing with microservices “lock-in” isn’t a real problem—you just move one service over to your new platform at a time. Good luck doing that with your monolith without decomposing it into services (and frankly, if you can feasibly decompose your monolith into services, your architecture is probably cleaner than 95% of monoliths out there).
It really doesn’t seem like they are confusing microservices, I’m not sure what makes you think that.
> That buyers often aren't properly warned about their inability to easily migrate if they invest in platform-specific microservices too heavily.
There's really not much coordination with a monolithic deployment. Merge your commit and get in line. Monitoring tools will ping you on Slack if there are any issues. It's probably easier than microservices because all tooling and builds are centralized.
> Merge your commit and get in line.
Much easier said than done if there are 50+ devs working on the same service (monolith). Decisions are made much more slowly, improvements need to go through long discussions, deployments are much more risky, onboarding is a hassle, and so on. Sure, monoliths work very well for small teams (it's hard to define what "small" is), but I'd say that at around 15-30 developers you can think about splitting up.
Are you saying this from experience? From my experience working on a monolith every day with around 400-500 other devs, this is not the case. I don’t even need a code review for minor things. Just commit and get in line (about 2-3 minutes before my commit hits 100% of production).
Yes! This really sums it up - they have their place, but it takes experience and good judgement to know where they are appropriate, and how to divide up concerns.
> It's almost as if in order to succeed these days you need to discredit and disparage your competition rather than simply having a better product, and that's why I don't buy into buzz words at all.
Meh. In order to sound smart on HN it's easiest to point at something and call it "hype".
> Microservices are relatively new and unproven
SOA is old as fuck. Microservices are also fairly old, but especially when you consider they're really just SOA + dogma.
> Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases...
No? Not at all.
> Instead of being charged for one single server annually, on microservices you can be charged for many individual components that run your app independently, and when usage skyrockets, it's a sticker shock that you can only stop by going offline.
Alternatively phrased: If you only use one service you only pay for it, not for the whole suite of features you don't need or want.
> We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it it indeed a better solution.
And plenty of success stories.
> We need to stop disparaging traditional (non-cloud) hosting and solutions that aren't obsolete at all in this manner, and focus on what works, what is secure, and what is cost effective in order to stay sustainable into the future.
Microservices work, are secure, and are cost effective.
Honestly your post contains no useful information and is satirically close to a "return to traditional family values!" speech.
That’s exactly what Big Microservices wants you to think.
> Microservices are relatively new and unproven.
They are so old, buddy, actually. Splitting a monolith into services (not always micro) is a natural evolution for any software.
> Splitting a monolith into services (not always micro) is a natural evolution for any software.
No less natural than "joining the innumerable incompatible and bug-ridden fragments into a single unified solution."
Linux is a bazaar. Except distros and package repositories are cathedrals.
Windows is a cathedral. Except software distribution is a bazaar.
It was SOA (Service Oriented Architecture) where you would split them up, that predates Microservices by quite a bit. I remember doing that in the early 2000s.
What I think he is saying is that microservices people pitch their approach against monoliths as better, but monoliths haven't been in vogue for 20 years. I saw the same tactic with Scrum people pitching against waterfall, which hadn't been in vogue for quite a while either.
Well, SOA is such a broad term. If you include CORBA, you could say it's terrible. On the other hand, gRPC is a pleasure to work with if you need to deal with different environments.
> On the other hand, gRPC is a pleasure to work with if you need to deal with different environments.
I remember doing RPC via Java RMI and Jini at some point early in my career in the late 90s. It was really nice and gRPC reminded me of that.
Although clunky to implement, EJB had really well thought out concepts, architecture and roles
I miss dealing with Websphere every time I am fixing Kubernetes spaghetti.
The OP was pretty explicitly arguing in favor of monolithic architecture.
> The term "Monolith" was devised by people who wanted to brand microservices as newer and superior. It's almost as if in order to succeed these days you need to discredit and disparage your competition rather than simply having a better product, and that's why I don't buy into buzz words at all.
Yeah, it used to be called “Service Oriented Architecture”, and is nothing new.
Agreed in essence... The ideology is indeed old, but the practice of putting flashy wrappers and catchy names around the services is new... Like "DynamoDB" and "Route 53". Those names appeal to non-technical product owners who then force adoption onto development teams... Pure fluff at its best.
That's the thing about the marketing-first model we're dealing with now... No real innovation, just branding/name changes and highly tailored customization to lock a customer into a specific platform.
There’s nothing new about marketing your product, and make no mistake that DynamoDB and Route53 are products.
> Those names appeal to non technical product owners that then force adoption onto development teams
Route 53 is a pun on the DNS port—not exactly common knowledge among nontechnical product owners. Anyway, DynamoDB and Route 53 succeed on merit. There are certainly better examples of shitty technologies that win on marketing or other nontechnical considerations. E.g., Oracle anything (of course Oracle aren't known for being purveyors of microservices).
> They are so old, buddy, actually. Splitting a monolith into services (not always micro) is a natural evolution for any software.
"Microservice" isn't really about that, it's a marketing paradigm that is here to serve the (paid) tools, not really the architecture, it is EXACTLY like "serverless", it's not an architecture, it's a really about promoting paid platforms.
The parent is 100% right about their point about marketing buzzwords, because that's really all what it is all about.
I don't think it was "splitting monolith" it was more like connecting separate applications.
Like you had a payroll in your enterprise of 1000 employees and you needed that same data in 5 applications in 3 different departments. So you would wrap payroll into a service and have that data accessible in multiple places.
I think that is still a valid approach to build monolith app and use multiple services if they are internal apps.
For customer-facing and quickly changing stuff you might want to add microservices to be able to build new features quickly; ideally each microservice should have its own database with what it needs to operate.
This. And now "microservices" is a rallying cry to avoid any kind of architecture organizing.
Because the broke you know isn't called "broke."
Monoliths are pretty great, and there are tons of valid criticisms of microservices; however, this comment managed to steer clear of all of them. :)
> The term "Monolith" was devised by people who wanted to brand microservices as newer and superior.
Not everything is a conspiracy. Sometimes it's just useful to have a word to describe a particular architecture. In this particular case, "monolith" isn't even disparaging, so if Big Microservices were trying to disparage monolithic architectures, why wouldn't they use a term with a negative connotation?
> If it's not broken, don't fix it... Microservices are relatively new and unproven.
The microservices people would argue that monoliths are broken for many use cases. In particular, individual teams can’t deploy their code without coordinating with every other team, which yields long user feedback loops and a bunch of other knock-on effects. Microservices exist to support nimble organizations by helping to remove technical coupling between teams. This is all 101-level stuff but the microservices critics always ignore it in their criticism.
> Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases... Thereby locking a customer into platform-specific dependency.
I don’t think you could be more incorrect :). Microservices are almost universally built atop containers, and the whole purpose of containers is to decouple the application from the platform.
> We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it it indeed a better solution.
Microservices are typically more robust than monoliths if only because components are isolated—a failure in a superficial component doesn’t bring the whole app down. Moreover, monoliths are less secure as well because there’s no way to regulate permissions within the process—anything one component can do, the whole system can do.
> The more we allow marketing minds to take control of our IT decisions over reasonable technical minds
Literally laughing out loud at the idea that marketing people are behind microservices.
>individual teams can’t deploy their code without coordinating with every other team, which yields long user feedback loops and a bunch of other knock-on effects
Fine until one of the microservices needs an update with new features -> new API because Something Has Changed, and suddenly...
>the whole purpose of containers is to decouple the application from the platform.
Are containers not a de facto platform?
>Literally laughing out loud at the idea that marketing people are behind microservices.
Here's a moderately complete list of products. How many are pay-to-play?
https://www.aquasec.com/cloud-native-academy/container-platf...
All you're really doing with containers is creating a meta-monolith running on external hardware with custom automation - managed by an ever so handy third party software product. Also running on external hardware. All of which you're paying for.
You can also DIY and not pay. In theory. But really...?
This makes sense at global scale where you're drowning in income and need to handle all kinds of everything for $very_large_number customers.
It's complete madness for a small startup that doesn't even have a proven market yet.
> Fine until one of the microservices needs an update with new features -> new API because Something Has Changed, and suddenly...
1. Right--you only depend on other teams when you need to
2. Even in this case, you just version your API, deploy your new version at your leisure, and incrementally move people over to the new version while deprecating the old version.
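As a rough sketch of what that looks like in practice (Flask here, with invented routes and payload shapes), the old and new versions can live side by side in the same service while callers migrate at their own pace:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.get("/v1/orders/<int:order_id>")
    def get_order_v1(order_id):
        # Legacy response shape; kept alive until the last caller moves off it.
        return jsonify({"id": order_id, "total": 42})

    @app.get("/v2/orders/<int:order_id>")
    def get_order_v2(order_id):
        # New response shape; consumers switch over incrementally.
        return jsonify({"id": order_id, "total_cents": 4200})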
> Are containers not a de facto platform?
Not in any interesting or meaningful sense. Containers are precisely the technology that allow you to pack up and go to a different orchestrator or cloud provider. Note that the original concern was being locked into a cloud provider or PaaS--what does it mean to be "locked into containers"? Who has ever been "locked into containers"?
> Here's a moderately complete list of products. How many are pay-to-play?
You're conflating a lot of things. First of all containers aren't micro services or vice versa. Containers are just an interface for processes, and they enable things like orchestrators. "Containers versus <whatever>" is orthogonal to "micro services vs monoliths". Secondly, there's nothing about paying for something that implies more lock-in. Whether you're paying or not, it's a lot easier to transition between platforms with micro services than with a monolith (micro services can move incrementally while a monolith has to move all at once or be painstakingly broken into micro services).
> All you're really doing with containers is creating a meta-monolith
With the enormous caveat that microservices can be deployed independently and even on different platforms and use different technologies... Which is to say microservices are nothing like monoliths. :)
> This makes sense at global scale where you're drowning in income and need to handle all kinds of everything for $very_large_number customers.
It makes sense if you have more than a few teams.
> It's complete madness for a small startup that doesn't even have a proven market yet.
There are degrees between "small startup without a market" and "global scale / drowning in income". But yes, I agree that microservices aren't a good fit for the very earliest, smallest-scale companies.
> Not everything is a conspiracy. Sometimes it’s just useful to have a word to describe a particular architecture. In this particular case, “monolith” isn’t even disparaging, so if Big Microservices we’re trying to disparage monolithic architectures, why wouldn’t they use a term with a negative connotation?
This isn't the reality on the ground. Where I currently work, "Monolith" is absolutely used as a pejorative by those advocating for microservices.
I can't speak to your workplace, but in the broader debate it's never been pejorative. It's always been neutral and descriptive. If they wanted to use a pejorative title, they could have gone with "bulky" or "cumbersome" or "inflexible" or any other number of loaded descriptors.
Right, but the term monolith was used at the latest in the 80s to describe kernels (monolithic vs. micro), so it's a fairly standard term in software.
> that's why I don't buy into buzz words at all.
Clearly you do, because this is mostly nonsense driven by your knee jerk reaction to “microservices”. Very little you’ve written here is substantive. It’s all emotional appeal covering ignorance.
> If it's not broken, don't fix it...
But it is broken. Engineers often experience significant pain from monoliths so they look for a solution. They often also experience significant pain from microservices so the pendulum returns. Hopefully during all of this we learn enough that at least some pain is reduced, whether we land on microservices or monoliths or hybrid solutions.
> We need to stop disparaging … and focus on what works
Here I agree. Focus on what works and stop engaging in low value attacks on solutions that clearly work for some.
> The more we allow marketing minds to take control of our IT decisions…
What “marketing minds” are making decisions about service architecture? This seems like an imaginary issue.
> What “marketing minds” are making decisions about service architecture? This seems like an imaginary issue.
It is hard to hold this comment in a generous light (per HN rules) when one of my most poignant experiences in dealing with tech vendor salespeople at conferences is (and I paraphrase), "well, it's working great for $MEGACORP, you do want to be like $MEGACORP, don't you?"
One time I asked one of those salespeople point blank if that line actually works on people. Apparently it does. And those people for whom that line works with make big expensive tech decisions.
Did you become a marketing mind because they were pitching you?
I did not say that marketing isn’t making pitches. I said they aren’t making the decisions.
Any company allowing marketing to choose the technical platform for the engineers is going to be very short lived. This is not a real issue.
You're assuming engineers can't be victims of cargo cult, and we all know that's not globally true as it's been discussed multiple times on HN. Marketing teams know that as well, and they exploit that as much as they can.
> Marketing teams know that as well, and they exploit that as much as they can.
They exploit this mercilessly, just as hard as they exploit our curiosity and desire for novelty as engineers.
To the GP, your company's marketing/sales team may not drive internal technical decisions, but I can promise you that other companies' sales teams do. Why do you think they so persistently step over the heads of their first POC so they can be put in touch with C-levels or department heads?
Enterprise sales is a trip, dude
This is very “everyone else is such an idiot”. I feel like you’re a half step away from saying that engineers shouldn’t be allowed to learn about new tech because they’re too stupid to know when not to apply it.
If your team of engineers is incapable of determining good engineering trade-offs, and you have too little influence on the team to steer them back to a reasonable choice, it’s not really marketing’s fault.
You’re probably going to tell me again that I’m too dumb to understand that marketing impacts engineers, so let me be clear, I am well aware that marketing impacts engineers, just as it influences everyone else. But engineers are not helpless fools incapable of making decent technical decisions just because someone tried to sell them on whatever latest serverless tech.
Not at all. I am simply describing my experience with salescritters as a SWE. A good company gives Engineering a seat at the table but sometimes you have to deal with things imposed from higher up. This is especially true in a large company.
That said I think that we are a particularly gullible cohort with misplaced desire for novelty and shiny tech. We overrationalize in favor of these desires and have a hard time walking a mile in others' shoes. And companies know this and exploit this to constantly fill our heads with noise and bad ideas that happen to be profitable for them. This is one of my biggest beefs with "tech" in general, and why I am taking some time off from this trade.
> You're assuming engineers can't be victims of cargo cult
At no point did I say anything to that effect.
I’m saying that marketers are not making technical decisions. That’s all I’m saying.
That is how you get to it.
If you are a no-name rapper, you start dissing bigger guys so they diss you back and you get notoriety because someone noticed you.
As a politician you have to say others are the worst and broke everything, but that you have a plan to fix everything that is broken now.
In the end all the swearing is posturing and all "great plans" turn out not possible in reality.
While yes you can do nice stuff with microservices, it is not a silver bullet.
> Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases... Thereby locking a customer into platform-specific dependency
Can you elaborate on this? Examples? Thanks!!
Looks like OP is talking about serverless / Google Cloud Run, where rather than a traditional server deployment, it uses a specific provider's implementation. Additionally, it may also mean the CI/CD pipeline, which usually differs between providers.
I'd need days to detail it all specifically enough, but I'll give you a simple example...
A "monolithic" solution usually relies on a basic (e.g. LAMP) stack that can all be run on one server if the need arises... Your web and database server can be migrated from AWS to Azure much easier if pretty much all of the functionality relies on a close-knit local server architecture that can have a less complicated security, code, and endpoint design as well.
If you create an app that works based on a highly distributed architecture, suddenly, migrating a solution is far more complex - Platforms like AWS and Azure do not run all of the same services, and those services often require lots of refactoring to work properly with your prior data, you'll also need to do many test cycles after migrating to ensure that solution integrity is maintained.... At that point, a simple migration might as well be a total refactor.
After implementing policies like zero trust and working out whitelisting, if your solution is complicated or large-scale, you also have to deal with service-specific nuances in your code and in your architecture design that don't translate well to the completely different tools available on an alternate cloud hosting platform, because usually they have completely different nuances and conventions on their service architectures.
To put it simply, it's like buying a Ford pickup truck and installing an aftermarket 6.34 x 9.85ft camper top (and other Ford-specific aftermarket parts) on it, and then trying to install that custom Ford camper top and the other Ford aftermarket parts onto a Toyota which only fits a 5.77 x 8.34ft camper later on... It usually doesn't work out well, and usually provides very unexpected results and more financial loss than using a universally sized 5.55 x 8.20ft camper (the monolithic option that fits in both trucks, albeit not perfectly).
Each platform is very specialized in its own way... Just as with Ford and Toyota pickup trucks, they have completely different build dimensions, and that's why the parts aren't interchangeable between the two trucks.
Monolithic solutions were originally designed to work agnostic of platforms, so they can work on either provided that they are implemented correctly...
Ultimately, the business need should be carefully evaluated by an experienced architect to determine which architecture fits the need best, and then other factors should be reviewed (like if you'll need to migrate any time in the future for example) to make the final call.
You’re confusing “microservices” and “distributed architecture” with “depending on specific cloud provider services”. Microservices don’t have to depend on any cloud services at all and monoliths can (and often do!) use cloud services.
Many "Monoliths" can be run on something as simple as a desktop emulator. They don't generally rely on cloud either.
Microservices are "distributed" across a cloud host platform because they are each updated and maintained by different teams. My use of the term "distributed means that if AWS East has your DB instance and your web server is stored in an entirely different region, you app goes down anyway, but your front-end team can maybe still deploy updates... Which is not really a dramatically productive gain for a customer running a restaurant web site.... On the other hand, if you're running a massive video streaming site, it might be a good thing to base it on micro service architecture. Each use case is different.
I'm resisting the pressure to be drawn into a debate about which one is better, that's not what I'm out to do... What determines which solution is better is the business case it seeks to resolve. Neither is inferior or more obsolete, the two ideals both can and often do run on identical/similar code bases... It's the configuration and potential uses/application/benefits that differ.
> Many "Monoliths" can be run on something as simple as a desktop emulator.
I'm not sure what a "desktop emulator" is, but a lot of micro services can run as native processes or in a VM. There's nothing about micro services that fundamentally restrict where they run--they're just application processes at the end of the day.
> My use of the term "distributed" means that if AWS East hosts your DB instance and goes down while your web server is in an entirely different region, your app goes down anyway, but your front-end team can maybe still deploy updates... Which is not really a dramatically productive gain for a customer running a restaurant website... On the other hand, if you're running a massive video streaming site, it might be a good thing to base it on microservice architecture. Each use case is different.
I mean, that's one scenario but the inverse could be true--your custom emoji service could go down but everything else--your payment service, etc could stay up. More than that, you can play it fast and loose with deploying to your emoji service while your important core services get more scrutiny. With monoliths, any change could take down core services so you have to use the same scrutiny when deploying changes to the emoji features. Moreover, you don't need to coordinate with a bunch of other teams to deploy--your team can deploy its own service whenever it needs to. You can use whichever language is best for your component. Etc, etc. But we are agreed that micro services aren't fit for every use case--if you only have a few teams then you probably won't benefit much from micro services.
> I'm resisting the pressure to be drawn into a debate about which one is better, that's not what I'm out to do... What determines which solution is better is the business case it seeks to resolve. Neither is inferior or more obsolete, the two ideals both can and often do run on identical/similar code bases... It's the configuration and potential uses/application/benefits that differ.
You're back pedaling pretty quickly from the tone and claims of your original comment, but I accept all of this--neither is a silver bullet. :)
FTR... A desktop emulator was a reference to something like WAMP or XAMPP... On which a monolith can be run, developed upon, and even tested entirely independent of any host or VM infrastructure.
I'm not back pedaling, you're reading too closely and judging instead of looking at the discussion from an objective standpoint and simply seeking clarity and working towards truth. Here's what I posted in this same thread even before my post in this branch -
+++
by winternett, 6 hours ago, on: Why our team cancelled our move to microservices
A size 20 shoe is better for a large foot... But not better for a size 15 or size 10 foot. Saying Microservices are better is the same as me saying "a size 20 shoe is better than any other shoe"... for everyone.
It's not a viable statement in any use case, except for people with size 20 feet.
The business need is what determines the solution necessary.
> you're reading too closely and judging instead of looking at the discussion from an objective standpoint and simply seeking clarity and working towards truth
I think you're projecting here. Perhaps we can stick to the actual discussion instead of speculating on my motivations--as though you can understand someone's inner workings from half a dozen Internet comments :).
Anyway, we're no longer talking about microservices or monoliths so it would seem this thread has run its course. Enjoy your day!
Well, if the problem domain and scope are very well defined, developing a service as a microservice is better than a monolith. I can't imagine the Google Maps API being developed inside a monolith app instead of running as its own service.
The trap that many developers fall into is that some problem domains feel like they're well separated. However, in practice those domains are so tightly coupled to each other that merging them together is better.
> The term "Monolith" was devised by people who wanted to brand microservices as newer and superior.
I am interested in hearing more history on this
Well, back in the oughts, I think maybe 2001 . . .
I don’t even know what anyone means by monolith or micro services.
I’m somewhat sure everyone is somewhere in between depending on who you ask.
ISBN-13: 978-1491950357 ISBN-10: 1491950358
Here you go, you can read this book and it explains.
I know what the terms mean, but I think in common language people mean different things when you find out what they are doing.
Oh, sure. People are wrong a lot.
Pretty sure "Monolith" was originally designed by a weird team of space aliens who left one on the Moon and Europa :-)
Going from a monolith to a micro-service setup is essentially my idea of a Christian hell. Swirling depths of pain and uncertainty interspersed with screaming and urgency. There is no rest. No one knows when it will end.
I think this is because our monoliths are so complicated they hide away our technical debt like monstrous Jack-in-the-Boxes. When you start breaking it into chunks all of these issues come exploding out of them. Suddenly huge bugs that no one noticed or cared about are showing up in testing. Old libraries that sat dormant wake from their crypts to harass and torture junior developers. Forgotten binaries whose source code was lost with the changeover from ancient source control software to Git start showing up as security issues in Veracode.
Really, a well-coded monolith is just a bunch of micro-services on the same server communicating through memory. In reality it's more of a Lich whose eyes shine with the light of the tortured souls of fallen QA testers and developers.
Agreed, however I've experienced something worse: trying to refactor domain models across an SOA that was poorly factored to begin with, and then layered 10k eng-years of incremental feature development driven by a fractured product team with short average tenures.
Sounds like a second circle. Jeeze, my condolences.
This is poetry.
A few points I'd like to make:
1. You can't "migrate" to microservices from a monolith. This is an architectural decision that is made early on. What "migrating" means here is re-building. Interestingly, migrating from microservices to a monolith is actually much more viable, and oftentimes just means sticking everything on one box and talking through function calls or IPC or something instead of HTTP (see the sketch after this list). Don't believe me? See this quote:
> The only ways we could break down our monolith meant that implementing a standard ‘feature’ would involve updating multiple microservices at the same time. Having each feature requiring different combinations of microservices prevented any microservice from being owned by a single team.
Once something is built as "one thing," you can't really easily take it apart into "many things."
2. Microservices does not mean Kubernetes. The idea that to properly implement microservices, you need to set up a k8s cluster and hire 5 devops guys that keep it running is just flat-out wrong.
3. Microservices are "antifragile," to use a Talebian term. So I think that this paragraph is actually incorrect:
> This uncertainty made creating microservices more fraught, as we couldn’t predict what new links would pop up, even in the short term.
A microservice is way easier to change (again, if designed properly), than a huge app that shares state all over the place.
4. What's the point here? It seems like the decision was hasty and predictably a waste of time. Any CTO/architect/tech lead worth his or her salt would've said this is a bad idea to begin with.
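To make point 1 above concrete, here's a minimal sketch of "talking through function calls instead of HTTP" when folding a service back into the monolith. The names (PricingClient, compute_price) are hypothetical, not from the article:

    # The caller depends on this interface and doesn't care whether pricing
    # lives in another process or in the same one.
    from typing import Protocol
    import json, urllib.request

    class PricingClient(Protocol):
        def quote(self, sku: str) -> float: ...

    class HttpPricingClient:
        """Old shape: pricing is a separate microservice reached over HTTP."""
        def __init__(self, base_url: str):
            self.base_url = base_url
        def quote(self, sku: str) -> float:
            with urllib.request.urlopen(f"{self.base_url}/quote?sku={sku}") as resp:
                return json.load(resp)["price"]

    class InProcessPricingClient:
        """New shape: the same logic linked into the monolith, called directly."""
        def quote(self, sku: str) -> float:
            return compute_price(sku)  # plain function call, no network hop

    def compute_price(sku: str) -> float:
        return 9.99  # stand-in for the real pricing logic

If the calling code only ever saw PricingClient, collapsing back to in-process calls is mostly a wiring change; if it didn't, that's where the re-building starts.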
> Microservices does not mean Kubernetes. The idea that to properly implement microservices, you need to set up a k8s cluster and hire 5 devops guys that keep it running is just flat-out wrong.
You don't need to use Kubernetes, but I strongly believe it's the best choice if you're not using FaaS. If you pick Nomad or bare VMs you'll spend a lot of your time building a framework to deploy/monitor/network/configure etc. your services, whereas Kubernetes has "sane" defaults for all of these.
That said - you should use managed kubernetes and not deploy it from scratch
> A benefit of microservices is that each team can be responsible for releasing their services independently and without coordination with other teams.
Sounds almost sarcastic. How do you deliver API changes without alerting other teams?
This is how things were done at Amazon quite successfully. The golden rule is to never break API backwards compatibility. If you must, create a new version of the API and leave the old version functional. If you need to shut down old functionality, it becomes a campaign you have to drive to move your dependents off of it. One little thing that people often overlooked but was very important was to have API operations for describing your enums rather than just putting them in the docs. This allowed for easier adoption of them by API consumers and forced them to consider what to do in the case of an unrecognized value being encountered.
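A rough sketch of that enum idea (the endpoint and values are made up, not Amazon's actual API): the provider exposes its valid values through the API itself, and the consumer is forced to decide what an unrecognized value means.

    # Provider side: valid enum values are discoverable via an API operation,
    # not just listed in the docs.
    ORDER_STATUSES = ["PENDING", "SHIPPED", "DELIVERED", "RETURNED"]

    def describe_order_statuses() -> dict:
        return {"values": ORDER_STATUSES}

    # Consumer side: anything unrecognized gets an explicit fallback, so a newly
    # added status is not a breaking change for old clients.
    KNOWN = {"PENDING", "SHIPPED", "DELIVERED"}

    def render_status(status: str) -> str:
        return status.title() if status in KNOWN else "Other"

Forcing the consumer to write that fallback branch is the real win: new enum values stop being breaking changes.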
>The golden rule is to never break API backwards compatibility. If you must, create a new version of the API and leave the old version functional
It also helps with zero-downtime deployments:
1) spawn a new instance of the service with the new API, side by side with the old one
2) now incoming traffic (which still expects the old API) is routed to the new instance with the new API, and it's OK, because it's backward-compatible
3) shut down the old instance
4) eventually some time later all clients are switched to the new API, we can delete the old code
How can accumulation of versions be prevented? Now the same team has to maintain two products, and the underlying mechanism is still limited by the older version.
Anecdotally, robust backward compatibility has been seen as a hindrance to e.g. Java's progress (so much so that a newer language, Kotlin, was created to break free from that burden).
There isn't really a solution to version bloat aside from good processes and general diligence. There's no easy way to handle that sort of thing, unfortunately.
However, I do think it can be easier to deal with for internal services than for something like Java. When the number of users is in the dozens rather than the millions, it's a lot easier to make sure everyone gets moved over to the new version.
That's what makes the whole idea of microservices seem weird to me. If a functionality has merit on its own (e.g. an authentication service), then it will naturally fall outside of the main application. If a service is tightly coupled to other parts of the app, then microservices seem like intentionally hindering yourself: the coupling remains (as evident by the need for backward compatibility), but now it's harder to keep everything aligned due to the extra separation (e.g. different code bases, multiple databases, no static validation of remote interfaces etc.).
The point of organizations and products is to work as a tightly coordinated machine. The decoupling that microservices create seems opposite to that goal.
Happy to hear different perspectives.
I agree that tightly coupled modules should live inside the same deployment, but:
>it's harder to keep everything aligned due to the extra separation (e.g. different code bases, multiple databases, no static validation of remote interfaces etc.)
It's solvable with appropriate tooling. I.e. you can store API definitions in a separate repository and make the services or CI/CD check API usage is valid at build time.
>tightly coordinated machine
What do you mean by that? For example, we have 10 teams all developing different features in parallel, with a tight release schedule. If there was tight coordination for every change, we'd degrade to a waterfall on the scale of the whole organization. Major API changes are discussed in advance during P/I planning; for minor changes, it's a matter of simply notifying other teams "hey, add this to your backlog, please" (we enforce backward compatibility for zero downtime anyway, so it's not urgent)
> It's solvable with appropriate tooling. I.e. you can store API definitions in a separate repository and make the services or CI/CD check API usage is valid at build time.

Makes sense, though it feels somewhat like re-inventing the wheel (same-codebase tooling is generally easier and faster to use).

> What do you mean by that?

Data and workflow have to be unified across the company's products to provide the user with a seamless experience. As above, this seems to me to contradict some components of microservices, like splitting the database, since one ends up with the same constraints (synchronization, shared schema) while complicating the orchestration (since cross-system integration is now needed).
Sure, something like Reddit or HN can break the unified experience, but any important or productivity system will greatly suffer from such fragmentation. I assume it can be achieved with micro-services, but it seems somewhat harder.
One way is to rewrite the version N endpoints to use the version N+1 endpoints. You just need to ensure clients can handle null/empty data, so that when some requested data is deprecated you don't break old apps. The increased latency from the N-to-N+1 conversion calls also encourages the oldest clients to migrate without breaking backwards compatibility.
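A minimal sketch of that approach, assuming a hypothetical user endpoint with a v1 and a v2 shape (none of these names are from the thread):

    # v2 is the real implementation; v1 stays alive as a thin adapter over it.
    def get_user_v2(user_id: str) -> dict:
        return {"id": user_id, "display_name": "Ada", "locale": "en-GB"}

    def get_user_v1(user_id: str) -> dict:
        v2 = get_user_v2(user_id)
        return {
            "id": v2["id"],
            "name": v2["display_name"],  # renamed field mapped back to the old name
            "country": None,             # deprecated field: old clients must tolerate null
        }

The v1 handler never needs to change again; it just gets a little slower as adapters stack up, which is the gentle nudge for old clients to migrate.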
Well, we usually coordinate between the teams. I.e. we don't force other teams to make changes as soon as possible (they have their own plans), but we agree to add relevant changes to their backlog, so that it's fixable within a 1-2 month window.
You need to talk to the other teams. Usually the change isn't so drastic; I often made the change myself in the other team's service and sent them a review.
And still done! Fully agree.
> Sounds almost sarcastic. How do you deliver API changes without alerting other teams?
Sounds almost sarcastic. To deliver API changes without alerting other teams you, of course, simply deploy the changes without sending a message to the other teams.
The non-sarcastic answer is that sometimes you want to make changes that will not affect an APIs users in any significant way. Of course you would still document these changes in a change log that the consumers of the API may or may not check. Or you may want to hype/market these changes for clout reasons.
Maybe it's an API that services multiple sets of users with different partially-overlapping requirements and they don't all need to know about the new change.
Maybe it's a soft launch for a surprise feature that's going to be announced later.
Maybe the other team is on vacation and you just want to get changes out the door before some holiday.
Do you not version APIs you design?
When engineering an API meant for consumption by disparate services it's imperative to provide backwards compatibility.
This is pretty basic stuff anyone designing a serious API should be taking into account.
Sure we do, it's been /v1.0/ for the last 2 years!
Every API that I'm running has been v1 the whole time.
Yes, and I have to consult with other teams when versioning, I don't just go "welp here's the next version I created and deployed without asking anyone because that's what the IBM microservices manifesto (2006) suggested and some random guy on HN insinuated I'm causing undue friction if I work with you guys"
sure, but all of that is a cost which you don't have to pay with a monolith. Versioning APIs and having to constantly think about backwards compatibility with independently moving services is not trivial.
Sometimes the cost is worth it. Most of the time it's not
Oh, you absolutely have to pay that cost with a monolith, it's just less clear and obvious because you can change all of the consumers when you make a change to an interface.
Also, this (independent deployability) is simply not a feature of microservices. It is a feature of any well architected code base.
I've always worked on monoliths, and I've almost never needed to coordinate a release with anyone. I just merge my branch and deploy. GitHub and Shopify talk about merging and deploying monoliths hundreds of times per day without coordination.
The case where you would need to coordinate a release in a monolith is exactly the same case where you would need to coordinate a release in microservice app. That's the case where your change depends on someone else releasing their change first. It doesn't matter if their change is in a different service, or just in a different module or set of modules in the same application.
Now, most applications are not well architected - microservices or monoliths. In the case of a poorly architected app, deploying a monolith is much easier anyway. Just merge all that spaghetti and push one button, vs trying to coordinate the releases of 15 tangled microservices in the proper order.
Don't ever change APIs. It's pretty simple. I don't know why the monolith people believe this is such a gotcha.
If you really need to change the API, give the new API another name. You may choose to think of this as "versioned APIs", if you want, but "versioned" and "renamed" are the same thing.
Proper deprecation procedures. You can document how you uniformly deprecate and remove APIs. This is a strength of using something like OpenAPI for documentation, or GraphQL, for instance. It is then the responsibility of a consumer to deal with these deprecation(s). On the most basic level you could also do versioning, though it's not my recommendation.
Document and set expectations accordingly. I've done this move before breaking apart a monolith into separate micro services and this is key. Spending more time on good documentation is generally a good idea regardless.
I'm assuming we're not talking about public facing APIs. That's a situation where versioning might make a lot more sense.
Generally speaking if you're adding another field to a JSON or something, that doesn't really break the parser [1] or affect downstream. While you should still probably let the downstream teams know, it's not necessarily going to break anyone's code.
[1] I'm aware that that's not always true (e.g. adding a field that's ridiculously large choking up the parser).
or you have a customer that has strict validation on and adding a field breaks their ability to deserialize.
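A tiny illustration of that difference (field names invented): the tolerant consumer below survives an additive change, the strict one rejects it.

    import json

    # The provider added a "nickname" field the consumer has never seen.
    payload = json.loads('{"id": 1, "name": "Ada", "nickname": "new"}')

    # Tolerant consumer: pick out the fields you know about, ignore the rest.
    user = {"id": payload["id"], "name": payload["name"]}
    print("tolerant consumer is fine:", user)

    # Strict consumer: any unexpected key is treated as an error, so a purely
    # additive change upstream breaks this client.
    EXPECTED = {"id", "name"}
    extra = set(payload) - EXPECTED
    if extra:
        print("strict consumer rejects the payload:", extra)

Whether strictness is a bug or a feature depends on the consumer, which is why even "purely additive" changes deserve a heads-up.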
Something like Microsoft does with their interfaces.
They have multiple versions of calls. The older ones function as before and never change. Want different behavior - here is your_interface_v1(), your_interface_v2(), etc.
You still alert team about new functionality but they're free to consume it at their own pace. This of course involves a boatload of design and planning.
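Sketching that pattern with the names from the comment above (the bodies are obviously invented):

    def your_interface_v1(order_id: str) -> dict:
        # Frozen forever: existing callers keep getting exactly this behavior.
        return {"order_id": order_id, "total": 100}

    def your_interface_v2(order_id: str) -> dict:
        # New behavior lives under a new name; callers opt in at their own pace.
        return {"order_id": order_id, "total": 100, "currency": "USD"}

Both stay deployed until nobody calls v1 anymore.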
I am in general against microservices and consider them a last resort when nothing else works. To me a microservice is mostly my monolith interacting with another monolith.
When a monolith becomes big enough that it needs 2 teams, I usually handle it by having each team release their part as a library that still gets linked into the same monolith. That is my version of a "microservice", when the only reason for it to exist is to have two or more "independent" teams.
I'd guess that 90-95% of tickets do not alter an existing API in a non-backwards-compatible way.
Not all changes result in a change to the way your service is called, and even those changes can (with some effort and care) be made backwards-compatible. Performance-level changes are one obvious one - for example, I wouldn't expect to have to keep my caller in the loop if I decrease my API's latency by 50ms, even though it might be a good idea.
But other behavior changes are also not necessarily something that requires a team to be alerted. A good design provides an abstraction where the caller shouldn't have to care about the underlying implementation or details of how a request is fulfilled.
Never break the API. If you need a new API contract use API versioning so that consumers can upgrade when it's convenient. Additionally, use contract testing.
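For example, a bare-bones consumer-side contract test might look like this (the endpoint and fields are hypothetical); tools like Pact formalize the same idea:

    # Pins down the parts of the provider's response this consumer relies on.
    # Run in CI (e.g. under pytest) against a staging instance or a recorded contract.
    import json, urllib.request

    def test_orders_contract():
        with urllib.request.urlopen("http://orders.staging.internal/v1/orders/42") as resp:
            assert resp.status == 200
            body = json.load(resp)
        assert {"id", "status", "total"} <= body.keys()  # required fields still present
        assert isinstance(body["total"], (int, float))   # and still the right type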
That's why finding the right boundaries between services (yes, services, microservices is a harmful buzzword) is important, so that you minimise having to communicate and coordinate with other teams.
100%. Boundaries are extremely important, and if you're a service which 7 other teams rely on, there's an issue with your teams and the way you've set up your services. Bounded contexts!
Their architecture didn't provide boundaries clear enough for microservices; however, that isn't the case for many medium-to-large projects.
(By the way, just because there's still quite a bit of coupling between services, doesn't mean there aren't clear boundaries - Microservices can communicate with one another all the time and still be justified in being decoupled)
There isn't an absolute answer to monolith vs microservices - It depends case by case.
Instagram was built using Django and I'm unsure of ig's architecture today, but it remained monolithic for a very long time (at least till late 2019), and if that architecture sufficed for Instagram, I'm sure it would suffice for many other projects.
However, still, it's not a this or that as many of the comments here would seemingly imply - Again, it's HEAVILY dependent on the case.
> Microservices allow your team to have control over the full stack they require to deliver a feature.
This is honestly pretty rare, at least in my experience. What I have seen is that organizations will buy in to the microservices hype, then dictate to their teams what stacks, deployment paradigms, etc. (sometimes even down to the sprint cadence) are acceptable.
My experience is the opposite of yours. Teams I've worked with get massive freedom to implement their services with any (reasonable) language + framework: Rust, Python, Java, Go, C++, C#, and so on.
Seems like an organizational decision
My experience was mostly at small companies with dedicated ops personnel and in government. I worked with a large number of teams that had been chartered with implementing microservices within government agencies, and every single one of them was told what stack they were going to use, either by agency leadership or by infosec personnel.
There was slightly more freedom of choice when I was at AWS, but compliance requirements and tooling support basically strongly encouraged everyone to adopt a standardized stack.
All of which is to say, I get that what you're describing is in theory what microservices are supposed to allow, but I have yet to see it actually work that way in practice.
Honestly, "every team in the same company having full control over what kind of stack they want to use" sounds like a nightmare. Soon you have services running in Python, C++, Java, Go, and node.js, with five different internal libraries talking to MySQL, all slightly incompatible with each other. (Let's hope someone doesn't start up a PostgreSQL server.)
or even worse: the teams DO have full control over their stack and now you have dozens or more different technology stacks at the company and expertise isn't shared between them
Extra fun when you're on-call for one of those stacks that only 2 people in the company have experience with, and you can't get hold of either of them.
I think that moving a monolith to a microservices architecture is only justified if the organization size is large enough, so there are different business teams/departments. In that scenario, each team/department will own a microservice and this will speed up the development on each team. Still, every time there is a change in any microservice API, that will require coordination. For a small company (12 developers), I can't see the benefit.
We have a hybrid model: modular monolith + microservices. It has worked well so far.
The core of the product is found in the monolith. We use bounded contexts ("modular monolith") with strictly separated concerns. There are no immediate plans to split the core into microservices (unless absolutely necessary) because the logic between modules is too intertwined and coupled. Splitting the core into microservices would overcomplicate everything for us, and the performance would suffer.
As for microservices, we usually use them for:
1) critical infrastructure which needs to be fast and scalable (for example, the auth service)
2) isolated helper services, for example a service which allows us to integrate with third-party platforms
3) isolated features/products which minimally interact with the rest of the system; for example, we have an optional feature which shares the UI with the rest of the application, and uses some of its data, but ultimately it's a product of its own, it's developed separately with its own codebase, and integrated into the monolith
So I think it's a false dichotomy that you either have a monolith, or microservices. You can use both, they can complement each other.
> We use bounded contexts ("modular monolith") with strictly separated concerns.
>because the logic between modules is too intertwined and coupled.
That doesn't sound very modular? If your bounded contexts are intertwined, I don't think they can be considered bounded contexts. A modular monolith would only communicate between contexts through well-defined and non-leaky APIs, and that's the opposite of intertwined.
They only communicate through well-defined APIs, and there's a rule that cross-context API calls can only happen in the anti-corruption layer (we use a tool to check it at build time).
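Not their tool, but a minimal sketch of the kind of build-time check that can enforce "cross-context calls only via the anti-corruption layer" (the package names are invented):

    # Fails the build if a module in one bounded context imports another context
    # directly instead of going through its acl/ (anti-corruption layer) package.
    # Only checks "from x import y" forms; a real tool would cover more cases.
    import ast, pathlib, sys

    CONTEXTS = {"billing", "catalog", "shipping"}  # hypothetical top-level packages under src/

    def violations(root="src"):
        for path in pathlib.Path(root).rglob("*.py"):
            ctx = path.relative_to(root).parts[0]
            in_acl = "acl" in path.parts
            for node in ast.walk(ast.parse(path.read_text())):
                if isinstance(node, ast.ImportFrom) and node.module:
                    target = node.module.split(".")[0]
                    if target in CONTEXTS and target != ctx and not in_acl:
                        yield f"{path}: imports {target} outside the ACL"

    if __name__ == "__main__":
        problems = list(violations())
        print("\n".join(problems))
        sys.exit(1 if problems else 0)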
What I meant by intertwined (maybe a wrong word, I'm not a native speaker):
1) there's a lot of data/logic dependency between the contexts (i.e. a context in its operation depends on N other contexts), although we at least disallow circular references; it's unfortunately dictated by business rules and I'd like to see the contexts be more isolated and self-contained. Some could say that if a change in the requirements requires changing many contexts at once, maybe it's one fat context after all - and they may be right, but we enjoy the current modularization effort; one big fat module would be far less manageable for us.
2) there are occurrences of temporal coupling; there are synchronous operations that span several contexts, with a lot of data flowing back and forth
Now, it's easier to manage it in a monolith, in the same process, because:
1) there are no network trips back and forth in case of complex operations with a lot of data
2) no retry logic in case of network connectivity issues
3) DB connections/locks and other in-memory structures can be reused
4) same codebase, so easier to reason about
Microservices require more care and more complex solutions:
1) distributed transactions are hard
2) eventual consistency is hard
3) the idiom "DB per microservice" makes managing the infrastructure harder
4) deployment is harder (if you have changes in several related contexts, there's only 1 deployment in the monolith as opposed to N deployments of microservices)
5) you have to manage different codebases/repos, can't see the whole picture
6) you have to defend against network connectivity issues, microservice unavailability etc.
7) debugging is harder, you can't just step into another microservice like you do with in-memory modules
8) new devs need to be taught all that
The list can go on and on. So we don't try to make all our modules/contexts into microservices just because we like microservices, we have to substantiate a move to a microservice with proof that it will make development/scalability easier for us, and that the advantages outweigh the disadvantages.
Just last week I found an interesting and unexpected (for me) advantage of microservices. We have two monoliths written in different stacks/frameworks, and developed by different departments. Monolith #1 is being split into microservices, we already have around 20 microservices. Monolith #2 kind of lags behind, and there are certain problems that they encountered, which are already solved in one of the microservices split from monolith #1. The solution I came up with is to simply reuse the microservice from monolith #1 in monolith #2 (the service is isolated and self-contained so it doesn't care who uses it). I found it to be a rather elegant and simple solution for cases when you want to reuse an implementation but can't package it into a library because clients have different stacks.
There is a big misconception that a monolith has to be fully deployed every time. A well designed monolith can be partially deployed.
Can you say more about that? I'm a .NET developer and I don't see how this could be possible without having several applications.
If a monolith has routes /a and /b, you can deploy the whole service to 2 servers with a proxy where all of the requests for /a goes to server 1 and all the requests for /b go to server 2. Server 1 has all the code to respond to /b but will never see that request.
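A bare-bones illustration of that setup (hosts and ports invented); in practice this is usually an nginx rule or a load balancer config rather than hand-rolled code:

    # Route-splitting proxy in front of two pools running the *same* monolith build.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAMS = {
        "/a": "http://monolith-pool-1:8000",  # pool 1 only ever sees /a traffic
        "/b": "http://monolith-pool-2:8000",  # pool 2 only ever sees /b traffic
    }

    class RoutingProxy(BaseHTTPRequestHandler):
        def do_GET(self):  # error handling and other HTTP methods omitted
            upstream = next((u for p, u in UPSTREAMS.items() if self.path.startswith(p)), None)
            if upstream is None:
                self.send_error(404)
                return
            with urlopen(upstream + self.path) as resp:
                body, status = resp.read(), resp.status
            self.send_response(status)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), RoutingProxy).serve_forever()

Both pools run the exact same code; only the proxy decides which routes each pool ever serves.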
Feature flagging comes to mind, just don't expose the pieces that don't work or are in-progress? or .exclude or something.
How?
The 'monolith' (which I find a silly term but I'll use it here) can expose different parts of itself as services. As long as those services can be versioned and are backwards compatible, you can deploy the monolith using any schedule or notification mechanism you like.
If the monolith is composed of modules with a DAG-like dependency structure (e.g. maven projects), then pieces of the monolith can be deployed alongside the dependencies they need.
I think the problem is there's no popular framework that makes this easy (or is there?)
Depends on the ecosystem. I've found this straightforward with Maven / sbt + gRPC / Akka / Thrift, and then more difficult in environments that don't have baked-in concepts of packages and module deployables, but from experience the mention of any of those technologies can start a flamewar of good and bad experiences therein :).
I think many microservice implementations are more complex than necessary but I also am extremely skeptical of someone’s competence if the database is on the same compute instance as everything else
Rephrasing: "this person, who is probably solving a very different problem with different design constraints than me, is doing things differently than the handful of ways I have ever seen in my limited career, and is therefore stupid."
if I meant to say that, I would have said that
for anybody else passing by, the people still doing things like a LAMP stack on a single compute instance are getting the same user experience issues under load that we solved for over the last decade or so. I'm just not running into people that have looked at a system design tutorial in the last 10 years and thought "ah, this is the use case where I can reject all these guidelines because I'm so experienced"
I’m skeptical because they’re on IIS in 2022.
> Recently our development team had a small break in our feature delivery schedule. Technical leadership decided that this time would be best spent splitting our monolithic architecture into microservices.
Maybe redesigning the architecture of the product just because there is time vs. there is a pain point/problem that needs solving is already a red flag. In this context it feels like “micro services” was a hammer looking for a nail, and they had no such nail.
Edit: typo
Exactly.
Microservices can solve for some problems, eg: scaling infrastructure in a non-uniform manner or scaling development velocity non-uniformly across many teams.
But there are also tons of other ways to solve these problems. The mistake is in assuming that you need microservices to do x, without really critically thinking about what is actually stopping you from having x right now.
The move to microservices (or any similar kind of rewrite efforts) should be undertaken only when it's painfully obvious that it's needed.
I always start with a monolith while keeping microservices in mind. Have clear communication boundaries, avoid shared state as much as possible, consider asynchronous and parallel processing needs, etc.
Actor systems are a natural fit for this eventual de-coupling. What starts as a simple actor w/ a mailbox can eventually grow to a standalone REST service with minimal headache in architectural refactoring.
Do you have an auth service that does not do API? Does your API ask the auth rather than reaching into the auth table to see who is authorized? When you send an email, do you do it inline or do you trigger a push to a queue with separate worker(s)? Does your externally accessible API talk to internal services using a predefined protocol rather than reaching directly into a database?
Congratulations, you have micro services!
As someone who has driven the migration from a monolith (just set environment variables and magically the same codebase becomes auth, notifications, workers, web and API, and the same codebase reaches into every single database and talks to every single service) into microservices because simple features were taking months to implement, I can confidently say that even today, in 2022, an average organization does not have the tooling or the team to do a monolith. Monolith is a cargo cult. Break stuff into digestible chunks, externalize your own internal libraries if they are shared, version the interfaces and stop worrying about microservice complexities.
Some perspective from Netflix. Around 10k employees [1] (could not find how many are working with software), more than 1,000 microservices [2].
The second article also provides some insight into the services. Those make sense to me - they truly sound like independent, relatively large pieces of software. Not like the "LoginService" type of things you sometimes see.
A few examples: 1) create a main menu list of movies, 2) determine your subscription status to provide content relevant to that subscription tier, 3) use your watch history to recommend videos you may like.
[1] https://www.macrotrends.net/stocks/charts/NFLX/netflix/numbe... [2] https://www.cloudzero.com/blog/netflix-aws?hs_amp=true
My team built an application using multiple not-so-well-thought-out microservices, and it ended up creating a lot of unnecessary maintenance and complexity and has been a long-term pain. I wish we had just built within a single service.
But, my company split out a much older, larger monolith over many years into separate services with clear ownership across a variety of teams. This has been a huge benefit, coming from clear ownership, API boundaries, and separation of concerns.
So neither monolith or microservices are a clear winner. It depends on context. An easy litmus test, IMO, is that a single dev team will get little to no benefit from managing many microservices, but a company scale problem will get a lot of benefit from having each team manage and deal with an independent service.
From what I gather microservices architecture works in a large enough organization where there are enough teams to manage each individual microservice. If your business logic allows for nicely isolated modules, which can almost act as separate "products" that each individual team builds and then the rest of organization dogfoods, then sure. But if there's a single underlying dependency, that won't work. If there're complex interdependencies, it won't work (as nicely). If you have a small team, it won't work. If requirements frequently change, it won't work.
Ultimately it works well if you have something very high-scale with an enduring set of requirements.
> We have approximately 12 developers spread across 2 feature teams and a support team.
I started work at Amazon in 2001 when they were near the beginning of the transition to microservices. I think they had a couple thousand software developers at that time.
As with all architectural patterns there are tradeoffs. Microservices for one thing are not functions. Granularity is an important concern.
A DDD approach up front will help with granularity.
The other leg is serverless support. Without that you are stuck maintaining infrastructure in tandem with all the other considerations, which takes a lot of specialists - lots of engineers.
Definitely a game of scale and not for small organizations.
However, if scale is the key ingredient for success and the value proposition is based on scale, then this kind of architecture is worth looking at.
That said, all shops are not Netflix or AWS...
> Because we couldn’t isolate any of our services properly, this was going to mean that we would be left with a significant amount of duplication. For example, we identified one particularly complicated and essential piece of business logic that would have to be copy-pasted and maintained across 4 of the planned microservices.
Wouldn't this piece of business logic be best placed in an import-able module? Then, that module would be imported by those 4 microservices and problem solved ...? I don't really understand this argument.
I have a very strong objection to this line of thinking.
Effective use of microservices depends upon a strong, meaningful boundary between the services and that boundary should be business driven, not code driven. As soon as you start dealing in packages of code[1], there’s no longer a meaningful boundary between your services, instead the boundary is completely arbitrary and each service becomes a microservice in name only.
If every microservice knows about the business logic for generating basket prices, whether the code comes from a package or not, you no longer have microservices… you have a lot of monoliths.
I joined a company that did this and it was one of my worst experiences as a software engineer, I would never recommend it.
[1] specifically packages containing business logic. Packages containing functionality for cross-service communication etc. are very reasonable.
> As soon as you start dealing in packages of code[1], there’s no longer a meaningful boundary between your services, instead the boundary is completely arbitrary and each service becomes a microservice in name only.
While this sounds very radical (to me at least), I mostly understand how you've come to this conclusion. Obviously "just one package" is going to lead to further complexities down the line, and perhaps many more packages than that.
Perhaps a dedicated microservice for this piece of business logic would be better, as you suggested.
Perhaps the language/stack they were using wasn't conducive to this, but honestly I can't think of one where this would be a problem they couldn't solve with a module/package that performs the essential business logic piece.
Or moved into its own microservice and then RPC'd to from those other services. But yeah.
Why would you consider microservices if you are only 12 developers?
Because the team might already be comfortable working in that way? Because certain parts of the application might require specialised implementations and very natural lines of separation fall out?
I'm in a team of 4 and the few APIs we expose would be considered microservices. We did that because it was easiest and fastest for us to build and maintain, and the features we provide were all quite distinct.
Most of the conversation so far has focused on the development benefits of microservices (decoupling deployments, less coordination between teams, etc). Small teams don't really have this problem, but there are other benefits to microservices. One of the biggest is scaling heterogeneous compute resources.
Suppose, for example, your webapp backend has to do some very expensive ML GPU processing for 1% of your incoming traffic. If you deploy your backend as a monolith, every single one of your backend nodes has to be an expensive GPU node, and as your normal traffic increases, you have to scale using GPU nodes regardless of whether you actually need more GPU compute power for your ML traffic.
If you instead deploy the ML logic as a separate service, it can be hosted on GPU nodes, while the rest of your logic is hosted on much cheaper regular compute nodes, and both can be scaled separately.
Availability is another good example. Suppose you have some API endpoints that are both far more compute intensive than the rest of your app, but also less essential. If you deploy these as a separate service, a traffic surge to the expensive endpoints will slow them down due to resource starvation (at least until autoscaling catches up), but the rest of your app will be unaffected.
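As a concrete (made-up) example of that split, the web tier might forward only the expensive path to a separately scaled service:

    import json, urllib.request

    ML_SERVICE_URL = "http://ml-inference.internal:9000/score"  # GPU-backed pool

    def handle_request(payload: dict) -> dict:
        if payload.get("needs_ml"):  # the ~1% of traffic that actually needs a GPU
            req = urllib.request.Request(
                ML_SERVICE_URL,
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        return {"result": "handled on a cheap CPU node"}  # the other ~99%

Only the ml-inference pool needs GPU nodes, and each pool can autoscale on its own signal.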
So sure, maybe make the ML model a separate service, but you don't really have the same driver for other services; stateless server processes tend to need the same type of resources, only in different amounts, and you don't really gain anything by splitting your workload based on the words used in your domain description.
Real-world monoliths often do have some supporting services or partner services that they interact with. That doesn't mean you need a "micro-service architecture" in order to scale your workload.
Well, no. You can deploy the same monolith to two different clusters with different resource configurations.
In fact, this is what you usually do with "worker" nodes that do background jobs.
And you can always have feature flags/environment variables to disable everything you don't need in a given cluster.
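A sketch of what that looks like in application code (the role names are invented): the same build serves web traffic in one cluster and runs background jobs in another.

    import os

    ROLES = set(os.environ.get("APP_ROLES", "web,worker").split(","))

    def start_http_server():
        print("serving HTTP")    # stand-in for the real web entry point

    def start_job_consumer():
        print("consuming jobs")  # stand-in for the real worker loop

    if __name__ == "__main__":
        if "web" in ROLES:
            start_http_server()   # the web cluster sets APP_ROLES=web
        if "worker" in ROLES:
            start_job_consumer()  # the worker cluster sets APP_ROLES=worker

Same repo, same image, different APP_ROLES per cluster.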
I'm not saying you have to use microservices to solve these problems, just that they are potential reasons why you might want to, even with a small team of developers. I would also argue that if you're deploying the same codebase in two different places and having it execute completely different code paths, you're effectively running two separate services. Whether or not you decide to deploy a bunch of dead code (i.e. the parts of your monolith that belong to the "service" running on the other cluster) along with them doesn't change how they logically interact.
It can sometimes make sense even that small, like if the team is split across different geographical locations and/or time zones.
And that, I think, is how you should approach microservices: use them to solve an organizational problem, not a technical problem.
I'm not speaking from experience here, but it seems like rather than "moving to a microservices architecture" it would perhaps be better to think more in terms of "splitting out specific functionality X into an independently deployable and hostable service, which should alleviate the specific problem Y that we've been experiencing due to their being too closely coupled" and if there are no obvious X and Y then maybe the "monolith" is fine?
For sure. Find a problem first, then look at solutions. Try one out, see if it helps. If not, try a different solution.
Man-with-a-hammer syndrome is dangerous.
An example to show you how easily the idea of "boundaries" breaks down...
We need to show customers a list of products they registered in a scrollable list. The product registration data is basically a product ID, the user ID and date of registration. It does not contain actual product details.
The product registration data is in one database whilst the product details data is in another. Each has their own data store, API and team.
So it's a classic join problem. To show the customer a list of their registered products including the product name and a thumbnail of it, a call has to be made to get the list of product registrations, plus several dozen individual calls to get the product details (name, picture), one call for each item.
It's terrible UX. Slow and jumpy.
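Roughly, the shape of the problem (the URLs and fields are invented):

    import json, urllib.request

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    registrations = fetch("http://registrations.internal/users/42/registrations")

    # The N+1 pattern: one extra round trip per registered product.
    products = [
        fetch(f"http://products.internal/products/{r['product_id']}")
        for r in registrations
    ]

    # What the page really wants is a join, e.g. a (hypothetical) batch endpoint
    # like /products?ids=1,2,3 -- one round trip instead of N.

Whether that batch endpoint ever ships is exactly the roadmap problem described below.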
Yes, I know...a monolith would also struggle with this scenario as the data sources are split for organizational reasons not relevant to discuss here. You might also create a new service that somehow joins these data sources, although I'm sure that might be an anti pattern.
What is the best solution isn't really my main point, rather that boundaries are not typically respected, whether it is at data level or at service level. The real world doesn't care because data and services are not products. Furthermore, when designing these boundaries you absolutely can not foresee how they're going to be used.
As such, it is incredibly common that for an app/web developer, the API fails to meet needs. It doesn't have the data, you get too much of it, or combinations of data are not efficient to get. In my experience, this is the norm, not the exception.
There's another downside. Now that your services are maintained by autonomous teams, guess what...they really are autonomous. That extra data that you need...sorry, not on our roadmap. Try again in 6 months. Sorry, another team is building a high prio mobile app, their needs come first.
A boundary is not a feature. A boundary is a problem. It makes everything inefficient, slow and complex.
I'm not closed minded, there's a time and place for micro services but I would consider it a last resort. Only do it when having explored all options to avoid it and when things really are bursting at the seams.
The #1 reason to split your serving architecture into microservices is that your application can't fit in 1 server's memory.
If your application fits in 1 server, you have a choice, otherwise you don't.
If your application can't fit in 1 server and you can't split it up, you have to refactor so it can.
If you can't refactor your application to have isolated domains, aka your domain is so complex it must take up an entire server, you have a serious problem.
Clustered applications are unavoidable.
"Microservices had been sold to us as the ideal architectural for perhaps a year now." When he learn that there is no ideal architectural in software? )
Monolith almost sounds cool to me. To be honest.
I remember hearing a story (from a person inside said company) about a reasonably sized company with a 2 (really 1) man dev team (they contract for most of their needs) and how excited the company was to move their internal stuff to microservices and off a monolith.
He looked at me like I was insane when I said a monolith would work better for them.
"What were we trying to achieve again?"
This is the main takeaway here. If you have a problem and think microservices might be a good way to solve it and possibly worth the effort, then go ahead and investigate. But without a clear problem and plausible solution involving MS, it's going to be a big waste of time.
> It is useful to bear Conway’s law in mind when considering the shape of your architecture. It states that your software’s architecture grows in a way that mimics how your organization and teams are structured.
I had always assumed that it was the other way around. Good to be made aware of the alternative.
The debate about microservices and monoliths is really about valuing consistency of tooling over the best tool for the job. Microservices tend to emphasize allowing a developer to use whatever tools, services, and languages they want to implement a service. We define the input to the service and the output from the service, but little in between. The consistency is in the interfaces between the services - how each service is built can be totally different. Monoliths emphasize consistency of tooling and language across services, so there are fewer tools, fewer things to know to operate and develop the application.
You know what? You can totally screw up both architectures, you can have cost overruns, and you can fail to scale. Neither microservices nor monoliths are going to make you succeed or fail.
The real question is, where do you want to put the consistency? Is that the right way to do it for your app? Can your team maintain and keep building, or is maintenance going to blow you up?
Answer: "Because your team is smart and doesn't follow every hype as it happens."
I wish more languages had an abstraction layer above namespaces, classes & interfaces, something like Java modules, to help organize a large monolith.
Instead of creating isolation over a network interface, we add an abstraction to achieve it.
Microservices and SPAs are to me in the same category of "things we do because it is fun" rather than because the business will take any benefit out of it.
The amount of effort wasted is just not worth it in 90% of the cases.
The real problem in monolithic codebases isn't that it's large and needs to be separated- it's that the pieces are logically coupled. Microservices force you into separation but do not force decoupling.
Microservice architecture isn't "better".
Monoliths aren't "better".
Because the whole idea of "better" makes absolutely no sense without context. Sometimes microservices are better in a certain context. Sometimes a monolith is better in a different context. And sometimes, one or the other is "better" but not by enough of a margin to care about.
It's the oldest cliche in the book, but one that this industry seems to hate with a passion: "Pick the right tool for the job."
Sadly, in our world, the received wisdom sometimes seems to be "Use the newest, shiniest, most hyped tool that is what everyone is talking about."
Salutes go out to the folks who can handle microservices, because they have always been a pain in the ass for me. The world needs more libraries, not services
The problem is in the hype.
For example, the tradeoff between centralized and distributed has been made (mostly informally) by big institutions for years. It's not possible for a large bank with multiple overlapping domains and hundreds or thousands of dev teams (some of them outsourced/offshored) to have all of its code in a single repo or a single executable. And not all of its applications have the same requirements (technical, scale, etc.) either.
SOA came to aid in this case by putting a common integration pattern between the interested parts.
But at some point the idea was hyped, and even small teams with no diverse technical or scale problems started doing simple backends using full blown distributed systems without reason.
Basically: if you don't have problems of scale (domain, technical or people related), going microservices-first is probably not warranted.
Article is from 2019.
Is there any way to read without login?
Isn’t it cropped on the first paragraph for you?
Medium? Nah
It took me a while to accept that microservices are better. Not in every case, but in a surprising number of cases. They really shine when combined with serverless computing. Clear separation of code by a networking call is the next logical step in the encapsulation principle of object-oriented programming. We hide the implementation details and only expose an interface, which creates separation and forces us to stop spaghetti logic. Microservices are the next step in that design pattern, and only with the improvement in container technology and cloud computing has this become achievable (in the sense of there not being so much operations and complexity overhead).
A size 20 shoe is better for a large foot... But not better for a size 15 or size 10 foot.
Saying Microservices are better is the same as me saying "a size 20 shoe is better than any other shoe"... for everyone.
It's not a viable statement in any use case, except for people with size 20 feet.
The business need is what determines the solution necessary.
I know it's not a catch-all. But more often than I would like to admit they bring simplicity and reduce complexity. The more I develop software, the less I can stomach monoliths running on some big server. The other day I was considering deploying an MVP I had written in Django, and instead just ripped it apart and pushed the pieces into their own separate lambdas. Deploying some monolith API like that was nerve-wracking, updates were too, the blast radius is higher, and the composability of components is easier with smaller microservices.
I think the catch is that "Services" are typically the scope at which encapsulation should stop.
They don't need to be "Micro".
This is just arguing over the service boundary definition and not the architecture. There are multiple comments here trying to differentiate between a "service" and a "microservice", which seems like a fool's errand to me.
> Clear seperation of code by a networking call is the next logical step in the encapsulation principle of object oriented programming.
You can have clear separation at the import/library level though. No need to add that extra latency to every call.
Separating services with network requests doesn’t stop you having “spaghetti logic”. All it does instead is add more spaghetti and put it in different bowls.
I hope you're trolling.
If not, enjoy implementing joins across services.
Doesn't this just mean you picked a bad service boundary? You shouldn't have to join across services, ever.
Seems like a straw man.
"just picked a bad service boundary" -- well that's the thing isn't it. If you always pick the right boundary up front, something that is perfect both now and also anticipates any kind of future crazy feature request -- if you can pull that off I'd say any architecture will work well.
But most people get boundaries wrong sometimes. Sometimes very badly wrong. Sometimes the boundaries are historical, set by product owners without technical input, set by a junior developer, set by superficial attributes, and sometimes even the most experienced developer-architect just makes a mistake.
And the whole point of not doing microservices is you don't have a huge investment in your boundaries, it's more feasible to change them once you inevitably now and then realize you got them wrong.
Isn't the point of making services as small as possible so that you can easily shift boundaries? Isn't DDD (a common companion of microservices) all about constantly shifting business domains?
We've gotten boundaries wrong tons of times. We change them, which includes a migration script to move historical data from one service to another, if possible. Yes, it's work, but it's not any more work than having everything crammed into the same monolith and having to deal with all the downsides therein.
Well the parent talked about not joining across services. Then there are hard limits to how small you can make them.
And what need to join with what is something that can change.
So if you make services very small, and no joins across services, you get a lot of copies of the same data everywhere...
And if you make the services too small, you have just moved the exact patterns you would have inside a monolith into your APIs... how can that be easier?
You say "downsides of monoliths", but monoliths aren't a homogeneous thing; they come in all shapes and varieties. So do microservices.
Myself, I happen to have experienced microservice systems that are a real mess, and pretty clean monolith designs. Consider for instance an event-sourced service where every endpoint a) reads events from the database, b) if business rules allow (involving possibly external calls), writes an event to the database. No CRUD. This pattern keeps every handler/business rule reasonably isolated from the others, and it doesn't matter if it runs in 1 or 10 services...
It will eventually happen even if you manage to create the perfect boundaries, e.g. for reports & statistics.
Reports and statistics, especially on material that crosses service boundaries (but even on single-service information, to keep services single-responsibility) are their own logical services, which operate on copies of data received from (often multiple) upstream services. (In many real world cases, you’ll want this functionality in a data warehouse, but there are some cases where some of it may be in something that looks like normal services, whose data that can change other than by push from other services deals with reporting and report-delivery configuration, not the business data on which reporting is done.)
You shouldn't be doing reporting and statistics on live data, but in a data warehouse.