
The monolith: large, impenetrable, legacy, evil? The rise of Microservices has infected the developer zeitgeist with a fervent hatred for the monolith. So is the monolith evil? Or is it, as I would argue, just another tool in the software architect’s arsenal, albeit one that should be used judiciously and with appropriate forethought?

A monolith, yesterday.

There have been some recent examples of development teams that began the journey to the promised land of Microservices only to retreat to the comfort of monolithic development. Why is that? Were they wrong, or do they possess some innate wisdom that the rest of us lack? I suspect that it’s neither. Whilst I couldn’t comment on specifics, I would make the following general observations:

  • Our development tools are still largely oriented to monolithic development. It is much easier to ‘just add some more code’ to a monolith than it is to create a new service. Microservices take discipline to do well.
  • Language choice matters. Developers who are used to working with static, high-ceremony languages such as Java develop a certain mindset that I think favours the monolithic approach. Those who work with more dynamic languages such as JavaScript and Python are, on the whole, more likely to favour the distributed approach. After all, it takes much less effort to create a service in Node.js than it does in Java (see the sketch after this list).
  • Microservices require investments in CI/CD, DevOps and newer deployment technologies such as containers and Serverless.
  • Microservices != many repositories. This is a common complaint and misconception. In my view a repository per service is an anti-pattern that causes significant unnecessary overhead for a team. Use a mono-repo; you can still do Microservices. See our thoughts on this here.
  • If you’re trying to achieve distributed transactions involving the co-ordination of multiple Microservices you’re probably doing it wrong!
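To make the ‘ceremony’ point concrete, here is a minimal sketch of a standalone service using nothing but Node.js built-ins (written in TypeScript; the file name, port and route are purely illustrative):

    // pricing-service.ts - a hypothetical, minimal standalone service.
    // Run with: npx ts-node pricing-service.ts
    import { createServer } from "http";

    const server = createServer((req, res) => {
      // One illustrative endpoint; a real service would add routing,
      // validation, logging and so on.
      if (req.url === "/health") {
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ status: "ok" }));
        return;
      }
      res.writeHead(404);
      res.end();
    });

    server.listen(3000, () => console.log("pricing-service listening on :3000"));

That is the whole service. Achieving the same thing in Java has traditionally meant a build file, a framework and a deployment artefact, although modern tooling such as Spring Boot has narrowed the gap.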

Extremism seems to be popular these days, unfortunately. I think it is a mistake to adopt a dogmatic approach to software development. The world is shades of grey, not black and white, and a balanced approach is required. Sometimes a monolith is just fine; sometimes you should make the investment in Microservices.

Which raises the question: when should one adopt either of these broad architectural paradigms, and what are the appropriate circumstances for each?

Having spent a lot of time over the last few years helping teams rescue their monolithic platforms and decompose them into smaller, more distributed architectures, I think it comes down to entropy. Entropy is a measure of the ‘disorder’ in a system. More precisely:

‘Entropy is an extensive property of a thermodynamic system. It is closely related to the number of micro-states that are consistent with the macroscopic quantities that characterize the system (such as its volume, pressure and temperature).’ Source: https://en.wikipedia.org/wiki/Entropy

Moreover, the second law of thermodynamics states that the total entropy of an isolated system can never decrease over time. Put another way, ‘in a closed system entropy or disorder will always increase’.
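For completeness, the usual symbolic statement for an isolated system with entropy S is simply:

    \Delta S \geq 0

that is, entropy can hold steady or grow, but it never decreases on its own.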

If we consider a software development team and the code that they are producing as a closed system, then without any external forces (for example, external code review and intervention) the entropy expressed in the code will increase over time. I’m stretching the analogy to breaking point here, but bear with me.

The point is that we have all seen rotted code bases. Whilst as developers we do our level best to keep the code clean, over time technical debt will build up. The impact of that technical debt, or code entropy, is far more significant in a large monolithic codebase than in a system comprised of small cooperating components. Why is that? Because the tendency with a large monolithic codebase is to struggle onwards even in the face of mounting entropy, because ‘we can refactor this later when we have time’, but the time to pay back that debt never materializes in practice.

In a microservice system the entropy of each small component is effectively firewalled off from the rest of the system. If one component decays or becomes unfit for purpose then it can simply be replaced.
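To illustrate what ‘firewalled off’ means in practice, here is a hypothetical sketch (the PricingClient name and endpoint are invented for the example): the rest of the system depends only on a small contract, never on the internals of the service behind it.

    // pricing-client.ts - a hypothetical consumer-side contract.
    export interface PricingClient {
      quote(sku: string, quantity: number): Promise<number>;
    }

    // One possible implementation, talking HTTP to the service.
    // If the service behind it rots, it can be rewritten or replaced
    // wholesale; callers only care that the contract still holds.
    export class HttpPricingClient implements PricingClient {
      constructor(private readonly baseUrl: string) {}

      async quote(sku: string, quantity: number): Promise<number> {
        const res = await fetch(
          `${this.baseUrl}/quote?sku=${encodeURIComponent(sku)}&qty=${quantity}`
        );
        const body = (await res.json()) as { price: number };
        return body.price;
      }
    }

The decay of any one component is contained behind its contract.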

Overall, I believe that Microservices are a better pattern for managing code entropy. Given that most of the cost of software lies in maintenance, this is a key benefit of the approach.

So where do monoliths fit in? Well, if you are building a system as a prototype, or you know that the length of time the software will be in operation is relatively short, then a monolith is just fine, because the system will likely be decommissioned before you start to see significant costs due to entropy. It makes no sense to invest the extra effort and time in Microservices in this case.

Also, if the scope of the system is very well defined and will not increase over time, again a monolith is just fine, because entropy is unlikely to get out of hand under these conditions.

As a rule of thumb then:

  • Software that has a short lifetime or a very well-defined scope can most likely be built and deployed more cost-effectively as a monolith.
  • Larger software platforms that will need to change and flex over a long lifetime are better built using Microservices.

Of course your mileage may vary!

Get in touch with us to share your thoughts.