The current architecture style “du jour” is Microservices. You can’t get away from articles that mention them. People write about when to use them and when not to, how big they should be, what they help you achieve and how they will cause problems. If you are working on projects using Microservices these debates will rage, opinions will conflict and sometimes, just sometimes, something useful will be delivered.
Whilst these conversations are going on, one thing is often missing, and it can be the key to the success of a project: a common understanding of the term “Service”.
Once you identify that a common understanding is required, the next problem is to agree. If you ask three different people to define a service you get three different opinions, three different perspectives. I’m not going to write about how you reach a consensus, that is a different subject. What I will describe is my understanding of services… take it or leave it.
When I visualise a service I see an independent unit of software with discrete and clear boundaries. I home in on the now-elderly “Services are Autonomous” tenet from the original SOA Four Tenets. Don Box is credited with coming up with the four tenets. When he defined “Services are Autonomous” he meant that services should be built, deployed and versioned independently from each other. People jumped on this bandwagon and duly built systems this way, but those systems often turned out to be brittle.
These systems delivered a network of services with many runtime dependencies. Situations could arise where many services shared dependencies on a small number of core services. A failure of any one of these core services, which had been built, versioned and deployed independently, brought the whole system down. It was not just one application that was affected. The organisations that had fully embraced SOA were sharing services across multiple applications, so now a single failure had the ability to disrupt a whole suite of business applications.
To understand why this happened you need to go back to the Fallacies of Distributed Computing. The approach described above introduces temporal coupling, where all services are assumed to be up and responsive at all times. The fallacies tell us that, well, this is a fallacy. Instead, your architecture should expect regular failures in a network of services.
More recently people (notably Udi Dahan) have advocated that the “Services are Autonomous” tenet should be extended to also stipulate that services should be able to function autonomously at run time. That is, they should be able to function as much as possible without direct influence or dependency from or on other services. That is not to say that services live in a vacuum. Services must of course be able to interact with other services, but now they use asynchronous one-way messaging.
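To make the idea concrete, here is a minimal sketch of one-way asynchronous messaging between two services. An in-process queue stands in for a real message broker, and the service names (OrderService, BillingService) are purely illustrative, not from any particular system:

```python
# Sketch: one-way asynchronous messaging. The publisher does not wait
# for a reply, so it keeps working even if the consumer is slow or down.
import queue
import threading

message_bus = queue.Queue()  # stand-in for a durable broker (RabbitMQ, SQS, ...)

def order_service_place_order(order_id):
    # OrderService publishes an event and returns immediately;
    # it has no runtime dependency on BillingService being available.
    message_bus.put({"event": "OrderPlaced", "order_id": order_id})
    return "accepted"

billed = []

def billing_service_worker():
    # BillingService consumes events at its own pace, whenever it is up.
    while True:
        msg = message_bus.get()
        if msg is None:  # sentinel used only to stop this sketch
            break
        billed.append(msg["order_id"])

worker = threading.Thread(target=billing_service_worker)
worker.start()

print(order_service_place_order(42))  # "accepted" - returned immediately
message_bus.put(None)  # shut the worker down for the sketch
worker.join()
print(billed)  # [42] - billed later, on the consumer's schedule
```

The point is the shape of the interaction, not the queue library: the caller’s success does not depend on the consumer being responsive at that moment, which is exactly the temporal coupling the synchronous style fails to avoid.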
I am firmly in this camp. My service architectures aim to minimise temporal coupling and therefore require some form of asynchronous communication. Depending on the situation this may be pseudo-asynchronous patterns built on top of HTTP or it may be a fully fledged messaging system. I often face the challenge that this type of approach is too complex or just too hard. Surely it is as simple as Service A synchronously calling Service B?
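One common pseudo-asynchronous pattern over HTTP is “accept and poll”: the server acknowledges a command with 202 Accepted and a status resource, and does the real work later. The sketch below models the handlers as plain functions rather than a real HTTP server, and the route names and job store are assumptions for illustration:

```python
# Sketch: pseudo-asynchronous request handling over HTTP (202 + polling).
import uuid

jobs = {}  # job_id -> job record; stand-in for a persistent job store

def post_command(payload):
    # POST /commands -> 202 Accepted plus a Location header to poll.
    # The caller is released before any work is done.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "payload": payload}
    return 202, {"Location": f"/jobs/{job_id}"}

def get_job_status(job_id):
    # GET /jobs/{id} -> the caller polls until the job completes.
    return 200, jobs[job_id]["status"]

def background_worker():
    # Runs independently of any caller's request/response cycle.
    for job in jobs.values():
        if job["status"] == "pending":
            job["status"] = "done"

status, headers = post_command({"order_id": 42})
print(status)  # 202: the caller was not blocked on the work
background_worker()
job_id = headers["Location"].split("/")[-1]
print(get_job_status(job_id))
```

It is more moving parts than a blocking call, but the caller’s availability is no longer chained to the callee’s, which is the whole argument of this post.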
I smile and then refer the challenger to this post.