After generations of finding themselves locked into successive technology platforms that leave them unable to modernize without massive cost and disruption, enterprises today are looking for more flexible options. Composable architecture seems to offer an answer, holding out the promise of plugging different components together using programmable interfaces known as APIs, with the ability to keep upgrading, adding and replacing components over time. But enterprise IT leaders are rightfully wary — they've heard this promise many times before, yet always ended up locked into ageing platforms with no easy way out.
One of the challenges in the early days of an emerging technology architecture is that there's no authoritative guidance on how to get it right. The pioneers haven't yet figured out how to identify and explain the core principles, while others latch on to the emerging trend but misinterpret what it truly entails. Confusion abounds. You only have to look at the early days of cloud computing to see how messy this can get.
It's therefore no surprise to see the proponents of composable architecture — particularly vendors of composable commerce and digital experience platforms (DXPs) — taking steps to provide guidance, from the Composable Charter compiled at the recent user conference of DXP vendor Contentstack to the vendor certification provided by the MACH Alliance, an industry organization that promotes composable architecture. But a clear, simple definition remains elusive.
I went searching for clarity at two events in June, first at AWS Summit in London, and a week later at the MACH Alliance's annual conference, MACH TWO. Here I discovered that the MACH Alliance is already having to fend off misleading claims by established vendors about MACH compliance. But I also felt that the four principles embodied in the MACH acronym — Microservices-based, API-first, Cloud-native SaaS and Headless — still leave far too much room for misinterpretation.
For one thing, how those APIs are either orchestrated or choreographed matters a lot in achieving a truly composable architecture, as I discussed earlier this week. Now we turn to the equally important matter of how the APIs themselves are designed and implemented. As part of this, the decisions about what functionality to include in each independent microservice are also significant.
How the MACH Alliance defines API-first
It turns out there's a lot of nuance in how the MACH Alliance understands the term 'API-first' when it comes to certifying an application as MACH compliant. Becoming truly composable is much more than simply decomposing an application into APIs. What type of APIs you have, and how you allow those APIs to interact are both crucial considerations. Casper Rasmussen, President of the MACH Alliance and also Group SVP of Technology at tech consultancy Valtech, outlined some of the key characteristics when we met at MACH TWO. The API coverage, in other words the functionality offered via API, must encompass all of the functionality of the application or service, including configuration management. He provides an example:
[This is] so you can automate and script the definition of whatever configurations you have, whether it's your content models in your CMS, or if it's the currencies and language and market models that you have in your commerce engine. That needs to be completely declarative-configuration-based through APIs. Don't deploy code into it.
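Rasmussen's point about declarative configuration can be sketched as follows. This is purely illustrative — the content model shape, the `apply_model` function and the configuration store are all hypothetical stand-ins, not any vendor's actual API — but it shows the idea of configuration expressed as data and applied through an API call rather than deployed as code:

```python
# Hypothetical sketch: a CMS content model declared as data, then applied
# through a (fictional) configuration API rather than deployed as code.
content_model = {
    "name": "article",
    "fields": [
        {"id": "title", "type": "text", "required": True},
        {"id": "body", "type": "richtext", "required": True},
        {"id": "publish_date", "type": "date", "required": False},
    ],
}

def apply_model(existing: dict, desired: dict) -> dict:
    """Idempotently reconcile stored configuration with the declared one,
    the way a call such as PUT /models/article might on a real platform."""
    return desired if existing != desired else existing

store = {}  # stands in for the platform's configuration store
store["article"] = apply_model(store.get("article", {}), content_model)
```

Because the configuration is plain data, it can be version-controlled and re-applied by scripts, which is what makes the automation Rasmussen describes possible.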
Interoperability must go beyond simply having an interface for querying data, with the ability to subscribe for notifications of changes. He elaborates:
What does the actual implementation of API-first look like? Meaning, what is the responsibility of those APIs? You need to be able to query it in more than one way, for instance, REST versus tRPC versus GraphQL. But you also need to be able to actually subscribe to those APIs in case you're waiting for an event to occur that you need to do something on behalf of.
If you don't have all of that, to be very honest, then you won't end up with the type of architecture that we are evangelizing. You might very well end up with a composed architecture that has integrations coupling systems, but it won't be as decoupled, as orchestrated or choreographed.
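The two interaction styles in Rasmussen's description — querying for current state, and subscribing to be told when it changes — can be sketched in miniature. Everything here is illustrative (the service, event names and handlers are invented for the example, not drawn from any real product):

```python
from collections import defaultdict
from typing import Callable

class CatalogService:
    """Toy service exposing both pull-style and push-style access."""

    def __init__(self):
        self._products: dict = {}
        self._subscribers: dict = defaultdict(list)

    def query(self, product_id: str) -> dict:
        """Pull: ask for the current state on demand."""
        return self._products.get(product_id, {})

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        """Push: register to be notified when something changes."""
        self._subscribers[event].append(handler)

    def update(self, product_id: str, data: dict) -> None:
        self._products[product_id] = data
        for handler in self._subscribers["product.updated"]:
            handler({"id": product_id, **data})

received = []
svc = CatalogService()
svc.subscribe("product.updated", received.append)
svc.update("sku-1", {"price": 20})
```

A consumer that can only query must poll for changes; one that can also subscribe reacts the moment an event occurs, which is what keeps a choreographed architecture loosely coupled.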
GraphQL comes to the fore
GraphQL is becoming the mechanism of choice for composable APIs, supplanting REST and other formats. REST, which became a de facto standard for web APIs in the mid-2000s, has worked well as a lightweight framework for creating, discovering and updating HTTP resources. But because a REST endpoint always returns a pre-defined response, and including new information requires a new version, it has proven unsuitable for many front-end applications. GraphQL overcomes these drawbacks by providing a tree of information from which the client can pick just the information it's interested in — and new information can be added to the tree without the need for a new version. Reducing the amount of information sent helps maintain fast response times without resorting to a synchronous connection that ties up both endpoints. GraphQL also allows for real-time subscriptions, in which a client can request notification of any updates to the information. In a conversation at AWS Summit, Danilo Poccia, Chief Evangelist (EMEA) at AWS, explained why these features are proving popular. He says:
GraphQL was created, originally at Facebook, and the idea was, as a client, I want a fast answer. I want only the minimum information I need, because sometimes I don't need all the information that I can know. Maybe I'm using a network that is not super fast, so I cannot overload it. And also, I want something that is easy to maintain, because as my business features evolve, the backend evolves, my mobile application also continues to evolve ...
When you give me this information in GraphQL I can say 'Give me only the estimated delivery date. I don't want to know all the other hundreds of details that you know, because I just need to display this on the client application' ... Being able to ask only for the minimum amount of information that I need on the client is the reason why GraphQL is very popular as the main interface between the backend and the front end.
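Poccia's delivery-date example boils down to the client naming only the fields it wants from the response tree. The sketch below is not a real GraphQL engine — the order record and field names are invented — but it shows the selection behaviour that a query like `{ order { estimatedDeliveryDate } }` relies on:

```python
# Illustrative only: the backend holds a full tree of information...
order = {
    "estimatedDeliveryDate": "2023-07-01",
    "carrier": "ACME Logistics",
    "items": [{"sku": "sku-1", "qty": 2}],
    "warehouse": "LHR-4",
}

def select(data: dict, fields: list) -> dict:
    """...and the client receives only the subset of fields it asked for."""
    return {f: data[f] for f in fields if f in data}

# The mobile client wants one field, so one field is all that crosses the wire.
response = select(order, ["estimatedDeliveryDate"])
```

Note too that adding a new field to `order` changes nothing for this client — its selection still returns exactly what it always did, which is why the interface never needs a new version.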
Although Rasmussen includes REST in the list of protocols an API should support, it's rapidly falling out of favor because of the need to create a new version each time a REST interface changes. An April article on the MACH Alliance blog about some of the ways in which composable applications may fail to meet MACH criteria includes advice that effectively rules out the use of REST. It says:
In order to be MACH certified, APIs must be versionless. This ensures consistency and stability for development teams when creating new functionality or automated processes.
But even if an application adopts GraphQL, there may still be shortcomings in how the underlying microservices are structured which make them unsuitable for a fully composable architecture. As Rasmussen explains, the API needs to be able to interact in any context with no expectation of a specific UI or data dependency. He adds:
There may be assumptions in the underlying system that GraphQL is organizing, where HTML might be an underlying assumption, and that makes it web-centric, or at least it has assumptions around the type of interface. In my mind, that's a no-go.
Refactoring to serverless
In a presentation at AWS Summit on 'refactoring to serverless,' Gregor Hohpe, Senior Principal Evangelist at AWS and author of The Software Architect Elevator and other books, set out various scenarios for moving to serverless architecture. This is a form of composable architecture in which all functions are provided as discrete, self-contained autonomous services. AWS provides many of these capabilities, including Lambda serverless functions; Amazon EventBridge, a serverless event bus that connects components using event-driven logic; and AWS AppSync, a fully managed serverless GraphQL API service that connects to backend resources.
Hohpe says that refactoring for serverless really means taking as much prescription as possible out of the application code and leaving it to separate automation and workflow logic to determine what happens next. He sums up:
You use the platform capabilities, you reduce application code, and you replace that code with automation calls that have equivalent functionality.
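Hohpe's principle can be sketched in a few lines. This is not AWS code — the event shape, rule table and dispatcher are invented stand-ins for something like an EventBridge rule — but it shows the routing decision moving out of the function body and into declarative configuration:

```python
def handle_order(event: dict) -> dict:
    # Before refactoring, this function would also decide what happens next,
    # e.g. calling notify_shipping(event) directly -- a dependency hidden
    # inside application code. After refactoring, it just emits an event.
    return {"type": "order.accepted", "detail": event}

# "What happens next" now lives in configuration that everyone can read,
# the way an EventBridge rule maps an event pattern to its targets.
RULES = {"order.accepted": ["shipping", "billing"]}

def dispatch(event: dict, rules: dict) -> list:
    """Stand-in for the event bus: route by event type per the rule table."""
    return list(rules.get(event["type"], []))

result = handle_order({"order_id": 42})
routed_to = dispatch(result, RULES)
```

Because the topology is data rather than buried calls, adding a new downstream consumer means editing the rule table, not the function.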
This is important because it makes the topology of the application more visible to everyone. While it often appears simpler and faster to put automation code inside a specific function, this introduces dependencies that may cause problems when the application changes later on. He explains:
Somebody looks across the application, they won't be able to understand the topology, they won't be able to understand the dependencies that you have. And they will break something. It's a classic local optimization fallacy.
Right-sizing your microservices
A final consideration is the granularity of the microservices components. Sometimes it can be faster or more cost-effective to keep a collection of capabilities together as a single component rather than breaking it down further into subcomponents. But where a capability is common to many different components, it makes sense to make it available as an autonomous service rather than building and maintaining many different versions. When building an application from scratch, Poccia says it often makes sense to start out with a monolith and then decompose it when you understand which are the crucial components. He explains:
If you're building something new, and you don't know where the boundary of your services should be, it's better off to start with something like a monolith. Then when you get this experience to understand, 'Oh, this sub-component is critical. Its performance characteristics are different than the rest,' then you can remove it and migrate it to a separate service. Sometimes starting with something too complex from scratch, it's not a good approach.
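Poccia's monolith-first advice can be illustrated with a deliberately tiny example. The shop, the tax rate and the extraction are all invented for the sketch — the point is only the shape of the refactoring, not any real service boundary:

```python
class ShopMonolith:
    """Everything in one place while the service boundaries are unclear."""

    def checkout(self, cart: list) -> float:
        total = sum(cart)
        tax = total * 0.2  # tax logic buried inside the monolith
        return total + tax

class TaxService:
    """The same logic, extracted once it proved to have different needs."""

    def calculate(self, amount: float) -> float:
        return amount * 0.2

class Shop:
    """The monolith refactored to depend on the extracted service."""

    def __init__(self, tax: TaxService):
        self.tax = tax

    def checkout(self, cart: list) -> float:
        total = sum(cart)
        return total + self.tax.calculate(total)
```

The behaviour is identical before and after; what changes is that the tax component can now be scaled, replaced or upgraded independently — which is only worth the overhead once experience shows it matters.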
But many organizations already have a patchwork quilt of existing applications, and have to make choices as to which to modernize first. A composable architecture therefore has to be able to co-exist and interoperate with more monolithic applications and services. The challenge for IT leaders is to figure out a roadmap towards a more composable future without taking a wrong turn that leads to a dead end. For the MACH Alliance, Rasmussen says that it's important to recognize the real-world practicalities that vendors and enterprises face. He says:
There is potentially a religious way of looking at it, and then there is a pragmatic way of looking at it. I'm not saying we need to be overly religious at all times. But that is part of our responsibility to enforce some of those expectations.
The characteristics of a truly composable architecture are starting to emerge, but more clarity is needed from vendors in the space and organizations such as the MACH Alliance to help early adopters figure out whether they're on the right path. They are going to have to counteract plenty of misdirection coming from those wedded to more monolithic architectures, who will promote their own interpretations of composability.
Enterprises shouldn't underestimate how different this new architecture is and how much of a culture change it will require among their IT teams before they can fully embrace it. But it is coming, part of a wider move to what diginomica has called Tierless Architecture. As I'll explore in the next and final article in this series, it's only a matter of time before the architecture moves out of its current stronghold in the field of digital commerce and experience into other enterprise application categories, including ERP.