When people try to cut their costs of owning and using their enterprise technology stacks, it’s natural for them to look at the biggest slices of their pie charts and to figure out how those can be trimmed a bit. Even a small percentage improvement can be a substantial number if the target is big enough – and the legacy systems in many organizations are massive targets.
Like many “natural” behaviors, though, “trim the fat from the fattest” is a strategy that’s worth turning inside out and upside down and maybe even backwards to see if there’s something better. Spoiler: there is, and I call it “trimming the long tail.”
The obvious costs of a tech stack are square footage, power and cooling, servers and switches, systems software (operating systems, backup and other administrative tools, databases) and “company standard” applications like office suites and administrative tools (whether licensed or subscribed). The much less visible costs are intangible, dispersed to the point of being almost fractal dust in the budget, but devastating when they’re all added up: I’m talking about the dozens, or hundreds, of splinters and fragments of a company’s knowledge of facts and its performance of vital processes that are scattered over its inventory of applications.
Did I really say “dozens or hundreds”? I did, and that’s not exaggeration. My Salesforce colleagues at MuleSoft produced their heavily researched 2022 Connectivity Benchmark Report earlier this year and found that “Organizations now have an average of 976 discrete applications, an increase of 133 in the last 12 months.” Yes, that’s 976 for an entire company, but surely the number of applications used by any one employee is a small fraction of that?
It is, according to research published in August at Harvard Business Review, but don’t breathe any sighs of relief quite yet: “Consider an example from a Fortune 500 consumer goods organization we studied. To execute a single supply-chain transaction, each person involved switched about 350 times between 22 different applications and unique websites. Over the course of an average day, that meant a single employee would toggle between apps and windows more than 3,600 times.”
Let’s think about the implications of an enterprise portfolio of 976 applications, of which (from the same MuleSoft report in February 2022) “only 28% are integrated, a slight decrease from a year ago.” Take even the simplest possible case, in which each application in that integrated 28% exchanges data and/or shares steps of a process with only one other. That’s 976 × 0.28, or 273 applications, giving C(273, 2) = 37,128 possible pairwise interactions requiring consideration, with a subset of those then demanding time and talent for their creation and maintenance: more than thirty-seven thousand opportunities for inconsistency, unreliability, or failure to provide adequate security or regulatory compliance.
What about the other 72%, that population of the unintegrated that has slightly grown year-on-year? Let’s imagine that even a mere one-tenth of those 703 applications – roughly 70 of them – would create new value if they could be brought into the sunlit uplands of connection, making our number of possible pairwise connections C(273 + 70, 2) = C(343, 2): an explosion of complexities, growing from our former thirty-seven thousand to almost fifty-nine thousand opportunities for not-what-we-needed.
Are things enormously better if we only consider the 22 applications or other tools being used by any one employee? Perhaps. There are only C(22,2) or 231 pairwise interactions in play – but that’s potentially a great many different lists of two-dozen(ish) endpoints, and what are the odds that many of these exchanges (at the level of the single person or small department) depend on tribal-knowledge copy/paste rituals rather than automated workflows?
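The counts above all come from the same binomial coefficient, C(n, 2) = n(n − 1)/2. A quick sketch in Python, using only the figures already cited in this article, makes them easy to check or to rerun with your own portfolio numbers:

```python
from math import comb

TOTAL_APPS = 976          # MuleSoft 2022 Connectivity Benchmark figure
INTEGRATED_SHARE = 0.28   # "only 28% are integrated"

integrated = int(TOTAL_APPS * INTEGRATED_SHARE)   # 273 applications
print(comb(integrated, 2))                        # 37,128 possible pairwise interactions

# Bring a mere one-tenth of the unintegrated 72% into the fold:
unintegrated = TOTAL_APPS - integrated            # 703 applications
newly_connected = unintegrated // 10              # roughly 70 more
print(comb(integrated + newly_connected, 2))      # C(343, 2) = 58,653

# The single employee's 22 applications from the HBR example:
print(comb(22, 2))                                # 231 pairwise interactions
```

Note that these are upper bounds on the pairings worth considering, not a claim that every pair will actually be connected – which is exactly why the numbers are useful as a measure of latent complexity.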
What is the cost in errors incurred, delays suffered, and opportunities overlooked when someone is switching context (in the case described above by HBR) 3,600 times a day? If each such context switch takes only a second, which seems like a hugely charitable assumption, that’s one full hour a day being thrown in the wood chipper – quite apart from the collateral damage to employee engagement, and the opportunity cost of time taken away from collaboration, up-skilling, and mentorship.
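The back-of-envelope arithmetic here is simple enough to make explicit – the one-second-per-switch figure is my charitable assumption, not an HBR measurement:

```python
# HBR-reported figure: 3,600 app/window toggles per employee per day
switches_per_day = 3_600
seconds_per_switch = 1        # assumed: a hugely charitable one second each

lost_seconds = switches_per_day * seconds_per_switch
print(lost_seconds / 3600)    # 1.0 hour per day, before any collateral damage
```

Double the assumed switch cost to a still-modest two seconds and the loss becomes two hours per day – a quarter of a standard workday.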
When I started writing this, I thought I might be onto something, but these numbers are far worse than I would have guessed – and I don’t believe I’ve framed these calculations in a way that overstates the problem. If anything, assuming simple pairwise connections and ignoring costs of data transformation are unrealistic sweeteners of these results.
I believe that this makes the case for starting our search for enterprise efficiencies at the end of the long tail. In a typical response to a “get the IT costs under control” mandate, the people at the center of an enterprise IT galaxy are likely to focus on its biggest and brightest stars. Let’s try a totally different approach, where every team looks with fresh eyes at where their time is being wasted and how their interactions with other teams are being crippled.
Wherever a problem is being solved (in a short-run but sadly long-lived way) by spinning up a new spreadsheet or deploying some new tool, let’s look harder for low-code and no-code ways to craft an organic extension of a collaboration and automation fabric. Let’s build a smarter organization, instead of breeding a chaotic ecosystem of niche species – and let’s focus our crucial resources at the head, not the tail, of the beast.