When people are trying to get something done, words like “objectives” and “metrics” quickly enter the conversation – as people seek proof that they’re accomplishing something. Unfortunately, in most real-world situations, the things most easily and most precisely measured aren’t accomplishments – but merely activities. A world of connected products and processes offers opportunities to get more in touch with reality, but it may also challenge organizations that want to keep believing deeply embedded (and even cherished) illusions about what’s worth doing.
Much of the time, measuring activity is good enough. When things are working normally, there are useful relationships that make for timely and helpful feedback at low measurement cost. For example, the number of new customers acquired in a given month is easy to measure – and if you’re used to a fairly constant level of customer attrition, new-customer count offers a good basis for estimating added revenue for the year. Salespeople may even point out, reasonably, that acquiring new customers is the only thing they directly control – and therefore, they may argue, the only thing that should determine their compensation.
The problem with this thinking is that metrics are most needed precisely when things aren’t going the way they usually do. If a car’s wheels are spinning on ice, a speedometer measurement based on wheel speed says nothing about the movement of the car. If customer service problems are drastically increasing the termination rate of customer accounts, it may actually be counter-productive to reward salespeople for signing up more new customers to offset the losses: this may just be increasing the number of people who are likely to leave soon, after a bad service experience of their own, and tell their friends how bad it was.
In today’s world of connected customers, any situation that produces a growing number of unsatisfied customers is a mortal threat to a brand. For that reason, there’s never been a more important time to connect the core-process workers, at the center of an organization, to the farthest limits of the network that seeks to deliver recognized value at the edge.
Measuring accomplishment, not merely effort, was one of the key insights put forth by statistical quality guru W. Edwards Deming in the 1950s. Rather than merely measuring the number of holes drilled by a machine operator, for example, Deming would insist on measuring the number of holes produced within a certain tolerance of specification. He would thereby figure out when a drill bit should be replaced, even if the operator was not yet noticing the effects of wear, because the cost of the “early” replacement was more than offset by improved delivery of higher-quality work.
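Deming’s distinction can be made concrete with a small sketch – not from the text, and with purely hypothetical names and numbers – contrasting an activity metric (holes drilled) with an accomplishment metric (holes within tolerance):

```python
# Illustrative sketch: the same batch of "activity" can represent very
# different amounts of "accomplishment". All values here are hypothetical.

SPEC_DIAMETER = 10.0   # target hole diameter, mm
TOLERANCE = 0.05       # acceptable deviation from spec, mm

def in_spec(diameter):
    """An accomplishment: a hole that meets the specification."""
    return abs(diameter - SPEC_DIAMETER) <= TOLERANCE

def yield_rate(diameters):
    """Fraction of drilled holes that are actually usable."""
    good = sum(1 for d in diameters if in_spec(d))
    return good / len(diameters)

# A worn bit keeps producing "activity" (holes drilled) while
# accomplishment quietly decays:
fresh_bit = [10.01, 9.98, 10.02, 9.99, 10.03]
worn_bit = [10.04, 10.07, 10.09, 10.06, 10.11]

print(yield_rate(fresh_bit))  # 1.0 — five holes, five accomplishments
print(yield_rate(worn_bit))   # 0.2 — five holes, one accomplishment
```

Tracking the yield rate rather than the hole count is what lets the bit be replaced before the operator notices anything wrong.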
Measuring holes in a work piece, perhaps only a few feet away and a few moments after the act of drilling those holes, is a cheap and tight feedback loop. What’s novel, and needs real effort to drive into organizations’ consciousness, is the idea that equally immediate and accurate data may now be available at even less cost in a world of pervasive wireless connection and ever-cheaper sensors.
It may not be clear whose job it is to introduce this change of viewpoint into the minds of key decision makers. Further, in many organizations, a particular metrics culture may be deeply embedded, despite being an actual distraction from the reasons that the organization exists. This can create political risks in what we might ideally envision as a purely objective question: “What measurements best enable mission assurance?”
In one case, for example, I had an opportunity to observe a campaign to increase the voter turnout of university students at public elections. The organization running that campaign started to measure its efforts by the number of cards signed by students that pledged their intent to vote. Funds for the effort were raised by showing potential sponsors the graphs of improvement, over time, in the number of such cards obtained at campus events and classroom visits – but the data masked a dangerous trend.
Yes, the number of cards collected was going up, but not as quickly as the total number of students on the lengthening list of universities being visited. At some universities, annoyance with class interruption was even threatening to get the campaign’s workers banned from campus entirely. Meanwhile, there was no data on whether cards collected through different types of student engagement were better or worse indicators of actual intent to vote. A card was a card was a card.
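The masked trend is easy to see once the raw count is turned into a rate. A minimal sketch, with entirely hypothetical numbers, shows how a rising card total can coexist with falling effectiveness:

```python
# Hypothetical campaign data: absolute card counts rise each term,
# but the rate per student reached is falling.
campaigns = [
    # (term, cards_signed, students_reached)
    ("Fall 1", 400, 2_000),
    ("Spring 1", 700, 5_000),
    ("Fall 2", 1_000, 10_000),
]

for term, cards, students in campaigns:
    rate = cards / students
    print(f"{term}: {cards} cards, {rate:.0%} of students reached")

# Cards climb 400 -> 700 -> 1000, so the sponsor graphs look great,
# while the rate falls 20% -> 14% -> 10%.
```

The activity metric (cards) and the effectiveness metric (cards per student reached) move in opposite directions – exactly the kind of divergence the campaign’s fundraising graphs never showed.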
As a result, any ideas for new ways of engaging the prospective voter had to pass the test of “Will this produce signed cards?” – rather than, “Will this improve voter turnout at the next election?” It became impossible for new volunteers to exercise any real imagination. The easily measured activity had turned from an indicator of success into the only valued definition of success.
In his subversive 1975 classic, Systemantics, John Gall called it “Orwell’s Inversion: The Confusion of Input and Output.” He offered an example of his own: “A giant program to Conquer Cancer has begun. At the end of five years, cancer has not been conquered, but one thousand research papers have been published. In addition, one million copies of a pamphlet entitled ‘You and the War Against Cancer’ have been distributed. Those publications will absolutely be regarded as Output rather than Input.”
There are extraordinary opportunities in every field—manufacturing, services, government, education and many more—to measure what’s happening closer to the point of useful effect, instead of being limited to measurements of what happens close to the point of controllable effort. Those opportunities are sterile, though, if there’s not a shared willingness to re-ask questions like “What’s our real definition of why we do what we do?” The good news is that the technology is improving in performance, and plummeting in costs, at amazing rates. The unfortunate news is that there’s no Moore’s Law for organizational intelligence.