
Numbers can be mismeasurements. You can count on that!

Peter Coffee, March 4, 2020
Numbers aren't always an example of 'satisfactory' knowledge, says Salesforce's Peter Coffee.


Apropos of global concerns regarding a potential COVID-19 pandemic (live dashboard thanks to Johns Hopkins), a salvo of items concerning remote work practices hit my news feed on a single morning near the end of February. All of them had numbers, some of which were simply wrong, many of which were incompatible. I was reminded of the justly famous statement by William Thomson, better known as Lord Kelvin (as in the scale of absolute temperature): that “when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” I don’t know what Lord Kelvin would say about our age in which people express many things in numbers that are clearly not examples of “satisfactory” knowledge.

Sometimes we’re talking about simple innumeracy, as with the breathless headline saying remote work “rose by 400%” when the underlying number (the percentage of employees reporting that they worked remotely at least once a week) increased over the past ten years from 9.5 to 36. Perhaps I may offer a simple thought experiment: if it had gone from 9.5 to 10.45, would that be “rose by 110%”? No, it would be a rise of 10%. Now do the math behind that just-mentioned headline: 36 minus 9.5 is 26.5; divide that difference by 9.5, multiply the quotient by 100 to get a percentage, and the increase was 279%, which is still a lot but not nearly as much. “Grew by a factor of almost 4” is not the same thing as “rose by 400%”, even if the latter is easier to write: in a world where we make important decisions from formulas in spreadsheets, far too often incorrectly as it is, this is not arithmetic that we may safely leave behind in elementary maths.
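The arithmetic above can be made explicit in a few lines. This is just a sketch of the calculation described in the text; the helper names (`percent_increase`, `growth_factor`) are mine, not anything from the cited reports.

```python
# Percent increase vs. growth factor, using the remote-work figures
# from the article: 9.5 (a decade ago) and 36 (now).

def percent_increase(old: float, new: float) -> float:
    """Percentage increase from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100

def growth_factor(old: float, new: float) -> float:
    """How many times larger new is than old."""
    return new / old

rise = percent_increase(9.5, 36)   # about 279, not 400
factor = growth_factor(9.5, 36)    # about 3.79, "a factor of almost 4"
print(f"rose by {rise:.0f}%, i.e. grew by a factor of {factor:.2f}")

# The thought experiment: 9.5 -> 10.45 is a 10% rise, not 110%.
print(f"9.5 -> 10.45 is a rise of {percent_increase(9.5, 10.45):.0f}%")
```

The point the code makes concrete: a quantity that grows by a factor of N has risen by (N − 1) × 100 percent, not N × 100.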

This kind of problem is mere mechanics, though, compared to the cognitive dissonance of a second article dated only three days later on the very same news site: this one reported that “remote work…over the previous decade…grew by 91%.” That’s still a lot of growth, but a factor of less than two—compared to a factor of nearly four—represents a considerable difference if you are undertaking workforce or office space planning. Numbers, if I may be so bold as to emend Lord Kelvin, do not become a measure of what we are speaking about until they are framed with precision and transparency: when something was measured, by what method, with what limits, subject to what circumstances.

I take umbrage, for example, at the tendency to report the inexpensively collected numbers that arise from self-selected samples of people who (i) saw a survey at all, (ii) were interested enough to answer the questions, (iii) may have been careful enough to read the questions and answer what they actually asked, and (iv) might have given answers that were accurate and not aspirational. In political polling, for example, a rigorous study of the United States presidential election campaign in 2012 [PDF] found that “apparent swings in vote intention represent mostly changes in sample composition – not changes in opinion…these ‘phantom swings’ arise from sample selection bias in survey participation.”

Signal issue

The researchers just quoted above observe, almost delicately, that people who study the data that they have at hand may want to believe that they are studying signal rather than noise: “The existence of a pivotal set of voters attentively listening to the presidential debates and switching sides is a much more satisfying narrative, both to pollsters and survey researchers, than a small, but persistent, set of sample selection biases.” If this is not causing you, as you read it, to wonder if this is affecting your own market research and other influential analyses, then all I can do is add the researchers’ observation that their work is “a cautionary tale” of “temptation to over-interpret” one’s data that can be “difficult to resist.”

This highlights the additional point that what we enumerate, once we get beyond one or two significant figures, can quickly become disconnected from the phenomenon that we actually want to measure (or better still, the thing we want to predict).

Leaving the domain of elections, we can see this clearly in the still larger domain of global climate concerns: the Global Carbon Atlas project offers rankings of various countries’ carbon footprints that vividly demonstrate the importance of being quite exact about what is being measured. Is China emitting almost twice as much carbon dioxide as the United States? Yes, but it is emitting less than half as much per person. Then again, it emits two-thirds more (as in, about 1.7 times as much) per unit of Gross Domestic Product. These different numbers could lead to quite different assessments of relative contribution to the problem, and to different conversations about policies and remediations.
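The three framings above come from dividing the same total by different denominators. Here is a minimal sketch of that idea; the figures are illustrative round numbers (roughly 2018-era magnitudes, not the Global Carbon Atlas's exact data), chosen only to reproduce the ratios discussed in the paragraph, and the `Country` structure is my own device.

```python
# One emissions total, three framings: absolute, per person, per unit GDP.
from dataclasses import dataclass

@dataclass
class Country:
    name: str
    co2_gt: float         # annual CO2 emissions, gigatonnes (illustrative)
    population_bn: float  # population, billions (illustrative)
    gdp_ppp_tn: float     # GDP at purchasing-power parity, $ trillions (illustrative)

china = Country("China", co2_gt=10.0, population_bn=1.40, gdp_ppp_tn=23.0)
usa   = Country("USA",   co2_gt=5.3,  population_bn=0.33, gdp_ppp_tn=21.0)

def ratios(a: Country, b: Country) -> dict:
    """Compare a to b using three different denominators."""
    return {
        "total":      a.co2_gt / b.co2_gt,
        "per_person": (a.co2_gt / a.population_bn) / (b.co2_gt / b.population_bn),
        "per_gdp":    (a.co2_gt / a.gdp_ppp_tn) / (b.co2_gt / b.gdp_ppp_tn),
    }

for framing, value in ratios(china, usa).items():
    print(f"China vs. USA, {framing}: {value:.2f}x")
# total      ~1.89x  ("almost twice as much")
# per_person ~0.44x  ("less than half as much per person")
# per_gdp    ~1.72x  ("about 1.7 times as much per unit of GDP")
```

Which denominator you choose is not a neutral act: each of the three outputs is "correct," and each supports a different policy argument.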

This is not the first time that I will use space here on diginomica to discuss the data flows, rather than the technologies, that enable and support the digital economy. Cheap and abundant data, readily correlated by cheap and abundant computation and machine learning capabilities, were a large part of what I discussed on one such occasion; challenges of making democratized analytics a tool for intelligent action, rather than accelerated and intensified misunderstanding, were my main concern on another. The more powerful and accessible the tools become, the greater the need for training in their use. We’ll talk about this again. Count on it.
