[Photo: Peter Coffee]
Thirty years ago, making computers do interesting things required manipulating hardware. Tech columnist Steve Ciarcia could occasionally remind us that “my favorite programming language is solder.”
Tracy Kidder, in his Pulitzer-winning book The Soul of a New Machine, could narrate an overnight debugging process that concluded by wiring an eight-cent logic gate into a circuit board – to produce a signal that the engineers called “NOT YET.”
It was even a time when I could pay for my own computers with contracted hacks, like setting a timing chip to vibrate a computer’s speaker – in a way that sounded a lot like a croaking frog.
It was an era of bare-metal macho. It was already in its endgame.
The brief reign of the “coder”
Within a few short years, Intel would introduce the i486 (also called the 80486 or just “486” by most PC buyers); an Intel VP, Claude Leglise, would tell me that this single chip integrated so many of the elements of a personal computer that “engineers will get to pick the color of the box.”
We were clearly on a fast track to a place where code cutters, not wire cutters, would rule – and even that was just a pause for breath, on the way to our present era of configuration rather than code.
What does it mean to configure, rather than code? Machine language tells a cook, “Pour what’s in the bottle on the top right shelf to fill the cup in the lower left drawer, then pour the contents of the cup into the pan in the middle cabinet.”
High-level language says, “Add one cup of vinegar.”
Configuration says, “adjust the overall mixture to a pH of 5” – specifying a result to be achieved, rather than a specific mechanism and process.
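The cooking analogy can be sketched in a few lines of code. In this toy model (all names and the pH formula are invented for illustration), “coding” prescribes each pour, while “configuring” names the target and lets the runtime decide how many steps are needed:

```python
# Toy sketch (all names invented) contrasting "code" with "configuration".

def estimate_ph(mixture):
    # Simplified model: each cup of vinegar lowers pH by 2 from neutral.
    return 7.0 - 2.0 * mixture["vinegar_cups"]

# Coding: the caller prescribes the mechanism, one step at a time.
def add_vinegar(mixture, cups):
    mixture["vinegar_cups"] += cups
    return mixture

# Configuring: the caller states the desired result; the runtime
# decides, in the moment, how much mechanism is needed to reach it.
def adjust_to_ph(mixture, target_ph):
    while estimate_ph(mixture) > target_ph:
        add_vinegar(mixture, 0.25)
    return mixture

mixture = adjust_to_ph({"vinegar_cups": 0.0}, target_ph=5.0)
```

The caller never mentions vinegar at all; if the runtime later learns a better way to reach pH 5, the configuration does not change.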
Does configuration, rather than coding, require substantial new technology resources to interpret and execute the details in the moment? Yes. But a shortage of technology is not the problem; our far more limited ability to predict the future is.
While the code-writing developer is asking, “Should we do this first for iOS or Android?” – the configuring developer is getting the experience right, and letting a Platform-as-a-Service render it correctly in both of those mobile markets (and on web browsers, and on other device families, as well).
Beyond the known unknowns
The less you know about the future environment in which a result must be achieved, the more important it is to describe the result and let run-time intelligence make an in-the-moment plan. That’s a vital strategy in a time when we know less and less, at the time that we craft a “program,” about the device on which it will run – or the circumstances in which it will be used.
What don’t we know? We don’t know the processor speed, so we can’t hard-code timing chip parameters and expect to predict the resulting sounds (or other time-dependent behaviors). We don’t know the number of processor cores, so we should minimize dependence on a strict sequence of operations. We’ll increasingly rely on functions returning results, not on memory assignments and “side effects.”
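The shift away from memory assignments and side effects can be made concrete with a small sketch (the order-tallying scenario is invented for illustration). The side-effecting version mutates shared state in a strict sequence; the pure version returns results that can be computed over independent chunks, on any number of cores, and merged in any order:

```python
# Sketch of the shift from in-place side effects to pure functions.

# Side-effecting style: mutates shared state, so it depends on a
# strict sequence of updates to one dictionary.
def tally_in_place(totals, orders):
    for order in orders:
        totals[order["sku"]] = totals.get(order["sku"], 0) + order["qty"]

# Pure style: each call returns a fresh result. Independent chunks of
# the input can be tallied separately and merged afterwards.
def tally(orders):
    totals = {}
    for order in orders:
        totals[order["sku"]] = totals.get(order["sku"], 0) + order["qty"]
    return totals

def merge(a, b):
    # Combining two partial tallies is order-independent.
    out = dict(a)
    for sku, qty in b.items():
        out[sku] = out.get(sku, 0) + qty
    return out
```

Because `merge(tally(first_half), tally(second_half))` equals `tally(whole)`, a runtime is free to split the work across however many cores it happens to find.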
We don’t know whether the user interface will be a big screen, a tiny screen, or no screen at all. We should therefore define behaviors in terms of nouns of data and verbs of APIs, not in terms of pixel locations on bitmaps and hardware addresses of input devices.
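One way to picture “nouns of data and verbs of APIs” is to describe a user action as a named operation applied to data, rather than as a coordinate on one particular screen. The example below is purely hypothetical (the intent shape, field names, and `dispatch` helper are invented):

```python
# Hypothetical sketch: an action as data ("nouns") plus a named
# API operation ("verbs"), instead of a pixel location on one screen.

# Device-bound version: meaningless on a watch, a voice assistant,
# or a browser laid out differently.
click = {"x": 212, "y": 408, "target": "checkout_button"}

# Device-independent version: any surface can render and invoke it
# in its own way.
intent = {
    "verb": "submit_order",            # the API operation
    "noun": {"order_id": "ORD-1001"},  # the data it acts on
}

def dispatch(intent, api):
    # Look up the operation by name and apply it to the data.
    return api[intent["verb"]](**intent["noun"])
```

A phone might bind `submit_order` to a button, a watch to a glance, and a voice interface to a spoken command; the behavior itself is defined once.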
These are merely the things that we know we do not know. There are also the “unknown unknowns” that might change the game entirely. For example, research published this month suggests that the building blocks of an all-silicon quantum computer are now in place: given a few more turns of the Moore’s-Law crank, this could change many people’s ideas about what we can do and how we can think about doing it.
When the map is always changing
We’ll continue to know less and less because hardware change is accelerating – and user behavior is hugely affected by hardware change. Telephones took 73 years to get from 10% to 90% penetration of U.S. households; PCs took 30 years; today’s black-glass smartphones and tablets will cross that threshold within the coming year, only nine years after the iPhone said “Where we’re going, we don’t need keyboards.”
The next step is people saying “Where I want to go, I don’t want to have to hold a device in my hand” – and wearables are achieving rapid adoption across a wide range of ages and needs, with a wide range of specific form factors.
Even more important, it’s clear that with wearables we are spreading a single user experience across several devices – rather than generationally migrating to a single new dominant device, as we have done in past transitions. Apple’s “Handoff” feature, and the manner in which Apple Watch “Notifications” and “Glances” interact with apps on the user’s iPhone, testify to the growing need for experience designers rather than app constructors.
When the map is changing this quickly, you don’t want code (the equivalent of step-by-step directions on how to get to your destination). As the fictional Sangamon Taylor puts it in Neal Stephenson’s Zodiac, “I’ll never understand why people give out directions, or ask for them. That’s what road maps are for. Find it on the map, you can always get to it. Try to follow someone’s directions, and once you lose the trail, you’re sunk.” [Emphatic adjectives omitted]
Configuration, rather than code, says, “This is where I want to be.” What’s happening today is configuration at ever more specific levels, not merely in massive and monolithic applications but in re-composable components that let people choose from ever more personalized or vertically specialized destinations.
If you know how to do it, you’ll be able to componentize it; if you build a highly configurable component, people will find ways to use it that you never expected. This elevates the “platform” – and shatters the silos of what we used to call our “apps.” Let’s unwire old IT models and crack the productivity code.