“The cloud” is the place for the customer cyberspace

By Peter Coffee | March 5, 2014
Summary:
The economics of the first cloud era were so compelling that it was easy to celebrate the cost savings and call the job done. But the job has barely begun. Difficult as it may be, try to forget everything you’ve ever known about how data is stored and how application programs have been constructed and used.

As salesforce.com celebrates its 15th birthday, it’s useful to connect the dots of where we’ve been—not just in terms of IT, but also in terms of what people expect from companies and other service providers—and to see how we’ve gotten to where we are today.

It’s crucial to see the difference between things we do by choice, and things we do out of habit because we previously had no other choice.

When business records were drawers full of paper, business processes had to be built around single copies of key data that could each be filed in only one folder. That folder could only be in one drawer, that drawer could only be in one desk, that desk could only sit in one office in one department. Most of today's computer systems lovingly preserve this pre-industrial poverty of information.

"Silos” of separate data, each feeding a separate task, force every customer (or client, or patient, or citizen, or student) to deal with an institution under many different personas—a prospect for a new product or service in one conversation, a current service/support recipient in another, a billing account in yet another—instead of being treated as a single entity. Even the first decade of “the cloud” merely moved the silos from costly, usually underutilized dedicated hardware to more cost-effective centralized systems.

The economics of the first cloud era were so compelling that it was easy to celebrate the cost savings and call the job done. But the job has barely begun. Difficult as it may be, try to forget everything you’ve ever known about how data is stored and how application programs have been constructed and used.

Imagine starting over from the axioms that a single piece of data can be seen simultaneously, even updated concurrently, by many people or processes in many places. For that matter, imagine that any one person’s view of that shared data can be tailored to their specific level of privilege and need: that a clerk can see merely a red dot next to a name, meaning “transaction not authorized,” while a financial services provider might be required and permitted to see a far more detailed view of that same person’s fragile financials. Multi-tenancy, one of the early defining bets made by salesforce.com, has made this combination of appropriate data sharing with precise privilege management widely available to a planet full of creative problem solvers.
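To make the clerk-versus-advisor example concrete, here is a minimal sketch in Python of privilege-scoped views over one shared record. It illustrates the idea only, not Salesforce’s actual implementation; the role names, field names, and sample values are all hypothetical.

```python
# Hypothetical sketch: one shared customer record, filtered per role.
# Role names, field names, and values are illustrative, not any vendor's schema.

ROLE_VISIBLE_FIELDS = {
    "clerk": {"name", "authorization_flag"},
    "financial_advisor": {
        "name", "authorization_flag", "credit_limit", "balance", "payment_history",
    },
}

SHARED_RECORD = {
    "name": "Pat Example",
    "authorization_flag": "red",  # the clerk's "red dot": transaction not authorized
    "credit_limit": 2500,
    "balance": 2480,
    "payment_history": ["late", "late", "on_time"],
}

def view_for(role: str, record: dict) -> dict:
    """Return the single shared record, trimmed to the fields this role may see."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}

print(view_for("clerk", SHARED_RECORD))              # name plus the red dot, nothing more
print(view_for("financial_advisor", SHARED_RECORD))  # the full, fragile picture
```

The point of the sketch is that there is exactly one record; what varies is each viewer’s window onto it.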

“The cloud” is not merely a place for aggregation of data or integration of process, even though legacy IT providers appear to have newly discovered that rapidly aging idea. Look rather at the connection, at the means of sharing knowledge, with mobile access and multi-device freedom to do what’s needed from wherever you are. It’s not that “the cloud” is where things need to be moved, but rather that the existence of the cloud enables far more interesting things to happen – because far more processes can be driven by different views of shared truth, relying on massive and ever more affordable computational power.

This is where we are now: with people of every age carrying smartphones, wearing and using connected devices, engaging in conversations with communities and companies and expecting to have a 1-to-1 conversation with even the most global brands. Who are these people? They are customers, in the almost sacred sense of the word as used by Peter Drucker – who can’t be quoted too often as having said, “It is the customer who determines what a business is… What the customer buys and considers a value is never just a product. It is always…what the product or service does for him.” The broadly bruited "Internet of Things" is merely a tool for constructing an Internet of Customers.

What happens next?

We can’t just continue to do what we’re doing now at ever-greater scale. Today, when your three or four connected devices are interesting novelties, you can have an app for each of them. Tomorrow, when you have a dozen information endpoints on your person and a hundred of them in your home, you’ll need a single environment in which those streams combine into a comprehensible view.
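As a rough illustration of that shift (with entirely hypothetical device names and readings), the sketch below folds several per-device streams into one latest-state view, rather than leaving each feed to its own app:

```python
# Hypothetical sketch: many device streams, one comprehensible dashboard.

incoming_events = [
    {"device": "thermostat",   "time": "2014-03-05T08:00", "reading": {"temp_f": 68}},
    {"device": "door_lock",    "time": "2014-03-05T08:02", "reading": {"locked": True}},
    {"device": "fitness_band", "time": "2014-03-05T08:05", "reading": {"steps": 412}},
    {"device": "thermostat",   "time": "2014-03-05T08:10", "reading": {"temp_f": 70}},
]

def unified_view(events):
    """Collapse per-device event streams into a single latest-state summary."""
    dashboard = {}
    for event in sorted(events, key=lambda e: e["time"]):
        dashboard[event["device"]] = event["reading"]  # later readings overwrite earlier ones
    return dashboard

print(unified_view(incoming_events))
# {'thermostat': {'temp_f': 70}, 'door_lock': {'locked': True}, 'fitness_band': {'steps': 412}}
```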

We’re on the doorstep of that future, in which the handheld device becomes the dashboard of your life. That device already offers customers and partners an extensible container, able to incorporate both diverse applications and customized actions into a single, consistent user experience.

When we cross that threshold, though, you’ll have neither the desire nor the capacity to manage each of those data feeds and connected devices by yourself: you’ll want algorithms to learn what you like and understand what you need, and have your devices and processes manage themselves most of the time (but know when to seek your permission or alert you to an option).

We’re clearly moving in this direction with technologies such as those from Nest, and with rapidly mainstreaming innovations such as adaptive driver-safety technologies in our cars. The developed world's graying population, which needs to "age in place" rather than being expensively institutionalized, will also be a major driver of more aware and more adaptive technologies.

Is there a connective vision that takes us from the dawn of the digital era into the farthest future we can reasonably predict? There is, and it’s a simple progression of adding one dimension at a time. We started with the data point, a single number on a punched card that allowed us to sort and tabulate massive quantities of data with previously unthinkable speed; we proceeded to the line of code, a one-dimensional fragment of brutally primitive FORTRAN or COBOL or RPG instructions.

We added a second dimension with the document-driven vision of the desktop metaphor, pioneered at Xerox in the early 1970s but shockingly still the norm on too many desktops and laptops – a class of device that’s now actually shrinking in sales, not merely slowing its growth, as people demand something better.

It took a fiction writer, not an engineer, to give us a word for thinking about information in three dimensions (with more to come in the universe of “Big Data”). In 1982, William Gibson’s short story “Burning Chrome” in Omni magazine gave us the word “cyberspace,” and we’re still building on his brilliant conception of “A consensual hallucination experienced daily by billions of legitimate operators... A graphic representation of data abstracted from the banks of every computer in the human system.”

When a “console cowboy” in Gibson’s world logged into that composite system, his deck did not hold data. Nor did it run applications: its imagined software tools were more akin to browser plug-ins. People on Gibson’s pages relied on edge devices, ranging from highly customized decks to surgically implanted sensors and processors that both augmented and shared the wearer’s reality.

Most notably, the only assets that mattered in Gibson's imagined future were digital: money, knowledge, relationships – and secrets. If it didn’t exist in cyberspace, it could hardly be said to exist at all. Without connection, there was no value…and so it has become today. If it can’t be found using Google, does it matter? If it can’t be shared on Twitter, did it happen? If your expertise can’t be discovered via LinkedIn, is it useful? If the deal isn’t in Salesforce, can you expect to be paid?

“Information wants to be space”: that was the opening sentence of an essay by Erik Davis twenty years ago. It’s hard to find a simpler way of describing the triumph of “the cloud”: our current, approximate realization of that ideal, and merely the foundation on which to build something enormously better.

Peter Coffee is VP for Strategic Research, salesforce.com