We're not talking about a nascent police state here, although the past year's Snowden revelations have given all of us plenty to think about on that score. Instead, what's at issue is the extent to which online retailers and social media giants are monitoring and manipulating our behavior to maximize their profits.
From time to time, headline-grabbing breaches alert us to how poorly many of these corporations protect our private information and data.
- Numerous retailers, Target being the most notorious example, discover they are unable to protect our credit card and identity data. These frequent breaches result from successful assaults on poorly protected corporate data stores by malicious outsiders.
- The recent circulation of intimate images from celebrity iCloud accounts demonstrates that even cloud storage is not the unassailable fortress we'd like to believe it is. Apple has not said how the attack succeeded, but the attackers are thought to have got in by guessing the victims' passwords. This failure of personal online security was a valuable reminder for all of us to opt in to two-factor authentication.
- Finally, we've just had the example of Uber executives drawing attention to their power to access information that could be used to track users of their service or detect certain patterns of behavior. It's always been known that insiders are the most likely perpetrators of security breaches. Uber demonstrated the importance of the culture and policies set by management in ensuring (or not) proper respect for personal data.
There's a disturbing escalation running through these three examples. First, the miscreants came for your credit card and identity information. Then they stole your most intimate files and documents. Finally, they trawled through all the information they held about your movements, preferences and history to find patterns of behavior they could use against you.
That third step is the most insidious because so few of us realize how much information corporations can piece together today from our online habits and in particular from our use of smart mobile devices. The experts use the term "data exhaust" to describe the trail we leave in the wake of our online movements.
That exhaust cloud is growing thicker and more richly laced from day to day. Our smartphones record our precise movements, while our social media posts reveal what we did and what most impressed us. Our searches and downloads give valuable clues to what we'll do or buy next.
Few of us realize exactly how much can be discovered by analyzing all of this information. Long ago, the credit card companies discovered that, by analyzing spending patterns, they could make a good guess at which of their cardholders were heading for divorce. Astonishingly, they could make that prediction long before the cardholders themselves came to their fateful decision. Imagine what predictions are possible now that so much more data is available for analysis.
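To see how little machinery this kind of pattern analysis requires, here is a deliberately simplified sketch: derive behavioral features from a transaction history and combine them into a score. The spending categories, weights, and scoring function are hypothetical stand-ins for what a real issuer's trained model would learn; this is an illustration of the technique, not anyone's actual system.

```python
# Toy sketch of behavioral scoring from spending patterns.
# Categories and weights below are hypothetical illustrations.

from collections import Counter

def behavioral_features(transactions):
    """Aggregate raw (category, amount) transactions into per-category totals."""
    totals = Counter()
    for category, amount in transactions:
        totals[category] += amount
    return totals

# Hypothetical weights a modeler might fit from historical outcomes.
WEIGHTS = {"legal_services": 0.5, "hotels": 0.2, "counseling": 0.4}

def risk_score(transactions):
    """Weighted sum of category spend -- a stand-in for a trained model."""
    feats = behavioral_features(transactions)
    return sum(WEIGHTS.get(cat, 0.0) * amt for cat, amt in feats.items())

history = [("groceries", 120.0), ("legal_services", 300.0), ("hotels", 150.0)]
print(risk_score(history))  # 0.5*300 + 0.2*150 = 180.0
```

The unsettling part is not the arithmetic but the inputs: once a provider holds enough transaction history, even crude feature engineering like this surfaces patterns the cardholder never intended to reveal.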
Condemned by prediction
Of course there is a benign side to this predictive analysis. It allows Netflix to recommend movies we would never have thought of viewing, and guides Amazon to suggest useful additional purchases. We willingly consent to their use of our data because it helps us make better decisions. As someone once said to me, "convenience trumps privacy."
But while we happily enjoy the benefits of sharing our data in this way, it should not be a carte blanche for corporations to abuse. We have an implicit expectation that they will refrain from analyzing and predicting patterns of behavior that might disadvantage us. Indeed, card issuers have fallen over themselves to deny that they would ever analyze our divorce prospects. They realize how toxic this use of our data would be.
None of us want to end up in a Minority Report kind of world, where our fates are determined on the basis of predictions of how we might behave. Do we consent to them limiting our access to credit based on an analysis that says our default risk has risen? What supposedly secretive patterns of behavior trigger an alert to the authorities?
It's the outliers who would suffer most unfairly in such a world — the one or two percent of cases whose behavior gets misinterpreted, whether because of insufficient or inaccurate data or a failure to recognize salient factors. The data scientists will respond by blaming such failures on a dearth of data and use that as an excuse to demand more, but that's hardly reassuring.
We also expect these corporations to protect our privacy by not sharing our data with others without our consent, and by ensuring their systems are proof against the actions of malicious employees or external attacks.
But when we accept the 100-plus pages of terms and conditions presented to us by online providers, do we ever scrutinize them to see what protections are built in? Of course not. In effect, our trust is blind, and in most cases it is thoroughly misguided. It is not in the providers' commercial interest to limit their use of our data, nor do they want to bear the cost of paying compensation if a breach occurs. Yet we are given an all-or-nothing choice: either we agree to sign away our online privacy or we decline all of their services.
None of us are expert enough to think about all of this on a case-by-case basis. In reality, we do have to be able to trust the providers — or more precisely, they have to earn and retain our trust. We need them to establish common norms of behavior across the industry that protect our privacy in a way that's fair to both the consumer and the provider.
Government of course can also play a role by establishing hard-and-fast legal boundaries. Unfortunately it is the Europeans who are ahead on this, and so much of EU politics is influenced by envy and control agendas that their proposals often wildly miss the point. But at least that consciousness means that businesses wanting to operate in the European market have to think a lot more seriously about these issues than US-only services do.
It is up to the industry to take the lead here, or else run the risk of a public outcry that leads to hastily drafted, populist legislation that does more harm than good. Providers should take steps to demonstrate that they take our privacy seriously by giving us back some control.
Instead of grabbing all our data to predict our likes and dislikes for their own uses, perhaps some of that machine intelligence could be employed to recommend a menu of suitable privacy levels for us to pick from ('people like you also rejected these types of intrusion ...'). It ought to be part of the service that providers inform us how they will likely use the information we share with them, and to allow us to limit that use should we wish to. If providers don't want us to lock down their usage, then it's up to them to make the case why it's in our interests to open up.
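The "people like you also rejected ..." idea above could be as simple as ranking intrusion types by how often similar users opted out of them. The following sketch assumes hypothetical intrusion categories and a pre-computed set of "similar users"; it illustrates the shape of such a recommender, not any provider's implementation.

```python
# Sketch: recommend privacy opt-outs from the choices of similar users.
# Intrusion categories and the notion of "similar users" are hypothetical.

from collections import Counter

def recommend_optouts(similar_users_choices, top_n=2):
    """similar_users_choices: one list per similar user of the intrusion
    types that user rejected. Returns the most commonly rejected types."""
    counts = Counter()
    for rejected in similar_users_choices:
        counts.update(rejected)
    return [kind for kind, _ in counts.most_common(top_n)]

choices = [
    ["location_tracking", "ad_profiling"],
    ["location_tracking"],
    ["ad_profiling", "contact_upload"],
]
print(recommend_optouts(choices))  # ['location_tracking', 'ad_profiling']
```

The design point is that the same aggregation machinery providers already run against us could just as easily run for us, surfacing sensible defaults instead of burying the controls.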
Of course there's a cost to building in privacy controls that are specific to each individual user. But it's no harder than other forms of personalization, provided it's built in right from the start. The industry needs more vendors working on promoting data privacy as a selling point.
People are not stupid. They know they're getting free services in return for giving up their privacy, and they weigh up the trade-off they're making. Given more choice in the market, they may choose to select platforms that make better privacy promises than the take-it-or-leave-it propositions on offer from today's mainstream providers.