We all need a personal API in an age of Spotify privacy policies

SUMMARY:

Fancy becoming a person of no fixed identity languishing in prison for breach of copyright? Then it’s time you thought about a personal API to protect yourself, argues Chris Middleton.

Where’s your identity?

Last week I sat in on a workshop in which a major company imagined what the future will be like. Each of us was asked to tell the room what we will be doing in ten years’ time. I explained that in 2025 a search company has used all of the personal data that’s spread across the internet about me to patent the concept ‘Chris Middleton’, and, as a result, I am now a person of no fixed identity languishing in prison for breach of copyright.

It says much about our age that this didn’t seem too far-fetched.

Then there was another exercise: identifying future customers’ needs and ‘pain points’. Company employees were asked to put a tick against whichever items on a list seemed most important. Nearly everyone ticked ‘trusting our motives’ – good news, I’m sure you’ll agree – but only one person ticked ‘delivering against that trust’. Me. I ticked it to force them to question why no one else had.

Ironically, all of this happened on the same day that news broke of Spotify’s new ‘privacy’ policy. That the music-streaming provider has joined the ranks of companies that stop just short of demanding your front door keys and your car in return for the right to pay for their services should come as little surprise. “With your permission, we may collect information stored on your mobile device, such as contacts, photos, or media files …” it said. Staggering. (I love the word ‘collect’. It’s what theft becomes if you tell people you’re doing it.)

It must be obvious now that companies’ privacy policies are their mission statements, and the fact that most people ignore them and click ‘Agree’ is their own stupid fault. Most people don’t read the small print: they actively choose to be ignorant of Ts & Cs rather than to inform themselves of the facts. Worse, they hope that someone else will warn them of any dangers via social media: a form of flocking behaviour that cedes leadership and personal data security to strangers. Hardly the sign of a digitally empowered society, is it?

Fortunately for the flock, Spotify’s policy ‘change’ – revelation is a better word – provoked an outcry and an apology from the CEO. But it would be foolish to assume that any such strategic proposal will simply be abandoned, just as Uber’s withdrawal of its UberPop app in Paris should really be seen as the company pulling into a parking space and leaving the engine running.

Why is this happening?

As Channel 4 News economics editor Paul Mason noted in his excellent Guardian blog at the weekend, the claimed rationale behind Spotify’s and any other wholesale data-grabbing exercise is to make ‘the user experience’ better. But in fact the data is invariably used to target advertising and messaging at customers instead: a feedback loop of endless aggregate advantage to the provider and its partners, not to the customer.

Mason adds that we are witnessing the emergence of ‘cognitive capitalism’, a term coined in 2012 by economist Yann Moulier-Boutang in his book of the same name. Boutang proposed that, far from living in a flat, networked society in which we all own and control the means of production – a digital restatement of socialist principles – we are actually living in the opposite, a form of data-based capitalism in which owning data capital is the new land grab, the new gold rush. Spotify’s actions certainly map against the latter.

Each of us has the gold and many companies feel they can simply take it. It’s time to empower ourselves and take our data back.

I made a similar observation to Boutang’s about ten years ago, saying that in the future our data will be the de facto currency in a world in which actual money becomes less and less relevant. You could argue that this emerging future is the real reason behind proposals such as the Snooper’s Charter. The government wishes to create the Data Bank of England, in effect, and is using national security as a smokescreen for doing so: the only legal means of overriding human rights legislation.

All of the data assets that companies such as Spotify turn into money and noise are being crowdsourced from the general public, thanks to people tagging their friends’ images and sharing their contact details without first seeking their active consent.

New rules

In other words, everyone around you is turning you into a data asset that a third party can sell for money. Are any of your friends Spotify users? Then Spotify has your data, possibly even your photos and media files. It’s that simple. When did you agree to this? You didn’t, because nobody asked. That’s the network effect.

Now, an interesting observation about the digital world in its current form is that sharing anonymous, open data sets tends to create ‘signal’ – projects that help improve society, the environment, sustainable services, smart cities, and so forth – whereas Personally Identifiable Information (PII) invariably creates noise, in the form of advertising and other information that people don’t want or need. This leads to a truly fascinating conclusion: being anonymous creates utilitarian benefits for society as a whole.

Few of us invest our PII in improving society’s or humanity’s collective future, mainly because we lack a platform for doing so. But we’re happy simply to give it away in return for noise. What we need is a platform that empowers us to invest our data only in programmes we agree with, and which blocks its use anywhere else – even if someone shares our data without our consent. We need a means of automating ‘I agree’ or ‘I don’t agree’, and of wrapping our own terms and conditions around our personal data, like a Creative Commons licence that refers to the individual, not just a media file.
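To make that idea concrete, here is a minimal sketch of what automated, machine-readable consent might look like. Everything in it – the `PersonalDataLicence` class, the purpose names, the endpoints – is hypothetical, invented purely for illustration, not any existing standard:

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable licence wrapped around one person's data,
# loosely modelled on Creative Commons terms but applied to an individual.
@dataclass
class PersonalDataLicence:
    owner: str
    permitted_purposes: set = field(default_factory=set)  # e.g. {"medical-research"}
    forbidden_purposes: set = field(default_factory=set)  # e.g. {"advertising"}

    def decide(self, requester: str, purpose: str) -> bool:
        """Automate 'I agree' / 'I don't agree' for one data-use request.

        `requester` is not used in the decision here; in a fuller sketch it
        would be logged for an audit trail.
        """
        if purpose in self.forbidden_purposes:
            return False
        return purpose in self.permitted_purposes

licence = PersonalDataLicence(
    owner="Chris Middleton",
    permitted_purposes={"medical-research", "smart-cities"},
    forbidden_purposes={"advertising"},
)

print(licence.decide("spotify.com", "advertising"))    # False
print(licence.decide("nhs.uk", "medical-research"))    # True
```

The point is not the code but its shape: the decision is made mechanically by the owner’s own terms, and travels with the data, rather than being made once by a provider’s take-it-or-leave-it ‘Agree’ button.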

But back to Boutang’s observations. The truth is we are living in neither a digital capitalist nor a digital socialist future just yet. We are poised between the two, but nearing the point where these two different viewpoints collide in a global conflict. Let’s call it the First World Data War. Forget religion, this is the real Great War of our age. There will be bloodshed, both figuratively and literally.

Mason suggests that all of the companies that are stockpiling our data and turning it into money (for themselves) and noise (for us) are building their “castles on sand”, because their users will soon turn against them. I don’t think so: it’s gone on for too long and we have all been complicit in their actions. The data remains even if we abandon a platform or an app, and therefore so does the information asset that can be monetised by a third party. (The phrase ‘digital footprint’ is fast being replaced by ‘digital tattoo’ for a reason.)

Questions to be answered

So there are two key questions to ask. The first is: Why are we complicit?

That’s easily answered. Collectively, the human race has been a predatory group of pleasure-seeking apes for a lot longer than it has been a cultured and sophisticated society, and a cynic might observe that all the accumulated centuries of deep knowledge since The Enlightenment are fast being abandoned in pursuit of surface, clickbait, and cat videos, thanks to our quest for a quick, evanescent hit of pleasure.

In short, we are genetically hardwired to grab easy options and free stuff. But there’s a problem: all the free stuff that we used to get in return for giving away our data was (a) never free to begin with (we paid for it with our data and our time), and (b) is now being replaced by paid-for apps and premium services. Free stuff was only ever an enticement to give away our stuff. Not only that, but the people who make the stuff – writers, photographers, musicians etc – are no longer getting paid.

Our overall behaviour online suggests that, collectively, the network supports and encourages a digitally socialist viewpoint: sharing, collective ownership and control of the means of production, and so on. But counter-intuitively, our desire to have lots of free stuff has created powerful data landowners and landlords, to whom we are quite happy to cede power over everything that identifies us. Hence my joke about being imprisoned for breach of copyright over what constitutes ‘me’.

All of which brings us to the second question: what can we do about it?

Speaking at IPEXPO in London last year, the web’s prime mover Sir Tim Berners-Lee said that members of the public must start to regard their own data as a personal asset, and take back control over it, putting themselves in a position to bargain with organisations and demand more in return for sharing it.

That’s all well and good, but short of simply kicking up a fuss, how can we do that after 20 years of ecommerce, 15 years of mass mobility, and 10 years of social sharing?

One possible means to take back power from the ‘data landlords’ and deploy the ‘quantified self’ to greater social and personal advantage is an emerging concept: the personal API, a term coined by Eric Friedman when he was based at Foursquare in New York. One of the founders of Foursquare, Naveen Selvadurai, has been experimenting with just that, as he explains in a blog post.

Ironic, isn’t it, as Foursquare is based on the principle of learning your likes and preferences.

My take

Creating a personal API platform and standard could be a fascinating route ahead for consumers in the digital world. An equivalent of the Creative Commons licensing scheme, it might allow people to share as much or as little personal data as they wish and, better still, decide what uses that data might be put to – and what uses it may not.

Placing your own data behind a personal API might give you the power to force any company, organisation, or individual to engage with you on your terms, giving greater power back to the user to create ethical ‘investments’ and withdraw support from any programme that does not benefit society as a whole, or match your own belief systems.
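As a sketch of how such a gatekeeper might behave – all the names, fields, and status codes below are assumptions, not a real service or protocol – a personal API could release data field by field, and only for declared purposes the owner has approved:

```python
# Hypothetical 'personal API' gatekeeper: every request for a field of
# personal data must declare who is asking and why, and the API releases
# only fields whose owner-set terms permit that purpose.
PROFILE = {
    "email": "me@example.com",
    "photos": ["holiday.jpg"],
    "contacts": ["alice"],
}

# The owner's terms: which purposes may read which fields.
TERMS = {
    "email": {"account-recovery"},
    "photos": set(),     # shared with no one, for any purpose
    "contacts": set(),
}

def personal_api(field_name: str, requester: str, purpose: str) -> dict:
    """Return the requested field only if the owner's terms permit it."""
    allowed = TERMS.get(field_name, set())
    if purpose not in allowed:
        # Engagement happens on the owner's terms, or not at all.
        return {"status": 403,
                "detail": f"{requester} may not use {field_name} for {purpose}"}
    return {"status": 200, "data": PROFILE[field_name]}

print(personal_api("photos", "spotify.com", "advertising"))       # status 403
print(personal_api("email", "shop.example", "account-recovery"))  # status 200
```

Under this kind of scheme, ‘sharing as much or as little as you wish’ becomes a per-field, per-purpose rule table that you edit, rather than a blanket permission buried in someone else’s Ts & Cs.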

It’s just a thought. I’m not saying it’s flawless or the greatest idea in the world. For example, might it encourage some people to be greedy and simply flog their data to the highest commercial bidder? Of course, but at least they could do that on their own terms, fully informed of the purpose of the data’s usage, and with the companies concerned giving them something other than noise in return.

Equally, might it be insecure, and might the government want access to it? No doubt, but at least it changes the conversation and slams the door on the Spotifys of this world who do little more than tell you they’re picking your pockets.

That’s surely a conversation worth having.

    Comments are closed.

    1. says:

      First, it is a little ironic that I had to log in (and thereby presumably accept some form of tracking) in order to leave a message, but I recognize that that is out of the author’s hands.

      My comment is this: in fact, the vast majority of online ads and personalized content are served by systems that never collect or use information that is identifiable to you, the person. Instead, most of these technologies rely on associating a computing device with some interest categories and demographic data. Yes, those interests and data relate to you, but typically there is not enough data present to actually identify you, just your device. Sure, it may be theoretically possible for you to be identified by matching to external sets of data, but the vast majority of advertisers actually have controls in place to prevent that from happening, and it is not likely to happen.

      The “creep” factor with online advertising largely comes, I believe, from consumers’ lack of understanding of what is actually going on behind the scenes. People imagine that someone, somewhere, is watching them or compiling a dossier on them, when in fact computers are talking to computers, sharing a handful of data points, so that golf ads can go to people who like golf and not to those who don’t. The reality of online advertising isn’t nefarious, and consumers have much more ability to control it than they are led to believe. Don’t like getting advertisements tailored to your interests? Go to nai.org or daa.org and follow their opt-out procedures. You will still get ads, but they will be random.

      All that said, the concept of a personal API is intriguing, and I would love to see a live version of such a thing to see what it might (and might not) be capable of.

      1. says:

        Heh Ben – thanks for commenting. The reason for login is simple. We want to achieve two things: first, avoid spambots (hard, but we can do a lot) and second, have some clue who we’re interacting with and what their interests are. That’s a tradeoff we happily pursue, even if it seems ironic. We are looking at this more closely because we think that content personalization matters. That is relatively easy at our scale of operation, provided people trust us not to sell or broker the information. However, we do need to feed back to partners on what is happening and, while we are explicit that we will not provide chapter and verse on who is doing what, we want to help them create better content as well. Ergo, login etc.

        To your point about digital ads: you make very fair points, but the average person doesn’t care about the technology. They only see the outcomes of the data they are trading. The problem is that most people I’ve spoken to do see the current crop of adtech solutions as kinda creepy. And as the old saying goes, “perception is reality.”

        I’d add that the adtech systems I see in operation are horrible at knowing when I am in buy or potential buy mode. I can’t count the number of repetitive ads I’ve seen for certain goods long after my interest went away. That’s beyond creepy, that’s downright annoying.

    2. says:

      There is a model for just such an API. We call it a ‘data custodian’: an agent of the customer, contracted to manage storage and retrieval of customer data. The custodian, through rules expressed by the customer, manages access rights to PII and other data stored in the customer’s ‘record’. Access to this data is allowed only by explicit authorization of an endpoint by the customer. Without this authorization, the customer is not ‘visible’ in the custodian database.

      We are currently building a patent-pending implementation of the data custodian model in the healthcare domain, where security and privacy are well-known needs.

      Now, if laws were enacted that made possession of PII without first-hand authorization illegal, this model could become the basis of a data currency platform of the future.

    3. says:

      Hi Chris,

      Without wishing to seem ingratiating, I believe you have described the situation beautifully. For all the reasons you describe, a hard core of us are attempting the hi:project, endorsed by the Web Science Trust. Love to chat about this stuff if you have the time.

      http://hi-project.org/

      Regards, Philip.