...the restricted committee notices that the information provided by GOOGLE is not easily accessible for users...Moreover, the restricted committee observes that some information is not always clear nor comprehensive...
The company GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons.
First, the restricted committee observes that the users’ consent is not sufficiently informed...Then, the restricted committee observes that the collected consent is neither “specific” nor “unambiguous”.
Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations. The restricted committee recalls that the extent of these processing operations in question imposes to enable the users to control their data and therefore to sufficiently inform them and allow them to validly consent.
Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement.
Not so fast
There are problems attached to this ruling, but as of the date of this story, Google has not said publicly whether it will appeal the decision. Here are a few issues that immediately spring to mind.
- One of the more obvious issues is that the fine appears arbitrary and may fall outside the GDPR, since CNIL only took on the case because Ireland (where Google has its EU HQ) was not legally equipped to do so.
- Another issue relates to the manner in which the complaint was handled. Moving the locus of jurisdiction back to France shows that, despite GDPR being an EU regulation, nation states may not have internal legislation aligned with its requirements.
- Some colleagues believe that any attempt that requires lawyers to understand the nature of, and manner in which, data is managed is fraught with difficulty. This is a new field of law, but if past decisions are an indicator, it would not surprise me if CNIL's understanding of Google's practices is incomplete.
- Another issue relates to what CNIL discovered. In discussing the role of links to other parts of terms and conditions, I was reminded of Brian Sommer's bugbear: embedded contract links, which can be changed and which, in turn, change the terms, often with implied financial penalty. My view here is that CNIL is right to point up the complexity of current arrangements but misses the more important point about the value of links when considering a set of terms.
Google could solve many of its self-inflicted problems by re-organizing the terms under which it offers services such that the user has a straightforward path down which they can navigate. This might work in much the same way as I can switch services and service permissions on or off on my iPhone.
Right now we're hearing the usual PR-laden words of comfort that Google takes GDPR seriously. I'm sure it does. When it's in its interests to do so.
But there is another set of arguments that take the question of privacy and consent as envisaged by the GDPR to an entirely different place.
Monetize our data?
In this week's membership-restricted Exponential View, Azeem Azhar laid out arguments around the ways in which monetizing personal data is a tough nut to crack. His starting point was a piece by Will.i.am, who wrote in The Economist that: "We need to own our data as a human right—and be compensated for it." This is not a new idea, and several firms are trying it out, albeit in different ways and at different stages of market readiness. Examples include Datacoup, Datawallet and People.io, which use blockchain technology.
But as Azhar points out, data is not like a physical good you can own and over which you have direct agency. Azhar cites more reasons, including the fact that, on their own, snippets of data have little value to the individual. But when collected and analyzed at scale such that patterns emerge, data takes on significant value that can be traded or used to provide additional services back to the data originator.
That was an argument I used in the early days of SaaS accounting, when I argued that data collected by SaaS vendors has almost no value at the individual level but enormous value on both sides of the market when taken in aggregate and over time. What's interesting here is that there have been precious few SaaS-initiated programs to take advantage of those data. The closest I've seen is the RBS acquisition of FreeAgent last year. What's happened since is not known, but part of the rationale has to have been access to the data that FreeAgent collects.
Azhar goes on to remind us that data can influence our behavior and preferences, and that it does so at ever more frequent intervals today.
...in some sense our data contains some seed kernel of our agency, to be protected or stewarded.
Finally, the market is not the best arbiter of policy in every situation. Markets can fail. Markets are poor stewards of shared resources. They may not work well where there is information asymmetry, or in transactions where the pay-off happens some time in the future. Experienced organisations find it difficult (or impossible) to value any given dataset. It is unlikely an individual can.
It's a problem for sure but not one I see as insurmountable.
When viewed through the lens of the marketplace, data takes on characteristics that seem to me similar to those of copyright. When I write something, I initially own the copyright to those works (which are only data by another name). However, I can monetize those words/data in exchange for a fee. In doing so, I hand over certain rights to whomever buys those words. There are a variety of options available to me, including restricted or full rights.
At diginomica, we operate under a particular license that allows third parties to use some of our words/data, but in specific ways and for no fee, provided that certain conditions are met. Where a firm wanted full republishing rights, we would require a fee of some kind, although I am unsure under what circumstances that might happen without downstream implications for such issues as SEO. I can envisage circumstances where words/data are remixed or repurposed, and that opens up fresh monetization opportunities. Cannot the same or similar be said to be true of personal data, which is mixed and remixed for a variety of purposes? That's the crux of the Will.i.am argument.
An alternative view
In the alternative, and recognizing the 'kaleidoscope' nature of data, Azhar proposes that all interested parties be guided by:
nuanced notions of respect, stewardship, rights and collective benefit which can then drive our regulations, policy & strategies.
Well yes. That would be nice. But given the market power and influence that firms like Google, Facebook, Netflix, Verizon and many others exert, is it even conceivable that policy makers and activists will be able to make a fist of that approach?
There is one scenario in which I can see a compromise with the potential to move the ball forward. This week, Stuart Lauchlan reported that at Davos, no less a luminary than Eileen Donahoe, Executive Director at the Stanford University Global Digital Policy Incubator said:
We’ve seen an erosion of trust among citizens that they will be the beneficiaries of digital technology. I would also highlight the erosion of confidence among governments, especially democratically-oriented ones, that they can live up to their promise and obligations to simultaneously protect the liberty and rights of citizens, national security and protect democratic process, all while advancing economic growth...
...People don’t have confidence that they will partake in the economic upsides and that the benefits and growth are going into fewer and fewer hands...
...Everyone has woken up to the idea that the digitization of society has gargantuan consequences for society. Finally people are understanding that privacy matters for the exercise of liberty. The simple idea is that if everything you do and say is tracked and monitored by governments and private sector companies, then it will have a chilling effect on what you feel free to say and where you feel free to go.
Take those statements together and you can start to see how, by tying privacy and economics together, you get the start of a framework that might work. However, it will require the collective will of politicians, technology vendors, academics and other interested parties to make any of this work.
Given all the potential hurdles and roadblocks on the path, I imagine we will be talking about this for a long time to come. But given the degree to which the bonds of trust between vendors and consumers have broken down, is trust enough of an incentive for the vendors to throttle back and consider the steps they can take?
We are in the very early stage of seeing how GDPR shakes out and we know there are a slew of complaints waiting to be adjudicated on. Will the results of those decisions also act as an incentive?
Endnote - I highly recommend taking up the paid membership of Exponential View. The story which partially inspired this one includes valuable links to related content that speaks to the economics of data and its value.