Clear and unambiguous

Is real consent magical thinking?

As a privacy pro, you’ll have heard it a thousand times. Consent is king. Asking someone’s permission to use their data is often presented as the gold standard of respecting an individual’s rights and freedoms.

Of course, those of us working in the field know that, in practice, consent is incredibly problematic to use – and it was even before the GDPR appeared and introduced the idea that the person giving consent should actually know what they’re agreeing to, rather than having their agreement simply tacitly assumed.

Because it’s problematic, some regulators, including the ICO, have argued for years that it should be the last resort when looking for a lawful basis for using someone’s data. Where there isn’t another option, consent is required to be “freely given, specific, informed and unambiguous” (GDPR Recital 32) and must also be capable of being withdrawn. The same recital goes on to rule out pre-ticked boxes, while the corresponding Article 7 adds more detail on the effects of withdrawal, keeping consent requests separate from other matters, clear and plain language, and all of that good stuff.
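
Purely by way of illustration – this is a sketch of my own, not anything mandated by the Regulation – the record a controller keeps in order to demonstrate consent might capture those elements something like this:

```typescript
// A sketch of the consent record a controller might keep in order to
// demonstrate consent. Field names are illustrative only.
interface ConsentRecord {
  dataSubjectId: string;
  purpose: string;                // “specific”: one purpose per record, not a bundle
  informationShown: string;       // “informed”: reference to the notice actually presented
  givenByAffirmativeAct: boolean; // “unambiguous”: no pre-ticked boxes or inactivity
  givenAt: string;                // ISO timestamp of when consent was given
  withdrawnAt?: string;           // withdrawal must be as easy as giving consent
}

// Consent only counts while it has been affirmatively given and not withdrawn.
function isConsentValid(record: ConsentRecord): boolean {
  return record.givenByAffirmativeAct && record.withdrawnAt === undefined;
}
```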

Of course, while the GDPR added a fair amount of detail, the concepts it builds on are hardly new. One element, for example, is the ability to withdraw consent without detriment; a core reason why, in the context of employment, consent was not (or perhaps should not have been) the lawful basis for processing staff data well before the GDPR.

This doesn’t hold true for other jurisdictions. Many Asia-Pacific jurisdictions see consent as the gold standard – processing should not take place without the individual’s consent – though in most cases it is not wrapped around with the requirements of the EU version. In the US, the CCPA, while not requiring prior consent for processing, provides for an individual’s absolute right to withdraw their permission to ‘sell’ their data – to un-consent.

Why consent?

The idea behind consent, and why it is such a cornerstone of data protection regimes, is bound up with the idea of privacy as a fundamental right; the ‘right to be let alone’ or the ‘right to private and family life’, depending on which philosophical basis you prefer. Consent is the obvious corollary – asking the individual for their permission not to leave them alone – and is therefore elevated to the ‘best’ way of respecting individuals’ rights.

Except that it doesn’t work.

For some straightforward transactional relationships the simple concept of consent may be valid – signing up to an old-fashioned mailing list or managing newsletter preferences are the only real, practical examples I can think of in the digital space, and that assumes the list uses a half-decent preference centre. Being (very) generous, we could throw in the so-called ‘soft opt-in’ where you’re buying services.

But broaden it out to the wider digital space – not just the obvious social media and advertising players, but licence agreements, terms and conditions, cookies, privacy notices, all of the paraphernalia that supposedly sets parameters around what the recipients of data do with it, while normally seeking to exclude as much liability as possible. As a user – even as a privacy professional – those terms are (a) usually so long that you would need to block out at least an hour to do a proper read-through, (b) mostly impenetrable, and (c) non-negotiable – even for corporate clients, let alone the average individual.

Then there are cookies. It’s widely acknowledged that the regulatory regime on cookies and similar technologies is not ideal – but the idea that it can be resolved through consumer consent is unrealistic at best. At its most basic, cookie compliance rests on the user’s assent via an affirmative click. Some websites go further, splitting cookies into functional, performance and advertising/tracking categories. Some go further again and offer you a list of third parties with whom your data will be ‘shared’ – i.e. sold for marketing purposes.
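
To make the category model concrete, here’s a minimal sketch – in TypeScript, with names of my own invention rather than any particular consent tool’s API – of the kind of preference object a category-based banner records before non-essential scripts are allowed to run:

```typescript
// Minimal sketch of category-level cookie consent, assuming a hypothetical
// banner implementation; the categories mirror the common functional /
// performance / advertising split described above.
type CookieCategory = "strictly_necessary" | "functional" | "performance" | "advertising";

interface CategoryConsent {
  grantedAt: string;                          // ISO timestamp of the affirmative click
  categories: Record<CookieCategory, boolean>;
}

// Default state before the user clicks anything: only strictly necessary
// cookies run; everything else stays off until an affirmative choice is made.
const defaultConsent: CategoryConsent = {
  grantedAt: "",
  categories: {
    strictly_necessary: true,                 // exempt from the consent requirement
    functional: false,
    performance: false,
    advertising: false,
  },
};

// Scripts gate themselves on the stored choice, e.g.:
function mayLoad(category: CookieCategory, consent: CategoryConsent): boolean {
  return consent.categories[category];
}
```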

The issue with all of these, but particularly the latter, is what they actually mean to the user. The ubiquity and placement of cookie banners often mean that it’s impossible to see content without getting rid of the thing, and the fastest way to do that is to accede – i.e. to consent. A single click between you and your data being farmed out to a potentially very large number of recipients. And even if you trust the website or brand, how likely is it that they know all of the recipients of ad data from their site?

The same largely holds true for the second option – though here at least there is some hope of the categorisation being accurate. Even here, though, it’s down to the page owner to define the categories, and then hope that someone downstream (for example a large social media or other tech company) doesn’t fiddle with the code in the meantime. Here too, while the user can affirm their choice, what level of actual control is being exerted? And how good are companies at seeking fresh consent when that list changes – assuming, of course, that they even know it has changed, which, unless they are using automated scanning, is far from guaranteed.
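
On the scanning point, a hypothetical check might compare the cookies disclosed when consent was captured against those actually observed on the live page, flagging drift that ought to trigger fresh consent (the structure and names below are illustrative, not any specific vendor’s tool):

```typescript
// Hypothetical drift check: compares the cookie names declared at the time
// consent was captured against those observed on the live page, so the site
// owner knows when downstream changes have made the recorded consent stale.
interface ConsentSnapshot {
  capturedAt: string;
  declaredCookies: Set<string>; // cookies listed in the banner when consent was given
}

function findUndisclosedCookies(snapshot: ConsentSnapshot, observed: Set<string>): string[] {
  // Anything set on the page but never disclosed to the user is a candidate
  // for re-consent (or removal).
  return Array.from(observed).filter((name) => !snapshot.declaredCookies.has(name));
}

// Example: a tag added by a downstream ad partner after consent was captured.
const snapshot: ConsentSnapshot = {
  capturedAt: "2020-01-15T10:00:00Z",
  declaredCookies: new Set(["_ga", "_gid", "session_id"]),
};
const observedNow = new Set(["_ga", "_gid", "session_id", "_fbp"]);

console.log(findUndisclosedCookies(snapshot, observedNow)); // ["_fbp"]
```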

Finally, the new kid on the block – and one which is already the subject of investigation by at least one Supervisory Authority – is to offer individuals the option of consenting (or not) to every individual data recipient. Now, on the face of it, this would seem to be the ultimate in choice; the consumer can, line by line if they wish, confirm which companies they are happy to have receive their data.

Except that it doesn’t really work that way. For the most part the companies listed would be meaningless to consumers – they largely consist of ad providers, not brands or final recipients. The list is also inevitably very extensive, with the effect that very few consumers would have the time or will-power to go through and actively consent – even if they could properly identify what they were consenting to.
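
To give a feel for the scale problem, here’s an illustrative sketch of a per-vendor consent record – the shape loosely mirrors the vendor-list approach described above, but the names and numbers are mine, not any particular framework’s format:

```typescript
// Illustrative per-vendor consent record. Real vendor lists of this kind
// commonly run to several hundred entries, which is the practical problem:
// a meaningful line-by-line review is more than most users will ever do.
interface VendorConsent {
  vendorId: number;
  vendorName: string; // usually an ad-tech intermediary, not a brand the user knows
  consented: boolean;
}

function buildDefaultVendorConsents(vendorNames: string[]): VendorConsent[] {
  // A rights-respecting default is “no” for every vendor until the user
  // actively opts in; many real banners invert this, which is the complaint.
  return vendorNames.map((vendorName, i) => ({
    vendorId: i + 1,
    vendorName,
    consented: false,
  }));
}

const vendors = buildDefaultVendorConsents([
  "ExampleAdExchange", // hypothetical names for illustration only
  "ExampleDSP",
  "ExampleDataBroker",
  // ...in practice, hundreds more
]);

const optedIn = vendors.filter((v) => v.consented).length;
console.log(`${optedIn} of ${vendors.length} vendors opted in`); // "0 of 3 vendors opted in"
```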

Consent meets reality

I recognise that this is a hugely problematic area for businesses – particularly those in the AdTech space – to negotiate, and that some of the approaches above are based on the best endeavours of those involved; but we do need to be realistic about this. Consent cannot be clear and unambiguous when the individual is not able to decipher, or dig down far enough to understand, exactly what the outcome of the use of their data will be.

This is most obvious in social media, and also in the app space – much of which is predicated on producing enough data about you as a consumer to target you with advertising, and so improve the fungibility of the data those platforms generate, while actively seeking to bypass real individual choice.

Even leaving aside the more high-profile, politically nefarious activities, there has to be a serious ethical question mark over the proportionality of collecting data on individuals in order to sell them ‘stuff’, and over whether there can ever be a true and equal agreement between an individual and companies whose business is predicated on harvesting the most intimate details of your life…

Taking this back to first principles: the right to be let alone with your private and family life. Can we meaningfully contract out of that, often blindly? Would we accept that for any other fundamental right and freedom – and when the consequences can be so invasive?

Is there a solution?

So far, this issue has been addressed by:

  • Huge privacy notices and T&Cs;
  • Long lists of data recipients;
  • Browser settings;
  • (generally ignoring it and carrying on regardless).

It’s difficult to see, without a complete reappraisal, how certain sectors can continue operating anything close to the way they do now while doing more than paying lip-service to the concept of privacy as a fundamental right. Comments from Facebook’s CFO in relation to their recent fine in Illinois illustrate part of the problem – despite hundreds of millions, even billions, of dollars in fines, regulatory action over privacy infringement is treated as just another cost of doing business; exactly the scenario that the GDPR and CCPA are supposed to combat.

The GDPR does, of course, give regulators the power to order the cessation of processing activities – but it’s difficult to see this being used against companies so large that they dwarf even some European economies, and where losing their head offices would be catastrophic for some states. While there is such an unequal relationship even between governments and the massive data aggregators, the individual consumer seems unlikely to achieve much success except in fleeting terms – something reinforced by the recent Advocate General’s opinion in the Schrems II case, which puts national Supervisory Authorities and individual controllers back in the frame in terms of enforcement; something that, against the size of some operators, they are simply not equipped to do.

Of course, unless individuals are willing, en masse, to jump ship and reject certain behaviours and technologies, maybe there isn’t a way through – or even a need for one; maybe the social contract of swapping privacy and data for convenience of supply and social networking is here to stay. That’s a question that may take many years yet to answer…
