Clear and unambiguous

Is real consent magical thinking?

As a privacy pro, you’ll have heard it a thousand times. Consent is king. Asking someone’s permission to use their data is often presented as the gold standard of respecting an individual’s rights and freedoms.

Of course, those of us working in the field know that, in practice, consent is incredibly problematic to use – and it was even before the GDPR appeared and introduced the notion that the person giving consent should actually know what they’re agreeing to, rather than having agreement simply tacitly assumed.

Because it’s problematic, some regulators, including the ICO, have argued for years that it should be the last resort when looking for a lawful basis for using someone’s data. Where no other basis is available, consent is required to be “freely given, specific, informed and unambiguous” (GDPR Recital 32) and must be capable of being withdrawn. The same recital goes on to rule out pre-ticked boxes, while the corresponding Article 7 adds more detail on the effects of withdrawal, keeping consent requests clearly distinguishable from other matters, clear language, and all of that good stuff.
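For readers who think in structures rather than recitals, those conditions can be pictured as a checklist that must all hold, with withdrawal switching consent off at any time. The sketch below is purely illustrative – the field names are my own invention, and real-world assessment is qualitative rather than a set of booleans:

    // Illustrative only: the GDPR's conditions for valid consent rendered
    // as a checklist. Field names are invented, not from any standard.
    interface Consent {
      freelyGiven: boolean;   // no detriment or imbalance of power
      specific: boolean;      // tied to a defined purpose, not blanket approval
      informed: boolean;      // the person knew what they were agreeing to
      unambiguous: boolean;   // a clear affirmative act, not a pre-ticked box
      withdrawnAt?: Date;     // must be withdrawable at any time
    }

    // Consent only works as a lawful basis while every condition holds
    // and it has not been withdrawn.
    function isValidBasis(c: Consent, now: Date): boolean {
      const conditionsMet =
        c.freelyGiven && c.specific && c.informed && c.unambiguous;
      const withdrawn =
        c.withdrawnAt !== undefined && c.withdrawnAt.getTime() <= now.getTime();
      return conditionsMet && !withdrawn;
    }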

Of course, while the GDPR added a fair amount of detail, the concepts it was built on are hardly new. For example, one element is the ability to withdraw consent without detriment; a core reason why, in the context of employment, consent was not (or at least should not have been) the lawful basis for processing staff data well before the GDPR.

This doesn’t hold true everywhere, though. Many Asia-Pacific jurisdictions see consent as the gold standard – processing should not take place without the individual’s consent – though in most cases it is not hedged about with the requirements of the EU version. In the US, the CCPA, while not requiring prior consent for processing, provides an individual with the absolute right to withdraw their permission to ‘sell’ their data – to un-consent.

Why consent?

The idea behind consent, and why it is such a cornerstone of data protection regimes, is bound up with the idea of privacy as a fundamental right: the ‘right to be let alone’ or the ‘right to private and family life’, depending on which philosophical basis you prefer. Consent is the obvious conclusion – asking the individual for their permission not to let them alone – and is therefore elevated to the ‘best’ way of respecting individuals’ rights.

Except that it doesn’t work.

For some straightforward transactional relationships the simple concept of consent may be valid – signing up to an old-fashioned mailing list or managing newsletter preferences are the only real, practical examples I can think of in the digital space, and that’s assuming the list uses a half-decent preference centre. Being (very) generous, we could throw in the so-called ‘soft opt-in’ where you’re buying services.

But broaden it out to the rest of the digital space – not just the obvious social media and advertising operations, but licence agreements, terms and conditions, cookies, privacy notices: all of the paraphernalia that supposedly sets parameters around what the recipients of data do with it, while normally seeking to exclude as much liability as possible. As a user – even as a privacy professional – those terms are (a) usually so long that you would need to block out at least an hour for a proper read-through, (b) mostly impenetrable, and (c) non-negotiable – even for corporate clients, let alone the average individual.

Then there are cookies. It’s widely acknowledged that the regulatory regime on cookies and similar technologies is not ideal – but the idea that it can be resolved through consumer consent is unrealistic at best. At its most basic, cookie compliance rests on the assent of the user via an affirmative click. Some websites go further, splitting cookies into functional, performance and advertising/tracking categories. Some go further again and offer a list of the third parties with whom your data will be ‘shared’ – i.e. sold for marketing purposes.
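To make the mechanics concrete, here is a minimal sketch of that category-based model (TypeScript, with every name invented rather than taken from any real consent-management platform): every category defaults to off, and consent is recorded only off the back of an affirmative click:

    // Hypothetical cookie-consent model; the categories and storage key
    // are illustrative, not from any real consent platform.
    type CookieCategory = "functional" | "performance" | "advertising";

    interface ConsentRecord {
      timestamp: string;                        // when the choice was made
      choices: Record<CookieCategory, boolean>; // true only after an opt-in
    }

    // Everything defaults to off: silence, inactivity or a pre-ticked box
    // is not consent.
    const defaultConsent = (): ConsentRecord => ({
      timestamp: new Date().toISOString(),
      choices: { functional: false, performance: false, advertising: false },
    });

    // Called only from an explicit click handler on the banner.
    function recordConsent(granted: CookieCategory[]): ConsentRecord {
      const record = defaultConsent();
      for (const category of granted) {
        record.choices[category] = true;
      }
      localStorage.setItem("consent", JSON.stringify(record));
      return record;
    }

    // Scripts should check before setting any non-essential cookie.
    function hasConsent(category: CookieCategory): boolean {
      const stored = localStorage.getItem("consent");
      if (!stored) return false; // no record: treat as refused
      return (JSON.parse(stored) as ConsentRecord).choices[category];
    }

Even in this toy form, notice how much rests on the page owner: what the categories mean, which cookies fall into each, and whether downstream scripts actually honour the stored choice.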

The issue with all of these approaches, but particularly the first, is what they actually mean to the user. The ubiquity and placement of cookie banners often make it impossible to see content without getting rid of the thing, and the fastest way to achieve that is to accede – i.e. to consent. A single click sits between you and your data being farmed out to a potentially very large number of recipients. And even if you trust the website or brand, how likely is it that they themselves know all of the recipients of ad data from their site?

The same largely holds true for the second option – though at least there is some hope of the categorisation being accurate. Even then, it’s down to the page owner to define the categories, and then hope that someone downstream (for example, a large social media or other tech company) doesn’t fiddle with the code in the meantime. Here too, while the user can affirm their choice, what level of actual control is being exerted? And how good are companies at seeking fresh consent when that list changes – assuming, of course, that they even know it has changed, which, unless they are using automated scanning, could be a problem.
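On the automated-scanning point, the core of such a check is little more than a set comparison. The sketch below uses invented names and hard-coded data – a real tool would crawl the site’s pages and log the third-party domains actually being called – and flags any recipient the user never consented to:

    // Hypothetical drift check: compare the recipients a user consented to
    // against those currently observed loading on the page.
    interface VendorAudit {
      newVendors: string[];     // present now, never consented to
      removedVendors: string[]; // consented to, no longer present
    }

    function auditVendors(consentedTo: string[], observedNow: string[]): VendorAudit {
      const consented = new Set(consentedTo);
      const observed = new Set(observedNow);
      return {
        newVendors: observedNow.filter((v) => !consented.has(v)),
        removedVendors: consentedTo.filter((v) => !observed.has(v)),
      };
    }

    // Hard-coded example: "new-tracker.example" appeared after consent
    // was given, so the original consent no longer covers reality.
    const audit = auditVendors(
      ["adnetwork.example", "analytics.example"],
      ["adnetwork.example", "analytics.example", "new-tracker.example"],
    );

    if (audit.newVendors.length > 0) {
      // The banner should be re-presented before these vendors fire.
      console.log("Re-consent required for:", audit.newVendors);
    }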

Finally, the new kid on the block – and one which is already the subject of investigation by at least one Supervisory Authority – is to give individuals the option of consenting (or not) to each individual data recipient. Now, on the face of it, this would seem to be the ultimate in choice: the consumer can, line by line if they wish, confirm exactly which companies they are happy to receive their data.

Except that it doesn’t really work that way. For the most part, the companies listed would be meaningless to consumers – they largely consist of ad providers, not brands or final recipients. The list is also inevitably very extensive, with the effect that very few consumers would have the time or willpower to go through it and actively consent – even if they could properly identify what they were consenting to.

Consent meets reality

I recognise that this is a hugely problematic area for businesses, particularly those in AdTech, to negotiate, and that some of the above approaches are based on the best endeavours of those involved; but we do need to be realistic about this. Consent cannot be clear and unambiguous when the individual is unable to decipher, or dig down far enough to understand, exactly what the outcome of the use of their data will be.

This is most obviously present in social media, and also in the app space – most of which is predicated on producing enough data about you as a consumer to target you with advertising, and so improve the fungibility of the data produced by those platforms, while actively seeking to bypass real individual choice.

Even leaving aside the more high-profile, politically nefarious activities, there has to be a serious ethical question mark over the proportionality of collecting data on individuals in order to sell them ‘stuff’, and over whether there can ever be a true and equal agreement between an individual and companies whose business is predicated on harvesting the most intimate details of your life…

Taking this back to first principles: that right to be let alone with your private and family life. Can we meaningfully contract out of that – often blindly? Would we accept that for any other fundamental right and freedom – especially when the consequences can be so invasive?

Is there a solution?

So far, this issue has been addressed by:

  • Huge privacy notices and T&Cs;
  • Long lists of data recipients;
  • Browser settings;
  • (generally ignoring it and carrying on regardless).

It’s difficult to see, without a complete reappraisal, how certain sectors can continue operating even close to how they do now, while doing more than paying lip service to the concept of privacy as a fundamental right. Comments from Facebook’s CFO in relation to their recent fine in Illinois illustrate part of the problem – despite hundreds of millions and even billions of dollars in fines, regulatory action over privacy infringement is treated as just another cost of doing business; exactly the scenario that the GDPR and CCPA are supposed to combat.

The GDPR does, of course, give regulators the power to order the cessation of processing activities – but it’s difficult to see this being used against companies so large that they dwarf even some European economies, and where losing their head offices would be catastrophic for some states. While there is such an unequal relationship even between governments and the massive data aggregators, the individual consumer seems unlikely to achieve much success except in fleeting terms – something reinforced by the recent Advocate General’s opinion in the Schrems 2 case, which puts state Supervisory Authorities and individual Controllers back in the frame in terms of enforcement; something that, against the size of some operators, they are simply not equipped to do.

Of course, unless individuals are willing, en masse, to jump ship and reject certain behaviours and technologies, maybe there isn’t a way through – or even a need for one; maybe the social contract of swapping privacy and data for convenience of supply and social networking is here to stay. That’s a question that may yet take many years to answer…

These aren’t the clauses you’re looking for…

Phew! Well, that was a close one! The Advocate General of the CJEU has just massively saved everyone’s bacon! Now that the EU model clauses have been declared valid, we can continue to sleep safe in the knowledge that data can keep flying around, protected by the twelve or so clauses that have served so well for the last decade or more, without having to do much more work than continue to copy and paste them into Data Processing Agreements and Contracts.

Except… that’s not really what’s just happened… or what the Advocate General of the CJEU has just said… or, indeed, how you’re supposed to use the Standard Contractual Clauses.

The case, brought by the Schrems team that brought us the original Safe Harbor ruling, was predicated on the same basic problem as in the original Schrems case (hence Schrems 2) – that when data is sent to be processed outside of the EU, it becomes accessible by the government of the country in which it is being processed – namely, in the case of Facebook Ireland, the government of the United States.

Further, it argued that individual EU data subjects will not be able to enforce remedies over that processing, and that the absence of controls to protect the rights of data subjects makes the use of the SCCs invalid in certain circumstances – which the Schrems team had asked the Irish Data Protection Commissioner to act on by suspending the transfer between Facebook Ireland and its US parent.

There would appear to be an element of reprimand to the Irish DPC in the AG’s opinion – that it was, really, for the Irish DPC to take a view and act, rather than bump the case up to the CJEU. However, the opinion goes further than that.

The AG also finds that the Commission decision on the SCCs is valid – not because the clauses in themselves provide adequate safeguards in all circumstances, but because it is the responsibility of the Data Controller/Exporter to take a view on whether the SCCs can actually be enforced, and, potentially, on whether the destination country’s security services may decide to snoop on the data held. And if the Controller doesn’t do it, the supervisory authority should.

Starting with the first point, it is unclear how individual companies are supposed to make that assessment in any rational way, when even other national governments don’t know the extent of their allies’ intelligence apparatus. Can companies be expected to make informed political, social and intelligence-based decisions – and should that be done vendor by vendor, or blanket across a whole jurisdiction? It becomes a judgement call on the part of the Controller or their DPO, based on… well, what? Gut feeling? Prejudice?

So the fallback position is that it will then be up to the competence of the Supervisory Authority – and again, what resources it would use to make an adequate assessment is unclear. If the Court confirms the AG’s view that it is for individual DP Authorities to decide – presumably using the Article 23 provisions – whether the SCCs can lawfully be used in each individual case, then we are suddenly in a rather more complicated position, unless the European Data Protection Board can come to common ground – and in which case, what then? A white list of jurisdictions where it’s okay to use the SCCs? That feels like adequacy-lite, and not really a sustainable position – placing much more onus on regulators, while also starting to play on territory normally occupied by the Commission.

In a sense, it’s a difficult position to fault – the EU courts and Commission cannot legislate (literally) for the actions of foreign security services, and they recognise that national security is a primary duty of any government, and also a reserved matter under the EU treaties – for EU governments at least.

The problem is, though, that it does rather seem to throw the protections offered by almost any transfer mechanism (the AG repeatedly seems to note that any mechanism – including Privacy Shield – has to be subject to the review of any relevant Supervisory Authority, regardless of the Commission’s powers) under the large double-decker bus of ‘national security’, however any individual government decides to interpret it – and there is not exactly global consistency on that, after all.

Underneath all of this is the fact that the SCCs are often used as a generic compliance tool: easy to drop in, and oft-added simply to tick the compliance box. By rights, those days should already be departing – GDPR compliance is too complex to simply bolt on SCCs without front-ending other DP clauses – but those clauses are often (in the author’s experience) not well understood or, in some cases, not well drafted.

Last year, I attended a briefing with the Irish DPC about BCRs, and she correctly made the point that BCRs are a vehicle for GDPR compliance – but that they don’t go much further than GDPR does anyway, especially with accountability measures included.

In the same vein, though, the SCCs are also intended to provide that parity of protection to data subjects in the absence of other adequate safeguards. That is what the model clauses are for. That is certainly what has to be read into the AG’s interpretation.

If that is to be the case then two things are necessary:

One, the Commission must hasten its work on revised Clauses, and clarify its own position on the elements of the AG’s view, alongside opinions from the EDPB. Clauses that are a decade old don’t help any of us in trying to comply with regulation that now has such large teeth.

Two, on the flipside, those preparing contracts must move on from treating the SCCs as just another bolt-on to tick the compliance box, and start really considering whether they work appropriately in a given circumstance – including, as the AG argues, in respect of concerns about how jurisdictions may treat data. It’s not uncommon for InfoSec colleagues to issue guidance on taking clean kit into certain jurisdictions – the same thinking may need to be applied to other types of data flows as well; the SCCs are not, and should not be treated as, a panacea.

In all, I don’t think the debate about the SCCs is exactly done and dusted. They may be ‘safe’ for now, in terms of validity, but with the onus placed back on Controllers and Supervisory Authorities to be more vigilant in their use – if that position is supported by the Court in a few months’ time – it’s likely that many organisations will need to move, if not away from the SCCs altogether, then at least to a position of using them in a far more considered way.