Archive for the ‘95 directive’ Category

Can you amend EU model clauses?

Posted on November 17th, 2015

Since the fall of Safe Harbor, there’s been a wave of data export conservatism that’s spread across Europe – ranging from EU data protection authorities casting doubt on the longevity of other data export solutions, through to EU customers delaying (or, in some cases, even cancelling) deals with US counter-parties over data export concerns.

Reports that Safe Harbor 2.0 may be on its way have done little to allay these woes because, whatever the optimism of the political parties involved in these discussions, the fact remains that any new framework adopted will face significant adoption challenges.  For a start, existing Safe Harbor companies will almost certainly need to re-certify under the new framework (possibly with greater checks and balances by way of third-party audit); certain DPAs around the EU will remain highly skeptical of – and so likely inclined to investigate – any transfers made under revised US-EU Safe Harbor arrangements; and many EU customers who have been ‘once bitten, twice shy’ by the current Safe Harbor’s collapse will be reluctant to move away from solutions they see as being more ‘tried and trusted’, i.e. model clauses.

So, rightly or wrongly, that means for the short- to mid-term model clauses are likely to remain the solution of choice for many companies engaging in global data exports, whether intra-group or to US (or wider international) suppliers.  Certainly, this has been my personal experience to date – virtually every EU-US deal I’ve been engaged on in recent weeks has been dominated by discussions concerning the need for model clauses.

The problem with model clauses

While they are probably the only immediately viable legal solution for data exports right now, it’s no secret that model clauses – especially the 2010 controller-to-processor model clauses – suffer from significant problems: namely, the potential for on-premise audits, consent and contractual flow-down requirements when appointing sub-processors, and an absence of liability limitation provisions.  In a one-off arrangement between just two parties, these obstacles might be surmountable and a commercially acceptable risk; in a cloud-based environment, where the supplier hosts its solution on third-party infrastructure with vendors who won’t negotiate their terms and provides a multi-tenanted, uniform offering across all customers, they present a very significant problem.  The accrued risk is potentially huge.

Simple example: imagine a US supplier has 5,000 EU customers, and at any one time 1% of those decide to exercise on-premise audit rights under the 2010 model clauses (e.g. in the wake of a data incident).  Suddenly, the supplier finds itself managing 50 simultaneous on-premise audits – a significant business disruption and a threat to the security of the data it hosts.  Or imagine instead that 10% of its EU customers insist on case-by-case consents every time the supplier wishes to appoint a new sub-processor (which may be something as simple as another group company providing technical support to EU customers) – this means approaching 500 customers for consent.  What if one (or more) of them refuses?
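To see how quickly this burden scales, here is a back-of-the-envelope sketch in Python.  The customer count and percentages are simply the hypothetical figures from the example above, not real data:

```python
# Rough illustration of how model clause obligations scale with an EU customer base.
# The inputs are the hypothetical figures from the example above, not real data.

def concurrent_burden(eu_customers: int, audit_rate: float, consent_rate: float) -> dict:
    """Estimate simultaneous on-premise audits and sub-processor consent requests."""
    return {
        "simultaneous_on_premise_audits": round(eu_customers * audit_rate),
        "sub_processor_consent_requests": round(eu_customers * consent_rate),
    }

print(concurrent_burden(eu_customers=5_000, audit_rate=0.01, consent_rate=0.10))
# -> {'simultaneous_on_premise_audits': 50, 'sub_processor_consent_requests': 500}
```

The point of the arithmetic is simply that the risk accrues across the whole customer base: obligations that look manageable in a single bilateral deal multiply with every additional set of model clauses signed.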

So can you amend the model clauses?

Bearing the above in mind, it should be no surprise that suppliers, when asked to sign model clauses, will often seek to amend their more onerous provisions, either by way of a side agreement or directly within the model clauses themselves.  But, when they do, they’re often met with a very blunt response: “You can’t amend the model clauses!”

Having encountered this argument many times when negotiating on behalf of internationally-based suppliers, I want to set the record straight on this point.  You absolutely can amend the model clauses, provided your terms are purely commercial in nature and do not impact the protection of the data, nor the rights of data subjects or supervisory authorities.

If you’re not convinced you can amend the model clauses, then see Clause 10 of the 2010 Controller-to-Processor Model Clauses: “The parties undertake not to vary or modify the Clauses. This does not preclude the parties from adding clauses on business related issues where required as long as they do not contradict the Clause.” (emphasis added).  In fact, as if to emphasize the point, the 2010 Model Clauses even include an “illustrative” and “optional” indemnification clause.

Similar language exists in the 2004 Controller-to-Controller Model Clauses too at Clause VII: “The parties may not modify these clauses except to update any information in Annex B, in which case they will inform the authority where required. This does not preclude the parties from adding additional commercial clauses where required.” (emphasis added).  (In the interests of completeness, the original 2001 Controller-To-Controller Model Clauses do not expressly permit the addition of commercial clauses, which is as good a reason as any to avoid using them.)

And, if that weren’t enough, even the Article 29 Working Party has weighed in on this issue with its FAQs on the 2010 Model Clauses: “7) Is it possible to add commercial clauses to the Model Clauses?  As clearly stated in clause 10, parties must not vary or modify the Model Clauses, but this shall not prevent the parties from adding clauses on business-related issues where required, as long as they do not contradict the Model Clauses.”

Should you amend the model clauses?

First things first, if you want to amend the model clauses, it’s very important you do so in a considered way that is respectful of the rights the model clauses aim to protect.  Don’t go doing things like removing third party beneficiary rights owed to data subjects or flat out refusing audit rights – that cuts right to the heart of the protections that the model clauses are intended to provide and will never ever be acceptable, either to counter-parties or to supervisory authorities.

Any amendments you make should be purely commercial in nature, or intended to explain how some of the model clause rights should work in practice.  For example, you might choose to limit the liability between the two parties to the model clauses (but not to data subjects!) by reference to liability caps agreed within a master services agreement between the parties.  Alternatively, you might seek a general, upfront consent from the EU data exporter to the data importer’s appointment of sub-processors, provided the appointed sub-processors fulfill the requirements of the model clauses.  Or you might seek to explain how the EU data exporter can exercise its model clause audit rights against the data importer in practice – for example, through reliance on the data importer’s independent third-party audit certifications or written responses to audit questionnaires.

As a final consideration, if you do amend model clauses, be aware that this may trigger regulatory notification or authorization requirements in some Member States.  This doesn’t mean that you can’t amend the model clauses, but it is a point that should be investigated and borne in mind before doing so.

When doing so, ask yourself this question: Is it better to sign model clauses that you know you (or your supplier) will be unable to comply with for legitimate practical reasons, simply to ease any regulatory notification requirements?  Or is contractual honesty between two parties, knowing that they will comply in full with the terms they agree, the better approach, even if this may carry some additional regulatory requirements?

Time for US businesses to consider an anti-surveillance pledge?

Posted on October 23rd, 2015

Breakdown of trust is a terrible thing that often has negative and unpredictable consequences, not just for those directly involved but also for those inadvertently caught up in the ensuing fall-out: for the friends who are forced to choose sides when a relationship breaks up, for the children affected when a marriage breaks down and, yes, for the businesses harmed when transatlantic trust between two great economic regions falls apart.

Because, when all is said and done, the recent collapse of Safe Harbor is ultimately attributable to a breakdown in trust.  Whatever legal arguments there are about data export “adequacy”, Europe has fundamentally lost trust in the safe handling of European citizens’ data Stateside.  The resulting panic was inevitable – international conglomerates worry about their regulatory compliance, US supply-side businesses realize that there is now no effective legal solution for their lawful handling of data, and regulators move to calm nerves, announcing in faintly threatening tones that they will not take enforcement action – for the time being.

Which leaves us all in a quandary.  Businesses must by necessity start putting in place a patchwork of legal solutions designed, if not to achieve compliance, then at least to manage risk, but many of these solutions will not be officially recognized either by law or the regulatory community (how exactly should US processors lawfully onward transfer data to sub-processors?).  Consequently, these solutions – while necessary in an environment where no alternatives exist – will likely fuel further legislative and regulatory speculation that companies are working around data protection rules, rather than with them.

But when compliance becomes impossible, everyone becomes a criminal.  Think of it this way: if you tax me at 40%, I will pay.  But tax me at 90% and I simply can’t afford to, so I won’t – no matter how much I may believe in the principle of taxation or want to be a law-abiding member of society.

An anti-surveillance pledge to restore trust

So where does that leave us?  The real dialogue to have here is one around restoring trust.  This is absolutely critical.  And that is why all businesses – especially US businesses right now – must consider taking an anti-surveillance pledge.

What does an anti-surveillance pledge look like?  It takes the form of a short statement, perhaps no more than two or three paragraphs in length, under which the business would pledge never knowingly to disclose individuals’ data to government or law enforcement authorities unless either (1) legally compelled to do so (for example, by way of a warrant or court order), or (2) there is a risk of serious and imminent harm were disclosure to be withheld (for example, an imminent terrorist threat).  The pledge would be signed by the business’s senior management and made publicly available as an externally-facing commitment to resist unlawful government-led surveillance activities – for example, by posting it on a website or incorporating it within an accessible privacy policy.

Will taking a pledge like this solve the EU-US data export crisis?  No.  Will it prevent government surveillance activities occurring upstream on Internet and telecoms pipes over which the business has no control?  No.  But will it demonstrate a commitment to the world that the business takes its data subjects’ privacy concerns seriously and that it will do what is within its power to do to prevent unlawful surveillance – absolutely: it’s a big step towards accountably showing “adequate” handling of data.

The more businesses that sign a pledge of this nature, the greater the collective strength of these commitments across industries and sectors; and the greater this collective strength, the more this will assist the long, slow process of restoring trust.  Only through the restoration of trust will we see a European legislative and regulatory environment once more willing to embrace the adequacy of data exports to the US.  So, if you haven’t considered it before, consider it now: it’s time for an anti-surveillance pledge.


Getting to know the GDPR, Part 2 – Out-of-scope today, in scope in the future. What is caught?

Posted on October 20th, 2015

The GDPR expands the scope of application of EU data protection law requirements in two main respects:

  1. in addition to data “controllers” (i.e. persons who determine why and how personal data are processed), certain requirements will apply for the first time directly to data “processors” (i.e. persons who process personal data on behalf of a data controller); and
  2. by expanding the territorial scope of application of EU data protection law to capture not only the processing of personal data by a controller or a processor established in the EU, but also any processing of personal data of data subjects residing in the EU, where the processing relates to the offering of goods or services to them, or the monitoring of their behaviour.


The practical effect is that many organisations that have to date been outside the scope of application of EU data protection law will now be directly subject to its requirements, for instance because they are EU-based processors or non-EU-based controllers who target services to EU residents (e.g. through a website) or monitor their behaviour (e.g. through cookies). For such organisations, the GDPR will introduce a cultural change, and there will be more distance to cover to get to a compliance-ready status.

What does the law require today?

The Directive

At present, the Data Protection Directive 95/46/EC (“Directive“) generally sets out direct statutory obligations for controllers, but not for processors. Processors are generally only subject to the obligations that the controller imposes on them by contract. By way of example, in a service provision scenario, say a cloud hosting service, the customer will typically be a controller and the service provider will be a processor.

Furthermore, at present the national data protection law of one or more EU Member States applies if:

  1. the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State. When the same controller is established on the territory of several Member States, each of these establishments should comply with the obligations laid down by the applicable national law (Article 4(1)(a)); or
  2. the controller is not established on EU territory and, for the purposes of processing personal data, makes use of equipment situated on the territory of a Member State (unless such equipment is used only for purposes of transit through the EU) (Article 4(1)(c)); or
  3. the controller is not established on the Member State’s territory, but in a place where its national law applies by virtue of international public law (Article 4(1)(b)). Article 4(1)(b) has little practical significance in the commercial and business contexts and is therefore not further examined here. The GDPR sets out a similar rule.


CJEU case law

Two recent judgments of the Court of Justice of the European Union (“CJEU“) have introduced expansive interpretations of the meaning of “in the context of the activities” and “establishment”:

  1. In Google Spain, the CJEU held that “in the context of the activities” does not mean “carried out by”. The data processing activities by Google Inc are “inextricably linked” with Google Spain’s activities concerning the promotion, facilitation and sale of advertising space. Consequently, processing is carried out “in the context of the activities” of a controller’s branch or subsidiary when the latter is (i) intended to promote and sell ad space offered by the controller, and (ii) orientates its activity towards the inhabitants of that Member State.
  2. In Weltimmo, the CJEU held that the definition of “establishment” is flexible and departs from a formalistic approach that an “establishment” exists solely where a company is registered. The specific nature of the economic activities and the provision of services concerned must be taken into account, particularly where services are offered exclusively over the internet. The presence of only one representative, who acts with a sufficient degree of stability (even if the activity is minimal), coupled with websites that are mainly or entirely directed at that EU Member State suffice to trigger the application of that Member State’s law.


What will the GDPR require?

The GDPR will apply to the processing of personal data:

  1. in the context of the activities of an establishment of a controller or a processor in the EU; and
  2. of data subjects residing in the EU by a controller not established in the EU, where the processing activities are related to the offering of goods or services to them, or the monitoring of their behaviour in the EU.

It is irrelevant whether the actual data processing takes place within the EU or not.

As far as the substantive requirements are concerned, compared to the Directive, the GDPR introduces:

  1. new obligations and higher expectations of compliance for controllers, for instance around transparency, consent, accountability, privacy by design, privacy by default, data protection impact assessments, data breach notification, new rights of data subjects, engaging data processors and data processing agreements;
  2. for the first time, direct statutory obligations for processors, for instance around accountability, engaging sub-processors, data security and data breach notification; and
  3. severe sanctions for compliance failures.


What are the practical implications?

Controllers who are established in the EU are already caught by EU data protection law, and will therefore not be materially affected by the broader scope of application of the GDPR. For such controllers, the major change is the new substantive requirements they need to comply with.

Processors (such as technology vendors or other service providers) established in the EU will be subject to the GDPR’s direct statutory obligations for processors, as opposed to just the obligations imposed on them by contract by the controller. Such processors will need to understand their statutory obligations and take the necessary steps to comply. This is a major “cultural” change.

Perhaps the biggest change is that controllers who are not established in the EU but collect and process data on EU residents through websites, cookies and other remote activities are likely to be caught by the GDPR. E-commerce providers, online behavioural advertising networks and analytics companies that process personal data are all likely to fall within its scope of application.

We still have at least 2 years before the GDPR comes into force. This may sound like a long time, but given the breadth and depth of change in the substantive requirements, it isn’t really! A lot of fact finding, careful thinking, planning and operational implementation will be required to be GDPR ready in 24 months.

So what should you be doing now?

  1. If you are a controller established in the EU, prepare your plan for transitioning to compliance with the GDPR.
  2. If you are a controller not established in the EU, assess whether your online activities amount to offering goods or services to, or monitoring the behaviour of, EU residents. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR. You may need to appoint a representative in the EU.
  3. Assess whether any of your EU-based group companies act as processors. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR.
  4. If you are a multinational business with EU and non-EU affiliates which will or may be caught by the GDPR, you will also need to consider intra-group relationships, how you position your group companies and how you structure your intra-group data transfers.

Europe now holds the key to the future of privacy

Posted on October 10th, 2015

A lot is being said about the CJEU’s ruling on Safe Harbour. Without any doubt, for the privacy community this is the most important legal development since the EU Commission’s announcement of a revision to the Data Protection Directive of 1995. What the Court’s ruling shows us is that privacy has become a major area of law and an absolute priority in terms of compliance for any company.

Among the many issues that this decision raises, I’d like to focus on two key points. The first is enforcement. Many companies are wondering what the risk is for them now that Safe Harbor has been pronounced invalid. As a lawyer, I believe there is no point in arguing with the CJEU’s ruling (see our analysis of the CJEU’s ruling in the Max Schrems case). Some may disagree with it, but it is now the law in Europe, and we need to accept it.

As a practitioner, however, I think we need to analyse the Court’s decision in a practical and pragmatic manner. Strictly from a legal point of view, the CJEU’s decision leaves no room for interpretation: Safe Harbor is invalid, and so companies can no longer rely on it to transfer their data to the U.S. But, in practical terms, it is unrealistic to think that EU companies will suddenly pull the plug and stop transferring their data to the U.S.

Technically, I’m not sure this is feasible, and, certainly, this would have a devastating effect on our economy and on the relations between the EU and the U.S. It also seems unlikely that the national data protection authorities (DPAs) will suddenly begin to investigate companies or, worse, to sanction them because they continue to transfer personal data to the U.S. Let us not forget that in many EU member states, the national DPAs have approved transfers of data to the U.S. on the basis of Safe Harbor. In my opinion, it would make no sense, and would serve no real purpose, if the DPAs were suddenly to repeal the approvals that they have granted to thousands of companies over the last 15 years.

That is not to say that the DPAs will take no action. On the contrary, there is now a high expectation for companies to reassess their data flows and, where needed, to implement new measures for transferring data outside the EU. It is also important to note that, while Safe Harbor can no longer be used as a legal basis for transferring data outside the EU, the measures that companies have put in place to comply with the Safe Harbor principles should remain valid. In the end, what really matters is whether and how companies are safeguarding the data they transfer outside the EU, regardless of the legal basis on which they rely to do so. And so, as a short-term solution, a decision from the DPAs to grant companies a grace period that would allow them to leverage the efforts they have made in the past in order to transition toward another data-transfer mechanism would certainly be welcome. At the same time, let’s not be naïve. The CJEU’s ruling empowers the DPAs tremendously and, once the General Data Protection Regulation (GDPR) is finally adopted, they will have unprecedented powers to investigate and sanction companies. So the clock has already begun to tick for those companies that were relying on Safe Harbor…

The second point I’d like to make is that the national DPAs have here a unique opportunity to send a clear and consistent message to the world. Some people are already commenting – rightfully so! – that there is a risk that the court’s decision will be interpreted differently by the DPAs in their respective jurisdictions, which would result in a patchwork of different interpretations and solutions across Europe. Well, I think the situation demands that the Article 29 Working Party adopt a common and unified position. Too often, Europe has been criticised for its lack of harmonisation and its fragmented approach to law. Now is the moment to show the world that Europe can speak in harmony. If the DPAs fail to seize this moment, the risk is that the relations between the EU and the U.S. will be significantly damaged, and this will leave literally thousands of companies in limbo.

As for the issue regarding the disclosure of personal data to foreign authorities, which is really the pivotal issue here, the CJEU’s ruling has repercussions beyond Safe Harbor because it concerns data transfers as a whole—meaning that the analysis can be applied to adequacy decisions, the EU model clauses and Binding Corporate Rules. Thus, the CJEU’s decision calls for EU legislators to adopt a coherent and consistent position on this issue across the different legal frameworks that are currently being prepared: the GDPR, the “new” Safe Harbor framework and the so-called Umbrella Agreement on the transfers of personal data between the EU and the U.S. for justice and law-enforcement purposes. And so, once again, consistency seems to be the key word to ensure that a fair balance is found between the protection of the individual’s privacy and the freedom to conduct business—both of which are fundamental rights under the European Charter of Fundamental Rights.

Europe may be holding the key to the future of privacy, but it needs to embrace this future with a clear, pragmatic and realistic vision. Otherwise, I fear the upcoming GDPR will fail to achieve its goal.

This article was first published in the IAPP’s Europe Data Protection Digest on 9th October 2015.

Obituary: Safe Harbor – 2000-2015

Posted on October 7th, 2015

Fieldfisher is saddened to report the sudden and unexpected demise of Safe Harbor yesterday.  Though rumours had persisted of its ill-health for a number of years, yesterday’s news comes as a shock to us all.

Despite passing at the tender age of just 15, Safe Harbor had made quite an impact in its short lifetime.  Born in 2000 as the lovechild of the European Union and the United States, Safe Harbor saw tensions become apparent in its parents’ relationship in later years.  Throughout its childhood and early teens, Safe Harbor amassed many friends, particularly in the United States.  It also mingled in celebrity circles, counting high-profile individuals like Facebook, Google, LinkedIn and Twitter among its good friends.

However popular it may have been, though, scandal seemed always to follow Safe Harbor – particularly in relation to its transatlantic data shipping business, through which it amassed particular fame and fortune.  While Safe Harbor purportedly ran this business to very exacting, principled standards, rumours persisted that it did not exercise sufficient oversight over how customers used its product – and that some of these customers were using Safe Harbor’s data products for illicit purposes.  Whether or not this is true is open for debate, and some commentators have argued that Safe Harbor has been unfairly victimized, noting that its business competitors Model Clauses and Binding Corporate Rules have comparable practices.

Safe Harbor was ultimately killed in a collision while on a hiking holiday on Mount Snowden [sic].  Reports are that it was struck down by an unstoppable vehicle driven by an Austrian student.  The vehicle involved in the accident is reported to be of European, probably Irish, make.  As anyone familiar with the area knows, the countryside around Mount Snowden has many unpredictable twists and turns along its roads, making it dangerous for anyone to travail.  Anyone could be its next victim.

Safe Harbor’s death has attracted commentary from its former friends and critics alike, all hailing it as a significant passing that will have a major impact on the transatlantic data shipping industry.  Others maintain that, notwithstanding Safe Harbor’s passing, transatlantic data shipping will continue much as it has before, albeit with some disruption, through the operations of Model Clauses, Binding Corporate Rules and, of course, the infamous data Black Market.

Safe Harbor is rumoured to be survived by a child, dubbed “Safe Harbor 2.0”, although no public sightings of this child have yet been reported.

Getting to know the GDPR, Part 1 – You may be processing more personal information than you think

Posted on October 2nd, 2015

This post is the first in a series of posts that the Fieldfisher Privacy, Security and Information team will publish on forthcoming changes under Europe’s new General Data Protection Regulation (the “GDPR“), currently being debated through the “trilogue” procedure between the European Commission, Council and Parliament (for an explanation of the trilogue, see here).

The GDPR, like the Directive today and – indeed – any data protection law worldwide, protects “personal data”.  But what actually constitutes personal data often comes as a surprise to organizations.  Are IP addresses personal data, for example?  What about unique device identifiers or biometric identifiers?  Does the data remain personal if you hash or encrypt it?

What does the law require today?

Today, the EU definition of “personal data” is set out in the Data Protection Directive 95/46/EC.  It defines personal data as “any information relating to an identified or identifiable natural person” (Art. 2(a)), and specifically acknowledges that this includes both ‘direct’ and ‘indirect’ identification (for example, you know me by name – that’s direct identification; you describe me as “the Fieldfisher privacy lawyer working in Silicon Valley” – that’s indirect identification).

The Directive also goes on to say that identification can be by means of “an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.  This has caused a lot of debate in European privacy circles – could an “identification number” include an IP address or cookie string, for example?  The Article 29 Working Party has previously issued comprehensive guidance on the concept of personal data, which made clear that EU regulators were minded to treat the definition of “personal data” as very wide indeed (by looking at the content, purpose and result of the data).  And, yes, they generally think of IP addresses and cookie strings as personal – even if organizations themselves do not.

This aside, EU data protection law also has a separate category of “special” personal data (more commonly referred to as “sensitive personal data”).  This is personal data that is afforded extra protection under the Directive, and is defined as data relating to racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and health or sex life.  Data relating to criminal offences is also afforded special protection.  Oddly, though, financial data, social security numbers and child data are not protected as “sensitive” under the Directive today.

What will the General Data Protection Regulation require?

While the GDPR is still being debated between the Commission, Council and Parliament through the trilogue procedure, what is clear is that the net cast for personal data will not get any smaller.  In fact, the legislators are keen to clear up some of the ambiguities that exist today, and even to widen the net in a couple of instances – for example, with respect to sensitive personal data.  In particular:

  • Personal data and unique identifiers:  The trilogue parties broadly agree that the concept of personal data must include online identifiers and location data – so the legal definition of personal data will be updated under the GDPR to put beyond any doubt that IP addresses, mobile device IDs and the like must be treated as personal.  This means that these data will be subject to fairness, lawfulness, security, data export and other data protection requirements just like any other personal data.
  • Pseudonymous data:  The trilogue parties are considering the concept of “pseudonymous data” – in simple terms, personal data that has been subjected to technological measures (like hashing or encryption) such that it no longer directly identifies an individual.  There seems to be broad acceptance that a definition of “pseudonymous data” is needed, but that pseudonymous data will still be treated as personal data – i.e. subject to the requirements of the GDPR.  On the plus side though, organizations that pseudonymize their data will likely benefit from relaxed data breach notification rules, potentially less strict data subject access request requirements, and greater flexibility to conduct data profiling.  The GDPR will encourage pseudonymization as a privacy by design measure.  (A short illustrative sketch of pseudonymization follows this list.)
  • Genetic data and biometric data:  GDPR language under debate also introduces the concepts of “genetic data” and “biometric data” (i.e. fingerprints, facial recognition, retinal scans etc.).  The trilogue parties seem to agree that genetic data must be treated as sensitive personal data, affording it enhanced protections under the GDPR (likely due to concerns, for example, that processing of genetic data might lead to insurance disqualifications or denial of medical treatment).  There’s slightly less alignment between the parties on the treatment of biometric data, with the Parliament viewing it as sensitive data and the Council preferring to treat it as (non-sensitive) personal data – but data that, nevertheless, triggers the need for an organizational Data Protection Impact Assessment if processed on a large scale.
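To make the “pseudonymous data” concept above a little more concrete, here is a minimal, illustrative Python sketch of one common pseudonymization technique: replacing a direct identifier with a keyed hash.  The key, field names and values are assumptions for illustration only – and, consistent with the trilogue position described above, the output would still count as personal data, because whoever holds the key (or the original records) can re-link the token to the individual.

```python
# Minimal sketch of pseudonymization via a keyed hash (illustrative only).
# As noted above, the result is still personal data under the GDPR approach:
# anyone holding the secret key or the source records can re-link the token.
import hashlib
import hmac

SECRET_KEY = b"store-this-separately-from-the-dataset"  # hypothetical key


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"email": "alice@example.com", "page_viewed": "/pricing"}
pseudonymized_record = {
    "user_token": pseudonymize(record["email"]),  # stable, but not directly identifying
    "page_viewed": record["page_viewed"],
}
print(pseudonymized_record)
```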

What are the practical implications?

For many institutions, the changes to the concept of personal data under the GDPR will simply be an affirmation of what they already know: that Europe takes a very broad, protective view of what triggers personal data requirements.

Online businesses – especially those in the analytics, advertising and social media sectors – will be significantly impacted by the express description of online and unique identifiers as personal data, particularly when this is considered in light of the extended territorial reach of the GDPR.  Non-EU advertising, analytics and social media platforms will likely find themselves legally required to treat these identifiers as personal data protected by European law, just as their European competitors are, and will need to update their policies, procedures and systems accordingly – that, or risk losing EU business and attracting European regulatory attention.  However, they will likely take (some) comfort from GDPR provisions allowing for data profiling on a non-consent basis if data is pseudonymized.

Beyond that, all organizations will need to revisit what data they collect and understand whether it is caught by the personal data requirements of the GDPR.  In particular, they need to be aware of the extended scope of sensitive data to include genetic data (and, if the Parliament has its way, potentially biometric data), attracting greater protections under the GDPR – particularly the need for explicit consent, unless other lawful grounds for processing exist.

Finally, some of the relaxations given to processing of pseudonymized data will hopefully serve to incentivize greater adoption by organizations of pseudonymization technologies.  Arguably, the GDPR could do more on this front – and some organizations will inevitably grumble at the cost of pseudonymizing datasets – but if doing so potentially reduces data breach notification and data subject access request responsibilities then this will serve as a powerful adoption incentive.

What will you actually have to do if Safe Harbor falls?

Posted on September 29th, 2015

“Hell hath no fury like a woman scorned” – or so goes the saying.  Nor, it would seem, hath it a fury like a student scorned.  After many months of litigating, Austrian student-turned-privacy-activist Max Schrems tweeted today that the European Court of Justice would deliver its final judgment on the future of Safe Harbor on 6th October at 9.30am CET.

If true (and Schrems tweeted a picture of the notification of judgment, so there’s every reason to believe it is), the timing of this judgment is surprising, coming, as it does, just a couple of weeks after Advocate General Bot’s damning opinion that the Safe Harbor framework is invalid.  Speculation will inevitably run rife, and many will assume that the prompt timing of the judgment is an indication that the Court will effectively rubber-stamp the earlier opinion given by the AG.

Speculation, gossip, and rumour, though, benefit no one; businesses will only be able to plan and adapt once the final judgment is delivered.  This got me thinking: what will businesses actually have to do if Safe Harbor is shot down – at least during the period until (and unless) Safe Harbor 2.0 is approved?  Sure, there’s been a lot written (including by me) about how businesses will need to ‘contingency plan’ or ‘transition’ to a new data export regime – but what does that really mean in practice?

Here’s what I can foresee just off the top of my head:

1.  Model clauses are probably the only option initially.  First off, supposing Safe Harbor is shot down in the next couple of weeks, then – short of a miracle – we can safely say that Safe Harbor 2.0 will not yet have been agreed between the Commission and the US Department of Commerce.  Given that the BCR process is, at best, an 18 month project (don’t believe anyone who tells you otherwise), businesses will have no choice but to adopt model clauses or accept non-compliance risk for now.

2.  Figure out what model clauses you need (hint: there is more than one type!).  How do businesses go about implementing model clauses?  To begin with, they will need to explore their data exports holistically.  That means not just thinking about data exports of customer content, but also thinking about exports of CRM, employee and vendor data.  Whereas all four categories of exports may have previously been covered under a single Safe Harbor certification, different model clause solutions may be needed for, say, customer content data (often exported on a controller-to-processor basis, requiring the 2010 model clauses) and CRM, employee and vendor data (typically exported on a controller-to-controller basis, requiring the 2001 or 2004 model clauses).  In other words, one previous Safe Harbor solution may need to be broken into two separate sets of model clause solutions.

3.  Go back and sign model clauses with customers who want them.  Where the business is exporting customers’ data on a controller-to-processor basis, it will have to approach its customers (or, more likely, it will get approached by its customers) and execute model clauses with them.  That immediately creates an administrative burden, but the business will also need to consider whether it wants to introduce any commercial clauses into the model clauses it signs with customers to manage its risk – remember, model clauses don’t have any liability caps after all!  That requires negotiation and, inevitably, will entail some lengthy conversations with customers who are nervous about what negotiating away from the ‘standard’ form of the model clauses will mean for their own compliance.

4.  You’ll also need intra-group model clauses.  Intra-group exports of CRM, employee and vendor data are somewhat easier, because the business will ultimately be contracting with itself, but it will need to map out which entities are exporting to which, and ensure that all appropriate group entities are signed up and their data flows accurately described within the clauses.  And, of course, they’ll have to get the model clauses executed which, depending on the group structure, its size, intra-group powers of attorney, and contract execution rights, may be a much more challenging task than it sounds.

5.  Don’t just sign the paperwork and forget about it.  It doesn’t end there, either.  Businesses who execute model clauses will then need to make sure they actually implement their requirements – particularly, in the case of controller-to-processor model clauses, their subcontracting provisions.  That essentially means flowing down the model clause terms to any third-party non-EEA vendors that the business engages to process EEA personal data.  This itself will prove very tricky – many non-EEA vendors will cry ignorance of the model clauses, insist that they don’t actually process personal data (“you have the encryption key – we don’t know what it is!”, “we only provide co-location facilities”, etc.), maintain that they don’t sign model clauses as a matter of principle, and so on.  That leaves the business in a difficult position – it either has to tolerate the non-compliance and accept customer-facing breach exposure, transition to a new vendor who will sign model clauses or, if it has the leverage, simply force model clauses on the vendor.  Having been part of these negotiations first-hand on many an occasion, I know how hard this is.

6.  There’s all those policies to update too!  There’s another consideration too – what about all those external and internal-facing policies (website privacy policies, corporate data protection policies, whistleblowing policies) where the business proudly espouses its use of Safe Harbor for compliance purposes?  They all need to be revisited, updated, agreed internally, re-translated (where operating across multiple jurisdictions), re-posted, and possibly even notified to affected data subjects.  Phew!  What a task.

7.  You may even need to establish an EEA controller – just to sign your model clauses!  Surely that must be everything, right?  Nope – there’s at least one more task.  What if you are an online B2C business without an EEA group company serving as your EEA data controller?  Odds are you’ve not been worrying about your lack of an EEA controller, because you never had to: you were safely able to receive data in the US in reliance on your Safe Harbor certification.  But if Safe Harbor goes away and you have to implement model clauses, you suddenly have to find an EEA group company you can enter into the model clauses with!  And that means you have to find an EEA group company willing to be your EEA data controller so that it can sign model clauses with you!  And that in turn may mean a fairly seismic shift in the structure and organization of your internal data governance program.

So, to all those of you who were thinking “Well, it’s not that big a deal if Safe Harbor goes – there’s always model clauses”, the above is just a taste of what may be in store in terms of compliance actions needed to transition over to model clauses as a replacement data export solution.  And, after all that, data will still flow to the US, and internationally, just as it ever did before – so does anyone really feel that data will be better protected in a Safe Harborless world?

The end may be nigh for Safe Harbor

Posted on September 23rd, 2015

Today, Advocate General Bot’s Opinion on Safe Harbor was released, and it is making headlines! Why? Because:

  1. if the Opinion of the Advocate General is followed by the CJEU (see below), this will be the end of Safe Harbor (at least until the US and EU agree Safe Harbor version 2.0);
  2. even if the Opinion is not followed, it adds more fuel to the fire for Safe Harbor sceptics.

In more detail, the Advocate General has found:

  • that a decision by the EU Commission that “adequate” safeguards are in place to protect personal data being transferred outside the EEA (such as where a US recipient is Safe Harbor certified) does not stop European data protection authorities from independently deciding that those safeguards are not “adequate” and suspending the transfer of the data; and
  • that the EU Commission decision made in 2000 finding the US Safe Harbor certification scheme to provide adequate safeguards is “invalid”.

As such, if the Opinion is followed by the court in Europe, the practical implications for organisations sending personal data from Europe to the US and for those US organisations receiving the data are significant.

To rewind

One of the requirements for organisations in the EEA processing personal data under the EU Directive on the protection of personal data (95/46/EC) is to only transfer personal data to entities located outside the EEA if the recipient has “adequate” safeguards in place to protect EU citizens’ personal data to the same standards as those in place in Europe.

The way this has usually worked to date is for organisations to proceed with one of three options:

  • rely on the recipient’s Safe Harbor certification for transfers to the United States of America (“US”);
  • execute an agreement based on the EU Commission’s Model Clauses; or
  • implement Binding Corporate Rules.

These three options all have their pros and cons (see Phil Lee’s post from April, “EU data exports – choosing the least worst option?”).

Why is “Safe Harbor” considered to be ‘safe’?

In 2000 the EU Commission decided (Decision 2000/520) that, provided a US company undertakes to comply with the Safe Harbor principles (which require them to implement certain measures in respect of the data they receive from the EU), then the data being transferred is “adequately” safeguarded. No further steps need to be taken. As such, the first choice (if the recipient is in the US) has always been to rely on the US organisation’s Safe Harbor certification – and thousands of organisations in Europe do just that.

But is it really ‘safe’?

The revelations by Edward Snowden changed the thinking around Safe Harbor somewhat! Suddenly EU citizens realised that if their data is sent to a company in the US then, even if that US company has signed up to Safe Harbor and promised to keep that data protected, it may still be accessed by US government authorities. Note that US companies self-certify themselves onto the Safe Harbor scheme and that there is limited oversight from public authorities in the US, although recently the FTC has taken some enforcement action in connection with Safe Harbor.

So what’s been done about it?

Max Schrems, an Austrian citizen, concerned that Facebook Ireland was transferring his data to Facebook US (subject to Safe Harbor) and that the US authorities might be accessing that data, asked the Data Protection Commissioner in Ireland to stop the transfers of his data from Facebook Ireland to Facebook US. The Data Protection Commissioner said “no can do” – the Commission’s 2000 decision could not be overruled, so the Commissioner could not stop the transfers.

Mr Schrems sought a judicial review of this decision in the Irish High Court. The Irish High Court could only go as far as considering the position under Irish law, which they decided would be in favour of Mr Schrems, i.e. that the Commissioner should investigate.

The question referred to the Court of Justice of the European Union (“CJEU”) (the EU equivalent of the US Supreme Court) for a preliminary ruling was therefore whether, given that EU citizens’ rights have been further enhanced since the Commission’s original Safe Harbor decision in 2000 by the EU Charter of Fundamental Rights (specifically Articles 7 and 8), such a decision prevents a data protection authority from investigating whether the safeguards are truly “adequate” and from suspending the transfer of EU citizens’ data if their privacy rights are threatened.

What does the Advocate General say?

The Advocate General has said that data protection authorities can investigate whether adequate safeguards are in place.  Essentially, they are independent authorities and it is vital that they have the power to take steps to protect individuals’ privacy rights. The Advocate General considers that if authorities were bound by Commission decisions then their ability to be independent would be curtailed, at the expense of individuals’ privacy. It is therefore for the member state DPA and the Commission to each decide whether there is an adequate level of protection in place, protecting the data to a European standard in the recipient country.

If the authority’s investigations reveal that the transfer of data is not carried out with adequate safeguards in place then the Advocate General’s Opinion is that the transfer should be suspended. The CJEU can then be asked to assess the adequacy of the Commission decision.

So is the Harbor ‘safe’ anymore?

The Advocate General thinks not. In particular, for these reasons:

  • US authorities have been accessing EU citizens’ data processed by Safe Harbor companies under the PRISM programme: (a) where not “strictly necessary”; (b) on a “casual or generalised” basis; (c) without an objective assessment on the grounds of national security etc.; and (d) without citizens having any judicial redress;
  • the Decision does not establish clear rules on when/how interference with the fundamental privacy and data protection rights under the Charter may be justified and, as such, the interference in this case was not limited to what was strictly necessary and proportionate; and
  • the Commission should by now have suspended the Safe Harbor Framework and adapted the Decision.

As such, the Advocate General considers that Decision 2000/520 should be declared invalid. The level of protection afforded by a third country may change over time and so the threshold for adequacy must develop with it, i.e. the Commission’s decision should be amended as and when required.

What’s the impact?

The Advocate General’s Opinion is not binding on the CJEU; it is merely there to help the court make up its mind. It is now for the CJEU to make a decision on the validity of Decision 2000/520. If the CJEU decides to follow the Opinion, then the impact will be huge.

For organisations relying upon Safe Harbor to transfer data to the US, a suspension of the Safe Harbor framework would mean that all such transfers would be in breach of EU data protection laws. An alternative safeguard would immediately need to be put in place – either EU model clauses or BCRs – assuming the Commission does not conclude its Safe Harbor 2.0 negotiations with the US Department of Commerce soon.

Although the CJEU has previously declined to follow an Opinion from the Advocate General, the fact that Safe Harbor has been found invalid by the Advocate General still provides ammunition to those in Europe who are sceptical about Safe Harbor, such as local data protection authorities. US organisations will also need to be prepared for an alternative means of safeguarding personal data to be high up on their customers’ wish lists. Safe Harbor looks like it will no longer cut it.

What data protection reform would look like if it were up to me.

Posted on September 16th, 2015

Earlier today I attended a superb session of the Churchill Club in Palo Alto, at which the European Data Protection Supervisor was speaking on data protection and innovation.  As he spoke about the progress of the EU General Data Protection Regulation and what its impacts would be upon business, I found myself given to thinking about what EU data protection reform would be like if it were up to me.

Of course, this is by definition something of a navel-gazing exercise because EU data protection reform is not up to me.  Nevertheless, I thought I would at least share some of my thoughts to see to what extent they strike a chord with readers of our blog – and, perhaps, even reach the ears of those who do make the law.

So, if you’ll allow me this indulgence, here’s what my reforms would do:

1.  They would strike a balance between supporting privacy rights, economic and social well-being, and innovation.  Fundamentally, I support the overarching goals of the GDPR described in its recitals – namely that “the principles and rules on the protection of individuals with regard to the processing of their personal data should … respect their fundamental rights and freedoms, notably their right to protection of personal data” and that those rules should “contribute to an area of freedom, security and justice …, to economic and social progress, … and the well-being of individuals.”  Yet, sometimes within the draft texts of the GDPR, this balance has been lost, with provisions swinging so far towards conservatism and restrictiveness that promotion of economic progress – including, critically for any economy, innovation – gets lost.  If it were up to me, my reforms would endeavour to restore this balance through some of the measures described below.

2.  They would recognize that over-prescription drives bad behaviours.  A problem with overly-prescriptive legislation is that it becomes inherently inflexible.  Yet data protection rules need to apply across all types of personal data, across all types of technologies and across all sectors.  Inevitably, the more prescriptive the legislation, the less well it flexes to adapt to ‘real world’ situations and the more it discourages innovation – pushing would-be good actors into non-compliance.  And, when those actors perceive compliance as unattainable, their privacy programs become driven by concerns to avoid risk rather than to achieve compliance – a poor result for regulators, businesses and data subjects alike.  For this reason, my data protection reforms would focus on the goals to be achieved (data stays protected) rather than on the means of their achievement (e.g. specifying internal documentation needs).  This is precisely why the current Data Protection Directive has survived as long as it has.

3.  They would provide incentives for pseudonymisation.  Absent a few stray references to pseudonymisation here and there across the various drafts of the GDPR, there really is very little to incentivise adoption of pseudonymisation by controllers – pseudonymised data are protected to exactly the same standard as ‘ordinary’ personal data.  Every privacy professional recognizes the dangers of re-identification inherent in pseudonymised data, but treating it identically to ordinary personal data drives the wrong behaviour by controllers – perceiving little to no regulatory benefit to pseudonymisation, controllers decline to adopt pseudonymisation for cost or other implementation reasons.  My reforms would explore whether pseudonymisation could be incentivised to encourage its adoption, for example by relaxing data minimization, purpose limitation or data export rules for pseudonymised data, in addition to existing proposals for relaxed data breach notification rules.

4.  They would recognize the distinct role of platforms.  European data protection professionals still operate in a binary world – businesses are either ‘data controllers’ or ‘data processors’.  Yet, increasingly, this binary division of responsibility and liability doesn’t reflect how things operate in reality – and especially in an app-centric world operating over third party cloud or mobile platforms.  The operators of these platforms don’t always sit neatly within a ‘controller’ or a ‘processor’ mold yet, vitally, are the gatekeepers through which the controllers of apps have access to the highly sensitive information we store on their platforms – our contact lists, our address books, our health data and so on.  We need an informed debate as to the role of platforms under revised data protection rules.

5.  They would abandon outdated data export restrictions.  It’s time to have a grown up conversation about data exports, and recognize that current data export rules simply do not work.  Who honestly believes that a model contract protects data?  And how can European regulators promote Binding Corporate Rules as a best practice standard for data export compliance, but then insist on reviewing each and every BCR applicant when they are too poorly resourced to do so within any kind of commercially acceptable timescale?  And how can we possibly complain about the US having poor Safe Harbor enforcement when we have little to no enforcement of data export breaches at home in the EU?  Any business of scale collects data internationally, operates internationally, and transfers data internationally; we should not prohibit this, but should instead have a regulatory framework that acknowledges this reality and requires businesses to self-assess and maintain protection of data wherever it goes in the world.  And, yes, we should hold businesses fully accountable when they fail to do so.

6.  They would recognise that consent is not a panacea.  There’s been a strong narrative in Europe for some time now that more data processing needs to be conditioned on individuals’ consent.  The consensus (and it’s not a wholly unfair one) is that individuals have lost control of their data and that consent would somehow restore the balance.  It’s easy to have sympathy for this view, but consent is not all it’s cracked up to be.  Think about it: if consent were a requirement for processing, how would businesses be forced to respond?  Particularly within the European legislative environment that considers almost all types of data to be ‘personal’ and therefore regulated?  The answer would be a plurality of consent boxes, windows and buttons layered across every product and service you interact with.  And, to make matters worse, the language accompanying these consents would invariably become excessively detailed and drafted using ‘catch-all’ language to avoid any suggestion that the business failed to collect a sufficiently broad consent.  Clearly, there are places where consent is merited (collection and use of sensitive data being a prime example), but for other uses of data a well-structured data protection regime would instead promote the use of legitimate interests and other non-consent based grounds for data processing – backed, of course, by effective regulatory audit and sanctions in order to provide the necessary checks and balances.

So there you have it.  Those are just a few of my views – I have others, but I’ll spare you them for now, and no doubt you’ll have views of your own.  If you agree with the views above, then share them; if you don’t, then share them anyway and continue the debate.  We’ll only ever achieve an appropriate regulatory framework that balances the needs of everyone if we all make our voices heard, debate hard, and strive to reach consensus on the right data protection regime fit for the future!

Why are German courts allowed to take my global privacy policy apart?

Posted on August 7th, 2015

Your service is innovative, you are ambitious, and the European digital market is there for the taking. Except that the EU is not the digital single market it strives to be just yet. Recent years have seen a rise in legal disputes in Germany over allegedly unlawful clauses in standard business terms – increasingly including privacy policies and consent wording. Apple, Facebook and Google have all been there. They all lost on part of the language.

The story goes…

The story often begins with an international business looking to have a single global or pan-European privacy policy. It might not be perfect in all respects, but it is considered a reasonable compromise between addressing multiple local law requirements, keeping the business scalable and creating transparency for customers. Now, with global expansion comes the inevitable local litigation.

The typical scenario that arises for international businesses expanding into Germany is this: An aggressive local market player trying to hold on to its pre-new economy assets sends you a warning letter, alleging your privacy policy breaches German law requirements, and includes a cease-and-desist undertaking aimed at forcing you to refrain from using unlawful privacy policy clauses.

If you are big and established, the warning letter may come from a consumer protection association that happens to have singled out you or your industry. If you refuse to comply with the warning letter, the dispute may go to court. If you lose, the court will issue an injunction preventing you from using certain language in your privacy policy. If you infringe the injunction after being served the same, judicial fines may ensue.

The legal mechanism

These warning letters typically allege that your privacy policy is not in full compliance with strict German data protection and consumer protection law. Where this is the case, privacy infringements can be actioned by competitors and consumer protection associations – note: these actions are based solely on the language of your privacy policy, irrespective of your actual privacy practices. These actions are a kind of “privately-initiated law enforcement” as there is no public regulator generally watching over use of privacy policies.

Furthermore, in certain cases – and especially where privacy policies are peppered with language stating that the user “consents” to the collection and use of their information – the privacy policy may even qualify as ‘standard business terms’ under German consumer protection law, opening the door for the full broadside of German consumer protection law scrutiny.

So, what’s the solution?

In the long run, courts or lawmakers will have to resolve the dilemma between two conflicting EU law principles: privacy regulation on a “country of origin” basis vs. consumer protection and unfair competition laws that apply wherever consumers are targeted. In essence, the question is: Which should prevail, applicable law principles under the Data Protection Directive (or the General Data Protection Regulation bound to be issued any decade now) or local law consumer protection principles under Rome I and II Regulations?

In the short term, an approach to mitigating legal and practical risks is to provide a localised privacy policy just for German consumers that is compliant with local law. Or, usually less burdensome, make your policy information-only, i.e. delete consent wording and clauses curtailing consumers’ rights in order to at least keep the policy from being subjected to full consumer protection scrutiny.

The downside to this approach is that it may require deviating from your global approach on a privacy policy. On the upside, it will spare you the nuisance of dealing with this kind of warning letter which is difficult to fight off. Remember: This is all about the language of your privacy policy, not what your real-world privacy compliance looks like.

Stay tuned for more information on warning letter squabbles regarding e-mail marketing regulations.