Archive for the ‘Accountability’ Category

Time for US businesses to consider an anti-surveillance pledge?

Posted on October 23rd, 2015

Breakdown of trust is a terrible thing that often has negative and unpredictable consequences, not just for those directly involved but also for those inadvertently caught up in the ensuing fall-out: for the friends who are forced to choose sides when a relationship breaks up, for the children affected when a marriage breaks down and, yes, for the businesses harmed when transatlantic trust between two great economic regions falls apart.

Because, when all is said and done, the recent collapse of Safe Harbor is ultimately attributable to a breakdown in trust.  Whatever the legal arguments about data export “adequacy”, Europe has fundamentally lost trust in the safe handling of European citizens’ data Stateside.  The resulting panic was inevitable – international conglomerates worry about their regulatory compliance, US supply-side businesses realize that there is now no effective legal solution for their lawful handling of data, and regulators move to calm nerves by announcing, in faintly threatening tones, that they will not take enforcement action – for the time being.

Which leaves us all in a quandary.  Businesses must by necessity start putting in place a patchwork of legal solutions designed, if not to achieve compliance, then at least to manage risk, but many of these solutions will not be officially recognized either by law or the regulatory community (how exactly should US processors lawfully onward transfer data to sub-processors?).  Consequently, these solutions – while necessary in an environment where no alternatives exist – will likely fuel further legislative and regulatory speculation that companies are working around data protection rules, rather than with them.

But when compliance becomes impossible, everyone becomes a criminal.  Think of it this way:  if you tax me at 40%, I will pay.  But tax me at 90% and I simply can’t afford to, so I won’t – no matter how much I may believe in the principle of taxation or want to be a law-abiding member of society.

An anti-surveillance pledge to restore trust

So where does that leave us?  The real dialogue to have here is one around restoring trust.  This is absolutely critical.  And that is why all businesses – especially US businesses right now – must consider taking an anti-surveillance pledge.

What does an anti-surveillance pledge look like?  It takes the form of a short statement, perhaps no more than two or three paragraphs in length, under which the business would pledge never knowingly to disclose individuals’ data to government or law enforcement authorities unless either (1) legally compelled to do so (for example, by way of a warrant or court order), or (2) there is a risk of serious and imminent harm were disclosure to be withheld (for example, an imminent terrorist threat).  The pledge would be signed by the business’s senior management and made publicly available as an externally facing commitment to resist unlawful government-led surveillance activities – for example, by posting it on a website or incorporating it within an accessible privacy policy.

Will taking a pledge like this solve the EU-US data export crisis?  No.  Will it prevent government surveillance activities occurring upstream on Internet and telecoms pipes over which the business has no control?  No.  But will it demonstrate to the world that the business takes its data subjects’ privacy concerns seriously and will do what is within its power to prevent unlawful surveillance?  Absolutely: it’s a big step towards accountably showing “adequate” handling of data.

The more businesses that sign a pledge of this nature, the greater the collective strength of these commitments across industries and sectors; and the greater this collective strength, the more this will assist the long, slow process of restoring trust.  Only through the restoration of trust will we see a European legislative and regulatory environment once more willing to embrace the adequacy of data exports to the US.  So, if you haven’t considered it before, consider it now: it’s time for an anti-surveillance pledge.


Getting to know the GDPR, Part 2 – Out-of-scope today, in scope in the future. What is caught?

Posted on October 20th, 2015

The GDPR expands the scope of application of EU data protection law requirements in two main respects:

  1. in addition to data “controllers” (i.e. persons who determine why and how personal data are processed), certain requirements will apply for the first time directly to data “processors” (i.e. persons who process personal data on behalf of a data controller); and
  2. by expanding the territorial scope of application of EU data protection law to capture not only the processing of personal data by a controller or a processor established in the EU, but also any processing of personal data of data subjects residing in the EU, where the processing relates to the offering of goods or services to them, or the monitoring of their behaviour.


The practical effect is that many organisations that have to date been outside the scope of application of EU data protection law will now be directly subject to its requirements – for instance, because they are EU-based processors, or non-EU-based controllers who target services to EU residents (e.g. through a website) or monitor their behaviour (e.g. through cookies). For such organisations, the GDPR will introduce a cultural change, and there will be more distance to cover to reach a compliance-ready status.

What does the law require today?

The Directive

At present, the Data Protection Directive 95/46/EC (“Directive”) generally sets out direct statutory obligations for controllers, but not for processors. Processors are generally only subject to the obligations that the controller imposes on them by contract. By way of example, in a service provision scenario – say, a cloud hosting service – the customer will typically be a controller and the service provider will be a processor.

Furthermore, at present the national data protection law of one or more EU Member States applies if:

  1. the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State. When the same controller is established on the territory of several Member States, each of these establishments should comply with the obligations laid down by the applicable national law (Article 4(1)(a)); or
  2. the controller is not established on EU territory and, for purposes of processing personal data makes use of equipment situated on the territory of a Member State (unless such equipment is used only for purposes of transit through the EU) (Article 4(1)(c)); or
  3. the controller is not established on the Member State’s territory, but in a place where its national law applies by virtue of international public law (Article 4(1)(b)). Article 4(1)(b) has little practical significance in the commercial and business contexts and is therefore not further examined here. The GDPR sets out a similar rule.


CJEU case law

Two recent judgments of the Court of Justice of the European Union (“CJEU”) have introduced expansive interpretations of the meaning of “in the context of the activities” and “establishment”:

  1. In Google Spain, the CJEU held that “in the context of the activities” does not mean “carried out by”. The data processing activities by Google Inc are “inextricably linked” with Google Spain’s activities concerning the promotion, facilitation and sale of advertising space. Consequently, processing is carried out “in the context of the activities” of a controller’s branch or subsidiary when the latter is (i) intended to promote and sell ad space offered by the controller, and (ii) orientates its activity towards the inhabitants of that Member State.
  2. In Weltimmo, the CJEU held that the definition of “establishment” is flexible and departs from a formalistic approach that an “establishment” exists solely where a company is registered. The specific nature of the economic activities and the provision of services concerned must be taken into account, particularly where services are offered exclusively over the internet. The presence of only one representative, who acts with a sufficient degree of stability (even if the activity is minimal), coupled with websites that are mainly or entirely directed at that EU Member State suffice to trigger the application of that Member State’s law.


What will the GDPR require?

The GDPR will apply to the processing of personal data:

  1. in the context of the activities of an establishment of a controller or a processor in the EU; and
  2. of data subjects residing in the EU by a controller not established in the EU, where the processing activities are related to the offering of goods or services to them, or the monitoring of their behaviour in the EU.

It is irrelevant whether the actual data processing takes place within the EU or not.

As far as the substantive requirements are concerned, compared to the Directive, the GDPR introduces:

  1. new obligations and higher expectations of compliance for controllers, for instance around transparency, consent, accountability, privacy by design, privacy by default, data protection impact assessments, data breach notification, new rights of data subjects, engaging data processors and data processing agreements;
  2. for the first time, direct statutory obligations for processors, for instance around accountability, engaging sub-processors, data security and data breach notification; and
  3. severe sanctions for compliance failures.


What are the practical implications?

Controllers who are established in the EU are already caught by EU data protection law, and will therefore not be materially affected by the broader scope of application of the GDPR. For such controllers, the major change is the new substantive requirements they need to comply with.

Processors (such as technology vendors or other service providers) established in the EU will be subject to the GDPR’s direct statutory obligations for processors, as opposed to just the obligations imposed on them by contract by the controller. Such processors will need to understand their statutory obligations and take the necessary steps to comply. This is a major “cultural” change.

Perhaps the biggest change is that controllers who are not established in the EU but collect and process data on EU residents through websites, cookies and other remote activities are likely to be caught by the GDPR. E-commerce providers, online behavioural advertising networks and analytics companies that process personal data are all likely to fall within its scope of application.

We still have at least two years before the GDPR comes into force. That may sound like a long time but, given the breadth and depth of change in the substantive requirements, it isn’t really! A lot of fact-finding, careful thinking, planning and operational implementation will be required to be GDPR-ready in 24 months.

So what should you be doing now?

  1. If you are a controller established in the EU, prepare your plan for transitioning to compliance with the GDPR.
  2. If you are a controller not established in the EU, assess whether your online activities amount to offering goods or services to, or monitoring the behaviour of, EU residents. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR. You may need to appoint a representative in the EU.
  3. Assess whether any of your EU-based group companies act as processors. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR.
  4. If you are a multinational business with EU and non-EU affiliates which will or may be caught by the GDPR, you will also need to consider intra-group relationships, how you position your group companies and how you structure your intra-group data transfers.

Obituary: Safe Harbor – 2000-2015

Posted on October 7th, 2015

Fieldfisher is saddened to report the sudden and unexpected demise of Safe Harbor yesterday.  Though rumours had persisted of its ill-health for a number of years, yesterday’s news comes as a shock to us all.

Despite passing at the tender age of just 15, Safe Harbor had made quite an impact in its short lifetime.  Originally born in 2000, the lovechild of the European Union and United States, tensions became apparent in its parents’ relationship in later years.  Throughout its childhood and early teens, Safe Harbor amassed many friends, particularly in the United States.  It also mingled in celebrity circles, counting high profile individuals like Facebook, Google, LinkedIn and Twitter among its good friends.

However popular it may have been, though, scandal seemed always to follow Safe Harbor – particularly in relation to its transatlantic data shipping business, through which it amassed particular fame and fortune.  While it purportedly ran this business to very exacting, principled standards, rumours persisted that Safe Harbor did not exercise sufficient oversight over how customers used its product – and that some of these customers were using Safe Harbor’s data products for illicit purposes.  Whether or not this is true remains open for debate, and some commentators have argued that Safe Harbor was unfairly victimized, noting that its business competitors Model Clauses and Binding Corporate Rules have comparable practices.

Safe Harbor was ultimately killed in a collision while on a hiking holiday on Mount Snowden [sic].  Reports are that it was struck down by an unstoppable vehicle driven by an Austrian student.  The vehicle involved in the accident is reported to be of European, probably Irish, make.  As anyone familiar with the area knows, the countryside around Mount Snowden has many unpredictable twists and turns along its roads, making it dangerous for anyone to travail.  Anyone could be its next victim.

Safe Harbor’s death has attracted commentary from its former friends and critics alike, all hailing it as a significant passing that will have a major impact on the transatlantic data shipping industry.  Others maintain that, notwithstanding Safe Harbor’s passing, transatlantic data shipping will continue much as it has before, albeit with some disruption, through the operations of Model Clauses, Binding Corporate Rules and, of course, the infamous data Black Market.

Safe Harbor is rumoured to be survived by a child, dubbed “Safe Harbor 2.0”, although the child has yet to be seen in public.

Getting to know the GDPR, Part 1 – You may be processing more personal information than you think

Posted on October 2nd, 2015

This post is the first in a series of posts that the Fieldfisher Privacy, Security and Information team will publish on forthcoming changes under Europe’s new General Data Protection Regulation (the “GDPR”), currently being debated through the “trilogue” procedure between the European Commission, Council and Parliament (for an explanation of the trilogue, see here).

The GDPR, like the Directive today and – indeed – any data protection law worldwide, protects “personal data”.  But what constitutes personal data often comes as a surprise to organizations.  Are IP addresses personal data, for example?  What about unique device identifiers or biometric identifiers?  Does data remain personal if you hash or encrypt it?

What does the law require today?

Today, the EU definition of “personal data” is set out in the Data Protection Directive 95/46/EC.  It defines personal data as “any information relating to an identified or identifiable natural person” (Art. 2(a)), and specifically acknowledges that this includes both ‘direct’ and ‘indirect’ identification (for example, you know me by name – that’s direct identification; you describe me as “the Fieldfisher privacy lawyer working in Silicon Valley” – that’s indirect identification).

The Directive also goes on to say that identification can be by means of “an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”.  This has caused a lot of debate in European privacy circles – could an “identification number” include an IP address or cookie string, for example?  The Article 29 Working Party has previously issued comprehensive guidance on the concept of personal data, which made clear that EU regulators were minded to treat the definition of “personal data” as very wide indeed (by looking at the content, purpose and result of the data).  And, yes, they generally think of IP addresses and cookie strings as personal – even if organizations themselves do not.

This aside, EU data protection law also has a separate category of “special” personal data (more commonly referred to as “sensitive personal data”).  This is personal data that is afforded extra protection under the Directive, and is defined as data relating to racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and health or sex life.  Data relating to criminal offences is also afforded special protection.  Oddly, though, financial data, social security numbers and children’s data are not protected as “sensitive” under the Directive today.

What will the General Data Protection Regulation require?

While the GDPR is still being debated between the Commission, Council and Parliament through the trilogue procedure, what is clear is that the net cast for personal data will not get any smaller.  In fact, the legislators are keen to clear up some of the ambiguities that exist today, and even to widen the net in a couple of instances – for example, with respect to sensitive personal data.  In particular:

  • Personal data and unique identifiers:  The trilogue parties broadly agree that the concept of personal data must include online identifiers and location data – so the legal definition of personal data will be updated under the GDPR to put beyond any doubt that IP addresses, mobile device IDs and the like must be treated as personal.  This means that these data will be subject to fairness, lawfulness, security, data export and other data protection requirements just like any other personal data.
  • Pseudonymous data:  The trilogue parties are considering the concept of “pseudonymous data” – in simple terms, personal data that has been subjected to technological measures (like hashing or encryption) such that it no longer directly identifies an individual.  There seems to be broad acceptance that a definition of “pseudonymous data” is needed, but that pseudonymous data will still be treated as personal data – i.e. subject to the requirements of the GDPR.  On the plus side though, organizations that pseudonymize their data will likely benefit from relaxed data breach notification rules, potentially less strict data subject access request requirements, and greater flexibility to conduct data profiling.  The GDPR will encourage pseudonymization as a privacy by design measure.
  • Genetic data and biometric data:  GDPR language under debate also introduces the concepts of “genetic data” and “biometric data” (i.e. fingerprints, facial recognition, retinal scans etc.).  The trilogue parties seem to agree that genetic data must be treated as sensitive personal data, affording it enhanced protections under the GDPR (likely due to concerns, for example, that processing of genetic data might lead to insurance disqualifications or denial of medical treatment).  There’s slightly less alignment between the parties on the treatment of biometric data, with the Parliament viewing it as sensitive data and the Council preferring to treat it as (non-sensitive) personal data – but data that, nevertheless, triggers the need for an organizational Data Protection Impact Assessment if processed on a large scale.
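To make the pseudonymization point concrete, here is a minimal sketch in Python of the kind of keyed hashing the trilogue parties have in mind. The `pseudonymize` helper and the keys are purely illustrative – nothing in the GDPR text prescribes a particular technique – but the sketch shows why such data stays "personal": whoever holds the key can consistently re-link tokens to individuals.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g. an IP address) with a keyed hash.

    A plain, unkeyed hash of a small identifier space (IPv4 addresses,
    device IDs) can be reversed by brute force, which is why a secret
    key is used. The output is pseudonymous, not anonymous: the key
    holder can reproduce the mapping and re-identify the individual.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"keep-this-secret-and-rotate-it"  # illustrative key material
token = pseudonymize("198.51.100.7", key)

# Deterministic: the same input always yields the same token, so
# analytics can still count unique users without storing raw IPs.
assert token == pseudonymize("198.51.100.7", key)

# A different key produces an unlinkable token.
assert token != pseudonymize("198.51.100.7", b"different-key")
```

Because the mapping is reproducible only with the key, destroying or strictly segregating the key is what earns the compliance relaxations discussed above.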

What are the practical implications?

For many institutions, the changes to the concept of personal data under the GDPR will simply be an affirmation of what they already know: that Europe applies a very protective approach when triggering personal data requirements.

Online businesses – especially those in the analytics, advertising and social media sectors – will be significantly impacted by the express description of online and unique identifiers as personal data, particularly when this is considered in light of the extended territorial reach of the GDPR.  Non-EU advertising, analytics and social media platforms will likely find themselves legally required to treat these identifiers as personal data protected by European law, just as their European competitors are, and need to update their policies, procedures and systems accordingly – that, or risk losing EU business and attracting European regulatory attention.  However, they will likely take (some) comfort from GDPR provisions allowing for data profiling on a non-consent basis if data is pseudonymized.

Beyond that, all organizations will need to revisit what data they collect and understand whether it is caught by the personal data requirements of the GDPR.  In particular, they need to be aware of the extended scope of sensitive data to include genetic data (and, if the Parliament has its way, potentially biometric data), which attracts greater protections under the GDPR – particularly the need for explicit consent, unless other lawful grounds for processing exist.

Finally, some of the relaxations given to processing of pseudonymized data will hopefully serve to incentivize greater adoption by organizations of pseudonymization technologies.  Arguably, the GDPR could do more on this front – and some organizations will inevitably grumble at the cost of pseudonymizing datasets – but if doing so potentially reduces data breach notification and data subject access request responsibilities then this will serve as a powerful adoption incentive.

Privacy audits – asking less can get you more!

Posted on June 26th, 2015

A while back, my wife was telling me about a book she’d just read called “Animals in Translation” by animal behaviourist Professor Temple Grandin. In the book, Professor Grandin explains her work to improve conditions for animals at meat-packing facilities (and if you’re wondering what on earth this has to do with privacy, keep reading – I’ll get there.)

One of the ways originally used to assess conditions at these facilities was by means of lengthy, detailed audit inspections, with audit questionnaires often running well in excess of a hundred pages. These questionnaires were by their very design cumbersome to complete and consequently attracted voluminous but poor quality responses that gave little real insight into animal welfare or how to improve it.

Professor Grandin’s insight was that audits could be considerably improved by reducing the questionnaires to just a few, select, critical questions. For example, asking what percentage of animals displayed signs of lameness replaced the need to ask several questions about the detail of the animal-handling process. Why? Because the detail was largely irrelevant – if more than a certain percentage of animals displayed lameness, then it stood to reason that something was going wrong and needed addressing.

As privacy professionals, we can all learn something from this drive towards audit simplicity. Having worked with many businesses on privacy assessments ranging from full-blown audits to Privacy Impact Assessments for new product launches, and vendor due diligence projects to staff awareness questionnaires, I’ve often been struck by how unnecessarily long and detailed many audit questionnaires are.

This length is more often than not attributable to a concern that, if questioners don’t ask every last question they can think of, they might miss something. That’s a fair concern, but what it overlooks is that a questionnaire is the beginning and not the end of a well-run audit – it’s simply the means by which you start gathering information in order to identify potential issues needing follow-up. Not only that, but it’s far better to get a high response rate to a few well-selected questions than a low-to-zero rate on a more detailed set.

So the next time you have to circulate an audit questionnaire to an internal development team about some new product launch, or to an external vendor about its compliance practices, or maybe to another business unit about its data-handling procedures, then keep the following practical tips in mind:

1.  Keep it short and sweet. Nobody likes answering lengthy questionnaires, so if you send one out, expect that it either won’t get answered or won’t get answered well. A one-to-two-page questionnaire of a few select questions will encourage a relatively high response rate; at three to four pages, you’ll watch your response rate fall off a cliff; and with anything longer, you’re almost guaranteeing next to no responses – no matter how much chasing, cajoling, or threatening you do!

2.  Ask critical questions only. Think carefully about what you really need to know for initial issue-spotting purposes. Focus, for example, on what data you are collecting, why, who it will be shared with, where it will be processed, and what security is in place. Don’t worry too much about the detail for now – you can (and should!) follow that up later if the initial responses ring any alarm bells (“You’re sending data to Uzbekistan? Why? Who will have access to it? What protections are in place?”).

3.  Use open questions. It’s tempting to design a checklist questionnaire. In theory, this makes the respondent’s job easier and encourages consistency of responses. In practice, however, checklists aren’t always well-suited to privacy assessments. It’s difficult to capture the wide array of potential data types, uses, recipients etc. through checklists alone unless you use very lengthy checklists indeed – resulting in the response-rate issues already mentioned above. Open questions, by contrast, are generally simpler and shorter to ask, and lead to more, and often more informative, responses (e.g. consider “Who will you share data with and why?” vs. “Who will you share data with? Tick all of the following that apply: Customers [ ] Customers’ end users [ ] Prospective customers [ ] Employees [ ] Contractors [ ] Vendors [ ] etc.”).

4.  Don’t lose sight of the wood for the trees. In many of the questionnaires I’ve seen, questioners often delve straight into the detail, asking very specific questions about the nature of the data processed, the locations of privacy notices, specific security standards applied and so on. What’s often missing is a simple, upfront question asking what the particular product, service or processing operation actually is or does. Asking this provides important context that will necessarily influence the depth of review you need to undertake. Better still, if you can actually get to see the “thing” in practice, then seize the opportunity – this will crystallize your understanding far better than any questionnaire or verbal description ever could.

5.  We never talk anymore. As already noted, a questionnaire is just the beginning of an assessment. Use it to gather the initial data you need, but then make sure to follow up with a meeting to discuss any potential issues you identify. You will learn far more from person-to-person meetings than you ever can through a questionnaire – interviewees will often volunteer information in-person you never thought to ask on your questionnaire, and some interview responses may prompt you to think of questions “in the moment” that you’d never otherwise have thought of back in the isolation of your office.

6.  Not all audits are created equal.  The above are just some practical suggestions borne out of experience but, obviously, not all audits are created equal.  As a questioner, you need to have the flexibility to adapt your questionnaire to the specific context in hand – a “lessons learned” audit following a data breach will by necessity have to be more detailed and formal than an internal staff awareness assessment.  Exercise good judgement to tailor your approach as the situation requires!

Handling government data requests under Processor BCR

Posted on June 2nd, 2015

Earlier today, the Article 29 Working Party published some new guidance on Processor BCR. There’s no reason you would have noticed this, unless you happen to be a BCR applicant or regularly visit the Working Party’s website, but the significance of this document cannot be overstated: it has the potential to shape the future of global data transfers for years to come.

That’s a bold statement to make, so what is this document – Working Party Paper WP204 “Explanatory Document on the Processor Binding Corporate Rules” – all about? Well, first off, the name kind of gives it away: it’s a document setting out guidance for applicants considering adopting Processor BCR (that’s the BCR that supply-side companies – particularly cloud-based companies – are all rushing to adopt). Second, it’s not a new document: the Working Party first published it in 2013.

The importance of this document now is that the Working Party have just updated and re-published it to provide guidance on one of the most contentious and important issues facing Processor BCR: namely how Processor BCR companies should respond to government requests for access to data.

Foreign government access to data – the EU view

To address the elephant in the room, ever since Snowden, Europe has expressed very grave concerns about the ‘adequacy’ of protection for European data exported internationally – and particularly to the US. This, in turn, has led to repeated attempts by Europe to whittle away at the few mechanisms that exist for lawfully transferring data internationally, from the European Commission threatening to suspend Safe Harbor through to the European Parliament suggesting that Processor BCR should be dropped from Europe’s forthcoming General Data Protection Regulation (a suggestion that, thankfully, has fallen by the wayside).

By no means the only concern, but certainly the key concern, has been access to data by foreign government authorities. The view of EU regulators is that EU citizens’ data should not be disclosed to foreign governments or law enforcement agencies unless strict mutual legal assistance protocol has been followed. They rightly point out that EU citizens have a fundamental right to protection of their personal data, and that simply handing over data to foreign governments runs contrary to this principle.

By contrast, the US and other foreign governments say that prompt and confidential access to data is often required to prevent crimes of the very worst nature, and that burdensome mutual legal assistance processes often don’t allow access to data within the timescales needed to prevent these crimes. The legitimate but conflicting views of both sides lead to the worst kind of outcome: political stalemate.

The impact of foreign government access to data on BCR

In the meantime, businesses have found themselves trapped in a ‘no man’s land’ of legal uncertainty – the children held responsible for the sins of their parent governments. Applicants wishing to pursue Processor BCR have particularly found themselves struggling to meet its strict rules concerning government access to data: namely that any “request for disclosure should be put on hold and the DPA competent for the controller and the lead DPA for the BCR should be clearly informed about it.” (see criteria 6.3 available here)

You might fairly think: “Why not just do this? If a foreign government asks you to disclose data, why not just tell them you have to put it on hold until a European DPA sanctions – or declines – the disclosure?” The problem is that reality is seldom that straightforward. In many jurisdictions (and, yes, I’m particularly thinking of the US) putting a government data disclosure order “on hold” and discussing it with a European DPA is simply not possible.

This is because companies are typically prohibited under foreign laws from discussing such disclosure orders with ANYONE – data protection authorities included – and the penalties for doing so can be very severe, up to and including jail time for company officers. And let’s not forget that, in some cases, the disclosure order may be necessary to prevent truly awful offences – so whatever the principle to be upheld, sometimes the urgency or severity of a particular situation simply will not allow for considered review and discussion.

But that leaves companies facing a catch-22: if they receive one of these orders, they are in breach of foreign legal requirements for not complying with it; but if they do comply with it, they risk falling foul of European data protection rules. And, if you’re a Processor BCR applicant, you might rightly wonder how on earth you can possibly give the kind of commitment that the Working Party expects of you under the Processor BCR requirements.

How the Working Party’s latest guidance helps

To their credit, the Working Party have acknowledged this issue and this is why their latest publication is so important. They have updated their BCR guidance to note that “in specific cases the suspension and/or notification [to DPAs of foreign government data access requests] are prohibited”, including for example “a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation”. In these instances, they expect BCR applicants to use “best efforts to obtain the right to waive this prohibition in order to communicate as much information as it can and as soon as possible”.

So far, so good. But here’s the kicker: they then say that BCR applicants must be able to “demonstrate” that they exercised these “best efforts” and, whatever the outcome, provide “general information on the requests it received to the competent DPAs (e.g. number of applications for disclosure, type of data requested, requester if possible, etc.)” on an annual basis.

And therein lies the problem: how does a company “demonstrate” best efforts in a scenario where a couple of NSA agents turn up on its doorstep brandishing a sealed FISA order and demanding immediate access to data? You can imagine that gesticulating wildly probably won’t cut it in the eyes of European regulators.

And what about the requirement to provide “general information” on an annual basis, including the “number of applications for disclosure”? In the US, FISA orders may only be reported in bands of 1,000 – so, even if a company received only one or two requests in a year, the most it could disclose is that it received between 0 and 999 requests, making government access to its data seem far more voluminous than it really was.

I don’t want problems, I want solutions!!!

So, if you’re a Processor BCR applicant, what do you do? You want to see your BCR application through to show your strong commitment to protecting individuals’ personal data, and you certainly don’t want to fall back on a weaker solution, like Model Clauses or Safe Harbor, that won’t carry equivalent protections. But, at the same time, you recognize the reality that there will be circumstances where you are compelled to disclose data and that there will be very little you can do – or tell anyone – in those circumstances.

Here’s my view:

  • First off, you need a documented government data access policy. It’s unforgivable in this day and age, particularly in light of everything we have learned in the past couple of years, not to have some kind of written policy on how to handle government requests for data. More importantly, having a policy – and sticking to it – is all part and parcel of demonstrating your “best efforts” when handling government data requests.
  • Second, the policy needs to identify which business stakeholders will have responsibility for managing the request – and, as a minimum, this needs to include the General Counsel and, ideally, the Chief Privacy Officer (or equivalent). They will represent the wall of defense against government overreach in data access requests and advise when requests should be challenged for being overly broad or inappropriately addressed to the business, rather than to its customers.
  • Third, don’t make it easy for the government. If they want access to your data, make them work for it. It’s your responsibility as the custodian of the data to protect your data subjects’ rights. To that end, ONLY disclose data when LEGALLY COMPELLED to do so – if access to the data really is that important, governments can typically get a court order in a very short timeframe. Do NOT voluntarily disclose data in response to a mere request unless there really are very compelling reasons for doing so – reasons that you fully document and justify.
  • Fourth, even if you are under a disclosure order, be prepared to challenge it. That doesn’t necessarily mean taking the government to court each and every time, but at least question the scope of the order and ask whether – bearing in mind any BCR commitments you have undertaken – the order can be put on hold while you consult with your competent DPAs. The government may not be sympathetic to your request, particularly in instances of national security, but that doesn’t mean you shouldn’t at least ask.
  • Fifth, follow the example of your peers and consider publishing annual transparency reports, à la Google, Microsoft and Yahoo. While there may be prohibitions against publishing the total numbers of national security requests received, the rules are typically more relaxed when publishing aggregate numbers of criminal data requests. This, in principle, seems like a good way of fulfilling your annual reporting responsibility to data protection authorities and – in fact – goes one step further: providing transparency to those who matter most in this whole scenario, the data subjects.
So why does the Working Party’s latest opinion matter so much? It matters because it’s a vote of confidence in the Processor BCR system and an unprecedented recognition by European regulatory authorities that there are times when international businesses really do face insurmountable legal conflicts.

Had this opinion not come when it did, the future of Processor BCR would have been dangerously undermined; faced with the prospect of Safe Harbor’s slow and painful demise and the impracticality of Model Clauses, many businesses would have been left without a realistic data export solution, further entrenching a kind of regulatory ‘Fortress Europe’ mentality.

The Working Party’s guidance, while still leaving challenges for BCR applicants, works hard to strike that hard-to-find balance between protecting individuals’ fundamental rights and recognizing the reality of cross-jurisdictional legal constraints – and, for that, they should be commended.

Subject access requests and data retention: two sides of the same coin?

Posted on October 3rd, 2014 by

Over the past year or so, there’s been a decided upswing in the number of subject access requests made by individuals to organizations that crunch their data.  There are a number of reasons for this, but they’re principally driven by a greater public awareness of privacy rights in a post-Snowden era and following the recent Google “Right to be Forgotten” decision.

If you’re unfamiliar with the term “subject access request”, then in simple terms it’s a right enshrined in EU law for an individual to contact an organization and ask it (a) whether it processes any personal information about the individual in question, and (b) if so, to supply a copy of that information.

A subject access request is a powerful transparency tool for individuals: the recipient organization has to provide the requested information within a time period specified by law, and very few exemptions apply.  However, these requests often prove disproportionately costly and time-consuming for the organizations that receive them – think about how much data your organization holds, and then ask yourself how easy it would be to pull all that data together to respond to these types of requests.  Imagine, for example, all the data held in your CRM databases, customer support records, IT access logs, CCTV footage, HR files, building access records, payroll databases, e-mail systems, third party vendors and so on – picture that, and you get the idea.

In addition, while many subject access requests are driven by a sincere desire for data processing transparency, some (inevitably) are made with legal mischief in mind – for example, the disgruntled former employee who makes a subject access request as part of a fishing expedition to try to find grounds for bringing an unfair dismissal claim, or the representative from a competitor business looking for grounds to complain about the recipient organization’s data compliance.  Because of these risks, organizations are often hesitant about responding to subject access requests in case doing so attracts other, unforeseen and unknown, liabilities.

But, if you’re a data controlling business facing this conundrum, don’t expect any regulatory sympathy.  Regulators can only enforce the law as it exists today, and the law expects prompt, comprehensive disclosure.  Not only that, but the fact that subject access requests prove costly and resource intensive to address serves a wider regulatory goal: namely, applying pressure on organizations to reduce the amount of data they hold, consistent with the data protection principle of “data minimization”.

Therefore, considering that data storage costs are becoming cheaper all the time and that, in a world of Big Data, data collection is growing at an exponential rate, subject access becomes one of the most important – if not the most important – tool regulators have for encouraging businesses to minimize the data they retain.  The more data you hold, the more data you have to disclose in response to a subject access request – and the more costly and difficult that is to do.  This, in turn, makes adopting a carefully thought-out data retention policy much more attractive, whatever other business pressures there may be to keep data indefinitely.  Retain data for just a year or two, and there’ll be an awful lot less you need to disclose in response to a subject access request.  At the same time, your organization will enhance its overall data protection compliance.

So what does all this mean?  When considering your strategy for responding to subject access requests, don’t consider it in isolation; think also about how it dovetails with your data retention strategy.  If you’re an in-house counsel or CPO struggling to get business stakeholder buy-in to adopt a comprehensive data retention strategy, use subject access risk as a means of achieving this internal buy-in.  The more robust your data retention policies, the more readily you’ll be able to fulfill subject access requests within the timescales permitted by law and with less effort, reducing complaints and enhancing compliance.  Conversely, with weaker (or non-existent) data retention policies, your exposure will be that much greater.

Subject access and data retention are therefore really just two sides of the same coin – and you wouldn’t base your compliance on just a coin toss, would you?

Processor BCR have a bright future

Posted on July 8th, 2014 by

Last month, the Article 29 Working Party sent a letter to the President of the European Parliament about the future of Binding Corporate Rules for processors (BCR-P) in the context of the EU’s ongoing data privacy legislative reform.

The letter illustrates the clear support that BCR-P have – and will continue to have – from the Working Party.  Whilst perhaps not surprising, given that the Working Party originally “invented” BCR-P in 2012 (having initially invented controller BCR way back in 2003), the letter affirms the importance of BCR-P in today’s global data economy.

“Currently, BCR-P offer a high level of protection for the international transfers of personal data to processors” writes Isabelle Falque-Pierrotin, Chair of the Working Party, before adding that they are “an optimal solution to promote the European principles of personal data abroad.” (emphasis added)

As if that weren’t enough, the letter also issues a thinly-veiled warning to the European Parliament, which has previously expressed skepticism about BCR-P: “denying the possibility for BCR-P will limit the choice of organisations to use model clauses or to apply the Safe Harbor if possible, which do not contain such accountability mechanisms to ensure compliance as it is provided for in BCR-P.”

The Working Party’s letter notes that 3 companies have so far achieved BCR-P (and we successfully acted on one of those – see here) with a further 10 applications on the go (and, yes, we’re acting on a few of those too).

Taking the helicopter view, the Working Party’s letter is representative of a growing trend for global organizations to seek BCR approval in preference to other data export solutions: back in 2012, just 19 organizations had secured controller BCR approval; two years later, that figure stands at 53 (controller and processor BCR combined).

There are several reasons why this is the case:

1.  BCR are getting express legislative recognition:  The Commission’s draft General Data Protection Regulation expressly recognizes BCR, including BCR-P, as a valid legal solution to the EU’s strict data export rules.  To date, BCR have had only regulatory recognition, and then not consistently across all Member States, casting a slight shadow over their longer term future.  Express legislative recognition ensures the future of BCR – they’re here to stay.

2.  Safe Harbor is under increasing strain:  The ongoing US/EU Safe Harbor reform discussions, while inching towards a slow conclusion, have arguably stained its reputation irreparably.  US service providers that rely on Safe Harbor to export customer data to the US (and sometimes beyond) find themselves stuck in deal negotiations with customers who refuse to contract with them unless they implement a different data export solution.  Faced with the prospect of endless model clauses or a one-off BCR-P approval, many opt for BCR-P.

3.  BCRs have entered the customer lexicon:  If you’d said the letters “B C R” even a couple of years ago, then outside of the privacy community only a handful of well-educated organizations would have known what you were talking about.  Today, customers are much better informed about BCR and increasingly view BCR as a form of trust mark (which, of course, they are), encouraging the service sector to adopt BCR-P as a competitive measure.

4.  BCRs are simpler than ever before:  Gone are the days when a BCR application took 4 years and involved traveling all over Europe to visit data protection authorities.  Today, a well-planned and executed BCR application can be achieved in a period of 12 – 18 months, all managed through a single lead data protection authority.  The simplification of the BCR approval process has been instrumental in increasing BCR adoption.

So if you’re weighing up the pros and cons of BCR against other data export solutions, then deliberate no longer: BCR, and especially BCR-P, will only grow in importance as the EU’s data export regime gets ever tougher.


FTC in largest-ever Safe Harbor enforcement action

Posted on January 22nd, 2014 by

Yesterday, the Federal Trade Commission (“FTC”) announced that it had agreed to settle with 12 US businesses for alleged breaches of the US Safe Harbor framework. The companies involved were from a variety of industries and each handled a large amount of consumer data. But aside from the surprise of the large number of companies involved, what does this announcement really tell us about the state of Safe Harbor?

This latest action suggests that the FTC is ramping up its Safe Harbor enforcement in response to recent criticisms from the European Commission and European Parliament about the integrity of Safe Harbor (see here and here) – particularly given that one of the main criticisms about the framework was its historic lack of rigorous enforcement.

Background to the current enforcement

So what did the companies in question do? The FTC’s complaints allege that the companies involved ‘deceptively claimed they held current certifications under the U.S.-EU Safe Harbor framework‘. Although participation in the framework is voluntary, if you publicise that you are Safe Harbor certified then you must, of course, maintain an up-to-date Safe Harbor registration with the US Department of Commerce and comply with your Safe Harbor commitments.

Key compliance takeaways

In this instance, the FTC alleges that the businesses involved had claimed to be Safe Harbor certified when, in fact, they weren’t. The obvious message here is don’t claim to be Safe Harbor certified if you’re not!

The slightly more subtle compliance takeaway for businesses who are correctly Safe Harbor certified is that they should have in place processes to ensure:

• that they keep their self-certifications up-to-date by filing timely annual re-certifications;
• that their privacy policies accurately reflect the status of their self-certification – and if their certifications lapse, that there are processes to adjust those policies accordingly; and
• that the business is fully meeting all of its Safe Harbor commitments in practice – there must be actual compliance, not just paper compliance.

The “Bigger Picture” for European data exports

Despite this decisive action by the FTC, European concerns about the integrity of Safe Harbor are likely to persist.  If anything, this latest action may serve only to reinforce concerns that some US businesses are either falsely claiming to be Safe Harbor certified when they are not or are not fully living up to their Safe Harbor commitments.

The service provider community, and especially cloud businesses, will likely feel this pressure most acutely.  Many customers already perceive Safe Harbor to be “unsafe” for data exports and are insisting that their service providers adopt other EU data export compliance solutions.  So what other solutions are available?

While model contracts have the benefit of being a ‘tried and tested’ solution, the suite of contracts required for global data exports is simply unpalatable to many businesses.  The better solution is, of course, Binding Corporate Rules (BCR) – a voluntary set of self-regulatory policies, adopted by businesses, that satisfies EU data protection standards and is submitted to, and authorised by, European DPAs.  Since 2012, service providers have been able to adopt processor BCR, and those that do find that this gives them greater flexibility to manage their internal data processing arrangements while continuing to afford a high degree of protection for the data they process.

It’s unlikely that Safe Harbor will be suspended or disappear – far too many US businesses are dependent upon it for their EU/CH to US data flows.  However, the Safe Harbor regime will likely change in response to EU concerns and, over time, will come under increasing amounts of regulatory and customer pressure.  So better to consider alternative data export solutions now and start planning accordingly rather than find yourself caught short!


Global protection through mutual recognition

Posted on July 23rd, 2013 by

At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation on the other hand is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s applicable data security measures depending on the category of data or Germany’s rules to include certain language in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and in the EU directive, it may not be expressly named as a principle, but it is consistently everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

This article was first published in Data Protection Law & Policy in July 2013.