Archive for the ‘Accountability’ Category

The EU-US Privacy Shield – A “New Deal” for Safe Harbor?

Posted on February 2nd, 2016

At the Democratic National Convention in Chicago in 1932, as America seemed endlessly trapped within the depths of its Great Depression, Governor Franklin D. Roosevelt accepted his party’s nomination to run for President and promised the American people this:

I pledge you, I pledge myself, to a new deal for the American people.

This pledge – to lift the American people out of the economic troughs they had endured for years – helped Governor Roosevelt achieve office and become the next President of the United States.  Over the coming years, measures taken by Roosevelt under his “New Deal” program helped take the United States out of the Great Depression and restore it to economic glory.

This piece of history has obvious parallels with the news announced by the European Commission today that it has agreed a “new framework” (admittedly, not quite as catchy as a “New Deal”) with the United States for transatlantic data flows: US data exports have been in crisis since the Snowden revelations, the new framework promises to significantly benefit ‘the man on the street’, and this agreement is widely perceived as critical to US businesses and the US economy.

The effort taken to achieve this new framework has been simply monumental and, taken at face value, it’s cause for celebration.  But, as any lawyer will tell you, the devil is in the detail and today is only really part of the story…

What does the new framework provide?

To begin with, the EU and US have agreed a rebrand – Safe Harbor 2.0 will instead be called the “EU-US Privacy Shield.”  Critics will undoubtedly say that a “rose by any other name…” (or perhaps, less poetically, that “if it walks like a duck and talks like a duck…”), but the Commission has been eager to emphasize that the new framework has significant differences from the existing Safe Harbor.

In fact, the Commission takes great care in its press release not to even mention Safe Harbor, except to reference it very briefly for historical context purposes.  Announcing the EU-US Privacy Shield, Justice Commissioner Jourová said:

“The new EU-US Privacy Shield will protect the fundamental rights of Europeans when their personal data is transferred to U.S. companies. For the first time ever, the United States has given the EU binding assurances that the access of public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms. Also for the first time, EU citizens will benefit from redress mechanisms in this area. In the context of the negotiations for this agreement, the US has assured that it does not conduct mass or indiscriminate surveillance of Europeans. We have established an annual joint review in order to closely monitor the implementation of these commitments.”

Like Roosevelt’s New Deal which was built upon the “three Rs” (relief, recovery, and reform), so too is the EU-US Privacy Shield built upon three core goals:

1.  Strong obligations on companies handling Europeans’ personal data and robust enforcement: U.S. companies wishing to import personal data from Europe will need to commit to robust obligations on how personal data is processed and individual rights are guaranteed. The Department of Commerce will monitor that companies publish their commitments, which makes them enforceable under U.S. law by the U.S. Federal Trade Commission. In addition, any company handling human resources data from Europe has to commit to comply with decisions by European DPAs.

2.  Clear safeguards and transparency obligations on U.S. government access: For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it. 

3.  Effective protection of EU citizens’ rights with several redress possibilities: Any citizen who considers that their data has been misused under the new arrangement will have several redress possibilities. Companies have deadlines to reply to complaints. European DPAs can refer complaints to the Department of Commerce and the Federal Trade Commission. In addition, Alternative Dispute Resolution will be free of charge. For complaints on possible access by national intelligence authorities, a new Ombudsperson will be created.

But what does this mean in practice?

Well, don’t break out the champagne just yet!

Reacting to today’s news, Jan Albrecht, the German MEP who fronted the European Parliament’s negotiations on the new General Data Protection Regulation was quick to criticize on Twitter:

“Listen carefully: EU and US side will need ‘some’ weeks to get this into concrete legal wording. This is no ‘deal’!”

Even Edward Snowden weighed in:

“It’s not a ‘Privacy Shield,’ it’s an accountability shield. Never seen a policy agreement so universally criticized.”

Grumbling from critics aside, the bigger point to note is this: before any data transfers can take place under the new EU-US Privacy Shield, the European Commission first has to adopt a formal ‘adequacy’ decision (as it has done in the past for the old Safe Harbor and for model clauses).  It’s working on that now but, even before that can happen, it has to take advice from the Article 29 Working Party – and it’s probably a fair assumption that some members of the Working Party are less than charitably disposed towards any kind of US data transfers.

What also remains unclear is the status of current Safe Harbor certified companies – will they automatically be transitioned into the new EU-US Privacy Shield?  The commercial will to see this happen will be strong but, if the scheme is to succeed in achieving any kind of credibility, it’s difficult to see how this can really happen in practice – grandfathering in businesses under a discredited data transfer framework won’t do wonders to win over critics of the new framework.

Put simply: you’re not going to be able to rely on the EU-US Privacy Shield for data transfers for some time yet.  So don’t plan to do so.

Should we build our future data export strategy on the new Privacy Shield?

This really is a tough question to answer.  While a political and legal solution may have been found, at the end of the day that matters little if no one uses it.

And that’s the single biggest problem the EU-US Privacy Shield has to overcome.  Given that detail about the new Privacy Shield is scarce (limited pretty much to what’s been explained above); given that civil liberties groups are almost certain to challenge the EU-US Privacy Shield pretty much straightaway; and given that the CJEU Schrems ruling handed national DPAs the ability to investigate the ‘adequacy’ of data transfers made under any new Commission adequacy findings – including this new kid on the block – then you have to ask the question: why would anyone want to use it?

Over the past 4 months, US companies have invested huge amounts of time and effort to transition their data exports over to model clauses from Safe Harbor.  Typically, the pressure to do so has been customer-led, with EU customers insisting that their US suppliers use model clauses if those suppliers want their business.  The reality is that the way businesses use data hasn’t changed, whether EU or US based; only the paperwork under which they use it.  The concern hasn’t been about surveillance, or better protection for data, or anything like that – it’s been about keeping the wheels of commerce turning.

With that in mind, and having invested all this effort to transition over to a new data export model (often necessitating securing significant budget from senior managers), why would US businesses wish to transition over again to the new EU-US Privacy Shield?  Especially if, after doing so, EU customers still refuse to accept it due to concerns that it may be challenged by data subjects or DPAs?  Ask yourself this: as an EU customer, would you accept a US supplier using the EU-US Privacy Shield without also providing some kind of ‘backup’ solution in the form of model clauses?

So no matter how much effort has been put into agreeing this framework for the EU-US Privacy Shield, the biggest challenge is yet to come: market acceptance, and there’s a real ‘hearts and minds’ campaign that needs to be staged here to win over the doubters.  Without this, the EU-US Privacy Shield may find itself consigned to become nothing more than an interesting footnote in data export history.

But, to end on an optimistic note, there is one positive development: the Privacy Shield at least has the same spelling in the EU and the US.  EU privacy lawyers who have spent years cursing at their computers as their word processing software automatically corrects their spelling of “harbor” to “harbour” can at last breathe a sigh of relief…

Getting to know the General Data Protection Regulation, Part 7 – Accountability Principles = More Paperwork

Posted on January 27th, 2016


Ever since the European Commission’s proposal for the GDPR back in January 2012, ‘Accountability’ has formed the backbone of the draft Regulation through its legislative journey.

However, let’s be clear – the principle of Accountability (which essentially refers to the various obligations organisations will have to follow in order to demonstrate data protection compliance) is not a new concept in data protection (see, for example, the most recent version of the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data). However, it has never played such a significant role in an EU regulatory framework – until now.

Whilst analysis of the ‘Accountability Principles’ has rightly focused on the challenges that they present to organisations, the principles also represent an opportunity for organisations to create trust in their data processing operations; with both consumers and Data Protection Authorities.

What does the law require today?

Whilst the current EU Data Protection Directive (the “Directive”) does not explicitly recognise the concept of ‘Accountability’, it imposes certain obligations on organisations that fall under this concept. Examples include:

  • the requirement to provide individuals with specific information about intended processing activities upon collection of personal data (via an appropriate ‘processing notice’);
  • the requirement for organisations to register (‘notify’) national Data Protection Authorities (DPAs) of the intended processing activities; and
  • the requirement for organisations to put in place appropriate technical and organisational measures to ensure the privacy and security of the personal data that they are processing.

The Accountability principle has been the subject of discussions of many regulators, not just in Europe (where, for example, the CNIL issued its own Accountability standard in January 2015) but also globally. In 2015, the Colombian Data Protection Authority issued its own Accountability guidelines, following similar releases by jurisdictions such as Canada, Hong Kong and Australia. These guidelines aim to align themselves with the approach of the Organisation for Economic Co-operation and Development (OECD) on Accountability. In short, comprehensive governance programs are becoming increasingly common – and obligatory.

What will the General Data Protection Regulation Require?

The Accountability principle runs through the core of the GDPR. Article 22 requires that organisations implement ‘appropriate technical and organisational measures’ to be able to ‘demonstrate’ their compliance with the Regulation, which shall also include ‘the implementation of appropriate data protection policies’. Therefore, in preparing for the Regulation, organisations will have to implement not only internal and publicly-facing policies, records and notices, but also technical measures, and fundamental personnel and strategic changes to their processing operations. Measures shall include:


Businesses will need to put in place comprehensive (and clearly drafted) privacy policies and notices for individuals, setting out full details of the processing of their personal data, including the legal basis of the processing, the safeguards in place for international transfers and data retention periods. Additional documentation which addresses the comprehensive rights now available to individuals under the Regulation will have to be put in place, e.g. an appropriate subject access policy which addresses the expanded information-requirements under the Regulation.

Similar ‘internal’ documentation setting out full details of the various processing activities an organisation undertakes will need to be kept by all organisations employing more than 250 persons (and in some limited cases by organisations employing fewer than 250 persons). Hence, although the obligation to register with a Data Protection Authority has been removed under the new Regulation, many organisations will still have to retain comprehensive records internally, which are to be made available to DPAs, where required.
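The Regulation prescribes the content of these internal records but not their format. As a loose sketch only – the field names below are illustrative, not taken from the text of the Regulation – such a record might be modelled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One entry in an organisation's internal record of processing activities.

    Field names are illustrative; the Regulation prescribes what these records
    must contain, not how they are structured.
    """
    activity: str                  # e.g. "Payroll processing"
    purpose: str                   # why the data is processed
    legal_basis: str               # e.g. "contract", "legitimate interests"
    data_categories: list = field(default_factory=list)  # e.g. ["name", "bank details"]
    data_subjects: list = field(default_factory=list)    # e.g. ["employees"]
    recipients: list = field(default_factory=list)       # internal/external recipients
    international_transfers: str = "none"                # safeguards relied upon, if any
    retention_period: str = ""                           # e.g. "6 years after employment ends"

record = ProcessingRecord(
    activity="Payroll processing",
    purpose="Paying staff salaries",
    legal_basis="contract",
    data_categories=["name", "bank details", "salary"],
    data_subjects=["employees"],
    recipients=["external payroll provider"],
    retention_period="6 years after end of employment",
)
print(record.activity)  # prints "Payroll processing"
```

A register of such records is then straightforward to export when a DPA asks to see it.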

Systems compliance

As explained by my colleague Sabba Mahmood, the Regulation also expressly recognises the concepts of ‘privacy by design’ and ‘privacy by default’, which means that organisations will be under a specific obligation to consider data privacy issues throughout the entire lifecycle of all projects and systems. A practical consequence of this obligation is that organisations will also have to consider Data Protection Impact Assessments (which are themselves a requirement under the Regulation for high-risk processing activities).

Technical compliance

The Regulation contains specific requirements for organisations to ensure the security of data, such as pseudonymisation and encryption practices, the ability to ensure the ongoing resilience and integrity of systems, including the ability to quickly restore the availability of systems in the event of a physical or technical incident, and procedures to ensure the ongoing testing of such systems to protect against such incidents. In general, organisations that suffer a data breach will have to notify those breaches to a national Data Protection Authority within 72 hours of becoming aware of the breach and, where the breach poses a high risk to the rights and freedoms of individuals, to the affected individuals ‘without undue delay’. In order to demonstrate compliance with the breach notification obligations, organisations will need to document full details of the breach, its consequences and the measures being implemented to remedy the breach.
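The 72-hour clock described above runs from the moment the organisation becomes aware of the breach, not from the breach itself – a distinction that is easy to get wrong under pressure. A minimal sketch of the deadline arithmetic (the function names are illustrative):

```python
from datetime import datetime, timedelta

# The Regulation's notification window runs from awareness of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest time to notify the DPA, measured from awareness of the breach."""
    return aware_at + NOTIFICATION_WINDOW

def is_overdue(aware_at: datetime, now: datetime) -> bool:
    """True once the notification window has closed without notification."""
    return now > notification_deadline(aware_at)

aware = datetime(2016, 1, 4, 9, 0)                       # became aware Mon 09:00
print(notification_deadline(aware))                      # 2016-01-07 09:00:00
print(is_overdue(aware, datetime(2016, 1, 6, 9, 0)))     # False
print(is_overdue(aware, datetime(2016, 1, 8, 9, 0)))     # True
```

In practice the same incident log would also capture the details, consequences and remedial measures mentioned above, since those are what demonstrate compliance.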


Of course, central to the Accountability principle is the requirement for certain organisations to appoint a Data Protection Officer (DPO). The next article in our blog series will discuss the circumstances in which this requirement applies. Once appointed, the DPO will be the focal-point of privacy compliance for the relevant organisation, not only performing a compliance role, but also an advisory role to the business and acting as contact point for employees, customers and national DPAs. Organisations must afford DPOs full access to all resources, including training resources, to help maintain their ‘expert knowledge’ and must facilitate direct reporting by the DPO ‘to the highest management level’ of the business.

What are the practical implications?

The Accountability Principles will represent a cultural shift in the ways that organisations, consumers and DPAs will approach data protection compliance. We can expect national DPAs to begin providing us with guidance on a suggested approach to this area. That being said, now is the time for organisations to consider their current approach to compliance, review the GDPR text and begin to fill in the gaps, having regard to the obligations that the GDPR will create. Actions should include:

  • A full review of internal and public-facing policies and procedures;
  • Buy-in and support from senior figures in your organisation – to effect the operational (and cultural) changes required to address the Accountability principles;
  • An analysis of all internal and external parties involved in your processing operations. This will go further than your IT, Marketing and HR Departments; and must include any of your third party service providers – who will also be caught by many of the aforementioned obligations; and
  • A gap analysis, based on the Accountability requirements to identify any shortfalls and an implementation plan to address such gaps.

Can employers read their staff’s private internet messages at work without violating human rights?

Posted on January 22nd, 2016

The simple answer is ‘yes’, according to the recent European Court of Human Rights judgment in Barbulescu v Romania, though this needs some explanation; especially given Judge Pinto de Albuquerque’s dissenting opinion.

The European Court of Human Rights decided that a Romanian engineer had not had his Article 8 right (from the Convention for the Protection of Human Rights and Fundamental Freedoms, to respect for his “private and family life, his home and his correspondence”) breached by his employer when it monitored personal messages he had sent on a work-owned Yahoo Messenger account.  He had created the account on his employer’s request, to respond to client enquiries.

Barbulescu was dismissed back in 2007 for breaching his company’s internal regulations which stated “it is strictly forbidden to disturb order and discipline within the company’s premises and especially…to use computers…for personal purposes”. Neither the Bucharest County Court nor the Bucharest Court of Appeal was persuaded by his argument that the decision to dismiss was void due to the violation of Barbulescu’s right to private life and correspondence, protected by Romanian law.

On 13 July 2007 (a couple of weeks before his dismissal) Barbulescu’s employer told him that his Yahoo communications had been monitored from 5 to 13 July. We should pause for reflection here as this was a fact highlighted by the dissenting judge: he was informed that he was being monitored after he had been monitored. His records (a forty-five page transcript of his messages to his fiancée and brother relating to personal matters such as his sex life) showed that he had used the internet at work for non-work purposes in breach of the company’s regulations.

Whilst on first blush this monitoring may appear to be a wanton violation of Barbulescu’s privacy, from a data protection perspective the court’s decision to permit it is not surprising. The key point to note is that the employer’s policies clearly stated that the use of work messenger accounts for personal purposes was prohibited i.e. that there was an absolute ban on the use of work accounts for private reasons. In accessing Barbulescu’s messages to verify his use of the account, his employer had exercised its rights for a legitimate purpose – namely to ensure that the account was being used for work purposes only.  Further, as his employer had only checked the communications of his Yahoo Messenger account (and not the other data and documents stored on his computer) the employer’s monitoring was also deemed by the court to be limited in scope, and therefore proportionate.

The court’s considerations

The court acknowledged that the notion of a private life is a broad concept and that individuals have the right to i) establish and develop relationships with others; and ii) identity and personal development. Further, it held that emails sent from work should be protected, as should information derived from monitoring personal internet usage, in deciding that the Article 8 right was ‘engaged’. When considering whether Barbulescu had a reasonable expectation of privacy when communicating from the Yahoo account, the court considered that it was clearly a strict rule in the company’s regulations that computers should not be used by employees for personal reasons. The court then queried, in view of this general prohibition, whether the employee retained a reasonable expectation that his messages would not be monitored. The employee clearly disagreed. The court also noted that he had not signed a copy of the employer’s notice (another point raised by the dissenting judge).

In any case, the court held that there had been no ‘violation’ of the employee’s Article 8 right. The question debated was whether a “fair balance between the applicant’s right to respect for his private life and correspondence and his employer’s interests” had been struck: the answer was ‘yes’. The court was also happy that the content of the messages had not been a decisive element in the previous courts’ findings. It was enough that the messages had been sent to prove that he had breached his contract of employment. The court appreciated that it is not unreasonable for employers to want to verify that their employees are completing their professional tasks during working hours.

The dissenting opinion considered that the right to freedom of expression in society implies freedom to access such services. The judge drew attention to the Convention principle that “internet communications are not less protected on the sole ground that they occur during working hours, in the workplace or in the context of an employment relationship, or that they have an impact on the employer’s business activities or the employee’s performance of contractual obligations”. In considering how best to protect employees’ private electronic communications held by employers for employment purposes, he suggested that a comprehensive internet usage policy should be in place with “specific rules” on the use of emails and instant messaging and “transparent rules” on how monitoring is conducted.

Final thoughts

The key point to note from the judgment is that any intention to monitor employee activity should be fairly disclosed to employees before it is carried out, and that any monitoring conducted must be proportionate and for justified purposes.

Information gathered as part of a monitoring activity must be securely handled amongst appropriately-authorised reviewers. It is particularly important to remember that enterprise-wide monitoring across group affiliates in different EU countries will also attract different local regulatory expectations, and that in some countries local works council or data protection officer consultations may be required before monitoring can be performed.

Companies should take time to assess what policies are in place on the monitoring of employees’ private electronic messages. In this case there was a blanket ban on personal internet use – but most employers in the UK permit occasional personal use of their IT and communication systems, as long as such communication does not get in the way of work to be done and is kept to a minimum (and, ideally, outside of working hours). Policies should therefore provide clear guidance on acceptable private use and also the circumstances in which such communications may be monitored.

Most important to check is whether such policies have actually been brought to the attention of employees. Granted, signed consent was not thought to be necessary in this case but it would certainly have made it more difficult for the employee to bring his case if he had signed on the dotted line.

A parting thought also on disciplinary sanctions: whilst the employee was dismissed as a result of his breach in this case, employers should be careful (especially where employees have over two years’ service) to apply an appropriate sanction to any breach found. Dismissal may not always be reasonable on the facts of each case and verbal and/or written warnings should be considered in the first instance, to head off any potential unfair dismissal claims.

How to build a data protection compliance program from scratch

Posted on December 14th, 2015

It’s a daunting task.  You’re the newly appointed data privacy person in your organisation – either because you applied for the role or because someone “volunteered” you for it – and now you have to build out a data protection compliance program.  Worldwide.  From scratch.  What do you do?

Well, first off, take comfort in knowing that you’re not alone.  Thanks to a flurry of recent data privacy activity in the EU (Right to be Forgotten, Safe Harbor, GDPR) and beyond (the emergence of Asia-Pac privacy regimes, the Canadian Anti-Spam law, high profile US data breaches), the need for data protection compliance has hit the C-suite agenda like never before.  Execs everywhere are turning to folk like you to solve the problem.

And some more good news: there’s very little you can do wrong.  If you’re being tasked with building out a global data protection compliance program, odds are your organisation doesn’t have much of a program currently.  So every step you take, no matter how small, is a step in the right direction.  Though with a little bit of forethought, not only will you NOT go wrong, you will deliver SIGNIFICANT benefits in terms of compliance, risk reduction, and brand enhancement.

Here’s how you go about it:

1.  Decide what kind of organisation you want to be.

It sounds so simple, but this step is key.  What is your data protection strategy?  Is it legally-driven (goal = legal compliance), risk-driven (goal = risk reduction) or ethics-driven (goal = do the right thing)?

This crucial decision will be dependent on many factors, including the nature of your organisation (a mature, regulated business may have very different goals from your Silicon Valley start-up), your values as an organisation (what does your Code of Conduct say?), how much top-level support you have, what your competitors do, available budget, privacy ‘crises’ the business has experienced in recent history, and your personal beliefs as the organisation’s data privacy evangelist – to name just a few.

These aren’t exclusive strategies, either – often the “right” approach will be some combination of the three, but perhaps with a particular leaning towards one goal in particular.  In any event, the decision taken at this point will inform every subsequent action you take, so consider wisely.

In addition to this, you need to identify your baseline privacy standards – i.e. the privacy framework against which you will benchmark your compliance.  Will you use the EU Data Protection Directive, the US Fair Information Practice Principles, or perhaps something with more of an international flavour – like BCR, CBPR or the OECD Principles?

Remember, this is about deciding your baseline – depending on where you operate geographically, you may need to raise yourself above this baseline in some countries, but you at least need a baseline in the first place to bring some kind of global consistency to the way your organisation protects data.

2.  Find out what kind of organisation you are today.

Before you can embark on putting in place compliance controls, you need to do a little fact-finding.  Among the things you need to find out are:

  • what data you process today, how, why and where;
  • who are your internal data privacy ‘champions’ (you’ll need them) and your data privacy ‘trolls’ (you’ll need to win them over);
  • what policies, procedures, guidance and training, if any, you already have – and what kind of state they’re in; and
  • the level of awareness that exists within the organisation to date about the importance of data protection compliance.

Depending on the size of your organisation, this can be a challenging task, so identify others who can support you in this process – the data privacy ‘champions’ mentioned above, whether business unit leaders, country managers, or just internal privacy enthusiasts.  Only once you are armed with this information will you be ready to determine what you need to do next.

Which leads nicely onto the next point…

3.  Work out how to become the kind of organisation you want to be.

The next stage is a gap analysis.  You know what you are today, you know what you want to become, so work out the gaps.  Once you’ve identified the gaps, then you’ll be ready to start putting in place the measures necessary to fill them.

When performing this gap analysis, be careful to prioritise though.  Not all gaps carry equal importance – some will pose significant risks, either to individuals directly or in terms of organisational risk, and these should be addressed first (for example, you may discover that sensitive personal information is being shared internally, or even worse, externally, in an uncontrolled fashion).  Those that are less significant (say, not having sorted out your website privacy policy in a while) should be pushed lower down the priority list.

When you’ve identified your gaps, then the real fun begins – you need to figure out how to plug those gaps!  That will entail a combination of many activities, typically including things like creating a compliance team, adopting new policies, instituting training, building out Privacy by Design processes, creating supplier due diligence standards, designing new contract templates, and more.  If needs be, look to peers in similar organisations or call upon external experts for guidance.

4.  Become the organisation you want to be.

You know what I’m going to say next: you’ve figured out how to plug the gaps, so plug them already!  Transform your organisation from where you are today to where you want to be.

5.  Rinse, wash and repeat.

A privacy professional’s work is never done.  It’s important to remember a compliance program is just that: a program, not a project.  That means it must undergo review to ensure that it remains valid, up-to-date, and works well in practice – and, if not, it needs changing.  You must institute regular audits to ensure this is the case.

Metrics can help here.  You can assess the success of the compliance program you have instituted through a number of potential metrics – for example, privacy awareness among staff, number of privacy complaints reported, data breaches suffered, and so on.
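As a rough illustration, tracking such metrics period-on-period can be as simple as comparing two snapshots – the metric names and figures below are invented for the example:

```python
# Two quarterly snapshots of programme metrics (illustrative figures only).
metrics_q1 = {"privacy_training_completion": 0.62, "complaints": 14, "breaches": 3}
metrics_q2 = {"privacy_training_completion": 0.81, "complaints": 9, "breaches": 1}

def quarter_on_quarter(before: dict, after: dict) -> dict:
    """Change in each metric between two reporting periods."""
    return {name: round(after[name] - before[name], 2) for name in before}

delta = quarter_on_quarter(metrics_q1, metrics_q2)
print(delta)  # {'privacy_training_completion': 0.19, 'complaints': -5, 'breaches': -2}
```

Even a table this simple gives executives a trend line to judge the program by, which is usually far more persuasive than a snapshot in isolation.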

These metrics will not only help you assess the ongoing success of your program, but also help you demonstrate ROI to your sponsors and executives.  And, once you’ve done that, you get to begin all over again!

Time for US businesses to consider an anti-surveillance pledge?

Posted on October 23rd, 2015

Breakdown of trust is a terrible thing that often has negative and unpredictable consequences, not just for those directly involved but also for those inadvertently caught up in the ensuing fall-out: for the friends who are forced to choose sides when a relationship breaks up, for the children affected when a marriage breaks down and, yes, for the businesses harmed when transatlantic trust between two great economic regions falls apart.

Because, when all is said and done, the recent collapse of Safe Harbor is ultimately attributable to a breakdown in trust.  Whatever legal arguments there are about data export “adequacy”, Europe has fundamentally lost trust in the safe handling of European citizens’ data Stateside.  The resulting panic was inevitable – international conglomerates worry about their regulatory compliance, US supply-side businesses realize that there is now no effective legal solution for their lawful handling of data, and regulators move to calm nerves by announcing, in faintly threatening tones, that they will not take enforcement action – for the time being.

Which leaves us all in a quandary.  Businesses must by necessity start putting in place a patchwork of legal solutions designed, if not to achieve compliance, then at least to manage risk, but many of these solutions will not be officially recognized either by law or the regulatory community (how exactly should US processors lawfully onward transfer data to sub-processors?).  Consequently, these solutions – while necessary in an environment where no alternatives exist – will likely fuel further legislative and regulatory speculation that companies are working around data protection rules, rather than with them.

But when compliance becomes impossible, everyone becomes a criminal.  Think of it this way:  if you tax me at 40%, I will pay.  But tax me at 90% and I simply can’t afford to, so won’t – no matter how much I may believe in the principle of taxation or want to be a law-abiding member of society.

An anti-surveillance pledge to restore trust

So where does that leave us?  The real dialogue to have here is one around restoring trust.  This is absolutely critical.  And that is why all businesses – especially US businesses right now – must consider taking an anti-surveillance pledge.

What does an anti-surveillance pledge look like?  It takes the form of a short statement, perhaps no more than two or three paragraphs in length, under which the business would pledge never knowingly to disclose individuals’ data to government or law enforcement authorities unless either (1) legally compelled to do so (for example, by way of a warrant or court order), or (2) there is a risk of serious and imminent harm were disclosure to be withheld (for example, imminent terrorist threat).  The pledge would be signed by senior management of the business, and made publicly available as an externally-facing commitment to resist unlawful government-led surveillance activities – for example, by posting it on a website or incorporating it within an accessible privacy policy.

Will taking a pledge like this solve the EU-US data export crisis?  No.  Will it prevent government surveillance activities occurring upstream on Internet and telecoms pipes over which the business has no control?  No.  But will it demonstrate a commitment to the world that the business takes its data subjects’ privacy concerns seriously and that it will do what is within its power to do to prevent unlawful surveillance – absolutely: it’s a big step towards accountably showing “adequate” handling of data.

The more businesses that sign a pledge of this nature, the greater the collective strength of these commitments across industries and sectors; and the greater this collective strength, the more this will assist the long, slow process of restoring trust.  Only through the restoration of trust will we see a European legislative and regulatory environment once more willing to embrace the adequacy of data exports to the US.  So, if you haven’t considered it before, consider it now: it’s time for an anti-surveillance pledge.


Getting to know the GDPR, Part 2 – Out-of-scope today, in scope in the future. What is caught?

Posted on October 20th, 2015 by

The GDPR expands the scope of application of EU data protection law requirements in two main respects:

  1. in addition to data “controllers” (i.e. persons who determine why and how personal data are processed), certain requirements will apply for the first time directly to data “processors” (i.e. persons who process personal data on behalf of a data controller); and
  2. by expanding the territorial scope of application of EU data protection law to capture not only the processing of personal data by a controller or a processor established in the EU, but also any processing of personal data of data subjects residing in the EU, where the processing relates to the offering of goods or services to them, or the monitoring of their behaviour.


The practical effect is that many organisations that have to date been outside the scope of application of EU data protection law will now be directly subject to its requirements, for instance because they are EU-based processors or non-EU-based controllers who target services to EU residents (e.g. through a website) or monitor their behaviour (e.g. through cookies). For such organisations, the GDPR will introduce a cultural change and there will be more distance to cover to get to a compliance-ready status.

What does the law require today?

The Directive

At present, the Data Protection Directive 95/46/EC (“Directive“) generally sets out direct statutory obligations for controllers, but not for processors. Processors are generally only subject to the obligations that the controller imposes on them by contract. By way of example, in a service provision scenario, say a cloud hosting service, the customer will typically be a controller and the service provider will be a processor.

Furthermore, at present the national data protection law of one or more EU Member States applies if:

  1. the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State. When the same controller is established on the territory of several Member States, each of these establishments should comply with the obligations laid down by the applicable national law (Article 4(1)(a)); or
  2. the controller is not established on EU territory and, for purposes of processing personal data, makes use of equipment situated on the territory of a Member State (unless such equipment is used only for purposes of transit through the EU) (Article 4(1)(c)); or
  3. the controller is not established on the Member State’s territory, but in a place where its national law applies by virtue of international public law (Article 4(1)(b)). Article 4(1)(b) has little practical significance in the commercial and business contexts and is therefore not further examined here. The GDPR sets out a similar rule.


CJEU case law

Two recent judgments of the Court of Justice of the European Union (“CJEU“) have introduced expansive interpretations of the meaning of “in the context of the activities” and “establishment”:

  1. In Google Spain, the CJEU held that “in the context of the activities” does not mean “carried out by”. The data processing activities by Google Inc are “inextricably linked” with Google Spain’s activities concerning the promotion, facilitation and sale of advertising space. Consequently, processing is carried out “in the context of the activities” of a controller’s branch or subsidiary when the latter is (i) intended to promote and sell ad space offered by the controller, and (ii) orientates its activity towards the inhabitants of that Member State.
  2. In Weltimmo, the CJEU held that the definition of “establishment” is flexible and departs from a formalistic approach that an “establishment” exists solely where a company is registered. The specific nature of the economic activities and the provision of services concerned must be taken into account, particularly where services are offered exclusively over the internet. The presence of only one representative, who acts with a sufficient degree of stability (even if the activity is minimal), coupled with websites that are mainly or entirely directed at that EU Member State suffice to trigger the application of that Member State’s law.


What will the GDPR require?

The GDPR will apply to the processing of personal data:

  1. in the context of the activities of an establishment of a controller or a processor in the EU; and
  2. of data subjects residing in the EU by a controller not established in the EU, where the processing activities are related to the offering of goods or services to them, or the monitoring of their behaviour in the EU.

It is irrelevant whether the actual data processing takes place within the EU or not.

As far as the substantive requirements are concerned, compared to the Directive, the GDPR introduces:

  1. new obligations and higher expectations of compliance for controllers, for instance around transparency, consent, accountability, privacy by design, privacy by default, data protection impact assessments, data breach notification, new rights of data subjects, engaging data processors and data processing agreements;
  2. for the first time, direct statutory obligations for processors, for instance around accountability, engaging sub-processors, data security and data breach notification; and
  3. severe sanctions for compliance failures.


What are the practical implications?

Controllers who are established in the EU are already caught by EU data protection law, and will therefore not be materially affected by the broader scope of application of the GDPR. For such controllers, the major change is the new substantive requirements they need to comply with.

Processors (such as technology vendors or other service providers) established in the EU will be subject to the GDPR’s direct statutory obligations for processors, as opposed to just the obligations imposed on them by contract by the controller. Such processors will need to understand their statutory obligations and take the necessary steps to comply. This is a major “cultural” change.

Perhaps the biggest change is that controllers who are not established in the EU but collect and process data on EU residents through websites, cookies and other remote activities are likely to be caught by the scope of the GDPR. E-commerce providers, online behavioural advertising networks and analytics companies that process personal data are all likely to be caught by the scope of application of the GDPR.

We still have at least 2 years before the GDPR comes into force. This may sound like a long time, but given the breadth and depth of change in the substantive requirements, it isn’t really! A lot of fact finding, careful thinking, planning and operational implementation will be required to be GDPR ready in 24 months.

So what should you be doing now?

  1. If you are a controller established in the EU, prepare your plan for transitioning to compliance with the GDPR.
  2. If you are a controller not established in the EU, assess whether your online activities amount to offering goods or services to, or monitoring the behaviour of, EU residents. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR. You may need to appoint a representative in the EU.
  3. Assess whether any of your EU-based group companies act as processors. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR.
  4. If you are a multinational business with EU and non-EU affiliates which will or may be caught by the GDPR, you will also need to consider intra-group relationships, how you position your group companies and how you structure your intra-group data transfers.

Obituary: Safe Harbor – 2000-2015

Posted on October 7th, 2015 by

Fieldfisher is saddened to report the sudden and unexpected demise of Safe Harbor yesterday.  Though rumours had persisted of its ill-health for a number of years, yesterday’s news comes as a shock to us all.

Despite passing at the tender age of just 15, Safe Harbor had made quite an impact in its short lifetime.  Born in 2000 as the lovechild of the European Union and the United States, it saw tensions grow in its parents’ relationship in later years.  Throughout its childhood and early teens, Safe Harbor amassed many friends, particularly in the United States.  It also mingled in celebrity circles, counting high-profile individuals like Facebook, Google, LinkedIn and Twitter among its good friends.

However popular it may have been, though, scandal seemed always to follow Safe Harbor – particularly in relation to its transatlantic data shipping business, through which it amassed particular fame and fortune.  While Safe Harbor purportedly ran this business to very exacting, principled standards, rumours persisted that it did not exercise sufficient oversight over how customers used its product – and that some of these customers were using Safe Harbor’s data products for illicit purposes.  Whether or not this is true is open for debate; some commentators maintain that Safe Harbor has been unfairly victimized, arguing that its business competitors Model Clauses and Binding Corporate Rules have comparable practices.

Safe Harbor was ultimately killed in a collision while on a hiking holiday on Mount Snowden [sic].  Reports are that it was struck down by an unstoppable vehicle driven by an Austrian student.  The vehicle involved in the accident is reported to be of European, probably Irish, make.  As anyone familiar with the area knows, the countryside around Mount Snowden has many unpredictable twists and turns along its roads, making it dangerous for anyone to traverse.  Anyone could be its next victim.

Safe Harbor’s death has attracted commentary from its former friends and critics alike, all hailing it as a significant passing that will have a major impact on the transatlantic data shipping industry.  Others maintain that, notwithstanding Safe Harbor’s passing, transatlantic data shipping will continue much as it has before, albeit with some disruption, through the operations of Model Clauses, Binding Corporate Rules and, of course, the infamous data Black Market.

Safe Harbor is rumoured to be survived by a child, dubbed “Safe Harbor 2.0”, although no public sightings of this child have yet been reported.

Getting to know the GDPR, Part 1 – You may be processing more personal information than you think

Posted on October 2nd, 2015 by

This post is the first in a series of posts that the Fieldfisher Privacy, Security and Information team will publish on forthcoming changes under Europe’s new General Data Protection Regulation (the “GDPR“), currently being debated through the “trilogue” procedure between the European Commission, Council and Parliament (for an explanation of the trilogue, see here).

The GDPR, like the Directive today and – indeed – any data protection law worldwide, protects “personal data”.  But what constitutes personal data often comes as a surprise to organizations.  Are IP addresses personal data, for example?  What about unique device identifiers or biometric identifiers?  Does the data remain personal if you hash or encrypt it?

What does the law require today?

Today, the EU definition of “personal data” is set out in the Data Protection Directive 95/46/EC.  It defines personal data as “any information relating to an identified or identifiable natural person” (Art. 2(a)), and specifically acknowledges that this includes both ‘direct’ and ‘indirect’ identification (for example, you know me by name – that’s direct identification; you describe me as “the Fieldfisher privacy lawyer working in Silicon Valley” – that’s indirect identification).

The Directive also goes on to say that identification can be by means of “an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.  This has caused a lot of debate in European privacy circles – could an “identification number” include an IP address or cookie string, for example?  The Article 29 Working Party has previously issued comprehensive guidance on the concept of personal data, which made clear that EU regulators were minded to treat the definition of “personal data” as very wide indeed (by looking at the content, purpose and result of the data).  And, yes, they generally think of IP addresses and cookie strings as personal – even if organizations themselves do not.

This aside, EU data protection law also has a separate category of “special” personal data (more commonly referred to as “sensitive personal data”).  This is personal data that is afforded extra protection under the Directive, and is defined as data relating to racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and health or sex life.  Data relating to criminal offences is also afforded special protection.  Oddly, though, financial data, social security numbers and child data are not protected as “sensitive” under the Directive today.

What will the General Data Protection Regulation require?

While the GDPR is still being debated between the Commission, Council and Parliament through the trilogue procedure, what is clear is that the net cast for personal data will not get any smaller.  In fact, the legislators are keen to clear up some of the ambiguities that exist today, and even to widen the net in a couple of instances – for example, with respect to sensitive personal data.  In particular:

  • Personal data and unique identifiers:  The trilogue parties broadly agree that the concept of personal data must include online identifiers and location data – so the legal definition of personal data will be updated under the GDPR to put beyond any doubt that IP addresses, mobile device IDs and the like must be treated as personal.  This means that these data will be subject to fairness, lawfulness, security, data export and other data protection requirements just like any other personal data.
  • Pseudonymous data:  The trilogue parties are considering the concept of “pseudonymous data” – in simple terms, personal data that has been subjected to technological measures (like hashing or encryption) such that it no longer directly identifies an individual.  There seems to be broad acceptance that a definition of “pseudonymous data” is needed, but that pseudonymous data will still be treated as personal data – i.e. subject to the requirements of the GDPR.  On the plus side though, organizations that pseudonymize their data will likely benefit from relaxed data breach notification rules, potentially less strict data subject access request requirements, and greater flexibility to conduct data profiling.  The GDPR will encourage pseudonymization as a privacy by design measure.
  • Genetic data and biometric data:  GDPR language under debate also introduces the concepts of “genetic data” and “biometric data” (i.e. fingerprints, facial recognition, retinal scans etc.).  The trilogue parties seem to agree that genetic data must be treated as sensitive personal data, affording it enhanced protections under the GDPR (likely due to concerns, for example, that processing of genetic data might lead to insurance disqualifications or denial of medical treatment).  There’s slightly less alignment between the parties on the treatment of biometric data, with the Parliament viewing it as sensitive data and the Council preferring to treat it as (non-sensitive) personal data – but data that, nevertheless, triggers the need for an organizational Data Protection Impact Assessment if processed on a large scale.
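As a purely illustrative aside, the pseudonymization techniques mentioned above (hashing, encryption) can be sketched in a few lines. This example uses a keyed hash (HMAC-SHA256) to replace a direct identifier with a stable token; the key and identifiers are hypothetical, and a real deployment would need to manage the key separately from the pseudonymized dataset:

```python
# Minimal sketch of pseudonymization via keyed hashing (HMAC-SHA256).
# The result remains "personal data" under the GDPR, because anyone
# holding the key can re-link tokens to the original identifiers.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-key-stored-separately"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (an email address, IP address,
    device ID, etc.) with a stable token that cannot be reversed
    without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same input always yields the same token, so records can still be
# linked for analytics or profiling without exposing the raw identifier.
assert pseudonymize("192.0.2.1") == pseudonymize("192.0.2.1")
assert pseudonymize("192.0.2.1") != pseudonymize("192.0.2.2")
```

Note that unkeyed hashing alone is a weaker measure – common identifiers such as IP addresses can be brute-forced from their hashes – which is one reason pseudonymous data is still treated as personal data.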

What are the practical implications?

For many institutions, the changes to the concept of personal data under the GDPR will simply be an affirmation of what they already know: that Europe takes a very broad view of what triggers personal data protection requirements.

Online businesses – especially those in the analytics, advertising and social media sectors – will be significantly impacted by the express description of online and unique identifiers as personal data, particularly when this is considered in light of the extended territorial reach of the GDPR.  Non-EU advertising, analytics and social media platforms will likely find themselves legally required to treat these identifiers as personal data protected by European law, just as their European competitors are, and need to update their policies, procedures and systems accordingly – that, or risk losing EU business and attracting European regulatory attention.  However, they will likely take (some) comfort from GDPR provisions allowing for data profiling on a non-consent basis if data is pseudonymized.

Beyond that, all organizations will need to revisit what data they collect and understand whether it is caught by the personal data requirements of the GDPR.  In particular, they need to be aware of the extended scope of sensitive data to include genetic data (and, if the Parliament has its way, potentially biometric data), attracting greater protections under the GDPR – particularly the need for explicit consent, unless other lawful grounds for processing exist.

Finally, some of the relaxations given to processing of pseudonymized data will hopefully serve to incentivize greater adoption by organizations of pseudonymization technologies.  Arguably, the GDPR could do more on this front – and some organizations will inevitably grumble at the cost of pseudonymizing datasets – but if doing so potentially reduces data breach notification and data subject access request responsibilities then this will serve as a powerful adoption incentive.

Privacy audits – asking less can get you more!

Posted on June 26th, 2015 by

A while back, my wife was telling me about a book she’d just read called “Animals in Translation” by animal behaviourist Professor Temple Grandin. In the book, Professor Grandin explains her work to improve conditions for animals at meat-packing facilities (and if you’re wondering what on earth this has to do with privacy, keep reading – I’ll get there.)

One of the ways originally used to assess conditions at these facilities was by means of lengthy, detailed audit inspections, with audit questionnaires often running well in excess of a hundred pages. These questionnaires were by their very design cumbersome to complete and consequently attracted voluminous but poor quality responses that gave little real insight into animal welfare or how to improve it.

Professor Grandin’s insight was that audits could be considerably improved by reducing the questionnaires to just a few, select, critical questions. For example, asking what percentage of animals displayed signs of lameness replaced the need to ask several questions about the detail of the animal-handling process. Why? Because the detail was largely irrelevant – if more than a certain percentage of animals displayed lameness, then it stood to reason that something was going wrong and needed addressing.

As privacy professionals, we can all learn something from this drive towards audit simplicity. Having worked with many businesses on privacy assessments ranging from full-blown audits to Privacy Impact Assessments for new product launches, and vendor due diligence projects to staff awareness questionnaires, I’ve often been struck by how unnecessarily long and detailed many audit questionnaires are.

This length is more often than not attributable to a concern that, if questioners don’t ask every last question they can think of, they might miss something. That’s a fair concern, but what it overlooks is that a questionnaire is the beginning and not the end of a well-run audit – it’s simply the means by which you start gathering information in order to identify potential issues needing follow-up. Not only that, but it’s far better to get a high response rate to a few well-selected questions than a low-to-zero rate on a more detailed set.

So the next time you have to circulate an audit questionnaire to an internal development team about some new product launch, or to an external vendor about its compliance practices, or maybe to another business unit about its data-handling procedures, then keep the following practical tips in mind:

1.  Keep it short and sweet. Nobody likes answering lengthy questionnaires and so, if you send one out, expect that it either won’t get answered or won’t get answered well. A one to two page questionnaire of a few select questions will encourage a relatively high response rate; at three to four pages, you’ll watch your response rate fall off a cliff; and with anything longer you’re almost guaranteeing next to no responses – no matter how much chasing, cajoling, or threatening you do!

2.  Ask critical questions only. Think carefully about what you really need to know for initial issue-spotting purposes. Focus, for example, on what data you are collecting, why, who it will be shared with, where it will be processed, and what security is in place. Don’t worry too much about the detail for now – you can (and should!) follow that up later if the initial responses ring any alarm bells (“You’re sending data to Uzbekistan? Why? Who will have access to it? What protections are in place?“).

3.  Use open questions. It’s tempting to design a checklist questionnaire. In theory, this makes the respondent’s job easier and encourages consistency of responses. In practice, however, checklists aren’t always well-suited for privacy assessments. It’s difficult to capture the wide array of potential data types, uses, recipients etc. through checklists alone unless you use very lengthy checklists indeed – resulting in the response-rate issues already mentioned above. Open questions, by contrast, are generally simpler and shorter to ask, and lead to more, and often more informative, responses (e.g. Consider “Who will you share data with, and why?” vs. “Who will you share data with? Tick all of the following that apply: Customers [ ] Customers’ end users [ ] Prospective customers [ ] Employees [ ] Contractors [ ] Vendors [ ] etc.”).

4.  Don’t lose sight of the wood for the trees. In many of the questionnaires I’ve seen, questioners often delve straight into the detail, asking very specific questions about the nature of the data processed, the locations of privacy notices, specific security standards applied and so on. What’s often missing is a simple, upfront question asking what the particular product, service or processing operation actually is or does. Asking this provides important context that will necessarily influence the depth of review you need to undertake. Better still, if you can actually get to see the “thing” in practice, then seize the opportunity – this will crystallize your understanding far better than any questionnaire or verbal description ever could.

5.  We never talk anymore. As already noted, a questionnaire is just the beginning of an assessment. Use it to gather the initial data you need, but then make sure to follow up with a meeting to discuss any potential issues you identify. You will learn far more from person-to-person meetings than you ever can through a questionnaire – interviewees will often volunteer information in-person you never thought to ask on your questionnaire, and some interview responses may prompt you to think of questions “in the moment” that you’d never otherwise have thought of back in the isolation of your office.

6.  Not all audits are created equal.  The above are just some practical suggestions borne out of experience but, obviously, not all audits are created equal.  As a questioner, you need to have the flexibility to adapt your questionnaire to the specific context in hand – a “lessons learned” audit following a data breach will by necessity have to be more detailed and formal than an internal staff awareness assessment.  Exercise good judgement to tailor your approach as the situation requires!

Handling government data requests under Processor BCR

Posted on June 2nd, 2015 by

Earlier today, the Article 29 Working Party published some new guidance on Processor BCR. There’s no reason you would have noticed this, unless you happen to be a BCR applicant or regularly visit the Working Party’s website, but the significance of this document cannot be overstated: it has the potential to shape the future of global data transfers for years to come.

That’s a bold statement to make, so what is this document – Working Party Paper WP204 “Explanatory Document on the Processor Binding Corporate Rules” – all about? Well, first off, the name kind of gives it away: it’s a document setting out guidance for applicants considering adopting Processor BCR (that’s the BCR that supply-side companies – particularly cloud-based companies – are all rushing to adopt). Second, it’s not a new document: the Working Party first published it in 2013.

The importance of this document now is that the Working Party have just updated and re-published it to provide guidance on one of the most contentious and important issues facing Processor BCR: namely how Processor BCR companies should respond to government requests for access to data.

Foreign government access to data – the EU view

To address the elephant in the room, ever since Snowden, Europe has expressed very grave concerns about the ‘adequacy’ of protection for European data exported internationally – and particularly to the US. This, in turn, has led to repeated attempts by Europe to whittle away at the few mechanisms that exist for lawfully transferring data internationally, from the European Commission threatening to suspend Safe Harbor through to the European Parliament suggesting that Processor BCR should be dropped from Europe’s forthcoming General Data Protection Regulation (a suggestion that, thankfully, has fallen by the wayside).

By no means the only concern, but certainly the key concern, has been access to data by foreign government authorities. The view of EU regulators is that EU citizens’ data should not be disclosed to foreign governments or law enforcement agencies unless strict mutual legal assistance protocols have been followed. They rightly point out that EU citizens have a fundamental right to protection of their personal data, and that simply handing over data to foreign governments runs contrary to this principle.

By contrast, the US and other foreign governments say that prompt and confidential access to data is often required to prevent crimes of the very worst nature, and that burdensome mutual legal assistance processes often don’t allow access to data within the timescales needed to prevent these crimes. The legitimate but conflicting views of both sides lead to the worst kind of outcome: political stalemate.

The impact of foreign government access to data on BCR

In the meantime, businesses have found themselves trapped in a ‘no man’s land’ of legal uncertainty – the children held responsible for the sins of their parent governments. Applicants wishing to pursue Processor BCR have particularly found themselves struggling to meet its strict rules concerning government access to data: namely that any “request for disclosure should be put on hold and the DPA competent for the controller and the lead DPA for the BCR should be clearly informed about it.” (see criteria 6.3 available here)

You might fairly think: “Why not just do this? If a foreign government asks you to disclose data, why not just tell them you have to put it on hold until a European DPA sanctions – or declines – the disclosure?” The problem is that reality is seldom that straightforward. In many jurisdictions (and, yes, I’m particularly thinking of the US) putting a government data disclosure order “on hold” and discussing it with a European DPA is simply not possible.

This is because companies are typically prohibited under foreign laws from discussing such disclosure orders with ANYONE, including a data protection authority, and the penalties for doing so can be very severe – up to and including jail time for company officers. And let’s not forget that, in some cases, the disclosure order can be necessary to prevent truly awful offences – so whatever the principle to be upheld, sometimes the urgency or severity of a particular situation will simply not allow for considered review and discussion.

But that leaves companies facing a catch-22. If they receive one of these orders, they can be in breach of foreign legal requirements for not complying with it; but if they do comply with it, they risk falling foul of European data protection rules. And, if you’re a Processor BCR applicant, you might rightly be wondering how on earth you can possibly give the kind of commitment that the Working Party expects of you under the Processor BCR requirements.

How the Working Party’s latest guidance helps

To their credit, the Working Party have acknowledged this issue and this is why their latest publication is so important. They have updated their BCR guidance to note that “in specific cases the suspension and/or notification [to DPAs of foreign government data access requests] are prohibited”, including for example “a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation”. In these instances, they expect BCR applicants to use “best efforts to obtain the right to waive this prohibition in order to communicate as much information as it can and as soon as possible”.

So far, so good. But here’s the kicker: they then say that BCR applicants must be able to “demonstrate” that they exercised these “best efforts” and, whatever the outcome, provide “general information on the requests it received to the competent DPAs (e.g. number of applications for disclosure, type of data requested, requester if possible, etc.)” on an annual basis.

And therein lies the problem: how does a company “demonstrate” best efforts in a scenario where a couple of NSA agents turn up on its doorstep brandishing a sealed FISA order and requiring immediate access to data? You can imagine that gesticulating wildly probably won’t cut it in the eyes of European regulators.

And what about the requirement to provide “general information” on an annual basis including the “number of applications for disclosure”? In the US, FISA orders may only be reported in buckets of 1,000 orders – so, even if a company received only one or two requests in a year, the most it could disclose is that it received between 0 and 999 requests, making it seem like government access to their data was much more voluminous than in reality it was.

I don’t want problems, I want solutions!!!

So, if you’re a Processor BCR applicant, what do you do? You want to see through your BCR application to show your strong commitment to protecting individuals’ personal data, and you certainly don’t want to fall back on a weaker solution, like Model Clauses or Safe Harbor, that won’t carry equivalent protections. But, at the same time, you recognize the reality that there will be circumstances where you are compelled to disclose data and that there will be very little you can do – or tell anyone – in those circumstances.

Here’s my view:

  • First off, you need a documented government data access policy. It’s unforgivable in this day and age, particularly in light of everything we have learned in the past couple of years, not to have some kind of written policy around how to handle government requests for data. More importantly, having a policy – and sticking to it – is all part and parcel of demonstrating your “best efforts” when handling government data requests.
  • Second, the policy needs to identify the business stakeholders who will have responsibility for managing the request – and, as a minimum, this needs to include the General Counsel and, ideally, the Chief Privacy Officer (or equivalent). They will represent the wall of defense that prevents government overreach in data access requests and advise when requests should be challenged for being overly broad or inappropriately addressed to the business, rather than to its customers.
  • Third, don’t make it easy for the government. If they want access to your data, make them work for it. It’s your responsibility as the custodian of the data to protect your data subjects’ rights. To that end, ONLY disclose data when LEGALLY COMPELLED to do so – if access to the data really is that important, then governments can typically get a court order in a very short timeframe. Do NOT voluntarily disclose data in response to a mere request, unless there really are very compelling reasons for doing so – and reasons that you fully document and justify.
  • Fourth, even if you are under a disclosure order, be prepared to challenge it. That doesn’t necessarily mean taking the government to court each and every time, but at least question the scope of the order and ask whether – bearing in mind any BCR commitments you have undertaken – the order can be put on hold while you consult with your competent DPAs. The government may not be sympathetic to your request, particularly in instances of national security, but that doesn’t mean you shouldn’t at least ask.
  • Fifth, follow the examples of your peers and consider publishing annual transparency reports, a la Google, Microsoft and Yahoo. While there may be prohibitions against publishing the total numbers of national security requests received, the rules will typically be more relaxed when publishing aggregate numbers of criminal data requests. This, in principle, seems like a good way of fulfilling your annual reporting responsibility to data protection authorities and – in fact – goes one step further: providing transparency to those who matter most in this whole scenario, the data subjects.
So why does the Working Party’s latest opinion matter so much? It matters because it’s a vote of confidence in the Processor BCR system and an unprecedented recognition by European regulatory authorities that there are times when international businesses really do face insurmountable legal conflicts.

Had this opinion not come when it did, the future of Processor BCR would have been dangerously undermined; faced with the prospect of Safe Harbor’s slow and painful demise and the impracticality of Model Clauses, many businesses would have been left without a realistic data export solution, further entrenching a kind of regulatory ‘Fortress Europe’ mentality.

The Working Party’s guidance, while still leaving challenges for BCR applicants, works hard to strike that hard-to-find balance between protecting individuals’ fundamental rights and the need to recognize the reality of cross-jurisdictional legal constraints – and, for that, they should be commended.