Archive for the ‘Accountability’ Category

Getting to know the GDPR, Part 1 – You may be processing more personal information than you think

Posted on October 2nd, 2015

This post is the first in a series of posts that the Fieldfisher Privacy, Security and Information team will publish on forthcoming changes under Europe’s new General Data Protection Regulation (the “GDPR”), currently being debated through the “trilogue” procedure between the European Commission, Council and Parliament (for an explanation of the trilogue, see here).

The GDPR, like the Directive today and – indeed – any data protection law worldwide, protects “personal data”.  But what actually constitutes personal data often comes as a surprise to many organizations.  Are IP addresses personal data, for example?  What about unique device identifiers or biometric identifiers?  Does the data remain personal if you hash or encrypt it?

What does the law require today?

Today, the EU definition of “personal data” is set out in the Data Protection Directive 95/46/EC.  It defines personal data as “any information relating to an identified or identifiable natural person” (Art. 2(a)), and specifically acknowledges that this includes both ‘direct’ and ‘indirect’ identification (for example, you know me by name – that’s direct identification; you describe me as “the Fieldfisher privacy lawyer working in Silicon Valley” – that’s indirect identification).

The Directive also goes on to say that identification can be by means of “an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.  This has caused a lot of debate in European privacy circles – could an “identification number” include an IP address or cookie string, for example?  The Article 29 Working Party has previously issued comprehensive guidance on the concept of personal data, which made clear that EU regulators were minded to treat the definition of “personal data” as very wide indeed (by looking at the content, purpose and result of the data).  And, yes, they generally think of IP addresses and cookie strings as personal – even if organizations themselves do not.

This aside, EU data protection law also has a separate category of “special” personal data (more commonly referred to as “sensitive personal data”).  This is personal data that is afforded extra protection under the Directive, and is defined as data relating to racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and health or sex life.  Data relating to criminal offences is also afforded special protection.  Oddly, though, financial data, social security numbers and child data are not protected as “sensitive” under the Directive today.

What will the General Data Protection Regulation require?

While the GDPR is still being debated between the Commission, Council and Parliament through the trilogue procedure, what is clear is that the net cast for personal data will not get any smaller.  In fact, the legislators are keen to clear up some of the ambiguities that exist today, and even to widen the net in a couple of instances – for example, with respect to sensitive personal data.  In particular:

  • Personal data and unique identifiers:  The trilogue parties broadly agree that the concept of personal data must include online identifiers and location data – so the legal definition of personal data will be updated under the GDPR to put beyond any doubt that IP addresses, mobile device IDs and the like must be treated as personal.  This means that these data will be subject to fairness, lawfulness, security, data export and other data protection requirements just like any other personal data.
  • Pseudonymous data:  The trilogue parties are considering the concept of “pseudonymous data” – in simple terms, personal data that has been subjected to technological measures (like hashing or encryption) such that it no longer directly identifies an individual.  There seems to be broad acceptance that a definition of “pseudonymous data” is needed, but that pseudonymous data will still be treated as personal data – i.e. subject to the requirements of the GDPR.  On the plus side though, organizations that pseudonymize their data will likely benefit from relaxed data breach notification rules, potentially less strict data subject access request requirements, and greater flexibility to conduct data profiling.  The GDPR will encourage pseudonymization as a privacy by design measure.
  • Genetic data and biometric data:  GDPR language under debate also introduces the concepts of “genetic data” and “biometric data” (i.e. fingerprints, facial recognition, retinal scans etc.).  The trilogue parties seem to agree that genetic data must be treated as sensitive personal data, affording it enhanced protections under the GDPR (likely due to concerns, for example, that processing of genetic data might lead to insurance disqualifications or denial of medical treatment).  There’s slightly less alignment between the parties on the treatment of biometric data, with the Parliament viewing it as sensitive data and the Council preferring to treat it as (non-sensitive) personal data – but data that, nevertheless, triggers the need for an organizational Data Protection Impact Assessment if processed on a large scale.
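The hashing mentioned in the pseudonymous data bullet above can be sketched in a few lines. This is an illustrative example only – the GDPR text prescribes no particular technique, and the keyed-hash approach and key handling shown here are assumptions:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The output no longer directly identifies the individual, but it is
    still 'pseudonymous data': anyone holding secret_key can re-link
    records, so under the GDPR it remains personal data.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The key must be stored separately from the pseudonymized dataset.
key = b"keep-this-out-of-the-dataset"
token = pseudonymize("alice@example.com", key)

# Same input and key -> same token, so records can still be joined...
assert token == pseudonymize("alice@example.com", key)
# ...but the token itself no longer reveals the e-mail address.
assert "alice" not in token
```

The point of the sketch is the legal one: because the mapping is reversible by the key holder, the tokens stay within the GDPR's net, even though they look nothing like the original identifier.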

What are the practical implications?

For many institutions, the changes to the concept of personal data under the GDPR will simply be an affirmation of what they already know: that Europe applies a very protective approach when triggering personal data requirements.

Online businesses – especially those in the analytics, advertising and social media sectors – will be significantly impacted by the express description of online and unique identifiers as personal data, particularly when this is considered in light of the extended territorial reach of the GDPR.  Non-EU advertising, analytics and social media platforms will likely find themselves legally required to treat these identifiers as personal data protected by European law, just as their European competitors are, and need to update their policies, procedures and systems accordingly – that, or risk losing EU business and attracting European regulatory attention.  However, they will likely take (some) comfort from GDPR provisions allowing for data profiling on a non-consent basis if data is pseudonymized.

Beyond that, all organizations will need to revisit what data they collect and understand whether it is caught by the personal data requirements of the GDPR.  In particular, they need to be aware of the extended scope of sensitive data to include genetic data (and, if the Parliament has its way, potentially biometric data), attracting greater protections under the GDPR – particularly the need for explicit consent, unless other lawful grounds for processing exist.

Finally, some of the relaxations given to processing of pseudonymized data will hopefully serve to incentivize greater adoption by organizations of pseudonymization technologies.  Arguably, the GDPR could do more on this front – and some organizations will inevitably grumble at the cost of pseudonymizing datasets – but if doing so potentially reduces data breach notification and data subject access request responsibilities then this will serve as a powerful adoption incentive.

Privacy audits – asking less can get you more!

Posted on June 26th, 2015

A while back, my wife was telling me about a book she’d just read called “Animals in Translation” by animal behaviourist Professor Temple Grandin. In the book, Professor Grandin explains her work to improve conditions for animals at meat-packing facilities (and if you’re wondering what on earth this has to do with privacy, keep reading – I’ll get there.)

One of the ways originally used to assess conditions at these facilities was by means of lengthy, detailed audit inspections, with audit questionnaires often running well in excess of a hundred pages. These questionnaires were by their very design cumbersome to complete and consequently attracted voluminous but poor quality responses that gave little real insight into animal welfare or how to improve it.

Professor Grandin’s insight was that audits could be considerably improved by reducing the questionnaires to just a few, select, critical questions. For example, asking what percentage of animals displayed signs of lameness replaced the need to ask several questions about the detail of the animal-handling process. Why? Because the detail was largely irrelevant – if more than a certain percentage of animals displayed lameness, then it stood to reason that something was going wrong and needed addressing.

As privacy professionals, we can all learn something from this drive towards audit simplicity. Having worked with many businesses on privacy assessments ranging from full-blown audits to Privacy Impact Assessments for new product launches, and from vendor due diligence projects to staff awareness questionnaires, I’ve often been struck by how unnecessarily long and detailed many audit questionnaires are.

This length is more often than not attributable to a concern that, if questioners don’t ask every last question they can think of, they might miss something. That’s a fair concern, but what it overlooks is that a questionnaire is the beginning and not the end of a well-run audit – it’s simply the means by which you start gathering information in order to identify potential issues needing follow-up. Not only that, but it’s far better to get a high response rate to a few well-selected questions than a low-to-zero rate on a more detailed set.

So the next time you have to circulate an audit questionnaire to an internal development team about some new product launch, or to an external vendor about its compliance practices, or maybe to another business unit about its data-handling procedures, then keep the following practical tips in mind:

1.  Keep it short and sweet. Nobody likes answering lengthy questionnaires and so, if you send one out, expect that it either won’t get answered or won’t get answered well. A one to two page questionnaire of a few select questions will encourage a relatively high response rate; at three to four pages, you’ll watch your response rate fall off a cliff; and anything longer and you’re almost guaranteeing next to no responses – no matter how much chasing, cajoling, or threatening you do!

2.  Ask critical questions only. Think carefully about what you really need to know for initial issue-spotting purposes. Focus, for example, on what data you are collecting, why, who it will be shared with, where it will be processed, and what security is in place. Don’t worry too much about the detail for now – you can (and should!) follow that up later if the initial responses ring any alarm bells (“You’re sending data to Uzbekistan? Why? Who will have access to it? What protections are in place?”).

3.  Use open questions. It’s tempting to design a checklist questionnaire. In theory, this makes the respondent’s job easier and encourages consistency of responses. In practice, however, checklists aren’t always well-suited for privacy assessments. It’s difficult to capture the wide array of potential data types, uses, recipients etc. through checklists alone unless you use very lengthy checklists indeed – resulting in the response-rate issues already mentioned above. Open questions, by contrast, are generally simpler and shorter to ask, and lead to more, and often more informative, responses (e.g. consider “Who will you share data with, and why?” vs. “Who will you share data with? Tick all of the following that apply: Customers [ ] Customers’ end users [ ] Prospective customers [ ] Employees [ ] Contractors [ ] Vendors [ ] etc.”).

4.  Don’t lose sight of the wood for the trees. In many of the questionnaires I’ve seen, questioners often delve straight into the detail, asking very specific questions about the nature of the data processed, the locations of privacy notices, specific security standards applied and so on. What’s often missing is a simple, upfront question asking what the particular product, service or processing operation actually is or does. Asking this provides important context that will necessarily influence the depth of review you need to undertake. Better still, if you can actually get to see the “thing” in practice, then seize the opportunity – this will crystallize your understanding far better than any questionnaire or verbal description ever could.

5.  We never talk anymore. As already noted, a questionnaire is just the beginning of an assessment. Use it to gather the initial data you need, but then make sure to follow up with a meeting to discuss any potential issues you identify. You will learn far more from person-to-person meetings than you ever can through a questionnaire – interviewees will often volunteer information in-person you never thought to ask on your questionnaire, and some interview responses may prompt you to think of questions “in the moment” that you’d never otherwise have thought of back in the isolation of your office.

6.  Not all audits are created equal.  The above are just some practical suggestions borne out of experience but, obviously, not all audits are created equal.  As a questioner, you need the flexibility to adapt your questionnaire to the specific context at hand – a “lessons learned” audit following a data breach will by necessity have to be more detailed and formal than an internal staff awareness assessment.  Exercise good judgement to tailor your approach as the situation requires!

Handling government data requests under Processor BCR

Posted on June 2nd, 2015

Earlier today, the Article 29 Working Party published some new guidance on Processor BCR. There’s no reason you would have noticed this, unless you happen to be a BCR applicant or regularly visit the Working Party’s website, but the significance of this document cannot be overstated: it has the potential to shape the future of global data transfers for years to come.

That’s a bold statement to make, so what is this document – Working Party Paper WP204 “Explanatory Document on the Processor Binding Corporate Rules” – all about? Well, first off, the name kind of gives it away: it’s a document setting out guidance for applicants considering adopting Processor BCR (that’s the BCR that supply-side companies – particularly cloud-based companies – are all rushing to adopt). Second, it’s not a new document: the Working Party first published it in 2013.

The importance of this document now is that the Working Party have just updated and re-published it to provide guidance on one of the most contentious and important issues facing Processor BCR: namely how Processor BCR companies should respond to government requests for access to data.

Foreign government access to data – the EU view

To address the elephant in the room, ever since Snowden, Europe has expressed very grave concerns about the ‘adequacy’ of protection for European data exported internationally – and particularly to the US. This, in turn, has led to repeated attempts by Europe to whittle away at the few mechanisms that exist for lawfully transferring data internationally, from the European Commission threatening to suspend Safe Harbor through to the European Parliament suggesting that Processor BCR should be dropped from Europe’s forthcoming General Data Protection Regulation (a suggestion that, thankfully, has fallen by the wayside).

By no means the only concern, but certainly the key concern, has been access to data by foreign government authorities. The view of EU regulators is that EU citizens’ data should not be disclosed to foreign governments or law enforcement agencies unless strict mutual legal assistance protocols have been followed. They rightly point out that EU citizens have a fundamental right to protection of their personal data, and that simply handing over data to foreign governments runs contrary to this principle.

By contrast, the US and other foreign governments say that prompt and confidential access to data is often required to prevent crimes of the very worst nature, and that burdensome mutual legal assistance processes often don’t allow access to data within the timescales needed to prevent these crimes. The legitimate but conflicting views of both sides lead to the worst kind of outcome: political stalemate.

The impact of foreign government access to data on BCR

In the meantime, businesses have found themselves trapped in a ‘no man’s land’ of legal uncertainty – the children held responsible for the sins of their parent governments. Applicants wishing to pursue Processor BCR have particularly found themselves struggling to meet its strict rules concerning government access to data: namely that any “request for disclosure should be put on hold and the DPA competent for the controller and the lead DPA for the BCR should be clearly informed about it.” (see criteria 6.3 available here)

You might fairly think: “Why not just do this? If a foreign government asks you to disclose data, why not just tell them you have to put it on hold until a European DPA sanctions – or declines – the disclosure?” The problem is that reality is seldom that straightforward. In many jurisdictions (and, yes, I’m particularly thinking of the US) putting a government data disclosure order “on hold” and discussing it with a European DPA is simply not possible.

This is because companies are typically prohibited under foreign laws from discussing such disclosure orders with ANYONE, whether or not a data protection authority, and the penalties for doing so can be very severe – up to and including jail time for company officers. And let’s not forget that, in some cases, the disclosure order can be necessary to prevent truly awful offences – so whatever the principle to be upheld, sometimes the urgency or severity of a particular situation will simply not allow for considered review and discussion.

But that leaves companies facing a catch-22. If they receive one of these orders, they can be in breach of foreign legal requirements for not complying with it; but if they do comply with it, they risk falling foul of European data protection rules. And, if you’re a Processor BCR applicant, you might rightly be wondering how on earth you can possibly give the kind of commitment that the Working Party expects of you under the Processor BCR requirements.

How the Working Party’s latest guidance helps

To their credit, the Working Party have acknowledged this issue and this is why their latest publication is so important. They have updated their BCR guidance to note that “in specific cases the suspension and/or notification [to DPAs of foreign government data access requests] are prohibited”, including for example “a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation”. In these instances, they expect BCR applicants to use “best efforts to obtain the right to waive this prohibition in order to communicate as much information as it can and as soon as possible”.

So far, so good. But here’s the kicker: they then say that BCR applicants must be able to “demonstrate” that they exercised these “best efforts” and, whatever the outcome, provide “general information on the requests it received to the competent DPAs (e.g. number of applications for disclosure, type of data requested, requester if possible, etc.)” on an annual basis.

And therein lies the problem: how does a company “demonstrate” best efforts in a scenario where a couple of NSA agents turn up on its doorstep brandishing a sealed FISA order and requiring immediate access to data? You can imagine that gesticulating wildly probably won’t cut it in the eyes of European regulators.

And what about the requirement to provide “general information” on an annual basis including the “number of applications for disclosure”? In the US, FISA orders may only be reported in buckets of 1,000 orders – so, even if a company received only one or two requests in a year, the most it could disclose is that it received between 0 and 999 requests, making it seem like government access to their data was much more voluminous than in reality it was.
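To see why the banding makes small numbers look large, here is a quick illustrative sketch of the bucket arithmetic. The band boundaries assumed here simply follow the “0 and 999” example above:

```python
def reporting_band(n_requests: int, bucket: int = 1000) -> str:
    """Map an exact request count to the coarse band a company is
    permitted to report publicly (illustrative; buckets of 1,000)."""
    low = (n_requests // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

# A company that received just 2 requests can only report "0-999"...
assert reporting_band(2) == "0-999"
# ...which is indistinguishable from one that received several hundred.
assert reporting_band(850) == "0-999"
```

In other words, the granularity of the permitted disclosure, not the company's candour, determines how voluminous government access appears.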

I don’t want problems, I want solutions!!!

So, if you’re a Processor BCR applicant, what do you do? You want to see your BCR application through to show your strong commitment to protecting individuals’ personal data, and you certainly don’t want to fall back on a weaker solution, like Model Clauses or Safe Harbor, that won’t carry equivalent protections. But, at the same time, you recognize the reality that there will be circumstances where you are compelled to disclose data and that there will be very little you can do – or tell anyone – in those circumstances.

Here’s my view:

  • First off, you need a documented government data access policy. It’s unforgivable in this day and age, particularly in light of everything we have learned in the past couple of years, not to have some kind of written policy around how to handle government requests for data. More importantly, having a policy – and sticking to it – is all part and parcel of demonstrating your “best efforts” when handling government data requests.
  • Second, the policy needs to identify which business stakeholders will have responsibility for managing the request – and, as a minimum, this needs to include the General Counsel and, ideally, the Chief Privacy Officer (or equivalent). They will represent the wall of defense that prevents government overreach in data access requests and advise when requests should be challenged for being overly broad or inappropriately addressed to the business, rather than to its customers.
  • Third, don’t make it easy for the government. If they want access to your data, then make them work for it. It’s your responsibility as the custodian of the data to protect your data subjects’ rights. To that end, ONLY disclose data when LEGALLY COMPELLED to do so – if access to the data really is that important, then governments can typically get a court order in a very short timeframe. Do NOT voluntarily disclose data in response to a mere request, unless there really are very compelling reasons for doing so – and reasons that you fully document and justify.
  • Fourth, even if you are under a disclosure order, be prepared to challenge it. That doesn’t necessarily mean taking the government to court each and every time, but at least question the scope of the order and ask whether – bearing in mind any BCR commitments you have undertaken – the order can be put on hold while you consult with your competent DPAs. The government may not be sympathetic to your request, particularly in instances of national security, but that doesn’t mean you shouldn’t at least ask.
  • Fifth, follow the examples of your peers and consider publishing annual transparency reports, a la Google, Microsoft and Yahoo. While there may be prohibitions against publishing the total numbers of national security requests received, the rules will typically be more relaxed when publishing aggregate numbers of criminal data requests. This, in principle, seems like a good way of fulfilling your annual reporting responsibility to data protection authorities and – in fact – goes one step further: providing transparency to those who matter most in this whole scenario, the data subjects.
So why does the Working Party’s latest opinion matter so much? It matters because it’s a vote of confidence in the Processor BCR system and an unprecedented recognition by European regulatory authorities that there are times when international businesses really do face insurmountable legal conflicts.

Had this opinion not come when it did, the future of Processor BCR would have been dangerously undermined. Faced with the prospect of Safe Harbor’s slow and painful demise and the impracticality of Model Clauses, many businesses would have been left without a realistic data export solution, further entrenching a kind of regulatory ‘Fortress Europe’ mentality.

The Working Party’s guidance, while still leaving challenges for BCR applicants, works hard to strike that hard-to-find balance between protecting individuals’ fundamental rights and recognizing the reality of cross-jurisdictional legal constraints – and, for that, they should be commended.

Subject access requests and data retention: two sides of the same coin?

Posted on October 3rd, 2014

Over the past year or so, there’s been a decided upswing in the number of subject access requests made by individuals to organizations that crunch their data.  There are a number of reasons for this, but they’re principally driven by a greater public awareness of privacy rights in a post-Snowden era and by the recent Google “Right to be Forgotten” decision.

If you’re unfamiliar with the term “subject access request”, then in simple terms it’s a right enshrined in EU law for an individual to contact an organization and ask it (a) whether it processes any personal information about the individual in question, and (b) if so, to supply a copy of that information.

A subject access request is a powerful transparency tool for individuals: the recipient organization has to provide the requested information within a time period specified by law, and very few exemptions apply.  However, these requests often prove disproportionately costly and time-consuming for the organizations that receive them – think about how much data your organization holds, and then ask yourself how easy it would be to pull all that data together to respond to these types of requests.  Imagine, for example, all the data held in your CRM databases, customer support records, IT access logs, CCTV footage, HR files, building access records, payroll databases, e-mail systems, third party vendors and so on – picture that, and you get the idea.

In addition, while many subject access requests are driven by a sincere desire for data processing transparency, some (inevitably) are made with legal mischief in mind – for example, the disgruntled former employee who makes a subject access request as part of a fishing expedition to try to find grounds for bringing an unfair dismissal claim, or the representative from a competitor business looking for grounds to complain about the recipient organization’s data compliance.  Because of these risks, organizations are often hesitant about responding to subject access requests in case doing so attracts other, unforeseen and unknown, liabilities.

But, if you’re a data controlling business facing this conundrum, don’t expect any regulatory sympathy.  Regulators can only enforce the law as it exists today, and the law expects prompt, comprehensive disclosure.  Not only that, but the fact that subject access requests prove costly and resource intensive to address serves a wider regulatory goal: namely, applying pressure on organizations to reduce the amount of data they hold, consistent with the data protection principle of “data minimization”.

Therefore, considering that data storage costs are becoming cheaper all the time and that, in a world of Big Data, data collection is growing at an exponential rate, subject access becomes one of the most important – if not the most important – tool regulators have for encouraging businesses to minimize the data they retain.  The more data you hold, the more data you have to disclose in response to a subject access request – and the more costly and difficult that is to do.  This, in turn, makes adopting a carefully thought-out data retention policy much more attractive, whatever other business pressures there may be to keep data indefinitely.  Retain data for just a year or two, and there’ll be an awful lot less you need to disclose in response to a subject access request.  At the same time, your organization will enhance its overall data protection compliance.
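The “retain less, disclose less” point can be made concrete with a small sketch. Everything here – the record layout, the two-year period, the identifiers – is hypothetical, purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical records: (record_id, date collected)
records = [
    ("crm-001", datetime(2012, 1, 15, tzinfo=timezone.utc)),
    ("crm-002", datetime(2014, 6, 1, tzinfo=timezone.utc)),
]

RETENTION = timedelta(days=2 * 365)  # an assumed two-year retention policy

def is_expired(collected_at: datetime, now: datetime) -> bool:
    """True if a record has outlived the retention period and should be purged."""
    return now - collected_at > RETENTION

# After a purge, only the retained records remain in scope for a
# subject access request - less data held means less data to disclose.
now = datetime(2014, 10, 3, tzinfo=timezone.utc)
retained = [rid for rid, ts in records if not is_expired(ts, now)]
assert retained == ["crm-002"]
```

The real work, of course, lies in applying a rule like this consistently across all the systems listed above, from CRM databases to e-mail archives.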

So what does all this mean?  When considering your strategy for responding to subject access requests, don’t consider it in isolation; think also about how it dovetails with your data retention strategy.  If you’re an in-house counsel or CPO struggling to get business stakeholder buy-in to adopt a comprehensive data retention strategy, use subject access risk as a means of achieving this internal buy-in.  The more robust your data retention policies, the more readily you’ll be able to fulfill subject access requests within the timescales permitted by law and with less effort, reducing complaints and enhancing compliance.  Conversely, with weaker (or non-existent) data retention policies, your exposure will be that much greater.

Subject access and data retention are therefore really just two sides of the same coin – and you wouldn’t base your compliance on just a coin toss, would you?

Processor BCR have a bright future

Posted on July 8th, 2014

Last month, the Article 29 Working Party sent a letter to the President of the European Parliament about the future of Binding Corporate Rules for processors (BCR-P) in the context of the EU’s ongoing data privacy legislative reform.

The letter illustrates the clear support that BCR-P have – and will continue to have – from the Working Party.  Whilst perhaps not surprising, given that the Working Party originally “invented” BCR-P in 2012 (having initially invented controller BCR way back in 2003), the letter affirms the importance of BCR-P in today’s global data economy.

“Currently, BCR-P offer a high level of protection for the international transfers of personal data to processors” writes Isabelle Falque-Pierrotin, Chair of the Working Party, before adding that they are “an optimal solution to promote the European principles of personal data abroad.” (emphasis added)

As if that weren’t enough, the letter also issues a thinly-veiled warning to the European Parliament, which has previously expressed skepticism about BCR-P: “denying the possibility for BCR-P will limit the choice of organisations to use model clauses or to apply the Safe Harbor if possible, which do not contain such accountability mechanisms to ensure compliance as it is provided for in BCR-P.”

The Working Party’s letter notes that 3 companies have so far achieved BCR-P (and we successfully acted on one of those – see here) with a further 10 applications on the go (and, yes, we’re acting on a few of those too).

Taking the helicopter view, the Working Party’s letter is representative of a growing trend for global organizations to seek BCR approval in preference over other data export solutions: back in 2012, just 19 organizations had secured controller BCR approval; two years on, that figure stands at 53 (both controller and processor BCR).

There are several reasons why this is the case:

1.  BCR are getting express legislative recognition:  The Commission’s draft General Data Protection Regulation expressly recognizes BCR, including BCR-P, as a valid legal solution to the EU’s strict data export rules.  To date, BCR have had only regulatory recognition, and then not consistently across all Member States, casting a slight shadow over their longer term future.  Express legislative recognition ensures the future of BCR – they’re here to stay.

2.  Safe Harbor is under increasing strain:  The ongoing US/EU Safe Harbor reform discussions, while inching towards a slow conclusion, have arguably stained its reputation irreparably.  US service providers that rely on Safe Harbor to export customer data to the US (and sometimes beyond) find themselves stuck in deal negotiations with customers who refuse to contract with them unless they implement a different data export solution.  Faced with the prospect of endless model clauses or a one-off BCR-P approval, many opt for BCR-P.

3.  BCR have entered the customer lexicon:  If you’d said the letters “B C R” even a couple of years ago, then outside of the privacy community only a handful of well-educated organizations would have known what you were talking about.  Today, customers are much better informed about BCR and increasingly view BCR as a form of trust mark (which, of course, they are), encouraging the service sector to adopt BCR-P as a competitive measure.

4.  BCR are simpler than ever before:  Gone are the days when a BCR application took 4 years and involved traveling all over Europe to visit data protection authorities.  Today, a well-planned and executed BCR application can be achieved in 12 to 18 months, all managed through a single lead data protection authority.  The simplification of the BCR approval process has been instrumental in increasing BCR adoption.

So if you’re weighing up the pros and cons of BCR against other data export solutions, then deliberate no longer: BCR, and especially BCR-P, will only grow in importance as the EU’s data export regime gets ever tougher.


    FTC in largest-ever Safe Harbor enforcement action

    Posted on January 22nd, 2014 by

    Yesterday, the Federal Trade Commission (“FTC“) announced that it had agreed to settle with 12 US businesses for alleged breaches of the US Safe Harbor framework. The companies involved were from a variety of industries and each handled a large amount of consumer data. But aside from the surprise of the large number of companies involved, what does this announcement really tell us about the state of Safe Harbor?

    This latest action suggests that the FTC is ramping up its Safe Harbor enforcement in response to recent criticisms from the European Commission and European Parliament about the integrity of Safe Harbor (see here and here) – particularly given that one of the main criticisms about the framework was its historic lack of rigorous enforcement.

    Background to the current enforcement

    So what did the companies in question do? The FTC’s complaints allege that the companies involved ‘deceptively claimed they held current certifications under the U.S.-EU Safe Harbor framework‘. Although participation in the framework is voluntary, if you publicise that you are Safe Harbor certified then you must, of course, maintain an up-to-date Safe Harbor registration with the US Department of Commerce and comply with your Safe Harbor commitments.

    Key compliance takeaways

    In this instance, the FTC alleges that the businesses involved had claimed to be Safe Harbor certified when, in fact, they weren’t. The obvious message here is don’t claim to be Safe Harbor certified if you’re not!  

    The slightly more subtle compliance takeaway for businesses who are correctly Safe Harbor certified is that they should have in place processes to ensure:

    • that they keep their self-certifications up-to-date by filing timely annual re-certifications;
    • that their privacy policies accurately reflect the status of their self-certification – and if their certifications lapse, that there are processes to adjust those policies accordingly; and
    • that the business is fully meeting all of its Safe Harbor commitments in practice – there must be actual compliance, not just paper compliance.

    The “Bigger Picture” for European data exports

    Despite this decisive action by the FTC, European concerns about the integrity of Safe Harbor are likely to persist.  If anything, this latest action may serve only to reinforce concerns that some US businesses are either falsely claiming to be Safe Harbor certified when they are not or are not fully living up to their Safe Harbor commitments. 

    The service provider community, and especially cloud businesses, will likely feel this pressure most acutely.  Many customers already perceive Safe Harbor to be “unsafe” for data exports and are insisting that their service providers adopt other EU data export compliance solutions.  So what other solutions are available?

    While model contracts have the benefit of being a ‘tried and tested’ solution, the suite of contracts required for global data exports is simply unpalatable to many businesses.  The better solution is, of course, Binding Corporate Rules (BCR) – a voluntary set of self-regulatory policies, adopted by a business, that satisfy EU data protection standards and which are submitted to, and authorised by, European DPAs.  Since 2012, service providers have been able to adopt processor BCR, and those that do find that this provides them with a greater degree of flexibility to manage their internal data processing arrangements while, at the same time, continuing to afford a high degree of protection for the data they process.

    It’s unlikely that Safe Harbor will be suspended or disappear – far too many US businesses are dependent upon it for their EU/CH to US data flows.  However, the Safe Harbor regime will likely change in response to EU concerns and, over time, will come under increasing amounts of regulatory and customer pressure.  So better to consider alternative data export solutions now and start planning accordingly rather than find yourself caught short!


    Global protection through mutual recognition

    Posted on July 23rd, 2013 by

    At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation on the other hand is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

    However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s applicable data security measures depending on the category of data or Germany’s rules to include certain language in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

    When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and in the EU directive, it may not be expressly named as a principle, but it is consistently everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

    Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

    So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

    This article was first published in Data Protection Law & Policy in July 2013.

    The conflicting realities of data globalisation

    Posted on June 17th, 2013 by

    The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data is travelling every second that goes by. But data is always on the move and we don’t even need to leave home to be contributing to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some more public than others.

    The Internet is global by definition. Or more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly assembled at destination. That concept developed very quickly into a borderless network and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

    This trend has led visionaries like Google’s Eric Schmidt to affirm that thanks to mobile technology, the number of digitally connected people will more than triple – going from the current 2 billion to 7 billion people – very soon. That means more than three times as many people generating data as today. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world’s professionals.

    As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on the current and forthcoming legislative restrictions on international data flows, which is precisely one of the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

    The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future ‘adequacy findings’ by the European Commission as well as Safe Harbor will be negatively affected. We can assume that if the European Commission decides to have a go at seeking a re-negotiation of Safe Harbor, this will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons in a way that no blanket disclosures are allowed. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

    The conflicting realities of data globalisation could not be more striking. On the one hand, everyday practice shows that data is geographically neutral and simply flows across global networks to make itself available to those with access to it. On the other, it is going to take a fair amount of convincing to show that any restrictions on international data flows should be both measured and realistic. To address these conflicting realities we must therefore acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. So since we cannot stop the technological evolution of our time and the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

    This article was first published in Data Protection Law & Policy in June 2013.

    It’s time to dust off that privacy policy…

    Posted on May 2nd, 2013 by

    The Information Commissioner’s Office (“ICO”) has announced in the latest edition of its e-newsletter that it will be examining the privacy policies of 250 of the UK’s most popular websites during the week of 6 – 11 May 2013 as part of ‘Internet Sweep Day’. Each website will be reviewed to check whether it contains an accessible privacy policy in accordance with relevant UK and international laws.

    The Internet Sweep Day initiative isn’t limited to just the UK, as the ICO is working in conjunction with other global data protection authorities. The results of the review will be collected and sent back to the Office of the Privacy Commissioner for Canada and a report of the findings will be published in the Autumn.

    There is no word yet on which websites the ICO is set to consider, but this is yet another wake-up call for businesses that haven’t started thinking about their public facing documents and policies to get cracking!

    The announcement comes hot on the heels of recent updates to the enforcement section of the ICO’s website, which show that the UK e-privacy enforcement space is certainly heating up, and of Google’s updates to its privacy policy in an attempt to comply with EU cookie consent rules.  Internal stakeholders who might be resistant to yet another review of an often overlooked part of any business’s website should be reminded that transparency is very likely to continue to be at the heart of the new European data protection framework.  It is most definitely time to get a head start now.

    In defence of the privacy policy

    Posted on March 29th, 2013 by

    Speaking at the Game Developers Conference in San Francisco yesterday on the panel “Privacy by [Game] Design”, I was thrown an interesting question: Does the privacy policy have any place in the forward-thinking privacy era?

    To be sure, privacy policy bashing has become populist stuff in recent years, and the role of the privacy policy is a topic I’ve heard debated many, many times. The normal conclusion to any discussion around this point is that privacy policies are too long, too complex and simply too unengaging for any individual to want to read them. Originally intended as a fair processing disclosure about what businesses do with individuals’ data, critics complain that they have over time become excessively lengthy, defensive, legalistic documents aimed purely at protecting businesses from liability. Just-in-time notices, contextual notices, privacy icons, traffic lights, nutrition labels and gamification are the way forward. See, for example, this recent post by Peter Fleischer, Google’s Global Privacy Counsel.

    This is all fair criticism. But that doesn’t mean it’s time to write off privacy policies – we’re not talking an either/or situation here. They continue to serve an important role in ensuring organisational accountability. Committing a business to put down, in a single, documented place, precisely what data it collects, what it does with that data, who it shares it with, and what rights individuals have, helps keep it honest. More and more, I find that clients put considerable effort into getting their privacy policies right, carefully checking that the disclosures they make actually map to what they do with data – stimulating conversations with other business stakeholders across product development, marketing, analytics and customer relations functions. The days when lawyers were told “just draft something” are long gone, at least in my experience.

    This internal dialogue keeps interested stakeholders informed about one another’s data uses and facilitates discussions about good practice that might otherwise be overlooked. If you’re going to disclose what you do in an all-encompassing, public-facing document – one that may, at some point, be pored over by disgruntled customers, journalists, lawyers and regulators – then you want to make sure that what you do is legit in the first place. And, of course, while individuals seldom ever read privacy policies in practice, if they do have a question or a complaint they want to raise, then a well-crafted privacy policy serves (or, at least, should serve) as a comprehensive resource for finding the information they need.

    Is a privacy policy the only way to communicate with your consumers what you do with their data? No, of course not. Is it the best way? Absolutely not: in an age of device and platform fragmentation, the most meaningful way is through creative Privacy by Design processes that build a compelling privacy narrative into your products and services. But is the privacy policy still relevant and important? Yes, and long may this remain the case.