Archive for the ‘Applicable law’ Category

WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014



On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling but further elaborates on certain legal and practical aspects of it, offering, as a result, an invaluable insight into the European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • Search engines should not inform the original publishers of de-listed material of the fact that some pages have been de-listed in response to an RTBF request.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk that their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual, a consent which is very difficult to obtain at the required standard, leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions on this type of use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it is the loss of trust that a company will respect our privacy and do its best to protect our information that triggers serious enforcement action, pushes companies out of business or forces the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as not to unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.


ECJ affirms individuals’ right to be forgotten

Posted on May 15th, 2014



Be honest: how many of us had ourselves forgotten that a profoundly important ruling from the European Court of Justice on the so-called “right to be forgotten” was imminent?  That ruling, in the case of Google v the Spanish DPA, was finally handed down on 13 May and has significant implications for all online businesses.

Background

By way of background, the case concerned a Spanish national who complained to Google about online newspaper reports it had indexed relating to debt-recovery proceedings against him.  When the individual’s name was entered into Google, it brought up search results linking to newspaper announcements about these proceedings.  The actual proceedings in question dated back to 1998 and had long since been resolved.

The matter escalated through the Spanish DPA and the Spanish High Court, which referred various questions to the European Court of Justice for a ruling.  At the heart of the matter was the issue of whether an individual can exercise a “right to be forgotten” so as to require search engines to remove search results linking to personal content lawfully published on third party sites – or whether any such requests should be taken up only with the publishing sites in question.

Issues considered

The specific issues considered by the ECJ principally concerned:

  • Whether a search engine is a “controller” of personal data:  On this first question, the ECJ ruled YES, search engines are controllers of personal data.  For this purpose, the ECJ said that it was irrelevant that search engines are information-blind, treating personal data and non-personal data alike, and having no knowledge of the actual personal data processed.
  • Whether a search engine operated from outside the EU is subject to EU data protection rules if it has an EU sales subsidiary:  On this second question, the ECJ ruled YES.  Google wholly operates its search service from the US, but has a local sales subsidiary in Spain that makes online advertising sales to local customers.  On a very broad reading of the EU Data Protection Directive, the Court said that even though the processing of search data was not conducted “by” the Spanish subsidiary, it was conducted “in the context of the activities” of that subsidiary and therefore subject to EU data protection rules.  This is a particularly important point for any online business operating sales subsidiaries in the EU – in effect, this ruling means that in-territory sales subsidiaries potentially expose out-of-territory HQs and parent companies to local data protection laws.
  • Whether individuals can require search engines to remove search results about them:  Again, the ECJ ruled YES.  Having decided that a search engine is a “controller”, the ECJ ruled that an individual has the right to have search results about him or her removed if they appear to be “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue“.  To this end, the ECJ said there was no need to show that the list of results “causes prejudice to the data subject” and that the right of the individual to have results removed “override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name“.

Why this matters

This ruling is one of the most significant – if not the most significant – data protection ruling in the EU to date, and the findings of the ECJ will come as a surprise to many.  A constant theme throughout the ECJ’s decision was its clear desire to uphold European citizens’ fundamental rights to privacy and to data protection, as enshrined in the European Union’s Charter of Fundamental Rights, and it interpreted the EU’s Data Protection Directive with this consideration in mind.

Few expected that search engines could be required to remove search results linking to material posted lawfully on third party sites, but that is precisely what the ECJ has ruled in this instance.  Quite how this will work from a practical perspective is another matter: in future, when search engines receive a request to have personal data “forgotten” from their search results, they will have to tread a fine line between balancing the individual’s right to be forgotten against other relevant contextual considerations such as “the role played by the data subject in public life” and whether “the interference with the fundamental rights is justified by the preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question“.

Put another way, search engines will need to act not just as gateways to information on the web, but also – in some circumstances – as censors preventing access to information based on objections received.  This raises some very complex challenges in terms of balancing right to privacy against right to free speech that will clearly take time to work out.

Practical implications for online businesses

But it would be wrong to think that the relevance of this decision is limited to search engines alone.  In fact, it has much broader implications for online businesses, including that:

  • Non-EU businesses with EU sales offices risk exposure to EU data protection law:  Non-EU data-hungry businesses CAN be subject to EU data protection rules simply by virtue of having local sales subsidiaries in the EU.  This is particularly critical for growth businesses expanding into the EU through the set-up of local sales offices, a common model for international expansion.
  • Data-blind businesses need to comply:  Big data businesses CAN be subject to data protection rules, even if they are data blind and do not distinguish between personal and non-personal data.  A head-in-the-sand approach will not protect against risk – any data-ingesting business needs to have a clear compliance framework in place.
  • Data deletion is a priority:  Individuals CAN require deletion of their data under EU law – businesses need to architect their systems to enable data deletion on request and to adopt appropriate data retention and deletion policies.  Without these, they will face particular exposure when presented with such requests.

Taking into account the critical implications of this ruling, it’s fair to say it’s one that won’t be forgotten soon!

European Parliament votes in favour of data protection reform

Posted on March 21st, 2014



On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee’s report published in October 2013 (for more information on this report – see this previous article), this month’s vote means that the Council of the European Union (the “Council”) can now formally conduct its reading of the text based on the Parliament’s amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament’s. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text in what is called the “ordinary legislative procedure” – a process in which the decisions of the Parliament and the Council carry the same weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament’s position; on the contrary, it will probably want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in recent months to discuss the data protection reform. Although there seems to be broad support among Member States for the proposal, they have not yet reached an agreement on some of the key provisions, such as the “one-stop shop” rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the pace of development of the proposed regulation in the coming months largely depends on when that position is finalised. Once the Council has reached a position, there is also the possibility that the proposals could be amended further. If this happens, the Parliament may need to vote again before the process is complete.

Furthermore, with the elections in the EU Parliament coming up this May, the whole adoption process will be put on hold until a new Parliament comes into place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.

How do EU and US privacy regimes compare?

Posted on March 5th, 2014



As an EU privacy professional working in the US, one of the things that regularly fascinates me is each continent’s misperception of the other’s privacy rules.  Far too often have I heard EU privacy professionals (who really should know better) mutter something like “The US doesn’t have a privacy law” in conversation; equally, I’ve heard US colleagues talk about the EU’s rules as being “nuts” without understanding the cultural sensitivities that drive European laws.

So I thought it would be worth dedicating a few lines to compare and contrast the different regimes, principally to highlight that, yes, they are indeed different, but, no, you cannot draw a conclusion from these differences that one regime is “better” (whatever that means) than the other.  You can think of what follows as a kind of brief 101 in EU/US privacy differences.

1.  Culturally, there is a stronger expectation of privacy in the EU.  It’s often said that there is a stronger cultural expectation of privacy in the EU than the US.  Indeed, that’s probably true.   Privacy in the EU is protected as a “fundamental right” under the European Union’s Charter of Fundamental Rights – essentially, it’s akin to a constitutional right for EU citizens.  Debates about privacy and data protection evoke as much emotion in the EU as do debates about gun control legislation in the US.

2.  Forget the myth: the US DOES have data protection laws.  It’s simply not true that the US doesn’t have data protection laws.  The difference is that, while the EU has an all-encompassing data protection framework (the Data Protection Directive) that applies across every Member State, across all sectors and across all types of data, the US has no directly analogous equivalent.  That’s not the same thing as saying the US has no privacy laws – it has an abundance of them!  From federal rules designed to deal with specific risk scenarios (for example, collection of child data online is regulated under the Children’s Online Privacy Protection Act), to sector-specific rules (Health Insurance Portability and Accountability Act for health-related information and the Gramm-Leach-Bliley Act for financial information), to state-driven rules (the California Online Privacy Protection Act in California, for example – California, incidentally, also protects individuals’ right to privacy under its constitution).  So the next time someone tells you that the US has no privacy law, don’t fall for it – comparing EU and US privacy rules is like comparing apples to a whole bunch of oranges.

3.  Class actions.  US businesses spend a lot of time worrying about class actions and, in the privacy realm, there have been many.  Countless times I’ve sat with US clients who agonise over their privacy policy drafting to ensure that the disclosures they make are sufficiently clear and transparent in order to avoid any accusation they may have misled consumers.  Successful class actions can run into the millions of $$$ and, with that much potential liability at stake, US businesses take this privacy compliance risk very seriously.  But when was the last time you heard of a successful class action in the EU?  For that matter, when was the last time you heard of ANY kind of award of meaningful damages to individuals for breaches of data protection law?

4.  Regulatory bark vs. bite.  So, in the absence of meaningful legal redress through the courts, what can EU citizens do to ensure their privacy rights are respected?  The short answer is complain to their national data protection authorities, and EU data protection authorities tend to be very interested and very vocal.  Bodies like the Article 29 Working Party, for example, pump out an enormous volume of regulatory guidance, as do certain national data protection authorities, like the UK Information Commissioner’s Office or the French CNIL. Over in the US, American consumers also have their own heavyweight regulatory champion in the form of the Federal Trade Commission which, by using its powers to take enforcement action against “unfair and deceptive practices” under the FTC Act, is getting ever more active in the realm of data protection enforcement.  And look at some of the settlements it has reached with high profile companies – settlements that, in some cases, have run in excess of US$20m and resulted in businesses having to subject themselves to 20 year compliance audits.  By contrast, however vocal EU DPAs are, their powers of enforcement are typically much more limited, with some even lacking the ability to fine.

So those are just some of the big picture differences, but there are so many more points of detail a well-informed privacy professional ought to know – like how the US notion of “personally identifiable information” contrasts with EU “personal data”, why the US model of relying on consent to legitimise data processing is less favoured in the EU, and what the similarities and differences are between US “fair information practice principles” and EU “data protection principles”.

That’s all for another time, but for now take away this:  while they may go about it in different ways, the EU and US each share a common goal of protecting individuals’ privacy rights.  Is either regime perfect?  No, but each could sure learn a lot from the other.


EU Parliament’s LIBE Committee Issues Report on State Surveillance

Posted on February 19th, 2014



Last week, the European Parliament’s Civil Liberties Committee (“LIBE”) issued a report into the US National Security Agency (“NSA”) and EU member states’ surveillance of EU citizens (the “Report”). The Report was passed by 33 votes to 7, with 17 abstentions, and questions whether data protection rules should be included in the trade negotiations with the US. The release of the Report comes at a crucial time for both Europe and the US, but what does this announcement really tell us about the future of international data flows in the eyes of the EU, and about the EU’s relationship with the US?

Background to the Report

The Report follows the US Federal Trade Commission (“FTC”)’s recent response to criticisms from the European Commission and European Parliament following the NSA scandal and subsequent concerns regarding Safe Harbor (for more information on the FTC – see this previous article). The Report draws on recent revelations by whistleblowers and journalists about the extent of mass surveillance activities by governments. In addition, the LIBE Committee argues that the extent of the blanket data collection, highlighted by the NSA allegations, goes far beyond what would be reasonably expected to counter terrorism and other major security threats. The Report also criticises the international arrangements between the EU and the US, and states that these mechanisms “have failed to provide for the necessary checks and balances and for democratic accountability“.

LIBE Committee’s Recommendations

In order to address the deficiencies highlighted in the Report and to restore trust between the EU and the US, the LIBE Committee proposes several recommendations with a view to preserving the right to privacy and the integrity of EU citizens’ data, including:

  • US authorities and EU Member States should prohibit blanket mass surveillance activities and bulk processing of personal data;
  • The Safe Harbor framework should be suspended, and all transfers currently operating under this mechanism should stop immediately;
  • The status of New Zealand and Canada as ‘adequate’ jurisdictions for the purposes of data transfers should be reassessed;
  • The adoption of the draft EU Data Protection Regulation should be accelerated;
  • The establishment of the European Cloud Partnership must be fast-tracked;
  • A framework for the protection of whistle-blowers must be established;
  • An autonomous EU IT capability must be developed by September 2014, including ENISA minimum security and privacy standards for IT networks;
  • The EU Commission must present a European strategy for democratic governance of the Internet by January 2015; and
  • EU Member States should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age‘.

Restoring trust

The LIBE Committee’s recommendations were widely criticised by politicians for being disproportionate and unrealistic. EU politicians also commented that the Report sets unachievable deadlines and appears to be a step backwards in the debate and, more importantly, in achieving a solution. One of the most controversial proposals in the Report consists of effectively ‘shutting off‘ all data transfers to the US. This could have the counterproductive effect of isolating Europe and would not serve the purpose of achieving an international free flow of data in a truly digital society as is anticipated by the EU data protection reform.

Consequences for Safe Harbor?

The Report serves to communicate further public criticism about the NSA’s alleged intelligence overreach.  Whatever the LIBE Committee’s position, it is highly unlikely that as a result Safe Harbor will be suspended or repealed – far too many US-led businesses are dependent upon it for their data flows from the EU, meaning a suspension of Safe Harbor would have a very serious impact on transatlantic trade. Nevertheless, as a consequence of these latest criticisms, it is now more likely than ever that the EU/US Safe Harbor framework will undergo some changes in the near future.  As to what, precisely, these will be, only time will tell – though more active FTC enforcement of Safe Harbor breaches now seems inevitable.


The country of origin principle: a controller’s establishment wish list

Posted on July 1st, 2013



Data controllers setting up shop in Europe are typically well aware of the EU’s applicable law rules under Art. 4 of the Data Protection Directive (95/46).  In particular, by having an “establishment” in one Member State, they are subject only to the data protection law of that Member State – even when they process personal information about individuals in other Member States.  For example, a controller “established” in the UK is subject only to UK data protection law, even when it processes information about individuals resident in France, Germany, Spain and elsewhere.

Referred to as the “establishment” test, this model is particularly common among US online businesses selling into the EU.  Without an EU “establishment”, they risk exposure to each of the EU’s 28 different national data protection laws, with all the chaos that entails.  But with an EU “establishment”, they take the benefit of a single Member State’s law, driving down risk and promoting legal certainty.  This principle was most recently upheld when a German court concluded that Facebook is established in Ireland and therefore not subject to German data protection law.

What does it mean to have a data controlling “establishment” though?  It’s a complex question, and one for which the Article 29 Working Party has published detailed and technical guidance.  In purely practical terms though, there are a number of simple measures that controllers wanting to evidence their establishment in a particular Member State can take:

1.  Register as a data controller.  It may sound obvious, but controllers claiming establishment in a particular Member State should make sure to register with the national data protection authority in that Member State.  Aside from helping to show local establishment, failing to register may be an offence.

2.  Review your external privacy notices.  The business should ensure its privacy policy and other outward-facing privacy notices clearly identify the EU controller and where it is established.  It’s all very well designating a local EU subsidiary as a controller, but if the privacy policy tells a different story this will confuse data subjects and be a red flag to data protection authorities.

3.  Review your internal privacy policies.  A controller should have in place a robust internal policy framework evidencing its controllership and showing its commitment to protect personal data.  It should ensure that its staff are trained on those policies and that appropriate mechanisms exist to monitor and enforce compliance.  Failure to produce appropriate policy documentation will inevitably raise questions in the mind of a national data protection authority about the level of control the local entity has over data processing and compliance. 

4.  Data processing agreements.  It’s perfectly acceptable to outsource processing activities from the designated controller to affiliated group subsidiaries or external vendors, but controllers that do so must make sure to have in place appropriate agreements with their outsourced providers – whether those providers are intra-group or external.  It’s vital that, through contractual controls, the designated controller remains in the driving seat on how and why its data is used; it mustn’t simply serve as a ‘rubber stamp’ for data decisions ultimately made by its parent or affiliates.  For example, if EU customer data is hosted on the CRM systems of a UK controller’s US parent, then arm’s length documentation should exist between the UK and US showing that the US processes data only as a processor on behalf of the UK.

5.  Appoint data protection staff.  In some territories, appointing a data protection officer is a mandatory legal requirement for controllers.  Even where it’s not, nominating a local employee to fulfill a data protection officer (or similar) role to oversee local data protection compliance is a sensible measure.  The nominated DPO will fulfill a critical role in reviewing and authorizing data processing policies, systems and activities, thus demonstrating that data decisions are made within the designated controller.  He or she will also provide a consistent and informed interface with the local data protection authority, fostering positive regulatory relationships.

This is not an exhaustive list by any means, but a controller that takes the above practical measures will go a long way towards evidencing “establishment” in its national territory.  This will benefit it not just when corresponding with its own national data protection authority but also when managing enquiries and investigations from overseas data protection authorities, by substantially reducing its exposure to the regimes of those overseas authorities in the first place.

ECJ Advocate General: Google is NOT a controller of personal data on other sites

Posted on June 25th, 2013 by



We now know the Advocate General’s Opinion in the most eagerly followed data protection case in the history of the European Court of Justice (ECJ). After the prolific enforcement actions of the Spanish data protection authority to stop Google showing unwanted personal data in search results, its court battles were escalated all the way to the ECJ. Whilst the final decision is still a few months away, the influential Opinion of the Advocate General (AG) is a clear indication of where things are going.

The ultimate question is whether Google, in its capacity as a search engine provider, is legally required to honour individuals’ requests to block personal data from appearing in search results. For that to be the case, the court will have to answer a three-fold legal test affirmatively:

1. Does EU law apply to Google? The AG’s Opinion is YES if the search engine provider has an establishment in a Member State for the purpose of promoting and selling advertising space on the search engine, as that establishment acts as the bridge between the search service and the revenue generated by advertising.

Unfortunately the AG does not deal with the question of whether Google Inc. uses equipment in Spain, so we don’t know whether an Internet company with no physical presence in the EU will be caught by EU law.

2. Does a search engine process personal data? The AG’s answer here is also YES, because notions of ‘personal data’ and ‘processing’ are sufficiently wide to cover the activities involved in retrieving information sought by users.

3. Is Google a controller of that data? Crucially, the AG’s answer is NO, because a search engine is not aware of the existence of a certain defined category of information amounting to personal data. Therefore, Google is not in a position to determine the uses made of that data.

So the conclusion, according to the AG, is that a data protection authority cannot compel Google to stop revealing personal data as part of search results.

In addition, the AG goes on to say that even if the ECJ were to find that internet search engine service providers were responsible as controllers for personal data appearing in search results, an individual would still not have a general ‘right to be forgotten’, as this is not contemplated in the current Directive.

What will happen if there is no new EU privacy law next year

Posted on June 20th, 2013 by



The European Parliament has just announced another delay affecting the vote on its version of the EU Data Protection Regulation. That means that we will now not know where the Parliament truly stands on this issue until September or October at the earliest. Although this was sort of expected, optimistic people like me were still hoping that the LIBE Committee would get enough consensus to issue a draft this side of the Summer, but clearly the political will is not quite there. This is obviously disappointing for a number of reasons, so in case the MEPs need a bit of motivation to get their act together, here are a few things that are likely to happen if the new Regulation is not adopted before next year’s deadline:

* Inconsistent legal regimes throughout the EU – The current differences in both the letter of the law and the way it is interpreted are confusing at best and one of the biggest weaknesses when it comes to achieving the right level of compliance.

* Non-application of EU law to global Internet players – Thanks to its 1990s reference to the ‘use of equipment’, the Directive’s framework is arguably not applicable to Internet businesses based outside the EU even if they collect data from millions of EU residents. Is that a good idea?

* Death by paperwork – One of the most positive outcomes of the proposed Regulation will be the replacement of the paper-based compliance approach of the Directive with a more practical focus. Do we really want to carry on spending compliance resources filling in forms?

* Uncertainty about the meaning of personal data – Constantly evolving technology and the increasing value of data generated by our interaction with that technology have shaken the current concept of personal data. We badly need a 21st century definition of personal data and its different levels of complexity.

* Massive security exposures – The data security obligations under the existing Directive are rather modest compared to regulators’ well-publicised wish list and, frankly, even some of the legal frameworks regarded as ‘inadequate’ by comparison to European data protection are considerably ahead of Europe in areas like data breach notification.

* Toothless regulators – Most EU data protection authorities still have very weak enforcement powers. Without going overboard, the Regulation is their chance to make their supervisory role truly effective.

The need to modernise EU data protection law is real and, above all, overdue. A bit of compromise has to be better than not doing anything at all.

How to solve BCR conflicts with local law

Posted on March 13th, 2013 by



A frequently asked question by many clients considering BCR is “How can we apply BCR on a global basis?  What if non-EU laws conflict with our BCR requirements?”  Normally, this question is raised during an early-stage stakeholder review – typically, by local in-house counsel or a country manager who points out, quite reasonably, that BCR are designed to meet EU data protection standards, not their own local laws.

It’s a very good, and perfectly valid, question to ask – but one that can very quickly be laid to rest.  BCR are a voluntary set of self-regulatory standards that can readily be designed to flex to non-EU local law requirements.  Global businesses necessarily have to comply with the myriad of different laws applicable to them, and the BCR policy can address this need in the following way:

(*)  where local law standards are lower than those in the BCR, then the BCR policy should specify that its standards will apply.  In this way, the local controller not only achieves, but exceeds, local law requirements and continues to meet its commitments under its BCR; and

(*)  where local law standards are higher than those in the BCR, then the BCR policy should specify that the local law standards will apply.  In this way, the local controller achieves local law compliance and exceeds its commitments under the BCR.

In both cases, the controller manages to fulfill its responsibilities under both applicable local law and the BCR, so a head-on collision between the two almost never arises.  But for those very exceptional circumstances where mandatory local laws do prohibit the controller from complying with the BCR, then the group’s EU headquarters or privacy function is simply required to take a “responsible decision” on what action to take and consult with EU data protection authorities if in doubt.

The net result?  Carefully designed BCR provide a globally consistent data management framework that sets an expected baseline level of compliance throughout the organization – exceeded only if and when required by local law.