Archive for the ‘Applicable law’ Category

German DPA takes on Facebook again

Posted on July 31st, 2015



The DPA of Hamburg has done it again and picked a new fight with the mighty US giant Facebook. This time, the DPA was not amused by Facebook's attempt to enforce its real name policy, and it issued an administrative order against Facebook Ireland Ltd.

The order is meant to force Facebook to accept aliased user names, to revoke the suspension of user accounts that had been registered under an alias, to stop Facebook from unilaterally changing alias user names to real user names, and to stop it from requesting copies of official ID documents. It is based on Sec. 13(6) of the German Telemedia Act, which requires service providers like Facebook to offer access to their services anonymously or under an alias, and on a provision of the German Personal ID Act that arguably prohibits requesting copies of official ID documents.

Despite this regulation, Facebook's terms of use oblige users to use their real name in Germany, too. Early this year, Facebook started to enforce this policy more actively and suspended user accounts that were registered under an alias. The company also required users to submit copies of official ID documents, and it sent messages to users asking them to confirm that "friends" on the network used their real names. In a press statement, Mr Caspar, head of the Hamburg DPA, said: "As already became apparent in numerous other complaints, this case shows in an exemplary way that the network [Facebook] attempts to enforce its so-called real name obligation with all its powers. In doing so, it does not show any respect for national law."

“This exit has been closed”

Whether Facebook is subject to German law at all has been heavily disputed. While the Higher Administrative Court of the German state of Schleswig-Holstein ruled that Facebook Ireland Ltd, as a service provider located in an EU member state, benefits from the country-of-origin principle laid down in Directive 95/46/EC, the Regional Court of Berlin came to the opposite conclusion: it held that Facebook Inc., rather than Facebook Ireland Ltd, was the data controller, as the actual decisions about the scope, extent and purpose of the data processing were made in the US. The court also dismissed the argument that Facebook Ireland acts as data controller under a controller-processor agreement with Facebook Inc., ruling that the corporate domination agreement between the two companies prevails over that agreement's stipulations. As Facebook has a sales and marketing subsidiary in Hamburg, the Hamburg DPA now believes that the ECJ's Google Spain ruling gives it the tailwind it needs to establish the applicability of German law: "This exit has been closed by the ECJ with its jurisprudence on the Google search engine. Facebook is commercially active in Germany through its establishment in Hamburg. Whoever operates on our playing field must play by our rules."

While previous activities of German DPAs against Facebook were aimed at legal issues that did not really agitate German users, such as the admissibility of the "like" button, the enforcement of the real name policy upset German users in large numbers, and many announced that they would turn their backs on the network. The issue also received extensive coverage in the national media, most of it strongly critical of Facebook.

Will the new EU General Data Protection Regulation prevent forum shopping?

Posted on May 12th, 2015



It’s a common criticism of the current EU Data Protection Directive that its provisions determining applicable law invite forum shopping – i.e. encourage businesses to establish themselves in the Member State perceived as being the most “friendly”.  In fact, while there is some truth to this belief, its effect is often overstated.  Typically, businesses choose which country to set up shop in based on a number of factors – size of the local market, access to talent and infrastructure, local labor laws and (normally the overwhelming consideration) the local tax regime.  We privacy pros like to consider data protection the determining factor but, at least in my experience, that’s hardly ever the case.

Nevertheless, it’s easy to understand why many worry about forum shopping.  Under the Directive, a business that has a data controlling “establishment” in one Member State is subject only to the national data protection laws of that Member State, to the exclusion of all other Member States.  So, for example, if I have a data controlling establishment in the UK, then the Directive says I’m subject only to UK data protection law, even when I collect data from individuals in France, Germany, Spain and so on.  A rule that works this way naturally lends itself to a concern that it might encourage a “race to the bottom”, with ill-intentioned businesses scampering to set up shop in “weak” data protection regimes where they face little to no risk of penalty – even if that concern is overstated in practice.

But a concern it is, nevertheless, and one that the new General Data Protection Regulation aims to resolve – most notably by applying a single, uniform set of rules throughout the EU.  However, the issue still arises as to which regulatory authorities should have jurisdiction over pan-EU businesses and this point has generated much excited debate among legislators looking to reach agreement on the so-called “one stop shop” mechanism under the Regulation.

This mechanism, which began life as a concept intended to provide greater regulatory certainty to businesses by providing them with a single “lead” authority to which they would be answerable, has slowly been whittled away to something scarcely recognizable.  For example, under the most recent proposals by the Council of the European Union, the concept of a lead protection authority remains but there are highly complicated rules for determining when other “concerned” data protection authorities may instead exercise jurisdiction or challenge the lead authority’s decision-making.

All of which raises the question: will the General Data Protection Regulation prevent forum shopping?  In my view, no, and here's why:

  • Businesses don’t choose their homes based on data protection alone.  As already noted, businesses determine the Member States in which they will establish based on a number of factors, king of all being tax.  The General Data Protection Regulation will not alter this.  Countries, like Ireland or the UK, that are perceived as attractive on those other factors today will remain just as attractive once the new Regulation comes into effect.
  • While you can legislate the rules, you can’t legislate the culture. Anyone who practices data protection in the EU knows that the cultural and regulatory attitudes towards privacy vary enormously from Member State to Member State.  Even once the new Regulation comes in, bringing legislative uniformity throughout the EU with it, those cultural and regulatory differences will persist.  Countries whose regulators are perceived as being more open to relationship-building and “slow to temper” will remain just as attractive to businesses under the Regulation as they are under the Directive.
  • The penalties under the General Data Protection Regulation will incentivize forum shopping. It has been widely reported that the General Data Protection Regulation carries some pretty humongous fines for non-compliance – up to 5% of annual worldwide turnover (for a business with, say, €10 billion in annual turnover, that is a potential fine of €500 million).  In the face of that kind of risk, data protection takes on an entirely new level of significance and attracts serious Board-level attention.  The inevitable behavioral consequence is that businesses will be actively incentivized to look for lower-risk countries – on any grounds they can (local regulatory culture, resourcing of the local regulator and so on).
  • Mature businesses won't restructure. The Regulation is unlikely to have an effect on the corporate structure of mature businesses, including the existing Internet giants, who have long since established an EU controller in a particular Member State.  To the extent that historic corporate structuring decisions can be said to have been based on data protection forum shopping, the General Data Protection Regulation won't undo the effects of those decisions.  And new businesses moving into Europe always look to their longer-standing peers as a model for how they, too, should establish – meaning that those historic decisions will likely still have a distorting effect going forward.

Vidal-Hall v Google: A new dawn for data protection claims?

Posted on April 15th, 2015



The landmark judgment of the Court of Appeal in Vidal-Hall & Ors v Google Inc may signal a new dawn for data protection litigants. Prior to this case, the law in England was relatively settled: in order to recover under the Data Protection Act 1998, a claimant had to establish at least some form of pecuniary damage (unless the processing related to journalism, art or literature). The wording of section 13(2) appeared unequivocal on this point, and it frequently proved a troublesome hurdle for claimants – and a powerful shield for defendants.

The requirement, however, was always the source of some controversy and the English courts have tried in recent years to dilute the strictness of the rule.

Then enter Ms Vidal-Hall & co: three individuals who allege that Google has been collecting private information about their internet usage from their Safari browser without their knowledge or consent. Claims were brought under the tort of misuse of private information and under s.13 of the DPA, though there was no claim for pecuniary loss.

This ruling concerned only the very early stages of litigation – whether the claimants were permitted even to serve the claim on Google which, being based in California, was outside the jurisdiction. Permission was granted by the Court of Appeal and the case will now proceed through the English courts.

Three key rulings lie at the heart of this judgment:

  • There is now no need to establish pecuniary damage to bring a claim under the DPA. Distress alone is sufficient.
  • It is “arguable” that browser generated information (BGI) constitutes “personal data” under the DPA.
  • Misuse of private information should be classified as a tort for the purposes of service out of the jurisdiction.

We take each briefly in turn:

(1) Damages for distress alone are sufficient

The Court of Appeal disapplied the clear wording of domestic legislation on the grounds that the UK Act could not be interpreted compatibly with Article 23 of the EU Directive and Articles 7, 8 and 47 of the EU Charter of Fundamental Rights. It held that the main purpose of the Data Protection Directive was to protect privacy rather than economic rights, and that it would be "strange" if it could not compensate individuals who had suffered emotional distress but no pecuniary damage, when distress was likely to be the primary form of damage where there was a contravention.

It is too early to say whether this ruling will in practice open the door to masses of litigation – but there is no doubt that a significant obstacle that previously stood in the way of DPA claimants has now been unambiguously lifted by the Court of Appeal.

(2) Browser-generated information may constitute “personal data”

A further interesting, though less legally ground-breaking, ruling was that the BGI data in this case was arguably “personal data” under the DPA. The Court of Appeal did not decide the issue, but held that there was at least a “serious issue to be tried”.

Google had argued that: (a) the BGI data on its own was anonymous as it did not name or identify any individual; and (b) it kept the BGI data segregated from other data it held from which an individual might be identifiable (e.g. Gmail accounts). Thus, it was not personal data.

In response to Google's points, the Court considered that it was immaterial that the BGI data did not name the user – what was relevant was that the data comprised detailed browsing histories combined with a DoubleClick cookie (a unique identifier which enabled the browsing histories to be linked to a specific device or machine). Taking those two elements together, it was "possible" to equate an individual user with the particular device, thus potentially bringing the data within the definition of "personal data".

The Court further considered it immaterial that Google in practice segregated the BGI data from other data in its hands. What mattered was whether Google had the other information actually within its possession which it “could” use to identify the data subject, “regardless of whether it does so or not”.
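To make that reasoning concrete, here is a minimal Python sketch (all names and data are hypothetical, not drawn from the case) of how a persistent cookie identifier links a browsing history to one device, and how that history becomes identifiable the moment the controller could join it to account data held elsewhere:

```python
# Hypothetical illustration of the Court's reasoning: a persistent cookie
# identifier ties "anonymous" browsing records to one device, and the data
# becomes identifiable if the controller *could* join it to account data,
# whether or not the join is ever performed.

from collections import defaultdict

# Browser-generated information: no names, just a cookie ID per request.
bgi_log = [
    {"cookie_id": "dc-8f3a", "url": "https://news.example/politics"},
    {"cookie_id": "dc-8f3a", "url": "https://clinic.example/appointments"},
    {"cookie_id": "dc-19bb", "url": "https://shop.example/shoes"},
    {"cookie_id": "dc-8f3a", "url": "https://jobs.example/listings"},
]

# Separately held account data (e.g. observed during a logged-in session).
accounts = {"dc-8f3a": "user@example.com"}

# Step 1: the cookie ID aggregates a detailed per-device browsing history.
profiles = defaultdict(list)
for event in bgi_log:
    profiles[event["cookie_id"]].append(event["url"])

# Step 2: the mere *capability* to join the history to an account is what
# arguably makes the BGI personal data, "regardless of whether it does so or not".
for cookie_id, urls in profiles.items():
    holder = accounts.get(cookie_id, "<unidentified device>")
    print(cookie_id, holder, urls)
```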

(3) Misuse of private information is a tort

Finally, there was the confirmation that the misuse of private information is a tort for the purposes of service out of the jurisdiction. Not a huge point for our readers, but it will mean that claimants who bring claims under this cause of action will more easily obtain service out of the jurisdiction against foreign defendants.

A turning point…?

So the judgment certainly leaves much food for thought and is a significant turning point in the history of data protection litigation. There may also be a wider knock-on effect within the EU as other Member States that require proof of pecuniary damage look to the English judgment as a basis for opening up pure distress claims in their own jurisdictions.

The thing to bear in mind is that the ruling concerned only the very early stages of litigation – there is still a long road ahead in this thorny case and a great many legal and factual issues that still need to be resolved.

Cookie droppers may be watching this space with a mixture of fear and fascination.


Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015



A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Service effective from 30 January 2015, a European Task Force, led by the Data Protection Authorities of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook's privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that "Facebook places too much burden on its users. Users are expected to navigate Facebook's complex web of settings in search of possible opt-outs".

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the ePrivacy Directive.

As far as non-users of Facebook are concerned, the researchers' findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like button. Moreover, this cookie is placed regardless of whether the non-user clicks the Like button. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not asked to consent to the placing of the cookie, this too can be considered a violation of the ePrivacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.
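For readers curious about the mechanics, here is a minimal, purely illustrative Python simulation (not Facebook's actual implementation; all domains and identifiers are invented) of why a cookie scoped to a plug-in provider's domain produces a cross-site browsing trail:

```python
# Purely illustrative simulation of third-party cookie tracking via an
# embedded social plug-in: the cookie is scoped to the plug-in provider's
# domain, so the browser sends it with *every* request to that domain,
# regardless of which site embeds the plug-in.

cookie_jar = {}   # browser cookies, keyed by the domain that set them
tracker_log = []  # what the plug-in provider's server would observe

def load_page(page_url: str, plugin_domain: str) -> None:
    """Rendering a page triggers a background request to the plug-in domain."""
    cookie = cookie_jar.get(plugin_domain)
    if cookie is None:
        # First contact: the provider sets a long-lived identifier, whether
        # or not the visitor clicks anything or has an account.
        cookie = f"uid-{len(cookie_jar) + 1:04d}"
        cookie_jar[plugin_domain] = cookie
    tracker_log.append((cookie, page_url))  # identifier plus visited page

for page in ["https://newspaper.example/article",
             "https://forum.example/thread/42",
             "https://shop.example/checkout"]:
    load_page(page, "plugin.socialnetwork.example")

print(tracker_log)  # one identifier seen on three unrelated sites
```

The structural point is that the identifier is keyed to the provider's domain rather than to the sites visited, so every embedding page reports the visit to the same party; the two-year opt-out cookie described above would travel over exactly the same channel.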

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is simply to "take it or leave it". This is considered a violation of the principle that, in order for consent to be valid, it must be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party's Opinion on consent (WP187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because "users cannot exercise meaningful control over the use of their personal information by Facebook or third parties", which gives them "a false sense of control".
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.

Conclusion

The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the object of discussions in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘like’ button).

The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been placed high on the agenda in Belgium. It seems the Belgian Privacy Commission is more determined than ever to show that its enforcement strategy has changed. This can also be seen in the context of recent muscular declarations by the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss its privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that it was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes the Privacy Commission is asking for.

This could potentially lead to Facebook being prosecuted, although it is more likely that it would agree to a criminal settlement. In 2011, following the Privacy Commission's investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

No doubt to be continued…


WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014



On 26 November the Article 29 Working Party ("WP29") issued WP225 (the "Opinion"). Part I of the Opinion provides guidance on the interpretation of the ruling of the Court of Justice of the European Union in Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (the "Ruling"), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten ("RTBF") related complaints from individuals.

The Opinion is in line with the Ruling but it further elaborates on certain legal and practical aspects of it and it offers, as a result, an invaluable insight into European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search-engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad-brush approach and set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn't it a massive generalisation to suggest that all data collected by things should be treated as personal data, even if it is anonymised or relates to the 'environment' of individuals as opposed to 'an identifiable individual'? How does this square with the pretty clear definition of personal data in the Data Protection Directive? Also, is the principle of 'self-determination of data' (which, I assume, is a reference to the German principle of 'informational self-determination') a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is only one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (consent which, by the way, is very difficult to obtain to the required standard) leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.
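To illustrate the distinction in practice, here is a minimal Python sketch, using assumed names and toy data, of keyed pseudonymisation of device identifiers before exploratory analysis. The legal caveat is the point: for as long as the controller holds the key, the data is pseudonymised rather than anonymised, and on the WP29's approach it likely remains personal data.

```python
# Minimal sketch: keyed pseudonymisation of IoT device identifiers before
# exploratory analysis. Caveat: while the key exists, re-identification
# remains possible, so this is pseudonymisation, not anonymisation.

import hashlib
import hmac

# Assumption: the key is rotated and stored separately from the analysis data.
SECRET_KEY = b"rotate-me-and-store-me-separately"

def pseudonymise(device_id: str) -> str:
    """Replace a raw identifier with a stable keyed hash."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()[:16]

readings = [
    {"device_id": "AA:BB:CC:DD:EE:01", "temp_c": 21.4},
    {"device_id": "AA:BB:CC:DD:EE:02", "temp_c": 19.8},
]

# The analysis set carries no raw MAC addresses, but the same device always
# maps to the same token, so correlations and trends remain visible.
analysis_set = [
    {"device": pseudonymise(r["device_id"]), "temp_c": r["temp_c"]}
    for r in readings
]
print(analysis_set)
```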

Consent will be necessary in several situations, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn't it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him or her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by letting them allow or block data collection at any time (a minimal sketch of this follows the list).
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
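As a concrete illustration of point 8, here is a minimal Python sketch of a revocable consent flag gating every collection call; the names and structure are illustrative assumptions, not a reference implementation.

```python
# Hypothetical sketch of point 8: a per-user consent flag that gates every
# collection call and can be flipped at any time. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    _allowed: dict = field(default_factory=dict)

    def set_choice(self, user_id: str, allowed: bool) -> None:
        self._allowed[user_id] = allowed  # the user can change this at any time

    def may_collect(self, user_id: str) -> bool:
        return self._allowed.get(user_id, False)  # default: no collection

consents = ConsentRegistry()
collected = []

def record_event(user_id: str, event: dict) -> None:
    """Collect telemetry only for users whose current choice allows it."""
    if consents.may_collect(user_id):
        collected.append((user_id, event))

consents.set_choice("alice", True)
record_event("alice", {"sensor": "thermostat", "temp_c": 20.1})
consents.set_choice("alice", False)  # user withdraws consent
record_event("alice", {"sensor": "thermostat", "temp_c": 22.9})  # dropped
print(collected)  # only the event collected while consent was active
```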


ECJ affirms individuals’ right to be forgotten

Posted on May 15th, 2014



Be honest: how many of us had ourselves forgotten that a profoundly important ruling from the European Court of Justice on the so-called "right to be forgotten" was imminent?  That ruling, in the case of Google v the Spanish DPA, was finally handed down on 13 May and has significant implications for all online businesses.

Background

By way of background, the case concerned a Spanish national who complained to Google about online newspaper reports it had indexed relating to debt-recovery proceedings against him.  When the individual’s name was entered into Google, it brought up search results linking to newspaper announcements about these proceedings.  The actual proceedings in question dated back to 1998 and had long since been resolved.

The matter escalated through the Spanish DPA and the Spanish High Court, which referred various questions to the European Court of Justice for a ruling.  At the heart of the matter was the issue of whether an individual can exercise a "right to be forgotten" so as to require search engines to remove search results linking to personal content lawfully published on third party sites – or whether any such requests should be taken up only with the publishing sites in question.

Issues considered

The specific issues considered by the ECJ principally concerned:

  • Whether a search engine is a “controller” of personal data:  On this first question, the ECJ ruled YES, search engines are controllers of personal data.  For this purpose, the ECJ said that it was irrelevant that search engines are information-blind, treating personal data and non-personal data alike, and having no knowledge of the actual personal data processed.
  • Whether a search engine operated from outside the EU is subject to EU data protection rules if it has an EU sales subsidiary:  On this second question, the ECJ ruled YES.  Google wholly operates its search service from the US, but has a local sales subsidiary in Spain that makes online advertising sales to local customers.  On a very broad reading of the EU Data Protection Directive, the Court said that even though the processing of search data was not conducted “by” the Spanish subsidiary, it was conducted “in the context of the activities” of that subsidiary and therefore subject to EU data protection rules.  This is a particularly important point for any online business operating sales subsidiaries in the EU – in effect, this ruling means that in-territory sales subsidiaries potentially expose out-of-territory HQs and parent companies to local data protection laws.
  • Whether individuals can require search engines to remove search results about them:  Again, the ECJ ruled YES.  Having decided that a search engine is a “controller”, the ECJ ruled that an individual has the right to have search results about him or her removed if they appear to be “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue“.  To this end, the ECJ said there was no need to show that the list of results “causes prejudice to the data subject” and that the right of the individual to have results removed “override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name“.

Why this matters

This ruling is one of the most significant – if not the most significant – data protection ruling in the EU to date, and the findings of the ECJ will come as a surprise to many.  A constant theme throughout the ECJ’s decision was its clear desire to uphold European citizens’ fundamental rights to privacy and to data protection, as enshrined in the European Union’s Charter of Fundamental Rights, and it interpreted the EU’s Data Protection Directive with this consideration in mind.

Few expected that search engines could be required to remove search results linking to material posted lawfully on third party sites, but that is precisely what the ECJ has ruled in this instance.  Quite how this will work from a practical perspective is another matter: in future, when search engines receive a request to have personal data “forgotten” from their search results, they will have to tread a fine line between balancing the individual’s right to be forgotten against other relevant contextual considerations such as “the role played by the data subject in public life” and whether “the interference with the fundamental rights is justified by the preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question“.

Put another way, search engines will need to act not just as gateways to information on the web, but also – in some circumstances – as censors preventing access to information based on objections received.  This raises some very complex challenges in terms of balancing right to privacy against right to free speech that will clearly take time to work out.

Practical implications for online businesses

But it would be wrong to think that the relevance of this decision is limited to search engines alone.  In fact, it has much broader implications for online businesses, including that:

  • Non-EU businesses with EU sales offices risk exposure to EU data protection law:  Non-EU data-hungry businesses CAN be subject to EU data protection rules simply by virtue of having local sales subsidiaries in the EU.  This is particularly critical for growth businesses expanding into the EU through the set-up of local sales offices, a common model for international expansion.
  • Data-blind businesses need to comply:  Big data businesses CAN be subject to data protection rules, even if they are data-blind and do not distinguish between personal and non-personal data.  A head-in-the-sand approach will not protect against risk – any data-ingesting business needs to have a clear compliance framework in place.
  • Data deletion is a priority:  Individuals CAN require deletion of their data under EU law – businesses need to architect their systems to enable data deletion on request and to adopt appropriate data retention and deletion policies.  Without these, they will face particular exposure when presented with such requests.

Taking into account the critical implications of this ruling, it’s fair to say it’s one that won’t be forgotten soon!

European Parliament votes in favour of data protection reform

Posted on March 21st, 2014



On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee's report published in October 2013 (for more information on this report – see this previous article), this month's vote means that the Council of the European Union (the "Council") can now formally conduct its reading of the text based on the Parliament's amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text under what is called the "ordinary legislative procedure" – a process in which the decisions of the Parliament and the Council carry equal weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament's position; rather, it will want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in the past months to discuss the data protection reform. Although there seems to be broad support among Member States for the proposal, they haven't yet reached an agreement on some of the key provisions, such as the "one-stop shop" rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the speed at which the proposed Regulation develops in the coming months largely depends on this being finalised. Once the Council has reached a position, there is also the possibility that the proposals could be amended further. If this happens, the Parliament may need to vote again before the process is complete.

Furthermore, with elections to the EU Parliament coming up this May, the whole adoption process will be put on hold until a new Parliament is in place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.

How do EU and US privacy regimes compare?

Posted on March 5th, 2014



As an EU privacy professional working in the US, one of the things that regularly fascinates me is each continent’s misperception of the other’s privacy rules.  Far too often have I heard EU privacy professionals (who really should know better) mutter something like “The US doesn’t have a privacy law” in conversation; equally, I’ve heard US colleagues talk about the EU’s rules as being “nuts” without understanding the cultural sensitivities that drive European laws.

So I thought it would be worth dedicating a few lines to compare and contrast the different regimes, principally to highlight that, yes, they are indeed different, but, no, you cannot draw a conclusion from these differences that one regime is “better” (whatever that means) than the other.  You can think of what follows as a kind of brief 101 in EU/US privacy differences.

1.  Culturally, there is a stronger expectation of privacy in the EU.  It’s often said that there is a stronger cultural expectation of privacy in the EU than the US.  Indeed, that’s probably true.   Privacy in the EU is protected as a “fundamental right” under the European Union’s Charter of Fundamental Rights – essentially, it’s akin to a constitutional right for EU citizens.  Debates about privacy and data protection evoke as much emotion in the EU as do debates about gun control legislation in the US.

2.  Forget the myth: the US DOES have data protection laws.  It's simply not true that the US doesn't have data protection laws.  The difference is that, while the EU has an all-encompassing data protection framework (the Data Protection Directive) that applies across every Member State, across all sectors and across all types of data, the US has no directly analogous equivalent.  That's not the same thing as saying the US has no privacy laws – it has an abundance of them!  From federal rules designed to deal with specific risk scenarios (for example, collection of child data online is regulated under the Children's Online Privacy Protection Act), to sector-specific rules (the Health Insurance Portability and Accountability Act for health-related information and the Gramm-Leach-Bliley Act for financial information), to state-driven rules (for example, the California Online Privacy Protection Act; California, incidentally, also protects individuals' right to privacy under its constitution).  So the next time someone tells you that the US has no privacy law, don't fall for it – comparing EU and US privacy rules is like comparing apples to a whole bunch of oranges.

3.  Class actions.  US businesses spend a lot of time worrying about class actions and, in the privacy realm, there have been many.  Countless times I've sat with US clients who agonise over their privacy policy drafting to ensure that the disclosures they make are sufficiently clear and transparent in order to avoid any accusation that they may have misled consumers.  Successful class actions can run into the millions of $$$ and, with that much potential liability at stake, US businesses take this privacy compliance risk very seriously.  But when was the last time you heard of a successful class action in the EU?  For that matter, when was the last time you heard of ANY kind of award of meaningful damages to individuals for breaches of data protection law?

4.  Regulatory bark vs. bite.  So, in the absence of meaningful legal redress through the courts, what can EU citizens do to ensure their privacy rights are respected?  The short answer is complain to their national data protection authorities, and EU data protection authorities tend to be very interested and very vocal.  Bodies like the Article 29 Working Party, for example, pump out an enormous volume of regulatory guidance, as do certain national data protection authorities, like the UK Information Commissioner's Office or the French CNIL. Over in the US, American consumers also have their own heavyweight regulatory champion in the form of the Federal Trade Commission which, by using its powers to take enforcement action against "unfair and deceptive practices" under the FTC Act, is becoming ever more active in the realm of data protection enforcement.  And look at some of the settlements it has reached with high-profile companies – settlements that, in some cases, have run in excess of US$20m and resulted in businesses having to subject themselves to 20-year compliance audits.  By contrast, however vocal EU DPAs are, their powers of enforcement are typically much more limited, with some even lacking the ability to fine.

So those are just some of the big picture differences, but there are so many more points of detail a well-informed privacy professional ought to know – like how the US notion of “personally identifiable information” contrasts with EU “personal data”, why the US model of relying on consent to legitimise data processing is less favoured in the EU, and what the similarities and differences are between US “fair information practice principles” and EU “data protection principles”.

That’s all for another time, but for now take away this:  while they may go about it in different ways, the EU and US each share a common goal of protecting individuals’ privacy rights.  Is either regime perfect?  No, but each could sure learn a lot from the other.


EU Parliament’s LIBE Committee Issues Report on State Surveillance

Posted on February 19th, 2014



Last week, the European Parliament's Civil Liberties Committee ("LIBE") issued a report into the surveillance of EU citizens by the US National Security Agency ("NSA") and EU member states (the "Report"). The Report was passed by 33 votes to 7, with 17 abstentions, and questions whether data protection rules should be included in the trade negotiations with the US. The release of the Report comes at a crucial time for both Europe and the US, but what does this announcement really tell us about the future of international data flows in the eyes of the EU, and about the EU's relationship with the US?

Background to the Report

The Report follows the recent response of the US Federal Trade Commission ("FTC") to criticisms from the European Commission and European Parliament in the wake of the NSA scandal and subsequent concerns regarding Safe Harbor (for more information on the FTC's response – see this previous article). The Report examines recent revelations by whistleblowers and journalists about the extent of mass surveillance activities by governments. In addition, the LIBE Committee argues that the extent of the blanket data collection highlighted by the NSA allegations goes far beyond what would reasonably be expected to counter terrorism and other major security threats. The Report also criticises the international arrangements between the EU and the US, stating that these mechanisms "have failed to provide for the necessary checks and balances and for democratic accountability".

LIBE Committee’s Recommendations

In order to address the deficiencies highlighted in the Report and to restore trust between the EU and the US, the LIBE Committee proposes several recommendations with a view to preserving the right to privacy and the integrity of EU citizens’ data, including:

  • US authorities and EU Member States should prohibit blanket mass surveillance activities and bulk processing of personal data;
  • The Safe Harbor framework should be suspended, and all transfers currently operating under this mechanism should stop immediately;
  • The status of New Zealand and Canada as ‘adequate’ jurisdictions for the purposes of data transfers should be reassessed;
  • The adoption of the draft EU Data Protection Regulation should be accelerated;
  • The establishment of the European Cloud Partnership must be fast-tracked;
  • A framework for the protection of whistle-blowers must be established;
  • An autonomous EU IT capability must be developed by September 2014, including ENISA minimum security and privacy standards for IT networks;
  • The EU Commission must present an European strategy for democratic governance of the Internet by January 2015; and
  • EU Member States should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age‘.

Restoring trust

The LIBE Committee’s recommendations were widely criticised by politicians for being disproportionate and unrealistic. EU politicians also commented that the Report sets unachievable deadlines and appears to be a step backwards in the debate and, more importantly, in achieving a solution. One of the most controversial proposals in the Report consists of effectively ‘shutting off‘ all data transfers to the US. This could have the counterproductive effect of isolating Europe and would not serve the purpose of achieving an international free flow of data in a truly digital society as is anticipated by the EU data protection reform.

Consequences for Safe Harbor?

The Report serves to communicate further public criticism of the NSA's alleged intelligence overreach.  Whatever the LIBE Committee's position, it is highly unlikely that Safe Harbor will be suspended or repealed as a result – far too many US-led businesses depend upon it for their data flows from the EU, meaning a suspension of Safe Harbor would have a very serious impact on transatlantic trade. Nevertheless, as a consequence of these latest criticisms, it is now more likely than ever that the EU/US Safe Harbor framework will undergo some changes in the near future.  As to what, precisely, these will be, only time will tell – though more active FTC enforcement of Safe Harbor breaches now seems inevitable.