Archive for the ‘Applicable law’ Category

DPAs react to the CJEU’s decision on Safe Harbor

Posted on October 22nd, 2015

Since the CJEU’s decision of 6 October 2015 invalidating the EU/US Safe Harbor program, Safe Harbor has continued to make headlines, with new legal developments each day. This blog post summarizes the public statements made in recent days by the data protection authorities (DPAs) in the EU and by regulators in other parts of the world.

Reaction of the European DPAs

On 16 October 2015, the Article 29 Working Party (WP 29) issued a public statement which says that the DPAs have discussed the consequences of the CJEU’s decision. The position of the WP 29 is summarized below.

What is the WP 29’s analysis of the CJEU’s decision on Safe Harbor?

Unsurprisingly, the WP 29 says “it is clear that companies can no longer rely on Safe Harbor to transfer their data to the US”. For companies still doubting whether their transfers under Safe Harbor are lawful, the WP 29 confirms that “transfers that are still taking place under the Safe Harbor decision are now considered to be unlawful”.

The WP 29 also states: “It is absolutely essential to have a robust, collective and common position on the implementation of the judgment”.

The WP 29 highlights that “the question of massive and indiscriminate surveillance is a key element of the Court’s analysis” and that “such surveillance is incompatible with the EU legal framework”. The WP 29 makes a particularly bold statement by saying that “countries where the powers of state authorities to access information go beyond what is necessary in a democratic society will not be considered as safe destinations for transfers”, which would seem to be addressed at the US authorities.

What should companies do?

Unfortunately, the WP 29 does not provide much practical guidance for companies. It simply says that “businesses should reflect on the possible risks that they are taking when transferring data and should consider putting in place any legal and technical solution in a timely manner to mitigate those risks and respect the EU data protection acquis”.

Two points are worth highlighting. First, the WP 29 calls on companies to assess their level of compliance for all types of data transfers, not just those based on Safe Harbor. Second, companies need to do so in a “timely manner”, which is the WP 29’s way of saying that there is no time to lose. Companies that have already begun to implement alternative measures in response to the Safe Harbor decision are in a better position than those that have not.

Does the CJEU’s decision affect other data transfer mechanisms (e.g., the EU Model Clauses and Binding Corporate Rules)?

The WP 29 says that it “will continue to analyse the impact of the CJEU’s judgment on other data transfer tools”, which in itself is not very reassuring given the reactions of some DPAs. In Germany, for example, the data protection authority of the German state of Schleswig-Holstein issued a position paper declaring the EU model contract clauses invalid.

Nonetheless, the WP 29 does convey a more reassuring message to companies by saying that “EU model clauses and BCR can still be used”. At this point, it is difficult to predict what the impact of the Safe Harbor decision on Model Clauses and BCR will be, and so we will continue to monitor the situation in the weeks to come.

How will the DPAs enforce the CJEU’s decision?

The good news is that the WP 29 has granted a grace period to find an appropriate solution with the US authorities. The bad news is that this grace period will expire at the end of January 2016, which leaves very little time for companies to adapt.

Until then, if no solution has been found (a Safe Harbor 2.0?), and depending on the WP 29’s assessment of the other data transfer mechanisms, “the DPAs are committed to take all necessary and appropriate actions, which may include coordinated enforcement actions”. As we have seen in recent months on other issues (such as mobile apps and cookies), the DPAs have demonstrated their ability to conduct pan-European enforcement actions. However, one should not forget that, even if the DPAs do launch a coordinated enforcement action, the actual enforcement measures can only be imposed by each DPA at a national level. And the new enforcement provisions under the upcoming General Data Protection Regulation (GDPR) will not come into force before 2018 (assuming the text of the GDPR is formally adopted in 2016).

In the meantime, the WP 29 reminds companies that each national DPA can “investigate particular cases, for instance on the basis of complaints, and exercise their powers in order to protect individuals”, which means that each DPA can act independently against any company in accordance with its national law.

The WP 29 also says that the DPAs “will also put in place appropriate information campaigns at national level to ensure that stakeholders are sufficiently informed”, which may include “direct information to all known companies that used to rely on the Safe Harbor decision as well as general messages on the DPAs’ websites”. Companies that have filed their DPA notifications and/or obtained the DPAs’ approval to transfer data to the US on the basis of Safe Harbor could therefore be contacted by the DPAs in the days or weeks to come, and should be prepared to explain to the DPAs what remediation measures they have put in place.

What next?

The WP 29 says that it “is urgently calling on the EU Member States and the European institutions to open discussions with the US authorities in order to find a political, legal and technical solution that enables companies to transfer personal data to the US in compliance with respect for fundamental rights. Such solutions could be found through the negotiations of an intergovernmental agreement providing stronger guarantees to EU data subjects”. It is interesting to note that the WP 29 does say that “the current negotiations around a new Safe Harbor could be a part of the solution”, and so it has willingly left that window open.

The WP 29 also states: “The task that lies ahead to find a sustainable solution in order to implement the CJEU’s decision must be shared between the DPAs, the EU institutions, EU Member States and businesses”. With the GDPR soon to be adopted, it will be a challenge to get all the stakeholders to agree on a new Safe Harbor framework that complies with the provisions of the GDPR.

Reaction of the regulators in other parts of the world

The Safe Harbor decision has also caused a ripple effect beyond the European Union borders and regulators in other parts of the world have also reacted to the CJEU’s decision.

United States:

The US Department of Commerce published an advisory on the Safe Harbor website stating: “In the current rapidly changing environment, the Department of Commerce will continue to administer the Safe Harbor program, including processing submissions for self-certification to the Safe Harbor Framework”. One fails to see how the Department of Commerce can continue to process submissions for self-certification to Safe Harbor when such transfers are now clearly unlawful under European law.


Israel:

On October 19th, the Israeli Law, Information and Technology Authority (ILITA) issued a statement revoking its prior authorization to transfer data from Israel to the US on the basis of Safe Harbor. Under the data protection laws of Israel, transfers of data from Israel to third countries are permitted if the data is sent to a country that receives data from the EU under the same terms of acceptance. However, the CJEU’s decision invalidates the authorization to transfer personal data from Europe to companies that had committed to Safe Harbor. Consequently, ILITA’s position is that organizations can no longer rely on this derogation as a basis for the transfer of personal data from Israel to organizations in the United States.

In the absence of an alternative valid arrangement or another formal decision of the EU with respect to the transfer of data from the EU to the US, companies that want to transfer personal data from Israel to the US are therefore required to assess whether they can legitimize their transfers under one of the other derogations set out in the data protection law of Israel.


Switzerland:

On 7 October 2015, the Swiss Data Protection Authority (FDPIC) issued a first press release on its website stating that the Swiss/US Safe Harbor decision “is also called into question” by the CJEU’s decision: “As far as Switzerland is concerned, in the event of renegotiation, only an internationally coordinated approach that includes the EU is appropriate.”

On 22 October 2015, the FDPIC made a second statement which says that “as long as Switzerland has not renegotiated a new Safe Harbor Framework with the United States, Safe Harbor cannot be deemed a valid legal mechanism for transferring personal data to the US.” It would seem, therefore, that although the Swiss/US Safe Harbor program has not been officially revoked, it is de facto no longer possible for Swiss-based companies to transfer personal data to the US on the grounds of Safe Harbor.

Without explicitly mentioning any enforcement actions, the FDPIC calls upon businesses that are transferring personal data to the US to adapt their contracts with US companies before the end of January 2016. The FDPIC will also coordinate with the EU DPAs to determine what other actions may be required to protect the fundamental rights of individuals.

By Olivier Proust

Getting to know the GDPR, Part 2 – Out-of-scope today, in scope in the future. What is caught?

Posted on October 20th, 2015

The GDPR expands the scope of application of EU data protection law requirements in two main respects:

  1. in addition to data “controllers” (i.e. persons who determine why and how personal data are processed), certain requirements will apply for the first time directly to data “processors” (i.e. persons who process personal data on behalf of a data controller); and
  2. by expanding the territorial scope of application of EU data protection law to capture not only the processing of personal data by a controller or a processor established in the EU, but also any processing of personal data of data subjects residing in the EU, where the processing relates to the offering of goods or services to them, or the monitoring of their behaviour.


The practical effect is that many organisations that until now were outside the scope of application of EU data protection law will be directly subject to its requirements, for instance because they are EU-based processors, or non-EU-based controllers that target services to EU residents (e.g. through a website) or monitor their behaviour (e.g. through cookies). For such organisations, the GDPR will represent a cultural change, and there will be more ground to cover to reach a compliance-ready status.

What does the law require today?

The Directive

At present, the Data Protection Directive 95/46/EC (“Directive”) generally sets out direct statutory obligations for controllers, but not for processors. Processors are generally only subject to the obligations that the controller imposes on them by contract. By way of example, in a service provision scenario, say a cloud hosting service, the customer will typically be a controller and the service provider will be a processor.

Furthermore, at present the national data protection law of one or more EU Member States applies if:

  1. the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State. When the same controller is established on the territory of several Member States, each of these establishments should comply with the obligations laid down by the applicable national law (Article 4(1)(a)); or
  2. the controller is not established on EU territory and, for the purposes of processing personal data, makes use of equipment situated on the territory of a Member State (unless such equipment is used only for purposes of transit through the EU) (Article 4(1)(c)); or
  3. the controller is not established on the Member State’s territory, but in a place where its national law applies by virtue of international public law (Article 4(1)(b)). Article 4(1)(b) has little practical significance in the commercial and business contexts and is therefore not further examined here. The GDPR sets out a similar rule.


CJEU case law

Two recent judgments of the Court of Justice of the European Union (“CJEU”) have introduced expansive interpretations of the meaning of “in the context of the activities” and “establishment”:

  1. In Google Spain, the CJEU held that “in the context of the activities” does not mean “carried out by”. The data processing activities by Google Inc are “inextricably linked” with Google Spain’s activities concerning the promotion, facilitation and sale of advertising space. Consequently, processing is carried out “in the context of the activities” of a controller’s branch or subsidiary when the latter is (i) intended to promote and sell ad space offered by the controller, and (ii) orientates its activity towards the inhabitants of that Member State.
  2. In Weltimmo, the CJEU held that the definition of “establishment” is flexible and departs from a formalistic approach that an “establishment” exists solely where a company is registered. The specific nature of the economic activities and the provision of services concerned must be taken into account, particularly where services are offered exclusively over the internet. The presence of only one representative, who acts with a sufficient degree of stability (even if the activity is minimal), coupled with websites that are mainly or entirely directed at that EU Member State suffice to trigger the application of that Member State’s law.


What will the GDPR require?

The GDPR will apply to the processing of personal data:

  1. in the context of the activities of an establishment of a controller or a processor in the EU; and
  2. of data subjects residing in the EU by a controller not established in the EU, where the processing activities are related to the offering of goods or services to them, or the monitoring of their behaviour in the EU.

It is irrelevant whether the actual data processing takes place within the EU or not.

As far as the substantive requirements are concerned, compared to the Directive, the GDPR introduces:

  1. new obligations and higher expectations of compliance for controllers, for instance around transparency, consent, accountability, privacy by design, privacy by default, data protection impact assessments, data breach notification, new rights of data subjects, engaging data processors and data processing agreements;
  2. for the first time, direct statutory obligations for processors, for instance around accountability, engaging sub-processors, data security and data breach notification; and
  3. severe sanctions for compliance failures.


What are the practical implications?

Controllers who are established in the EU are already caught by EU data protection law, and will therefore not be materially affected by the broader scope of application of the GDPR. For such controllers, the major change is the new substantive requirements they need to comply with.

Processors (such as technology vendors or other service providers) established in the EU will be subject to the GDPR’s direct statutory obligations for processors, as opposed to just the obligations imposed on them by contract by the controller. Such processors will need to understand their statutory obligations and take the necessary steps to comply. This is a major “cultural” change.

Perhaps the biggest change is that controllers that are not established in the EU but collect and process data on EU residents through websites, cookies and other remote activities are likely to be caught by the GDPR. E-commerce providers, online behavioural advertising networks and analytics companies that process personal data are all likely to fall within its scope of application.

We still have at least 2 years before the GDPR comes into force. This may sound like a long time, but given the breadth and depth of change in the substantive requirements, it isn’t really! A lot of fact finding, careful thinking, planning and operational implementation will be required to be GDPR ready in 24 months.

So what should you be doing now?

  1. If you are a controller established in the EU, prepare your plan for transitioning to compliance with the GDPR.
  2. If you are a controller not established in the EU, assess whether your online activities amount to offering goods or services to, or monitoring the behaviour of, EU residents. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR. You may need to appoint a representative in the EU.
  3. Assess whether any of your EU-based group companies act as processors. If so, assess the level of awareness of / readiness for compliance with EU data protection law and create a road map for transitioning to compliance with the GDPR.
  4. If you are a multinational business with EU and non-EU affiliates which will or may be caught by the GDPR, you will also need to consider intra-group relationships, how you position your group companies and how you structure your intra-group data transfers.

Our citizens, our rules: Clarification on the new Russian data storage requirements

Posted on August 28th, 2015

New amendments to the Russian Federal Act on Personal Data are due to come into force on 1 September via Russian Federal Law No. 242-FZ “On the Amendments to Certain Legislative Acts of the Russian Federation to Clarify the Framework for Personal Data Processing in the Information and Telecommunications Networks”.

The main thrust of the amendments is that local and foreign “operators” will be required to keep databases processing personal data of Russian citizens on Russian Federation territory and to provide information on the location of these databases.

The “ums”, “errs” and theories on the possible interpretations of the new rules and terms such as “citizenship” as well as the practical implications of the rules have been numerous since the proposed amendments were published in July 2014.

Earlier this month, the Ministry of Communications of the Russian Federation tried to throw some light on the turbulent waters of interpretation through the publication of certain clarifications on its website.

Headline points are as follows:

1.  Parties affected by the new rules

Any company operating in Russia or with a Russia-facing website that is using personal data in any way is likely to be affected by these changes. In practice, the rules will affect any companies which:

a) use a domain name related to Russia;

b) have a Russian-language version of their website (except for automatic translations etc);

c) allow payments on their website for goods, works or services in Russian roubles (RUB);

d) have advertisements on their website in Russian; or

e) undertake agreements via their website (selling goods or services) that may be performed in Russia.

2.  Inadvertent capture of personal data is out of scope

If a company intentionally collects Russian data, it must comply with the new rules.  Conversely, the law does not apply to “unintentional” capture of personal data (e.g. unsolicited data – such as Russian correspondence).

3.  International transfers of data outside Russia

International data transfers are not forbidden by the new rules. However, if personal data is to be transferred outside Russia, the transferring entity must put in place a data export agreement with the transferee, obtain data subject consent and ensure that it is generally compliant with Russian data protection requirements.

4.  Russian citizenship

Companies will need to put a policy in place for determining an individual’s citizenship. Failure to do so will mean that any collection of personal data from Russia is subject to the new localisation rules.

5.  Consequences of non-compliance with the new rules

There are no fines proposed under the new rules. Currently, under the existing data protection regime, the Roskomnadzor (the Russian data protection authority), jointly with the public prosecution office, has the power to issue fines of up to RUB 10,000 (£100) for non-compliance with data protection regulations concerning the collection, storage and use of personal data.

Nevertheless, a “name and shame” process, as well as website blocking measures, may be exercised by the Russian authorities. If you run an online business model and Russia is a significant market, the possibility of website blocking is very real. It should also be noted that larger fines are expected to be introduced in the near future.

By adapting their operations to keep local copies of Russian personal data, companies are likely to engage in new storage activities and should ensure that any equipment and software used for local storage purposes is appropriately certified.  Possible penalties for the use of non-certified data protection devices and software include a fine of up to RUB 25,000 (£270) as well as confiscation of such devices and software.

So, what now?

Companies who have any sort of Russia-facing services are advised to get up to speed on the new requirements as soon as possible.  Even though the law will not apply retrospectively, and there has been an oral announcement by the Head of Roskomnadzor that the data localization rules will not be enforced until 2016, nothing formal has been put on paper about a grace period.

Even with the clarifications from the Ministry of Communications, it remains unclear exactly how the new rules will be implemented in practice.

Consequently, a “when in Rome…” approach may be the best strategy for foreign companies operating in Russia: prepare the ground for more changes to come by appointing a spokesperson in Russia who can act on behalf of the company before the Roskomnadzor, ensure that any service providers processing the personal data of Russian citizens on behalf of the company are aware of the rules, and make sure that their share of the responsibility is contractually flowed down to them.


With thanks to Pavel Savitsky, Counsel, Head of Intellectual Property & TMT, Borenius Attorneys Russia Ltd for the information provided.

Why are German courts allowed to take my global privacy policy apart?

Posted on August 7th, 2015

Your service is innovative, you are ambitious, and the European digital market is there for the taking. Except that the EU is not yet the digital single market it strives to be. Recent years have seen a rise in legal disputes in Germany over allegedly unlawful clauses in standard business terms, in more and more cases including privacy policies and consent wording. Apple, Facebook and Google have all been there, and they all lost on part of the language.

The story goes…

The starting point is often an international business looking to have a single global or pan-European privacy policy. It might not be perfect in all respects, but it is considered a reasonable compromise between addressing multiple local law requirements, keeping the business scalable, and creating transparency for customers. Now, with global expansion comes the inevitable local litigation.

The typical scenario that arises for international businesses expanding into Germany is this: an aggressive local market player trying to hold on to its pre-new-economy assets sends you a warning letter alleging that your privacy policy breaches German law requirements, enclosing a cease-and-desist undertaking aimed at forcing you to refrain from using the unlawful privacy policy clauses.

If you are big and established, the warning letter may come from a consumer protection association that happens to have singled out you or your industry. If you refuse to comply with the warning letter, the dispute may go to court. If you lose, the court will issue an injunction preventing you from using certain language in your privacy policy. If you infringe the injunction after being served with it, judicial fines may ensue.

The legal mechanism

These warning letters typically allege that your privacy policy is not in full compliance with strict German data protection and consumer protection law. Where this is the case, privacy infringements can be actioned by competitors and consumer protection associations – note: these actions are based solely on the language of your privacy policy, irrespective of your actual privacy practices. These actions are a kind of “privately-initiated law enforcement” as there is no public regulator generally watching over use of privacy policies.

Furthermore, in certain cases – and especially where privacy policies are peppered with language stating that the user “consents” to the collection and use of their information – the privacy policy may even qualify as ‘standard business terms’ under German consumer protection law, opening the door for the full broadside of German consumer protection law scrutiny.

So, what’s the solution?

In the long run, courts or lawmakers will have to resolve the dilemma between two conflicting EU law principles: privacy regulation on a “country of origin” basis vs. consumer protection and unfair competition laws that apply wherever consumers are targeted. In essence, the question is: Which should prevail, applicable law principles under the Data Protection Directive (or the General Data Protection Regulation bound to be issued any decade now) or local law consumer protection principles under Rome I and II Regulations?

In the short term, an approach to mitigating legal and practical risks is to provide a localised privacy policy just for German consumers that is compliant with local law. Or, usually less burdensome, make your policy information-only, i.e. delete consent wording and clauses curtailing consumers’ rights in order to at least keep the policy from being subjected to full consumer protection scrutiny.

The downside to this approach is that it may require deviating from your global approach on a privacy policy. On the upside, it will spare you the nuisance of dealing with this kind of warning letter which is difficult to fight off. Remember: This is all about the language of your privacy policy, not what your real-world privacy compliance looks like.

Stay tuned for more information on warning letter squabbles regarding e-mail marketing regulations.

German DPA takes on Facebook again

Posted on July 31st, 2015

The DPA of Hamburg has done it again and picked a new fight with the mighty US giant Facebook. This time, the DPA was not amused by Facebook’s attempt to enforce its real name policy, and issued an administrative order against Facebook Ireland Ltd.

The order is meant to force Facebook to accept aliased user names, to revoke the suspension of user accounts that had been registered under an alias, to stop Facebook from unilaterally changing alias user names to real user names, and to stop it from requesting copies of official ID documents. It is based on Section 13(6) of the German Telemedia Act, which requires service providers like Facebook to offer access to their services anonymously or under an alias, and on a provision of the German Personal ID Act which arguably prohibits requesting copies of official ID documents.

Despite this regulation, Facebook’s terms of use oblige users to use their real name in Germany, too. Earlier this year, Facebook started to enforce this policy more actively and suspended user accounts that were registered under an alias. The company also requested users to submit copies of official ID documents, and sent messages to users asking them to confirm that “friends” on the network used their real name. In a press statement, Mr Caspar, head of the Hamburg DPA, said: “As already became apparent in numerous other complaints, this case shows in an exemplary way that the network [Facebook] attempts to enforce its so-called real name obligation with all its powers. In doing so, it does not show any respect for national law.”

“This exit has been closed”

Whether Facebook is at all subject to German law has been heavily disputed. While the Higher Administrative Court of the German state of Schleswig-Holstein ruled that Facebook Ireland Limited, as a service provider located in an EU member state, benefits from the country-of-origin principle laid down in Directive 95/46/EC, the Regional Court of Berlin came to the opposite conclusion: it held that Facebook Inc. rather than Facebook Ireland Ltd is the data controller, as the actual decisions about the scope, extent and purpose of the processing of data are made in the US. The court also dismissed the argument that Facebook Ireland acts as a data controller under a data controller-processor agreement with Facebook Inc., ruling that the corporate domination agreement between Facebook Inc. and Facebook Ireland prevails over the stipulations of that agreement. As Facebook has a sales and marketing subsidiary in Hamburg, the Hamburg DPA now believes it has tailwind, thanks to the ECJ’s ruling in the Google Spain case, to establish the applicability of German law: “This exit has been closed by the ECJ with its ruling on the Google search engine. Facebook is commercially active in Germany through its establishment in Hamburg. Whoever operates on our playing field must play by our rules.”

While previous activities of German DPAs against Facebook were aimed at legal issues that did not really agitate German users, such as the admissibility of the “like” button, the enforcement of the real name policy upset German users in large numbers, and many users announced that they would turn their backs on the network. The issue also saw a lot of coverage in the national media, most of it strongly critical of Facebook.

Will the new EU General Data Protection Regulation prevent forum shopping?

Posted on May 12th, 2015

It’s a common criticism of the current EU Data Protection Directive that its provisions determining applicable law invite forum shopping – i.e. encourage businesses to establish themselves in the Member State perceived as being the most “friendly”.  In fact, while there is some truth to this belief, its effect is often overstated.  Typically, businesses choose which country to set up shop in based on a number of factors – size of the local market, access to talent and infrastructure, local labor laws and (normally the overwhelming consideration) the local tax regime.  We privacy pros like to consider data protection the determining factor but, at least in my experience, that’s hardly ever the case.

Nevertheless, it’s easy to understand why many worry about forum shopping.  Under the Directive, a business that has a data controlling “establishment” in one Member State is subject only to the national data protection laws of that Member State, to the exclusion of all other Member States.  So, for example, if I have a data controlling establishment in the UK, then the Directive says I’m subject only to UK data protection law, even when I collect data from individuals in France, Germany, Spain and so on.  A rule that works this way naturally lends itself to a concern that it might encourage a “race to the bottom”, with ill-intentioned businesses scampering to set up shop in “weak” data protection regimes where they face little to no risk of penalty – even if that concern is overstated in practice.

But a concern it is, nevertheless, and one that the new General Data Protection Regulation aims to resolve – most notably by applying a single, uniform set of rules throughout the EU.  However, the issue still arises as to which regulatory authorities should have jurisdiction over pan-EU businesses and this point has generated much excited debate among legislators looking to reach agreement on the so-called “one stop shop” mechanism under the Regulation.

This mechanism, which began life as a concept intended to provide greater regulatory certainty to businesses by providing them with a single "lead" authority to which they would be answerable, has slowly been whittled away to something scarcely recognizable.  For example, under the most recent proposals by the Council of the European Union, the concept of a lead data protection authority remains, but there are highly complicated rules for determining when other "concerned" data protection authorities may instead exercise jurisdiction or challenge the lead authority's decision-making.

All of which raises the question: will the General Data Protection Regulation prevent forum shopping?  In my view, no, and here's why:

  • Businesses don’t choose their homes based on data protection alone.  As already noted, businesses determine the Member States in which they will establish based on a number of factors, chief among them tax.  The General Data Protection Regulation will not alter this.  Countries, like Ireland or the UK, that are perceived as attractive on those other factors today will remain just as attractive once the new Regulation comes into effect.
  • While you can legislate the rules, you can’t legislate the culture. Anyone who practices data protection in the EU knows that the cultural and regulatory attitudes towards privacy vary enormously from Member State to Member State.  Even once the new Regulation comes in, bringing legislative uniformity throughout the EU with it, those cultural and regulatory differences will persist.  Countries whose regulators are perceived as being more open to relationship-building and “slow to temper” will remain just as attractive to businesses under the Regulation as they are under the Directive.
  • The penalties under the General Data Protection Regulation will incentivize forum shopping. It has been widely reported that the General Data Protection Regulation carries some pretty humongous fines for non-compliance – up to 5% of worldwide turnover.  In the face of that kind of risk, data protection takes on an entirely new level of significance and attracts serious Board-level attention.  The inevitable behavioral consequence is that it will actively incentivize businesses to look for lower-risk countries – on any grounds they can (local regulatory culture, resourcing of the local regulator and so on).
  • Mature businesses won’t restructure. The Regulation is unlikely to have an effect on the corporate structure of mature businesses, including the existing Internet giants, who have long since already established an EU controller in a particular Member State.  To the extent that historic corporate structuring decisions can be said to have been based on data protection forum shopping grounds, the General Data Protection Regulation won’t undo the effects of those decisions.  And new businesses moving into Europe always look to their longer-standing peers as a model for how they, too, should establish – meaning that those historic decisions will likely still have a distorting effect going forward.

Vidal-Hall v Google: A new dawn for data protection claims?

Posted on April 15th, 2015 by

The landmark judgment of the Court of Appeal in Vidal-Hall & Ors v Google Inc may signal a new dawn for data protection litigants. Prior to this case, the law in England was relatively settled: in order to incur civil liability under the Data Protection Act 1998, the claimant had to establish at least some form of pecuniary damage (unless the processing related to journalism, art or literature). The wording of section 13(2) appeared unequivocal on this point and it frequently proved to be a troublesome hurdle for claimants – and a powerful shield for defendants.

The requirement, however, was always the source of some controversy and the English courts have tried in recent years to dilute the strictness of the rule.

Then enter Ms Vidal-Hall & co: three individuals who allege that Google has been collecting private information about their internet usage from their Safari browser without their knowledge or consent. Claims were brought under the tort of misuse of private information and under s.13 of the DPA, though there was no claim for pecuniary loss.

This ruling concerned only the very early stages of litigation – whether the claimants were permitted even to serve the claim on Google which, being based in California, was outside the jurisdiction. Permission was granted by the Court of Appeal and the case will now proceed through the English courts.

Three key rulings lie at the heart of this judgment:

  • There is now no need to establish pecuniary damage to bring a claim under the DPA. Distress alone is sufficient.
  • It is “arguable” that browser generated information (BGI) constitutes “personal data” under the DPA.
  • Misuse of private information should be classified as a tort for the purposes of service out of the jurisdiction.

We take each briefly in turn:

(1) Damages for distress alone are sufficient

The Court of Appeal disapplied the clear wording of domestic legislation on the grounds that the UK Act could not be interpreted compatibly with Article 23 of the EU Directive, and Articles 7, 8 and 47 of the EU Charter of Fundamental Rights. It held that the main purpose of the Data Protection Directive was to protect privacy, rather than economic rights, and it would be “strange” if it could not compensate those individuals who had suffered emotional distress but no pecuniary damage, when distress was likely to be the primary form of damage where there was a contravention.

It is too early to say whether this ruling will in practice open the door to masses of litigation – but there is no doubt that a significant obstacle that previously stood in the way of DPA claimants has now been unambiguously removed by the Court of Appeal.

(2) Browser-generated information may constitute “personal data”

A further interesting, though less legally ground-breaking, ruling was that the BGI data in this case was arguably “personal data” under the DPA. The Court of Appeal did not decide the issue, but held that there was at least a “serious issue to be tried”.

Google had argued that: (a) the BGI data on its own was anonymous as it did not name or identify any individual; and (b) it kept the BGI data segregated from other data it held from which an individual might be identifiable (e.g. Gmail accounts). Thus, it was not personal data.

In response to Google’s points, the Court considered that it was immaterial that the BGI data did not name the user – what was relevant was that the data comprised detailed browsing histories and the use of a DoubleClick cookie (a unique identifier which enabled the browsing histories to be linked to a specific device/machine). Taking those two elements together, it was “possible” to equate an individual user with the particular device, thus potentially bringing the data under the definition of “personal data”.

The Court further considered it immaterial that Google in practice segregated the BGI data from other data in its hands. What mattered was whether Google had the other information actually within its possession which it “could” use to identify the data subject, “regardless of whether it does so or not”.
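The Court's point about segregation is, at bottom, an observation about joinability: two datasets held separately can be linked at any time if they share a key. The toy sketch below (entirely hypothetical data, with a pseudonymous cookie ID standing in for a DoubleClick-style identifier) illustrates why the mere possibility of the join matters, regardless of whether it is ever performed.

```python
# Hypothetical illustration: two "segregated" datasets that share a key
# remain linkable, so the pseudonymous one can still identify individuals.

browsing_log = [           # BGI: no names, only a cookie identifier
    {"cookie_id": "dc-123", "url": "clinic.example/appointments"},
    {"cookie_id": "dc-123", "url": "jobs.example/searches"},
]

account_data = {           # held separately, e.g. signed-in account records
    "dc-123": {"email": "alice@example.com"},
}

# Whether or not the join is ever performed in practice, it is possible:
identified = [
    {**entry, **account_data[entry["cookie_id"]]}
    for entry in browsing_log
    if entry["cookie_id"] in account_data
]
print(identified[0]["email"])  # alice@example.com
```

Nothing here depends on how the controller organises its storage; only the shared identifier matters, which is precisely why the Court treated internal segregation as immaterial.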

(3) Misuse of private information is a tort

Finally, there was the confirmation that the misuse of private information is a tort for the purposes of service out of the jurisdiction. Not a huge point for our readers, but it will mean that claimants who bring claims under this cause of action will more easily obtain service out of the jurisdiction against foreign defendants.

A turning point…?

So the judgment certainly leaves much food for thought and is a significant turning point in the history of data protection litigation. There may also be a wider knock-on effect within the EU as other Member States that require proof of pecuniary damage look to the English judgment as a basis for opening up pure distress claims in their own jurisdictions.

The thing to bear in mind is that the ruling concerned only the very early stages of litigation – there is still a long road ahead in this thorny case and a great many legal and factual issues that still need to be resolved.

Cookie droppers may be watching this space with a mixture of fear and fascination.



Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015 by

A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Services effective from 30 January 2015, a European Task Force, led by the Data Protection Agencies of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs“.

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt-out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like-button. Moreover, this cookie is placed regardless of whether the non-user has clicked on that Like button or not. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not requested to consent to the placing of such cookie, this can also be considered a violation of the E-privacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.
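The mechanism the researchers describe can be sketched in simplified form: every page embedding a social plug-in triggers a request to the platform's domain, and the browser automatically attaches any cookie previously set for that domain. The following toy simulation (all domain names hypothetical; this is a sketch of the general mechanism, not Facebook's actual code) shows why even a single long-lived cookie, set without any click on the plug-in, is enough to assemble a cross-site browsing history.

```python
# Toy simulation of third-party cookie tracking via embedded social plug-ins.
# All names are hypothetical; this sketches the mechanism only.

class Browser:
    def __init__(self):
        self.cookie_jar = {}          # cookies keyed by the domain that set them

    def visit(self, page_domain, embedded_third_parties, tracker_log):
        # Each embedded plug-in causes a request to the third party's domain;
        # the browser attaches any cookie previously set for that domain.
        for third_party in embedded_third_parties:
            cookie = self.cookie_jar.get(third_party)
            if cookie is None:
                # First contact: the third party sets a long-lived cookie,
                # even if the user never clicks the plug-in itself.
                cookie = f"uid-{id(self)}"
                self.cookie_jar[third_party] = cookie
            # The third party can now log which page this cookie visited.
            tracker_log.setdefault(cookie, []).append(page_domain)

tracker_log = {}
browser = Browser()
for site in ["news.example", "shop.example", "blog.example"]:
    browser.visit(site, ["social-plugin.example"], tracker_log)

# One cookie now links a browsing history across all three sites.
history = next(iter(tracker_log.values()))
print(history)  # ['news.example', 'shop.example', 'blog.example']
```

The same logic explains the report's finding about the opt-out process: a cookie set during opt-out behaves no differently from any other third-party cookie for tracking purposes.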

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is simply to “take it or leave it”. This is considered to be a violation of the principle that, in order for consent to be valid, it should be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP 187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not in the least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties” which gives them “a false sense of control”.
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.


The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the subject of discussion in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘like’ button).

The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been put high on the agenda in Belgium, and the Belgian Privacy Commission seems more determined than ever to show that its enforcement strategy has changed. This also fits with recent forceful declarations from the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss their privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes for which the Privacy Commission is asking.

This could potentially lead to Facebook being prosecuted, although it is more likely that it would be forced to accept a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

Will no doubt be continued…



WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014 by

On 26 November the Article 29 Working Party (“WP29“) issued WP225 (the “Opinion“). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling“), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten (“RTBF“) related complaints from individuals.

The Opinion is in line with the Ruling but it further elaborates on certain legal and practical aspects of it and it offers, as a result, an invaluable insight into European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search-engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014 by

In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad-brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data, and can only be processed with the consent of the individual (consent which, by the way, is very difficult to obtain at the required standard), leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.
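The distinction drawn above can be made concrete: record-level data tied to a device identifier is potentially personal, while a sufficiently coarse aggregate relates to no identifiable individual. Here is a hedged sketch with hypothetical sensor readings (device names and values invented for illustration), showing how dropping identifiers and aggregating across devices changes the character of the dataset.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical IoT readings: record-level rows carry a device identifier
# and may therefore be personal data if the device maps to an individual.
readings = [
    {"device_id": "thermo-07", "hour": 8, "temp_c": 20.5},
    {"device_id": "thermo-07", "hour": 9, "temp_c": 21.0},
    {"device_id": "thermo-12", "hour": 8, "temp_c": 19.5},
    {"device_id": "thermo-12", "hour": 9, "temp_c": 20.0},
]

# Aggregating across devices and dropping identifiers yields a dataset that
# no longer relates to any identifiable individual (assuming enough devices
# per bucket that no one can be singled out).
by_hour = defaultdict(list)
for r in readings:
    by_hour[r["hour"]].append(r["temp_c"])

aggregate = {hour: round(mean(temps), 2) for hour, temps in by_hour.items()}
print(aggregate)  # {8: 20.0, 9: 20.5}
```

The legal caveat, of course, is that aggregation only achieves effective anonymisation when the buckets are large enough that individuals cannot be singled out or re-linked; with two devices per bucket, as here, that threshold would plainly not be met in practice.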

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him or her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.