Archive for the ‘Applicable law’ Category

Our citizens, our rules: Clarification on the new Russian data storage requirements

Posted on August 28th, 2015

New amendments to the Russian Federal Act on Personal Data are due to come into force on 1 September via Russian Federal Law No. 242-FZ “On the Amendments to Certain Legislative Acts of the Russian Federation to Clarify the Framework for Personal Data Processing in the Information and Telecommunications Networks”.

The main thrust of the amendments is that local and foreign “operators” will be required to keep databases processing personal data of Russian citizens on Russian Federation territory and to provide information on the location of these databases.

Since the proposed amendments were published in July 2014, the “ums”, “errs” and theories about the possible interpretations of the new rules – including terms such as “citizenship” – and about their practical implications have been numerous.

Earlier this month, the Ministry of Communications of the Russian Federation tried to shed some light on these turbulent interpretative waters by publishing certain clarifications on its website.

Headline points are as follows:

1.  Parties affected by the new rules

Any company operating in Russia or with a Russia-facing website that is using personal data in any way is likely to be affected by these changes. In practice, the rules will affect any companies which:

a) use a domain name related to Russia;

b) have a Russian-language version of their website (except for automatic translations etc);

c) allow payments on their website for goods, works or services in Russian roubles (RUB);

d) have advertisements on their website in Russian; or

e) undertake agreements via their website (selling goods or services) that may be performed in Russia.

2.  Inadvertent capture of personal data is out of scope

If a company intentionally collects the personal data of Russian citizens, it must comply with the new rules.  Conversely, the law does not apply to the “unintentional” capture of personal data (e.g. unsolicited data – such as correspondence from Russia).

3.  International transfers of data outside Russia

International data transfers are not forbidden by the new rules. However, if personal data is to be transferred outside Russia, the transferring entity must put in place a data export agreement with the transferee, obtain data subject consent and ensure that it is generally compliant with Russian data protection requirements.

4.  Russian citizenship

Companies will need to put a policy in place for determining an individual’s citizenship. Failure to do so will mean that any collection of personal data from Russia is subject to the new localisation rules.

5.  Consequences of non-compliance with the new rules

There are no fines proposed under the new rules. Currently, under the existing data protection regime, the Roskomnadzor (the Russian data protection authority), jointly with the public prosecution office, has the power to issue fines of up to RUB 10,000 (£100) for non-compliance with data protection regulations concerning the collection, storage and use of personal data.

Nevertheless, a “name and shame” process, as well as website blocking measures, may be exercised by the Russian authorities. If you run an online business model and Russia is a significant market, the possibility of website blocking is very real. It should also be noted that larger fines are expected to be introduced in the near future.

By adapting their operations to keep local copies of Russian personal data, companies are likely to engage in new storage activities and should ensure that any equipment and software used for local storage purposes is appropriately certified.  Possible penalties for the use of non-certified data protection devices and software include a fine of up to RUB 25,000 (£270) as well as confiscation of such devices and software.

So, what now?

Companies with any sort of Russia-facing services are advised to get up to speed on the new requirements as soon as possible.  Even though the law will not apply retrospectively, and the Head of Roskomnadzor has announced orally that the data localisation rules will not be enforced until 2016, nothing formal has been put on paper about a grace period.

Even with the clarifications from the Ministry of Communications, it remains unclear exactly how the new rules will be implemented in practice.

Consequently, a “when in Rome…” approach may be the best strategy for foreign companies operating in Russia: prepare the ground for more changes to come by appointing a representative in Russia who can act on behalf of the company before the Roskomnadzor, ensure that any service providers processing the personal data of Russian citizens on the company’s behalf are aware of the rules, and make sure that their share of the responsibility is contractually flowed down to them.


With thanks to Pavel Savitsky, Counsel, Head of Intellectual Property & TMT, Borenius Attorneys Russia Ltd for the information provided.

Why are German courts allowed to take my global privacy policy apart?

Posted on August 7th, 2015

Your service is innovative, you are ambitious, and the European digital market is there for the taking. Except that the EU is not the digital single market it strives to be just yet. Recent years have seen a rise in legal disputes in Germany over allegedly unlawful clauses in standard business terms – in more and more cases including privacy policies and consent wording. Apple, Facebook and Google have all been there. They all lost on part of the language.

The story goes…

The story often begins with an international business looking to have a single global or pan-European privacy policy. The policy might not be perfect in all respects, but it is considered a reasonable compromise between addressing multiple local law requirements, keeping the business scalable, and creating transparency for customers. Now, with global expansion comes the inevitable local litigation.

The typical scenario for international businesses expanding into Germany is this: an aggressive local market player trying to hold on to its pre-new-economy assets sends you a warning letter alleging that your privacy policy breaches German law requirements, enclosing a cease-and-desist undertaking aimed at forcing you to refrain from using the allegedly unlawful privacy policy clauses.

If you are big and established, the warning letter may instead come from a consumer protection association that happens to have singled out you or your industry. If you refuse to comply with the warning letter, the dispute may go to court. If you lose, the court will issue an injunction preventing you from using certain language in your privacy policy. If you infringe the injunction after it has been served on you, judicial fines may follow.

The legal mechanism

These warning letters typically allege that your privacy policy is not in full compliance with strict German data protection and consumer protection law. Where this is the case, privacy infringements can be actioned by competitors and consumer protection associations – note: these actions are based solely on the language of your privacy policy, irrespective of your actual privacy practices. These actions are a kind of “privately-initiated law enforcement” as there is no public regulator generally watching over use of privacy policies.

Furthermore, in certain cases – and especially where privacy policies are peppered with language stating that the user “consents” to the collection and use of their information – the privacy policy may even qualify as ‘standard business terms’ under German consumer protection law, opening the door for the full broadside of German consumer protection law scrutiny.

So, what’s the solution?

In the long run, courts or lawmakers will have to resolve the dilemma between two conflicting EU law principles: privacy regulation on a “country of origin” basis vs. consumer protection and unfair competition laws that apply wherever consumers are targeted. In essence, the question is: Which should prevail, applicable law principles under the Data Protection Directive (or the General Data Protection Regulation bound to be issued any decade now) or local law consumer protection principles under Rome I and II Regulations?

In the short term, an approach to mitigating legal and practical risks is to provide a localised privacy policy just for German consumers that is compliant with local law. Or, usually less burdensome, make your policy information-only, i.e. delete consent wording and clauses curtailing consumers’ rights in order to at least keep the policy from being subjected to full consumer protection scrutiny.

The downside to this approach is that it may require deviating from your global approach on a privacy policy. On the upside, it will spare you the nuisance of dealing with this kind of warning letter which is difficult to fight off. Remember: This is all about the language of your privacy policy, not what your real-world privacy compliance looks like.

Stay tuned for more information on warning letter squabbles regarding e-mail marketing regulations.

German DPA takes on Facebook again

Posted on July 31st, 2015

The DPA of Hamburg has done it again and picked a new fight with the mighty US giant Facebook. This time, the DPA was not amused by Facebook’s attempt to enforce its real name policy, and issued an administrative order against Facebook Ireland Ltd.

The order is meant to force Facebook to accept aliased user names, to revoke the suspension of user accounts that had been registered under an alias, to stop Facebook from unilaterally changing alias user names to real user names, and to stop it requesting copies of official ID documents. It is based on Sec. 13(6) of the German Telemedia Act, which requires service providers like Facebook to offer access to their services anonymously or under an alias, and on a provision of the German Personal ID Act which arguably prohibits requesting copies of official ID documents.

Despite this regulation, Facebook’s terms of use oblige users to use their real name in Germany, too. Earlier this year, Facebook started to enforce this policy more actively and suspended user accounts that were registered under an alias. The company also requested that users submit copies of official ID documents, and sent messages to users asking them to confirm that “friends” on the network used their real names. In a press statement, Mr Caspar, head of the Hamburg DPA, said: “As already became apparent in numerous other complaints, this case shows in an exemplary way that the network [Facebook] attempts to enforce its so-called real name obligation with all its powers. In doing so, it does not show any respect for national law.”

“This exit has been closed”

Whether Facebook is subject to German law at all has been heavily disputed. While the Higher Administrative Court of the German state of Schleswig-Holstein ruled that Facebook Ireland Ltd, as a service provider located in an EU member state, benefits from the country-of-origin principle laid down in Directive 95/46/EC, the Regional Court of Berlin came to the opposite conclusion: it held that Facebook Inc. rather than Facebook Ireland Ltd was the data controller, as the actual decisions about the scope, extent and purpose of the data processing were made in the US. The court also dismissed the argument that Facebook Ireland acts as a data controller under its data controller-processor agreement with Facebook Inc., ruling that the corporate domination agreement between the two companies prevails over the stipulations of that agreement.

As Facebook has a sales and marketing subsidiary in Hamburg, the Hamburg DPA now believes that the ECJ’s ruling in the Google Spain case gives it the tailwind it needs to establish the applicability of German law: “This exit has been closed by the ECJ with its case law on the Google search engine. Facebook is commercially active in Germany through its establishment in Hamburg. Whoever operates on our playing field must play by our rules.”

While previous activities of German DPAs against Facebook were aimed at legal issues that did not really agitate German users, such as the admissibility of the “like” button, the enforcement of the real name policy upset German users in large numbers, and many announced that they would turn their backs on the network. The issue also attracted a lot of coverage in the national media, most of it strongly critical of Facebook.

Will the new EU General Data Protection Regulation prevent forum shopping?

Posted on May 12th, 2015

It’s a common criticism of the current EU Data Protection Directive that its provisions determining applicable law invite forum shopping – i.e. encourage businesses to establish themselves in the Member State perceived as being the most “friendly”.  In fact, while there is some truth to this belief, its effect is often overstated.  Typically, businesses choose which country to set up shop in based on a number of factors – size of the local market, access to talent and infrastructure, local labor laws and (normally the overwhelming consideration) the local tax regime.  We privacy pros like to consider data protection the determining factor but, at least in my experience, that’s hardly ever the case.

Nevertheless, it’s easy to understand why many worry about forum shopping.  Under the Directive, a business that has a data controlling “establishment” in one Member State is subject only to the national data protection laws of that Member State, to the exclusion of all other Member States.  So, for example, if I have a data controlling establishment in the UK, then the Directive says I’m subject only to UK data protection law, even when I collect data from individuals in France, Germany, Spain and so on.  A rule that works this way naturally lends itself to a concern that it might encourage a “race to the bottom”, with ill-intentioned businesses scampering to set up shop in “weak” data protection regimes where they face little to no risk of penalty – even if that concern is overstated in practice.

But a concern it is, nevertheless, and one that the new General Data Protection Regulation aims to resolve – most notably by applying a single, uniform set of rules throughout the EU.  However, the issue still arises as to which regulatory authorities should have jurisdiction over pan-EU businesses and this point has generated much excited debate among legislators looking to reach agreement on the so-called “one stop shop” mechanism under the Regulation.

This mechanism, which began life as a concept intended to provide greater regulatory certainty to businesses by providing them with a single “lead” authority to which they would be answerable, has slowly been whittled away to something scarcely recognizable.  For example, under the most recent proposals by the Council of the European Union, the concept of a lead protection authority remains but there are highly complicated rules for determining when other “concerned” data protection authorities may instead exercise jurisdiction or challenge the lead authority’s decision-making.

All of which raises the question: will the General Data Protection Regulation prevent forum shopping?  In my view, no, and here’s why:

  • Businesses don’t choose their homes based on data protection alone.  As already noted, businesses determine the Member States in which they will establish based on a number of factors, king of all being tax.  The General Data Protection Regulation will not alter this.  Countries, like Ireland or the UK, that are perceived as attractive on those other factors today will remain just as attractive once the new Regulation comes into effect.
  • While you can legislate the rules, you can’t legislate the culture. Anyone who practices data protection in the EU knows that the cultural and regulatory attitudes towards privacy vary enormously from Member State to Member State.  Even once the new Regulation comes in, bringing legislative uniformity throughout the EU with it, those cultural and regulatory differences will persist.  Countries whose regulators are perceived as being more open to relationship-building and “slow to temper” will remain just as attractive to businesses under the Regulation as they are under the Directive.
  • The penalties under the General Data Protection Regulation will incentivize forum shopping. It has been widely reported that the General Data Protection Regulation carries some pretty humungous fines for non-compliance – up to 5% of worldwide turnover.  In the face of that kind of risk, data protection takes on an entirely new level of significance and attracts serious Board level attention.  The inevitable behavioral consequence of this is that it will actively incentivize businesses to look for lower risk countries – on any grounds they can (local regulatory culture, resourcing of the local regulator and so on).
  • Mature businesses won’t restructure. The Regulation is unlikely to have an effect on the corporate structure of mature businesses, including the existing Internet giants, who have long since already established an EU controller in a particular Member State.  To the extent that historic corporate structuring decisions can be said to have been based on data protection forum shopping grounds, the General Data Protection Regulation won’t undo the effects of those decisions.  And new businesses moving into Europe always look to their longer-standing peers as a model for how they, too, should establish – meaning that those historic decisions will likely still have a distorting effect going forward.

Vidal-Hall v Google: A new dawn for data protection claims?

Posted on April 15th, 2015

The landmark judgment of the Court of Appeal in Vidal-Hall & Ors v Google Inc may signal a new dawn for data protection litigants. Prior to this case, the law in England was relatively settled: in order to recover compensation under the Data Protection Act 1998, a claimant had to establish at least some form of pecuniary damage (unless the processing related to journalism, art or literature). The wording of section 13(2) appeared unequivocal on this point and it frequently proved to be a troublesome hurdle for claimants – and a powerful shield for defendants.

The requirement, however, was always the source of some controversy and the English courts have tried in recent years to dilute the strictness of the rule.

Then enter Ms Vidal-Hall & co: three individuals who allege that Google has been collecting private information about their internet usage from their Safari browser without their knowledge or consent. Claims were brought under the tort of misuse of private information and under s.13 of the DPA, though there was no claim for pecuniary loss.

This ruling concerned only the very early stages of the litigation – whether the claimants were permitted even to serve the claim on Google which, being based in California, was outside the jurisdiction. Permission was granted by the Court of Appeal and the case will now proceed through the English courts.

Three key rulings lie at the heart of this judgment:

  • There is now no need to establish pecuniary damage to bring a claim under the DPA. Distress alone is sufficient.
  • It is “arguable” that browser generated information (BGI) constitutes “personal data” under the DPA.
  • Misuse of private information should be classified as a tort for the purposes of service out of the jurisdiction.

We take each briefly in turn:

(1) Damages for distress alone are sufficient

The Court of Appeal disapplied the clear wording of domestic legislation on the grounds that the UK Act could not be interpreted compatibly with Article 23 of the EU Directive, and Articles 7, 8 and 47 of the EU Charter of Fundamental Rights. It held that the main purpose of the Data Protection Directive was to protect privacy, rather than economic rights, and it would be “strange” if it could not compensate those individuals who had suffered emotional distress but no pecuniary damage, when distress was likely to be the primary form of damage where there was a contravention.

It is too early to say whether this ruling will in practice open the door to masses of litigation – but there is no doubt that a significant obstacle that previously stood in the way of DPA claimants has now been unambiguously lifted by the Court of Appeal.

(2) Browser-generated information may constitute “personal data”

A further interesting, though less legally ground-breaking, ruling was that the BGI data in this case was arguably “personal data” under the DPA. The Court of Appeal did not decide the issue, but held that there was at least a “serious issue to be tried”.

Google had argued that: (a) the BGI data on its own was anonymous as it did not name or identify any individual; and (b) it kept the BGI data segregated from other data it held from which an individual might be identifiable (e.g. Gmail accounts). Thus, it was not personal data.

In response to Google’s points, the Court considered that it was immaterial that the BGI data did not name the user – what was relevant was that the data comprised detailed browsing histories and the use of a DoubleClick cookie (a unique identifier which enabled the browsing histories to be linked to a specific device/machine). Taking those two elements together, it was “possible” to equate an individual user with the particular device, thus potentially bringing the data under the definition of “personal data”.

The Court further considered it immaterial that Google in practice segregated the BGI data from other data in its hands. What mattered was whether Google actually had other information within its possession which it “could” use to identify the data subject, “regardless of whether it does so or not”.
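
To make the Court’s reasoning concrete, here is a toy sketch (entirely invented data, field names and identifiers; nothing here reflects Google’s actual systems) of how two datasets that are kept apart in practice can nonetheless be joined through a shared cookie identifier:

```typescript
// Toy illustration: "anonymous" browser-generated information (BGI) keyed by
// a cookie ID, held separately from account data keyed by the same ID.
interface BgiRecord { cookieId: string; url: string; visitedAt: string; }
interface AccountRecord { cookieId: string; email: string; }

const browsingHistory: BgiRecord[] = [
  { cookieId: "abc-123", url: "https://clinic.example/appointments", visitedAt: "2012-06-01" },
  { cookieId: "abc-123", url: "https://jobs.example/search?q=redundancy", visitedAt: "2012-06-02" },
];

const accounts: AccountRecord[] = [{ cookieId: "abc-123", email: "j.smith@example.com" }];

// The moment the shared key is used, the BGI stops being anonymous: the
// holder *could* perform this join, whether or not it ever does.
for (const hit of browsingHistory) {
  const owner = accounts.find((a) => a.cookieId === hit.cookieId);
  console.log(`${owner?.email ?? "unknown"} visited ${hit.url} on ${hit.visitedAt}`);
}
```

On this view, segregation is an internal policy choice rather than a property of the data itself – which is precisely why the Court treated it as immaterial.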

(3) Misuse of private information is a tort

Finally, there was the confirmation that the misuse of private information is a tort for the purposes of service out of the jurisdiction. Not a huge point for our readers, but it will mean that claimants who bring claims under this cause of action will more easily obtain service out of the jurisdiction against foreign defendants.

A turning point…?

So the judgment certainly leaves much food for thought and is a significant turning point in the history of data protection litigation. There may also be a wider knock-on effect within the EU as other Member States that require proof of pecuniary damage look to the English judgment as a basis for opening up pure distress claims in their own jurisdictions.

The thing to bear in mind is that the ruling concerned only the very early stages of litigation – there is still a long road ahead in this thorny case, and a great many legal and factual issues still need to be resolved.

Cookie droppers may be watching this space with a mixture of fear and fascination.



Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015

A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Services effective from 30 January 2015, a European Task Force, led by the Data Protection Agencies of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs“.

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like-button. Moreover, this cookie is placed regardless of whether the non-user has clicked on that Like button or not. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not requested to consent to the placing of such cookie, this can also be considered a violation of the E-privacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.
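
For readers wondering how a button on someone else’s page can track visitors who never click it, the sketch below (a hypothetical widget server with an invented cookie name – not Facebook’s actual code) shows the mechanics the researchers describe: the browser automatically attaches the widget domain’s cookies to every embedded request, so merely loading a page that contains the plug-in is enough to report the visit, and a first-time visitor is simply issued a fresh long-lived identifier:

```typescript
import * as http from "http";
import { randomUUID } from "crypto";

const TWO_YEARS_SECONDS = 2 * 365 * 24 * 60 * 60;

// Tiny cookie-header parser (Node's http module does not parse cookies itself).
function getCookie(header: string | undefined, name: string): string | undefined {
  return header
    ?.split(";")
    .map((part) => part.trim().split("="))
    .find(([key]) => key === name)?.[1];
}

http
  .createServer((req, res) => {
    // The browser sends any existing cookie for this domain automatically,
    // even when the request was triggered by an embed on a third-party site.
    let visitorId = getCookie(req.headers.cookie, "widget_id");
    if (!visitorId) {
      // First contact (user or non-user alike): issue a two-year identifier.
      visitorId = randomUUID();
      res.setHeader(
        "Set-Cookie",
        `widget_id=${visitorId}; Max-Age=${TWO_YEARS_SECONDS}; Path=/; HttpOnly`
      );
    }
    // The Referer header reveals which page embedded the widget, so each
    // pageview adds to a cross-site browsing trail - no click required.
    console.log(`visitor ${visitorId} seen on ${req.headers.referer ?? "unknown page"}`);
    res.setHeader("Content-Type", "text/html");
    res.end("<button>Like</button>");
  })
  .listen(8080);
```

The same mechanics also shed light on the opt-out finding: remembering an opt-out preference itself requires recognising the visitor on their next request, which is typically done with yet another cookie.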

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is to simply “take it or leave it”. This is considered a violation of the principle that, in order for consent to be valid, it must be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP 187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties”, which gives them “a false sense of control”.
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.


The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the object of discussions in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘like’ button).

The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been placed high on the agenda in Belgium, and the Belgian Privacy Commission seems more determined than ever to show that its enforcement strategy has changed. This should also be seen in the context of recent muscular declarations by the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss their privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes the Privacy Commission is asking for.

This could potentially lead to Facebook being prosecuted, although it is more likely that it would be forced to accept a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

This will no doubt be continued…



WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014

On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union’s ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling but it further elaborates on certain legal and practical aspects of it and it offers, as a result, an invaluable insight into European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search-engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014

In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad-brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual – which, by the way, is very difficult to obtain to the required standard – leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4.  Understand which IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals (one pseudonymisation approach is sketched after this list).
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6.  When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
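
On step 4 in particular, the following is a minimal sketch of one common pseudonymisation technique: keyed hashing of device identifiers. The function name and key handling here are illustrative assumptions, and whether the resulting tokens fall outside “personal data” depends heavily on who holds the key.

```typescript
import { createHmac, randomBytes } from "crypto";

// Illustrative only: in production the key would live in a vault or HSM,
// with rotation policies; rotating it severs links to historical tokens.
const secret = randomBytes(32);

// Keyed pseudonymisation: the token is stable per device (so records can
// still be joined) but cannot be reversed without the secret key.
function pseudonymiseDeviceId(deviceId: string): string {
  // Normalise so "00:1A:2B..." and "00:1a:2b..." map to the same token.
  return createHmac("sha256", secret).update(deviceId.toLowerCase()).digest("hex");
}

// Raw MAC addresses stay inside this boundary; analytics sees only tokens.
console.log(pseudonymiseDeviceId("00:1A:2B:3C:4D:5E"));
```

Note that for whoever holds the key this remains pseudonymised – not anonymised – data, which is exactly the kind of distinction step 4 asks you to work through.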


ECJ affirms individuals’ right to be forgotten

Posted on May 15th, 2014

Be honest: how many of us had ourselves forgotten that a profoundly important ruling from the European Court of Justice on the so-called “right to be forgotten” was imminent?  That ruling, in the case of Google v the Spanish DPA, was finally handed down on 13 May and has significant implications for all online businesses.


By way of background, the case concerned a Spanish national who complained to Google about online newspaper reports it had indexed relating to debt-recovery proceedings against him.  When the individual’s name was entered into Google, it brought up search results linking to newspaper announcements about these proceedings.  The actual proceedings in question dated back to 1998 and had long since been resolved.

The matter escalated through the Spanish DPA and the Spanish High Court, who referred various questions to the European Court of Justice for a ruling.  At the heart of the matter was the issue of whether an individual can exercise a “right to be forgotten” so as to require search engines to remove search results linking to personal content lawfully published on third party sites – or whether any such requests should be taken up only with the publishing sites in question.

Issues considered

The specific issues considered by the ECJ principally concerned:

  • Whether a search engine is a “controller” of personal data:  On this first question, the ECJ ruled YES, search engines are controllers of personal data.  For this purpose, the ECJ said that it was irrelevant that search engines are information-blind, treating personal data and non-personal data alike, and having no knowledge of the actual personal data processed.
  • Whether a search engine operated from outside the EU is subject to EU data protection rules if it has an EU sales subsidiary:  On this second question, the ECJ ruled YES.  Google wholly operates its search service from the US, but has a local sales subsidiary in Spain that makes online advertising sales to local customers.  On a very broad reading of the EU Data Protection Directive, the Court said that even though the processing of search data was not conducted “by” the Spanish subsidiary, it was conducted “in the context of the activities” of that subsidiary and therefore subject to EU data protection rules.  This is a particularly important point for any online business operating sales subsidiaries in the EU – in effect, this ruling means that in-territory sales subsidiaries potentially expose out-of-territory HQs and parent companies to local data protection laws.
  • Whether individuals can require search engines to remove search results about them:  Again, the ECJ ruled YES.  Having decided that a search engine is a “controller”, the ECJ ruled that an individual has the right to have search results about him or her removed if they appear to be “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue“.  To this end, the ECJ said there was no need to show that the list of results “causes prejudice to the data subject” and that the rights of the individual to have results removed “override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name“.

Why this matters

This ruling is one of the most significant – if not the most significant – data protection ruling in the EU to date, and the findings of the ECJ will come as a surprise to many.  A constant theme throughout the ECJ’s decision was its clear desire to uphold European citizens’ fundamental rights to privacy and to data protection, as enshrined in the European Union’s Charter of Fundamental Rights, and it interpreted the EU’s Data Protection Directive with this consideration in mind.

Few expected that search engines could be required to remove search results linking to material posted lawfully on third party sites, but that is precisely what the ECJ has ruled in this instance.  Quite how this will work from a practical perspective is another matter: in future, when search engines receive a request to have personal data “forgotten” from their search results, they will have to tread a fine line between balancing the individual’s right to be forgotten against other relevant contextual considerations such as “the role played by the data subject in public life” and whether “the interference with the fundamental rights is justified by the preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question“.

Put another way, search engines will need to act not just as gateways to information on the web, but also – in some circumstances – as censors preventing access to information based on objections received.  This raises some very complex challenges in terms of balancing right to privacy against right to free speech that will clearly take time to work out.

Practical implications for online businesses

But it would be wrong to think that the relevance of this decision is limited to search engines alone.  In fact, it has much broader implications for online businesses, including that:

  • Non-EU businesses with EU sales offices risk exposure to EU data protection law:  Non-EU data-hungry businesses CAN be subject to EU data protection rules simply by virtue of having local sales subsidiaries in the EU.  This is particularly critical for growth businesses expanding into the EU through the set-up of local sales offices, a common model for international expansion.
  • Data-blind businesses need to comply:  Big data businesses CAN be subject to data protection rules, even if they are data-blind and do not distinguish between personal and non-personal data.  A head-in-the-sand approach will not protect against risk – any data-ingesting business needs to have a clear compliance framework in place.
  • Data deletion a priority:  Individuals CAN require deletion of their data under EU law – businesses need to architect their systems to enable data deletion on request and to adopt appropriate data retention and deletion policies.  Without these, they will face particular exposure when presented with such requests.

Taking into account the critical implications of this ruling, it’s fair to say it’s one that won’t be forgotten soon!

European Parliament votes in favour of data protection reform

Posted on March 21st, 2014

On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee’s report published in October 2013 (discussed in a previous article on this blog), this month’s vote means that the Council of the European Union (the “Council”) can now formally conduct its reading of the text based on the Parliament’s amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament’s. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text under what is called the “ordinary legislative procedure” – a process in which the decisions of the Parliament and the Council carry the same weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament’s position; on the contrary, it will want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in the past months to discuss the data protection reform. Although there seems to be broad support among Member States for the proposal, they have not yet reached an agreement on some of the key provisions, such as the “one-stop shop” rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the pace of the proposed Regulation’s progress in the coming months largely depends on when this is finalised. Once the Council has reached a position, there is also the possibility that the proposals will be amended further. If this happens, the Parliament may need to vote again before the process is complete.

Furthermore, with the European Parliament elections coming up this May, the whole adoption process will be put on hold until a new Parliament is in place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.