Archive for the ‘DPAs competence’ Category

German DPA takes on Facebook again

Posted on July 31st, 2015 by

The DPA of Hamburg has done it again and picked a new fight with the mighty US giant Facebook. This time, the DPA was not amused by Facebook's attempt to enforce its real name policy, and issued an administrative order against Facebook Ireland Ltd.

The order is meant to force Facebook to accept aliased user names, to revoke the suspension of user accounts that had been registered under an alias, to stop unilaterally changing alias user names to real user names, and to stop requesting copies of official ID documents. It is based on Sec. 13(6) of the German Telemedia Act, which requires service providers like Facebook to offer access to their services anonymously or under an alias, and on a provision of the German Personal ID Act that arguably prohibits requesting copies of official ID documents.

Despite this regulation, Facebook's terms of use oblige users to use their real names in Germany, too. Earlier this year, Facebook started to enforce this policy more actively and suspended user accounts that were registered under an alias. The company also asked users to submit copies of official ID documents, and sent messages to users asking them to confirm that their “friends” on the network were using their real names. In a press statement, Mr Caspar, head of the Hamburg DPA, said: “As already became apparent in numerous other complaints, this case shows in an exemplary way that the network [Facebook] attempts to enforce its so-called real name obligation with all its powers. In doing so, it does not show any respect for national law.”

“This exit has been closed”

Whether Facebook is subject to German law at all has been heavily disputed. While the Higher Administrative Court of the German state of Schleswig-Holstein ruled that Facebook Ireland Ltd, as a service provider located in an EU member state, benefits from the country-of-origin principle laid down in Directive 95/46/EC, the Regional Court of Berlin came to the opposite conclusion: it held that Facebook Inc. rather than Facebook Ireland Ltd is the data controller, as the actual decisions about the scope, extent and purpose of the processing of data are made in the US. The court also dismissed the argument that Facebook Ireland acts as a data controller under a controller-processor agreement with Facebook Inc., ruling that the corporate domination agreement between Facebook Inc. and Facebook Ireland prevails over the stipulations of that agreement. As Facebook has a sales and marketing subsidiary in Hamburg, the Hamburg DPA now believes that the ECJ's ruling in the Google Spain case gives it tailwind to establish the applicability of German law: “This exit has been closed by the ECJ with its case law on the Google search engine. Facebook is commercially active in Germany through its establishment in Hamburg. Whoever operates on our playing field must play by our rules.”

While previous activities of German DPAs against Facebook were aimed at legal issues that did not really agitate German users, such as the admissibility of the “like” button, the enforcement of the real name policy upset German users in large numbers, and many announced that they would turn their backs on the network. The issue also received extensive coverage in the national media, most of it strongly critical of Facebook.

CNIL unveils its 2015 inspections plan. Are you ready for what’s coming?

Posted on May 26th, 2015 by

In 2014, I warned about the French data protection authority (“CNIL”) being a regulator to watch. One year down the road, CNIL has not failed to deliver. A few weeks ago, CNIL released its Annual Activity Report for 2014 revealing that in the past year it had conducted 421 inspections (including 58 online audits), issued 62 enforcement notices and pronounced 18 sanctions. As the current chair of the Article 29 Working Party, CNIL continues to play an active role on the European and international scene on topics such as the General Data Protection Regulation, the on-going discussions between the US and EU on Safe Harbor and the recent online sweeps organized by GPEN.

What are the CNIL’s top priorities?

The CNIL intends to conduct 550 inspections divided between 350 on-site or off-site inspections and 200 online audits. Specifically, CNIL will prioritize its actions in the following key sectors:

  • Ecommerce: following its guidance on the processing of bank card details, CNIL will now focus on contactless payment cards (i.e., bank cards with an integrated chip that enables cardholders to make wireless payments via “near field communication” or “NFC” technology). In particular, CNIL will verify whether adequate security measures are designed around the use of such cards and whether the financial institutions that offer these types of cards inform their customers and enable them to object to using them (e.g., by deactivating the integrated chip or by ordering a traditional card that is not compatible with “NFC” technology). CNIL is also preparing for the next evolution: entirely digitalized payments by smartphone.
  • Employee privacy in the workplace: Employee privacy continues to be high on the CNIL's agenda due to the rising number of employees who file complaints with the CNIL each year. In particular, CNIL will inspect private and public organizations that have recently conducted surveys on psychosocial risks for employees.
  • mHealth: Following the Article 29 Working Party’s opinion on mobile apps and its letter to the European Commission on the meaning of “health data” in the context of mobile apps and devices (see our previous blog), CNIL will audit interconnected objects and online services in the area of health and well-being to verify (amongst other things) whether users are provided with notice and their consent is obtained.
  • Public sector: With the French Parliament currently debating a new law to broaden the online investigation powers of the French law enforcement and national security agencies, CNIL will continue to monitor the compliance of public sector databases with the Data Protection Act. This time, CNIL will focus on the National Register for Drivers’ Licenses (“Fichier National des Permis de Conduire“) held by the Ministry of Interior, which centralises all data about registered drivers, including fines and traffic offences.
  • Public Wi-Fi connections: Another growing area receiving particular attention is publicly available Wi-Fi hotspots (such as those available in department stores, train stations or airports), which capture data transmitted by a user’s mobile phone (e.g., type of device, MAC address, location data); this data is increasingly used to track users, to send them advertisements or offers, or to analyse their behaviour (see the sketch after this list).
  • Binding Corporate Rules: Last but not least, CNIL has announced its intention to begin enforcing against companies with BCR. Since their introduction in 2003, approximately 60 organizations have had their BCR approved, but so far no enforcement measures have been taken in relation to BCR. However, a few months ago, the lead DPAs across Europe started contacting organizations with a view to verifying and completing the information about their BCR that is posted on the European Commission’s website, suggesting that this grace period is over. CNIL could verify, for example, whether a BCR policy is easily accessible on the organization’s website and whether companies have implemented the internal measures that are required for BCR compliance.
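To make the Wi-Fi tracking concern above more concrete, here is a minimal, hypothetical sketch of the kind of safeguard a hotspot operator might adopt: captured MAC addresses are replaced with salted one-way hashes before storage. The salt rotation period, field names and overall approach are illustrative assumptions, not CNIL requirements.

```python
import hashlib
import secrets

# Illustrative only: a salt that is rotated periodically means the same
# device cannot be recognised across salt periods, limiting long-term
# tracking while still allowing short-term footfall analytics.
SALT = secrets.token_bytes(16)  # e.g. regenerated every 24 hours

def pseudonymise_mac(mac_address: str) -> str:
    """Replace a raw MAC address with a salted, one-way hash."""
    digest = hashlib.sha256(SALT + mac_address.encode("ascii"))
    return digest.hexdigest()

# A hotspot analytics record keeps only the pseudonym, never the raw MAC.
record = {
    "device": pseudonymise_mac("AA:BB:CC:DD:EE:FF"),
    "zone": "departures-hall",
}
print(record)
```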

What are the CNIL’s enforcement powers?

The CNIL can carry out four types of enforcement actions, namely:

  • On-site control: the CNIL may access the buildings and premises used to process personal data, and inspect the data processing applications and databases;
  • Off-site control: the CNIL may organize a hearing in its offices and require the data controller or its data protection officer to provide explanations;
  • Long distance control: the CNIL may communicate with the data controller by postal mail or email and, for example, may conduct routine surveys; and
  • On-line inspections: CNIL may conduct on-line inspections of personal data that is available on websites or mobile apps.

What sanctions can the CNIL pronounce?

If the CNIL finds that a company has failed to comply with the Data Protection Act, it can either pronounce a warning or issue a formal notice to comply within a given deadline. If the controller fails to comply with the notice served, the CNIL may then pronounce a fine of up to EUR 150,000 (or, in the event of a second breach within five years, up to EUR 300,000 or 5% of the company's gross revenue for legal entities) or an injunction to cease the processing.
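As a toy illustration of the ceilings just described: the relationship between the EUR 300,000 ceiling and the 5% figure is ambiguous as summarised above, so the sketch below assumes, purely for illustration, that the higher of the two applies to a legal entity on a repeat breach.

```python
def max_cnil_fine(gross_revenue_eur: float, repeat_breach: bool) -> float:
    """Toy illustration of the fine ceilings described above."""
    if not repeat_breach:
        return 150_000.0
    # Assumed reading for a repeat breach by a legal entity: the ceiling
    # is the higher of EUR 300,000 or 5% of gross revenue.
    return max(300_000.0, 0.05 * gross_revenue_eur)

print(max_cnil_fine(10_000_000, repeat_breach=False))  # 150000.0
print(max_cnil_fine(10_000_000, repeat_breach=True))   # 500000.0
```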

Are you prepared for a CNIL inspection?

In recent years, I have assisted many companies in handling CNIL inspections. Too often, companies are caught by surprise when the CNIL comes knocking on their door unannounced because they have not put in place any internal process for handling this kind of situation. As with any regulator, dealings with the CNIL require a minimum amount of awareness and preparation.

While a CNIL inspection does not necessarily end with the CNIL pronouncing a fine or sanction against the company, it inevitably has a disruptive effect on the company being investigated because it exposes whatever flaws the company may have in its privacy compliance. Companies are therefore in a better position if they tackle privacy issues at an early stage, rather than leaving them for later and risking having to fire-fight their way through a CNIL inspection.


By Olivier Proust, Of Counsel

Will the new EU General Data Protection Regulation prevent forum shopping?

Posted on May 12th, 2015 by

It’s a common criticism of the current EU Data Protection Directive that its provisions determining applicable law invite forum shopping – i.e. encourage businesses to establish themselves in the Member State perceived as being the most “friendly”.  In fact, while there is some truth to this belief, its effect is often overstated.  Typically, businesses choose which country to set up shop in based on a number of factors – size of the local market, access to talent and infrastructure, local labor laws and (normally the overwhelming consideration) the local tax regime.  We privacy pros like to consider data protection the determining factor but, at least in my experience, that’s hardly ever the case.

Nevertheless, it’s easy to understand why many worry about forum shopping.  Under the Directive, a business that has a data controlling “establishment” in one Member State is subject only to the national data protection laws of that Member State, to the exclusion of all other Member States.  So, for example, if I have a data controlling establishment in the UK, then the Directive says I’m subject only to UK data protection law, even when I collect data from individuals in France, Germany, Spain and so on.  A rule that works this way naturally lends itself to a concern that it might encourage a “race to the bottom”, with ill-intentioned businesses scampering to set up shop in “weak” data protection regimes where they face little to no risk of penalty – even if that concern is overstated in practice.

But a concern it is, nevertheless, and one that the new General Data Protection Regulation aims to resolve – most notably by applying a single, uniform set of rules throughout the EU.  However, the issue still arises as to which regulatory authorities should have jurisdiction over pan-EU businesses and this point has generated much excited debate among legislators looking to reach agreement on the so-called “one stop shop” mechanism under the Regulation.

This mechanism, which began life as a concept intended to provide greater regulatory certainty to businesses by providing them with a single “lead” authority to which they would be answerable, has slowly been whittled away to something scarcely recognizable.  For example, under the most recent proposals by the Council of the European Union, the concept of a lead data protection authority remains but there are highly complicated rules for determining when other “concerned” data protection authorities may instead exercise jurisdiction or challenge the lead authority’s decision-making.

All of which begs the question, will the General Data Protection Regulation prevent forum shopping?  In my view, no, and here’s why:

  • Businesses don’t choose their homes based on data protection alone.  As already noted, businesses determine the Member States in which they will establish based on a number of factors, king of all being tax.  The General Data Protection Regulation will not alter this.  Countries like Ireland or the UK that are perceived as attractive on those other factors today will remain just as attractive once the new Regulation comes into effect.
  • While you can legislate the rules, you can’t legislate the culture. Anyone who practices data protection in the EU knows that the cultural and regulatory attitudes towards privacy vary enormously from Member State to Member State.  Even once the new Regulation comes in, bringing legislative uniformity throughout the EU with it, those cultural and regulatory differences will persist.  Countries whose regulators are perceived as being more open to relationship-building and “slow to temper” will remain just as attractive to businesses under the Regulation as they are under the Directive.
  • The penalties under the General Data Protection Regulation will incentivize forum shopping. It has been widely reported that the General Data Protection Regulation carries some pretty humungous fines for non-compliance – up to 5% of worldwide turnover.  In the face of that kind of risk, data protection takes on an entirely new level of significance and attracts serious Board level attention.  The inevitable behavioral consequence of this is that it will actively incentivize businesses to look for lower risk countries – on any grounds they can (local regulatory culture, resourcing of the local regulator and so on).
  • Mature businesses won’t restructure. The Regulation is unlikely to have an effect on the corporate structure of mature businesses, including the existing Internet giants, which have long since established an EU controller in a particular Member State.  To the extent that historic corporate structuring decisions can be said to have been made on data protection forum shopping grounds, the General Data Protection Regulation won’t undo the effects of those decisions.  And new businesses moving into Europe always look to their longer-standing peers as a model for how they, too, should establish – meaning that those historic decisions will likely still have a distorting effect going forward.

Belgian Privacy Commission changes enforcement attitude as fining powers are announced

Posted on May 6th, 2015 by

Belgium has long been one of the less active EU member states in terms of data protection enforcement. Aside from the fact that pragmatism can be considered part of a Belgian’s nature, this was also due to the fact that the Belgian Data Protection Authority (DPA), the Privacy Commission, could justifiably be termed a “toothless tiger.”

Currently, if a company is found to be in breach of the Belgian data protection laws, the Privacy Commission has a duty to inform the public prosecutor. In practice, however, criminal prosecution for data protection noncompliance is virtually nonexistent, which leads to de facto impunity.

In 2013, anticipating the adoption of the new EU Data Protection Regulation, the Privacy Commission called upon the Belgian government to grant it more robust enforcement powers. It seems that the message was well received. When the new Belgian federal government was sworn in last October, it was the first ever to have a state secretary for privacy, a member of the cabinet who is assigned to and reports to one of the ministers.

The coalition agreement also contained specific chapters on the protection of privacy and on cybersecurity in which a reform of the Privacy Commission was announced. Although the agreement remained silent as to whether the Privacy Commission would be vested with fining powers, it indicated that appropriate sanctions must be applied in cases of infringement of data protection laws.

In a recent interview with the Belgian newspaper De Morgen, State Secretary for Privacy Bart Tommelein confirmed that the Privacy Commission will be vested with fining powers, if possible by the end of this year. Tommelein did not yet want to comment on the extent of the fining powers to be given, but Privacy Commission President Willem Debeuckelaere mooted fines of between 250 and 20,000 euros, akin to those that can be imposed by the Belgian energy and telecom regulators.

This announcement is the latest in a series of recent events that demonstrate that Belgium is strengthening its stance with regard to data protection enforcement.

An initial but significant step was taken in 2011, when Google agreed to pay 150,000 euros as part of a criminal settlement with the public prosecutor following an investigation of the Privacy Commission into Google Street View. The settlement was an enforcement milestone in terms of the amount and also showed that the Belgian authorities were not afraid to take on a global behemoth.

In recent months, in addition to investigating purely Belgian cases, which mostly remain unreported, investigations have also been started against tech companies including Snapchat and Uber. The Facebook case is, however, the best example to date of this changed attitude toward enforcement. Instead of following the example of other DPAs in Europe—as it did in the Street View case—the Privacy Commission is now leading the investigation into Facebook’s tracking and data processing activities together with the Dutch and German regulators.

It can be expected that the Privacy Commission will become even more assertive and self-assured once it can impose fines or order administrative sanctions itself. In this context it is also noteworthy that, in 2014, the Privacy Commission started preparing to perform audits. A specific team of inspectors was established and will actively search for companies that process personal data in a noncompliant manner.

It is obviously true that the fines quoted by the president of the Privacy Commission are unlikely to be of grave concern to most companies processing personal data. They are significantly lower than the maximum criminal fine of 600,000 euros that companies currently face in the unlikely event of prosecution, and they are nowhere near the size of the potential fines envisaged in the context of the regulation.

Moreover, the proof of the pudding will be in the eating.

The Privacy Commission, like many of its European counterparts, has insufficient resources. The state secretary for privacy will certainly lobby to increase those resources. However, the federal government is still very much in austerity mode, and most departments are seeing their budgets cut rather than increased. The Privacy Commission may therefore struggle to play the role it would like to play, and difficult choices will have to be made regarding how it uses its resources.

Nonetheless, the ability of the Privacy Commission to impose fines, together with the risk for companies of being audited, will most likely create a major shift in the way companies approach data protection compliance in Belgium.

Looking at the bigger picture, if even a pragmatic DPA, such as the Privacy Commission, starts adopting a more stringent enforcement strategy, it is clear that the days of data protection complacency are fading. Organisations processing personal data really cannot afford to wait until the regulation becomes effective in the next few years.

DPAs throughout Europe are gearing up for the regulation, and they expect organisations to comply with certain principles from the regulation that are today not yet in the black letter of the law.

For example, in the context of data breach notifications, the Privacy Commission already expects all organisations to notify data breaches as a matter of best practice, irrespective of the fact that Belgium currently has no such general notification requirement. This means that organisations will have to make sure they do their homework now, as it seems the DPAs will not wait until the regulation is effective to show their teeth.

This article was first published in the IAPP’s Privacy Advisor.


Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015 by

A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Service effective from 30 January 2015, a European task force, led by the data protection authorities of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs“.

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website that contains a Facebook social plug-in such as the Like button. This cookie is placed regardless of whether the non-user has clicked the Like button. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not asked to consent to the placing of the cookie, this can also be considered a violation of the E-privacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.
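To make the mechanics concrete, here is a minimal sketch of how a long-lived third-party cookie of the kind described in the report works. The endpoint domain, cookie name and values are invented for illustration and are not Facebook's actual implementation.

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header as it might be returned by a social
# plug-in endpoint embedded in a third-party page (names and values
# are invented for illustration).
set_cookie_header = (
    "browser_id=abc123xyz; Domain=.social-widget.example.com; Path=/; "
    "Expires=Sat, 01 Apr 2017 12:00:00 GMT; Secure; HttpOnly"
)

cookie = SimpleCookie()
cookie.load(set_cookie_header)

# A cookie scoped to the widget provider's domain with a roughly two-year
# lifetime lets the provider recognise the same browser on every page that
# embeds one of its plug-ins, whether or not the visitor ever clicks it.
for name, morsel in cookie.items():
    print(name, morsel["domain"], morsel["expires"])
```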

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is to simply “take it or leave it”. This is considered a violation of the principle that, in order for consent to be valid, it must be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties”, which gives them “a false sense of control”.
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.


The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the subject of discussion in past years, and some have already been investigated by other privacy regulators (see, e.g., the German investigations around the ‘like’ button).

The real question now is what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has of late been put high on the agenda in Belgium. It seems the Belgian Privacy Commission is more determined than ever to show that its enforcement strategy has changed. This can also be seen in the context of recent muscular declarations from the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss its privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes the Privacy Commission is asking for.

This could potentially lead to Facebook being prosecuted, although it is more likely that the case would end in a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

To be continued, no doubt…



WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014 by

On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the ruling of the Court of Justice of the European Union in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling, but it elaborates further on certain legal and practical aspects and, as a result, offers invaluable insight into the European regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set by the judgment applies only to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search-engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of de-listed information that some of their pages have been de-listed in response to an RTBF request.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014 by

In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of personal data in the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people would suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain at the required standard) leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful consent restrictions on this use.
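To make the distinction concrete, here is a minimal sketch (with invented field names, not drawn from any regulatory guidance) of reducing identifiable sensor readings to aggregate statistics, on the view that effectively anonymised aggregates fall outside the definition of personal data:

```python
from collections import Counter

# Raw readings: each record is linked to an identifiable device/user,
# so it is personal data (field names are invented for illustration).
raw_readings = [
    {"device_id": "dev-001", "zone": "kitchen", "hour": 8},
    {"device_id": "dev-002", "zone": "kitchen", "hour": 8},
    {"device_id": "dev-001", "zone": "hall", "hour": 9},
]

# Aggregate counts per (zone, hour), discarding all identifiers. Note
# that very small counts can still be re-identifying, so real systems
# often suppress cells below a threshold.
aggregate = Counter((r["zone"], r["hour"]) for r in raw_readings)
print(aggregate)  # Counter({('kitchen', 8): 2, ('hall', 9): 1})
```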

Consent will be necessary in several situations, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him or her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it is the loss of trust that a company will respect our privacy and do its best to protect our information that results in serious enforcement action, pushes companies out of business or leads to the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Give users choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as not to unduly interrupt the user journey (except when processing sensitive personal data). See the consent-gating sketch after this list.
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
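As a minimal sketch of the choice and consent points in items 8 and 9 (the names and structure are my own illustration, not a prescribed design): data collection is gated on stored, revocable per-purpose consent flags, and a purpose such as health analytics requires an explicit opt-in.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentState:
    # Per-purpose flags that the user can change at any time.
    purposes: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def withdraw(self, purpose: str) -> None:
        self.purposes[purpose] = False

    def allows(self, purpose: str) -> bool:
        # Default to False: no recorded consent means no collection.
        return self.purposes.get(purpose, False)

def store(reading: dict) -> None:
    print("stored:", reading)

def collect(reading: dict, purpose: str, consent: ConsentState) -> None:
    if not consent.allows(purpose):
        return  # drop the reading; nothing is stored or transmitted
    store(reading)

consent = ConsentState()
collect({"heart_rate": 72}, "health_analytics", consent)  # dropped
consent.grant("health_analytics")                         # explicit opt-in
collect({"heart_rate": 72}, "health_analytics", consent)  # stored
```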


What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014 by

The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because, in the WP29’s view, this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.


It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the device to limit the amount of data leaving it (see the sketch after this list);
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.
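As an illustration of the local-aggregation recommendation above, here is a minimal, hypothetical sketch in which the device batches raw sensor samples and transmits only summary statistics, so raw data never leaves the device. The field names and summary format are assumptions, not prescribed by the Opinion.

```python
import statistics

def summarise_batch(samples: list[float]) -> dict:
    """Reduce a batch of raw on-device samples to summary statistics."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "max": max(samples),
    }

# Raw samples stay on the device; only the aggregate leaves it.
raw_samples = [71.5, 72.0, 74.2, 73.1]  # e.g. heart-rate readings
payload = summarise_batch(raw_samples)
print("transmit:", payload)
# After transmission the device deletes raw_samples, in line with the
# recommendation to delete raw data once the required data is extracted.
```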



I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

Global apps sweep: should developers be worried?

Posted on October 24th, 2014 by

A recent sweep with participation from 26 data protection authorities (“DPAs”) across the world revealed that a high proportion of mobile apps access large amounts of personal data without meeting their data privacy obligations.

How did they do?

Not well. Out of the 1,200 apps surveyed, 85% failed to clearly explain how they were collecting and using personal information, 59% did not display basic privacy information and one in three requested excessive personal information. Another common finding was that many apps failed to tailor privacy information to the small screen.

The Information Commissioner’s Office (“ICO”), as the UK’s DPA, surveyed 50 apps, including many household names, and its results were in line with the global figures.

Rare examples of good practice included pop-up notifications asking permission prior to additional data being collected and basic privacy information with links to more detailed information for users who wish to know more.

What are they told to do?

As a result, Ofcom and ICO have produced some guidance for users on how to use apps safely. It is written in consumer-friendly language and contains straightforward advice on some common pitfalls such as checking content ratings or always logging out of banking apps.

Contrast this with the 25-page guidance from ICO aimed at app developers, which has drawn some criticism for being overly lengthy and complex. Rather than participating in research and pointing to long guidance documents, it might be more effective to promote simple rules (e.g. requiring pop-up notifications) and to hold app stores accountable for non-compliance. However, site tests demonstrate that users are irritated enough by constant pop-ups to stop using a site, so developers are reluctant to implement them.

Why is it important?

The lack of compliance is all the more alarming if read in conjunction with Ofcom research that surveyed a range of UK app users. Users view the apps environment as a safer, more contained space than browser-based internet access. Many believed apps were discrete pieces of software with little interconnectivity, and were unaware of the virus threat or of the fact that apps can continue to run in the background. There is implicit trust in established brands and recognised app stores, which users felt must monitor and vet all apps before selling them. Peer recommendation also played a significant role in deciding whether to download an app.

This means little, if any, attention is paid to privacy policies and permission requests. Users interviewed generally felt full permission had to be granted prior to using an app and were frustrated by their inability to accept some permissions and refuse others.

So what’s the risk for developers?

ICO has the power to fine companies that breach the relevant laws up to £500,000. The threshold for issuing a fine is high, however, and this power has not yet been used in the context of mobile apps. Having said this, we know that ‘internet and mobile’ is one of ICO’s priority areas for enforcement action.

Perhaps a more realistic and potentially more damaging risk is the reputational and brand damage associated with being named and shamed publicly. When a lower level of harm has been caused, ICO is more likely to seek undertakings that the offending company will change its practices. As we know, ICO publishes its enforcement actions on its website. For a company whose business model relies on processing data and on peer recommendations as the main way to grow its user base and its brand, the trust of its users is paramount and hard to rebuild once lost.

ICO has said it will be contacting app developers who need to improve their data collection and processing practices. The next stage for persistent offenders would be enforcement action.

Developers would be wise to pay attention. If enforcement action is not in itself a concern, ICO’s research showed that almost half of app users have decided against downloading an app due to privacy concerns. If that’s correct, privacy matters to mobile app users and could ‘make’ or ‘break’ a new app.

Update 12 December 2014

The 26 DPAs who took part in the global sweep have since written to seven major app stores including those of Apple, Google and Microsoft.  In their letter of 9 December they urge the marketplaces to make links to privacy policies mandatory, rather than optional, for those apps that collect personal data.

CNIL: a regulator to watch in 2014

Posted on March 18th, 2014 by

Over the years, the number of on-site inspections by the French DPA (CNIL) has been constantly rising. Based on the CNIL’s latest statistics (see CNIL’s 2013 Annual Activity Report), 458 on-site inspections were carried out in 2012, a 19 percent increase compared with 2011. The number of complaints also rose to 6,000 in 2012, most of which (31 percent) related to telecom/Internet services. In 2012, the CNIL served 43 formal notices asking data controllers to comply. In total, the CNIL pronounced 13 sanctions, eight of which were made public. In the majority of cases (56 percent), the sanction pronounced was a simple warning, while fines were pronounced in only 25 percent of cases.

The beginning of 2014 was marked by a landmark decision of the CNIL. On January 3, 2014, the CNIL pronounced a record fine of €150,000 ($204,000) against Google on the grounds that the terms of use available on its website since March 1, 2012, allegedly did not comply with the French Data Protection Act. Google was also required to publish this sanction on the homepage of its French search engine within eight days of it being pronounced. Google appealed this decision; however, on February 7, 2014, the State Council (“Conseil d’Etat”) rejected Google’s request to suspend the publication order.

Several lessons can be learnt from the CNIL’s decision. First, the CNIL is politically motivated to hit the Internet giants hard, especially those who claim that their activities do not fall within the remit of French law. No, says the CNIL: your activities target French consumers, and thus you must comply with the French Data Protection Act even if you are based outside the EU. This debate has been going on for years and was recently discussed in Brussels within the EU Council of Ministers’ meeting in the context of the proposal for a Data Protection Regulation. As a result, Article 4 of Directive 95/46/EC could soon be amended to allow for a broader application of European data protection laws to data controllers located outside the EU.

Second, despite being the highest sanction ever pronounced by the CNIL, this is hardly a dissuasive financial sanction against a global business with large revenues. Currently, the CNIL cannot pronounce sanctions above €150,000, or €300,000 ($410,000) in case of a second breach within five years of the first sanction, whereas some of its counterparts in other EU countries can pronounce much heavier sanctions; e.g., last December, the Spanish DPA pronounced a €900,000 ($1,230,000) fine against Google. This could soon change, however, in light of an announcement by the French government that it intends to introduce a bill this year on “the protection of digital rights and freedoms,” which could significantly increase the CNIL’s enforcement powers.

Furthermore, it seems that the CNIL’s lobbying efforts within the French Parliament are finally beginning to pay off. A new law on consumer rights came into force on 17 March 2014, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections in addition to the existing on-site inspections. This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automatic data processing systems for as long as necessary to conduct its observations.” This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the activities of major Internet companies. The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.

Finally, the Google case is a good example of the EU DPAs’ recent efforts to conduct coordinated cross-border enforcement actions against multinational organizations. At the beginning of 2013, a working group led by the CNIL was set up in Paris for a simultaneous and coordinated enforcement action against Google in several EU countries. As a result, Google was inspected and sanctioned in multiple jurisdictions, including Spain and the Netherlands. Google is appealing these sanctions.

As the years pass, the CNIL continues to grow and to become better resourced. It is also more experienced and better organized. The CNIL is already very influential within the Article 29 Working Party, as recently illustrated by the Google case, and Isabelle Falque-Pierrotin, the chairwoman of the CNIL, was recently elected chair of the Article 29 Working Party. Thus, companies should pay close attention to the actions of the CNIL as it becomes a more powerful authority in France and within the European Union.

This article was first published in the IAPP’s Privacy Tracker on 27 February 2014 and was updated on 18th March 2014.