Archive for the ‘Legislative reform’ Category

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014 by

In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that the very concept of the Internet of Things ("IoT") is somewhat ill-defined, making the point that there is no settled definition of the IoT and that, even if there were, the definition would only change. What's more, the IoT will mean different things to different people and will refer to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be one any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety, and data privacy/security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other, and so on, there is also a lack of standards and industry-wide co-operation around the IoT.

Frequently based around data use, and with potentially intrusive applications in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around the IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in the IoT are diverse, with a range of regulatory mandates and sometimes with defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by the IoT, and a few are reaching out to the industry as a whole to consult and stimulate discussion. We're more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It's possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand for a scarce resource increases, what kind of spectrum allocation is "fair" and "optimal", and is some machine-to-machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns;
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and the exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an "open" basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because the continued interconnectivity of multiple devices raises potential issues.

In issuing a new consultation, "Promoting investment and innovation in the Internet of Things", Ofcom (the UK's communications regulator) kicked off its own learning exercise to identify potential policy concerns around:
  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which require resilience and security (the corresponding issue of privacy is also recognised);
  • a need for each connected device to have an assigned name or identifier and questioning just how those addresses should be determined and potentially how they would be assigned; and
  • understanding their potential role as the UK’s regulator in an area (connectivity) key to the evolution of IoT.
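To make the identifier question above concrete, here is a purely illustrative Python sketch (the namespace domain and serial number are invented for the example) of two ways a connected device might be assigned an address: randomly, or deterministically from a stable attribute:

```python
import uuid

# Purely illustrative: two common ways a connected device might be
# assigned an identifier, echoing Ofcom's question of how addresses
# could be determined and assigned.

# 1. Random assignment: collision risk is negligible, but the ID
#    carries no information about the device or its operator.
random_id = uuid.uuid4()

# 2. Deterministic assignment: derive the ID from a namespace and a
#    stable attribute (here a hypothetical manufacturer serial number),
#    so the same device always maps to the same identifier.
DEVICE_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "iot.example.com")
serial_number = "SN-000123"  # hypothetical
deterministic_id = uuid.uuid5(DEVICE_NAMESPACE, serial_number)

# Deterministic IDs are reproducible; random ones are not.
assert deterministic_id == uuid.uuid5(DEVICE_NAMESPACE, serial_number)
```

Which approach "wins" matters for regulation: deterministic schemes need someone to govern the namespace, which is exactly the kind of co-ordination role Ofcom is consulting on.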

In a varied and quite penetrable paper, Ofcom's consultation recognises what many will be shouting: its published view "is that industry is best placed to drive the development, standardisation and commercialisation of new technology". However, it goes on to recognise that "given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards."

Europe muses while the Article 29 Working Party wades in with an early warning about privacy

IoT adoption has been on Europe's "Digital Agenda" for some time, and in 2013 it reported back on its own Conclusions of the Internet of Things public consultation. There is also the "Connected Continent" initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with "growth" while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it's impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must "respect the many privacy and security challenges which can be associated with IoT".

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities”, “smart transportations”, as well as M2M (“machine to machine”) developments. Yet, the principles and recommendations their Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU's data privacy principles within some applications of the IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on "profiling" and legal responsibilities extending beyond the data controller to the data processor, each exerting its own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone. With its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the "We Are Watching You Act" currently before Congress and the "Black Box Privacy Protection Act" before the House of Representatives. Each now apparently has a low chance of actually passing, but they would, respectively, regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or 'black boxes', in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC's own words: "[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area." Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around "building trust" and "maximising consumer benefits through consumer control". With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet, whose feeds were not secure), there's no doubt the evolution of the IoT is on the FTC's radar.

FTC Chairwoman Edith Ramirez commented that "The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet".

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; with little to say in the legal sense, perhaps even now I've spoken too soon? What is clear is that we're already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, we have to ask the right questions to build a picture of the technology, the way it communicates and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data or processes, or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today's IoT challenge.

Mark Webber – Partner, Palo Alto, California

UK to introduce emergency data retention measures

Posted on July 15th, 2014 by

The UK Prime Minister David Cameron announced last week that the Government is taking emergency measures to fast track new legislation, the Data Retention and Investigatory Powers Bill, which will force communications service providers (i.e. telecommunications companies and internet service providers, together "CSPs") to store communications data (including call and internet search metadata) for 12 months.

This announcement follows the CJEU’s ruling in April that the Data Retention Directive 2006/24/EC (the “Directive“), which requires companies to store communications data for up to two years, is invalid because it contravenes the right to privacy and data protection and the principle of proportionality under the EU Charter of Fundamental Rights (the “Charter“). The CJEU was particularly concerned about the lack of restrictions on how, why and when data could be used. It called for a measure which was more specific in terms of crimes covered and respective retention periods.

The PM said that the emergency law was necessary to protect existing interception capabilities, and that without it, the Government would be less able to protect the country from paedophiles, terrorists and other serious criminals. Cameron said the new legislation will respond to the CJEU's concerns and provide a clear legal basis for companies to retain such communications data. He also stressed that the new measures would cover the retention of metadata only, such as the time, place and frequency of communications, and would not cover the content of communications.

The emergency Bill is intended as a temporary measure and is to expire in 2016. The Government intends that the legislation will ensure that, in the short term, UK security and law enforcement agencies can continue to function whilst Parliament has time to examine the Regulation of Investigatory Powers Act 2000 (RIPA) and make recommendations on how it could be modernised and improved. Whilst Cameron stressed that the measures did not impose new obligations on CSPs and insisted they would not authorise new intrusions on civil liberties, the Bill faces criticism that it extends the already far-reaching interception rights under RIPA and also that, in light of the CJEU decision, the temporary measure itself contravenes the Charter.

At present, in order to comply with their obligations under the Directive, CSPs already operate significant storage and retrieval systems to retain data from which they can derive no further use or revenue. If the draft Bill is enacted with little further amendment, the UK’s Secretary of State could be issuing new retention notices later this year. Those CSPs subject to retention obligations today will be reading carefully as these arrive. It is not yet clear whether the legislative burden and cost of compliance is likely to spread to additional CSPs not previously notified under the current retention regime. From the Bill’s drafting it appears this could conceivably happen. It is equally clear that there is no mechanism to recoup these costs other than from their general business operations.

Britain is the first EU country to seek to rewrite its laws to continue data retention since the CJEU decision, and the Government said it was in close contact with other European states on the issue.

By comparison, in Germany, when the Directive was initially implemented, the German courts took the view that the German implementation far exceeded the limits set by the German constitutional right of informational self-determination, in that it did not sufficiently narrow the scope of use of the retained data, e.g. by limiting it to the prosecution or prevention of certain severe criminal acts. In Germany's new Telecommunication Act, enacted in 2012, the provisions pertaining to data retention were deleted and not replaced by the compulsory principles in the Directive. Treaty violation proceedings against Germany by the EU Commission ensued; however, those proceedings have now lost their grounds entirely as a result of the CJEU ruling.

Meanwhile the Constitutional Court of Austria last month declared that Austrian data retention laws were unconstitutional. Austria is the first EU Member State to annul its data retention laws in response to the CJEU decision. Austrian companies are now only obliged to retain data for specific purposes provided by law, such as billing or fault recovery.

Whether other EU countries will now follow the UK’s lead, potentially introducing a patchwork of data retention standards for CSPs throughout the EU, remains to be seen. If this happens, then equally uncertain is the conflict this will create between, on the one hand, nationally-driven data retention standards and, on the other, EU fundamental rights of privacy and data protection.


The Long Arm of the Law

Posted on May 9th, 2014 by

There's a fair amount of indignation swilling around EU privacy regulators, politicians and policy makers following last year's revelations about the NSA's access to data on EU citizens. Hence the enthusiasm of some parties for the idea of building a European internet seemingly beyond the reach of any non-EU actors (a.k.a. the US Government). So when a US district judge recently rejected Microsoft's challenge to a US warrant requiring it to disclose data held on a server in Ireland, it appeared to be an example of the overreaching influence of US law ignoring EU data privacy rules. However, a reading of the published court document setting out Judge Francis' decision does not obviously support this dichotomy. In fact, EU data protection and privacy law principles do not appear to have been discussed or taken into account as part of the decision.

What did the decision deal with?

Instead, the main focus of the decision was the extraterritorial reach of the search warrant issued against Microsoft under the US Stored Communications Act (SCA). The SCA governs the obligations of internet service providers to disclose information to, amongst others, the US Government. Microsoft argued that a US federal court can only issue warrants for the search and seizure of property within the territorial limits of the US. It followed, Microsoft argued, that information associated with a specific web-based email account stored at Microsoft premises in Ireland was beyond the reach of US law enforcement authorities.

Well, Judge Francis was having none of it. He assessed the structure of the SCA, its legislative history and the practical consequences of Microsoft’s view and dismissed Microsoft’s argument. He argued that it ‘has long been the law that a subpoena [which is what he argued the warrant was] requires the recipient to produce information in its possession, custody, or control regardless of the location of that information’. Furthermore, the legal authorities in Judge Francis’ opinion supported the notion that ‘an entity [that is] subject to jurisdiction in the United States, like Microsoft, may be required to obtain evidence from abroad in connection with a criminal investigation‘.

Although this District Court decision has gone against Microsoft, it is clear that Microsoft is in it for the long haul. Public pronouncements by Microsoft have indicated that it sees this decision as just one step in the process of challenging (and seeking to correct) the US Government’s view on their right to access data stored electronically outside the US.

What are the implications of the decision?

This decision seems to confirm the status quo for the moment as it relates to internet service providers. In other words, US ISPs with EU subsidiaries could reasonably take the view that they are required to comply with warrants and subpoenas from US law enforcement agencies relating to data held in the EU. A US ISP subsidiary with an EU parent should also think very carefully before challenging a requirement under US law to provide access to data held in the EU. Judge Francis did not clearly spell out that the reach of the law here only applies to US parent ISPs. Therefore it would seem that a US ISP subsidiary would need to be able to argue that the information held in the EU that was the subject of a warrant under the SCA was not in its possession, custody or control in order to deny access.

For cloud computing services more generally, the decision has not changed the general outlook. But given this reminder of the reach of US law, cloud providers with a US presence should be thinking about how to structure services for their EU customers. For instance, offering encryption solutions where the EU customer holds the encryption key should require US law enforcement authorities to approach the EU customer. Or using a corporate structure where a US cloud company can argue that it does not have possession, custody, or control over information held by its EU sister company would also make the strict enforcement of a warrant against a US company more difficult.
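To illustrate the first structure mentioned above, here is a deliberately simplified Python sketch (a toy XOR stream cipher for illustration only, not production cryptography, and all names are invented) of why customer-held keys shift the point of access: the provider only ever stores ciphertext it cannot read, so a warrant served on the provider yields nothing intelligible without the key holder:

```python
import hashlib
import secrets

# Toy sketch only (NOT production cryptography): illustrates the
# architectural point that if the EU customer generates and holds the
# key, the ciphertext the US cloud provider stores is unintelligible
# to the provider, so a request for usable data must go to the customer.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

customer_key = secrets.token_bytes(32)           # held only by the EU customer
record = b"subscriber email metadata"
stored_in_cloud = encrypt(customer_key, record)  # all the provider ever holds

assert stored_in_cloud != record                      # unintelligible to the provider
assert decrypt(customer_key, stored_in_cloud) == record
```

In practice a real deployment would use a vetted library and authenticated encryption; the sketch only shows where the legal leverage sits, namely with whoever controls `customer_key`.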

In any event, if Microsoft continues to pursue its challenge through the US courts, as it indicates it will, then it is possible that a higher court will take a more nuanced view, balancing perhaps US security concerns with the constraints of extraterritoriality and privacy. At some point in all of this, the US courts may well consider Microsoft's obligations under EU data protection law in more detail. Whilst there is no definitive prohibition under current EU data protection law preventing Microsoft, as with any other cloud provider, from complying with a US law enforcement request for access to personal data held in the EU, this is one of the critical issues being discussed as part of the reforms to the EU data protection regulatory framework following the Snowden revelations.

Microsoft evidently sees this as a fundamental issue of customer trust in their services. Just as Microsoft, Google, Facebook and others have argued in recent months that they want to be able to tell users when the US Government seeks access to that user’s information, so this move by Microsoft to challenge the US Government’s right to access data held overseas is part of a similar stand against Government powers. Whether or not Microsoft will be successful in its campaign remains to be seen but a cloud provider will doubtless watch this debate with interest given the repercussions it could have for defending itself against similar requests from the US Government.



The ECJ finds the Data Retention Directive disproportionate

Posted on April 11th, 2014 by

The Data Retention Directive has always been controversial. Born as it was after the tragedies of the 2004 Madrid and 2005 London bombings, it has faced considerable criticism concerning its scope and lengthy debate over whether it is a measured response to the perceived threat.  It is therefore no surprise that over the years a number of constitutional courts in EU Member States have struck down the implementing legislation in their local law as unconstitutional (e.g. Romania and Germany).  But now the ECJ, having considered references from Irish and Austrian courts, has ruled that the Directive is invalid since it is disproportionate in scope and incompatible with the rights to privacy and data protection under the EU Charter of Fundamental Rights.

What did the ECJ object to?

The ECJ’s analysis focused on the extent of the Directive’s interference with the fundamental rights under Article 7 (right to privacy) and Article 8 (right to data protection) of the Charter. Any limitation of fundamental rights must be provided for by law, be proportionate, necessary and genuinely meet objectives of general interest. The ECJ considered that the Directive’s interference was ‘wide-ranging and…particularly serious’. Yet the ECJ conceded that the interference did not extend to obtaining knowledge of the content of communications and that its material objective – the fight against serious crime – was an objective of general interest. Consequently the key issue was whether the measures under the Directive were proportionate and necessary to fulfil the objective.

For the ECJ, the requirements under the Directive do not fulfil the strictly necessary test. In particular, the ECJ emphasised the ubiquitous nature of the retention – all data, all means, all subscribers and registered users. The requirements affect individuals indiscriminately without exception. Furthermore, there are no objective criteria determining the limits of national authorities to access and use the data. All in all the interference is not limited to what is strictly necessary and consequently the interference is disproportionate.

Of particular importance given the on-going EU-US debate about Safe Harbor and US authorities’ access to EU data, is that the ECJ was also worried that the Directive did not require the retained data to be held within the EU. This suggests that the ECJ expects global companies to devise locally based EU data retention systems regardless of the cost or inconvenience.

What are the implications of the ECJ judgment?

This is a hugely significant decision coming as it does after the revelations prompted by Edward Snowden about the access by western law enforcement agencies to masses of data concerning individuals’ use of electronic resources. Although the Advocate General in his opinion last year suggested that an invalidity ruling on the Directive be suspended to allow the EU time to amend the legislation, the ECJ has not adopted this approach. Therefore, to all intents and purposes, the Directive is no longer EU law.

This ECJ judgment effectively overrules any implementing legislation, such as the UK's Data Retention Regulations. This does not mean that UK ISPs and telcos won't continue to collect and retain communications data for billing and other legitimate business purposes, as permitted under the UK's DPA and PEC Regs. But they no longer have to do so in compliance with the UK Data Retention Regulations. Indeed, there could be a risk that continuing to hold data in compliance with the retention periods under the Regulations is actually a breach of the data protection principle not to retain personal data for longer than is necessary.

What does this mean for telcos/ISPs?

It has been reported that the UK Government has already responded to the ECJ decision by saying that it is imperative that companies continue to retain data. Clearly the UK and other EU Governments would become very nervous if companies suddenly started deleting copious amounts of data, given the impact this could have on intelligence gathering to detect and prevent serious crime. And in any event, in spite of what has happened at the ECJ, telcos and ISPs are still required to comply with law enforcement disclosure requests concerning the communications data they retain.

Significantly, the ECJ did not rule that this kind of data collection and retention is never warranted. One of the ECJ's main criticisms was that the Directive did not include clear and precise rules governing the scope and application of its measures, and did not include minimum safeguards. This suggests that the Directive could be redrafted (and relaunched) in a form that includes these rules and safeguards when requiring companies to retain communications data. Of course, this is likely to take some time. In the meantime, UK companies could consider reverting to the retention periods set out in the voluntary code introduced under the Anti-terrorism, Crime and Security Act 2001.

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014 by

The Article 29 Working Party ("WP 29") has recently issued an Opinion on Personal Data Breach Notification (the "Opinion"). The Opinion focuses on interpreting the criteria under which individuals should be notified about breaches that affect their personal data.

Before we analyse the takeaways from the Opinion, let's take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the "e-Privacy Directive"), and applies to providers of publicly available electronic communications services. In some EU member states (for example, Germany), this requirement has been extended to controllers in other sectors or to all controllers. Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the "Regulation") (see our blog regarding the Regulation here), which sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals without undue delay of breaches that are likely to adversely affect their personal data or privacy, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (though not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures to render the data unintelligible to any person who is not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing seven practical scenarios of breaches that would meet the 'adverse effect' test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach, and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying the breach to individuals altogether.

From the Opinion, it is worth highlighting:

The test. The 'adverse effect' test is interpreted broadly to include 'secondary effects'. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, since not all 'potential' consequences are 'likely' to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible. Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exemption from notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key), hashing (with a strong key), back-ups, physical and logical access controls, and regular monitoring of vulnerabilities.
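By way of illustration only (the identifier is invented and the parameters are examples, not a recommendation), a salted, iterated hash of the kind the Opinion contemplates, rendering a stored identifier unintelligible while remaining verifiable, can be sketched in a few lines of Python:

```python
import hashlib
import secrets

# Illustrative sketch of one measure the Opinion flags as rendering
# data unintelligible: salted, iterated hashing of an identifier, so a
# leaked datastore does not directly expose it. The identifier and the
# iteration count are example values only.

def hash_identifier(identifier: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with many iterations slows brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", identifier.encode(), salt, 200_000)

salt = secrets.token_bytes(16)   # unique per record
stored = hash_identifier("alice@example.com", salt)

# The stored value is not the identifier, but it can still be verified.
assert stored != b"alice@example.com"
assert hash_identifier("alice@example.com", salt) == stored
assert hash_identifier("bob@example.com", salt) != stored
```

Measures like this go to both limbs the Opinion cares about: reducing the chance that a breach adversely affects individuals, and potentially supporting the exemption from notifying them.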

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall under three categories: availability, integrity and confidentiality breaches. The application of this model may help controllers analyse a breach too.
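As a purely hypothetical illustration of how that three-way model might be recorded within a controller's breach-handling procedure (the category names follow the Opinion; the triage example is invented):

```python
from enum import Flag, auto

# Minimal sketch of the Opinion's three-way breach model. The category
# names come from the Opinion; the incident and triage logic below are
# a hypothetical illustration, not a prescribed methodology.

class Breach(Flag):
    AVAILABILITY = auto()     # data destroyed or made inaccessible
    INTEGRITY = auto()        # data altered without authorisation
    CONFIDENTIALITY = auto()  # data disclosed to unauthorised parties

# A single incident can fall into more than one category: a stolen,
# unencrypted laptop both loses the data and discloses it.
stolen_laptop = Breach.AVAILABILITY | Breach.CONFIDENTIALITY

assert Breach.CONFIDENTIALITY in stolen_laptop
assert Breach.INTEGRITY not in stolen_laptop
```

Classifying an incident this way at intake helps a controller reason systematically about which adverse effects are in play and, in turn, whether individuals must be notified.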

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement applying to all controllers (regardless of the sector they are in) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved. Under it, controllers would be subject to strict notification requirements both to data protection regulators and to individuals. This Opinion provides some insight into how European regulators may interpret those requirements under the General Data Protection Regulation.

Therefore, controllers would be well-advised to prepare for what is coming their way (see previous blog here). The focus should be on applying security measures (both to prevent a breach and to limit the adverse effects on individuals once one has occurred) and on putting procedures in place to manage breaches effectively. Start today: burying your head in the sand is no longer an option.

European Parliament votes in favour of data protection reform

Posted on March 21st, 2014 by

On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee’s report published in October 2013 (for more information on this report – see this previous article), this month’s vote means that the Council of the European Union (the “Council”) can now formally conduct its reading of the text based on the Parliament’s amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text in what is called the “ordinary legislative procedure” – a process in which the decisions of the Parliament and the Council have the same weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament’s position; more probably, it will want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in the past months to discuss the data protection reform. Although there seems to be broad support among Member States for the proposal, they have not yet reached agreement on some of the key provisions, such as the “one-stop shop” rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the pace at which the proposed regulation develops in the coming months largely depends on when that position is finalised. Once the Council has reached a position, there is also the possibility that the proposals could be amended further. If this happens, the Parliament may need to vote again before the process is complete.

Furthermore, with elections to the EU Parliament coming up this May, the whole adoption process will be put on hold until a new Parliament is in place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.

Progress update on the EU Cybersecurity Strategy

Posted on March 13th, 2014 by


On 28 February 2014, the European Commission hosted a “High Level Conference on the EU Cybersecurity Strategy” in Brussels.  The conference provided an opportunity for EU policy-makers, industry representatives and other interested parties to assess the progress of the EU Cybersecurity Strategy, which was adopted by the European Commission on 7 February 2013.

Keynote speech by EU Digital Agenda Commissioner Neelie Kroes

The implementation of the EU Cybersecurity Strategy comes at a time when public and private actors face escalating cyber threats.  During her keynote speech at the conference, Commissioner Kroes reiterated the dangers of weak cybersecurity measures by asserting that “without security, there is no privacy”.

She further highlighted the reputational and financial impact of cyber threats, commenting that over 75% of small businesses and 93% of large businesses have suffered a cyber breach, according to a recent study.  However, Commissioner Kroes also emphasised that effective EU cybersecurity practices could constitute a commercial advantage for the 28 Member State bloc in an increasingly interconnected global marketplace.

Status of the draft EU Cybersecurity Directive

The EU Cybersecurity Strategy’s flagship legal instrument is draft Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (“draft EU Cybersecurity Directive”).  In a nutshell, the draft EU Cybersecurity Directive seeks to impose certain mandatory obligations on “public administrations” and “market operators” with the aim of harmonising and strengthening cybersecurity across the EU. In particular, it includes an obligation to report security incidents to the competent national regulator.

The consensus at the conference was that further EU institutional reflection is required on some aspects of the draft EU Cybersecurity Directive, such as (1) the scope of obligations, i.e., which entities are included as “market operators”; (2) how Member State cooperation would work in practice; (3) the role of the National Competent Authorities (“NCAs”); and (4) the criminal dimension and the requirement for NCAs to notify law enforcement authorities.  The scope of obligations is a particularly contentious issue as EU decision-makers consider whether to include certain entities, such as software manufacturers, hardware manufacturers and internet platforms, within the scope of the Directive.

The next few months will be a crucial period for the legislative passage of the draft law.  Indeed, the European Parliament voted on 13 March 2014 in the Plenary session to adopt its draft Report on the Directive.  The Council will now spend March – May 2014 working on the basis of the Parliament’s report to achieve a Council “common approach”.  The dossier will then likely be revisited after the European Parliament elections in May 2014.  The expected timeline for adoption remains “December 2014”, but various decision-making scenarios are possible depending on the outcome of the elections.

Once the Directive is adopted, Member States will have 18 months to transpose it into national law (meaning an approximate deadline of mid-2016).  As a minimum harmonisation Directive, it allows Member States to go beyond its provisions in their national transpositions, for instance by reinstating internet platforms within the definition of a “market operator”.

One of the challenges for organizations will be achieving compliance with possibly conflicting notification requirements between the draft EU Cybersecurity Directive (i.e., obligation to report security incidents to the competent national regulator), the existing ePrivacy Directive (i.e., obligation for telecom operators to notify personal data breaches to the regulator and to individuals affected) and, if adopted, the EU Data Protection Regulation (i.e., obligation for all data controllers to notify personal data security breaches to the regulator and to individuals affected).  So far, EU legislators have not provided any guidance as to how these legal requirements would coexist in practice.

Industry’s perspective on the EU Cybersecurity Strategy

During the conference, representatives from organisations such as Belgacom and SWIFT highlighted the real and persistent threat facing companies. Calls were made for international coordination on cybersecurity standards and laws to avoid conflicting regulatory requirements.  Interventions also echoed the earlier sentiments of Commissioner Kroes in that cybersecurity offers significant growth opportunities for EU industry. 

Businesses spoke of the need to “become paranoid” about the cyber threat and implement “security by design” to protect data.  Finally, trust, collaboration and cooperation between Member States, public and private actors were viewed as essential to ensure EU cyber resilience.

New law on real-time geolocation creates concerns over right to privacy in France

Posted on February 26th, 2014 by

On February 24, 2014, the French Parliament adopted a new law regulating the real-time geolocation of individuals for law enforcement purposes (the “Law”). The Law was adopted in response to two decisions of the Court of Cassation of October 22nd, 2013, which ruled that the use of real-time geolocation devices in the context of judicial proceedings constitutes an invasion of privacy that must be authorized by a judge on the basis of an existing law. A similar ruling was handed down by the European Court of Human Rights on September 2nd, 2010 (ECtHR, Uzun v. Germany).

Essentially, this Law authorizes law enforcement authorities to use technical means to locate an individual in real-time, on the entire French territory, without that individual’s knowledge, or to locate a vehicle or any other object, without the owner or possessor’s consent. These methods may be applied in the course of a preliminary inquiry or a criminal investigation into:

– felonies and misdemeanours against individuals that are punishable by at least 3 years’ imprisonment, or aiding/concealing a criminal, or a convict’s escape;

– crimes and felonies (other than those mentioned above) that are punishable by at least 5 years’ imprisonment;

– the cause of a death or disappearance;

– finding an individual on the run against whom a search warrant has been issued; and

– investigating and establishing a customs felony punishable by at least 5 years’ imprisonment.

The use of real-time geolocation may only be conducted by a police officer for a maximum period of 15 days (in the case of a preliminary inquiry) or 4 months (in the case of an investigation) and must be authorized respectively by the public prosecutor conducting the inquiry or the judge authorizing the investigation.

However, there are serious concerns within the legal profession that this Law constitutes an invasion of privacy. According to the European Court of Human Rights, a public prosecutor is not an independent judicial authority, and therefore, the use of real-time geolocation in the context of a preliminary inquiry would constitute a violation of the individual’s civil liberties and freedoms. The use of real-time geolocation is considered to be a serious breach of privacy, which as a result, should only be used in exceptional circumstances for serious crimes and felonies, and should remain at all times within the control and authority of a judge. As a consequence, the French Minister of Justice, Christiane Taubira, has asked the Presidents of the French National Assembly and Senate to bring this Law before the Constitutional Council before it comes into force to decide whether it respects the Constitution.

The French Data Protection Authority (“CNIL”) stated in a press release that real-time geolocation of individuals is comparable to the interception of electronic communications, and therefore, identical safeguards to those that apply to the interception of electronic communications (in particular, the conditions for intercepting electronic communications in the course of criminal proceedings) should equally apply to geolocation data. The CNIL also considers that the installation of a geolocation device in an individual’s home, without that individual’s knowledge, should be supervised and authorized by a judge at all times, regardless of whether that operation takes place during the day or at night.

In a previous press release, the CNIL raised similar concerns over the adoption of the Law of December 18th, 2013, on military programming, which authorizes government authorities to request real-time access to any information or documents (including location data) stored by telecoms and data hosting providers on electronic communications networks for purposes of national security.

EU Parliament’s LIBE Committee Issues Report on State Surveillance

Posted on February 19th, 2014 by

Last week, the European Parliament’s Civil Liberties Committee (“LIBE”) issued a report into the US National Security Agency (“NSA”) and EU member states’ surveillance of EU citizens (the “Report”). The Report was passed by 33 votes to 7, with 17 abstentions, and questions whether data protection rules should be included in the trade negotiations with the US. The release of the Report comes at a crucial time for both Europe and the US, but what does this announcement really tell us about the future of international data flows in the eyes of the EU and about the EU’s relationship with the US?

Background to the Report

The Report follows the US Federal Trade Commission (“FTC“)’s recent response to criticisms from the European Commission and European Parliament following the NSA scandal and subsequent concerns regarding Safe Harbor (for more information on the FTC – see this previous article). The Report calls into question recent revelations by whistleblowers and journalists about the extent of mass surveillance activities by governments. In addition, the LIBE Committee argues that the extent of the blanket data collection, highlighted by the NSA allegations, goes far beyond what would be reasonably expected to counter terrorism and other major security threats. The Report also criticises the international arrangements between the EU and the US, and states that these mechanisms “have failed to provide for the necessary checks and balances and for democratic accountability“.

LIBE Committee’s Recommendations

In order to address the deficiencies highlighted in the Report and to restore trust between the EU and the US, the LIBE Committee proposes several recommendations with a view to preserving the right to privacy and the integrity of EU citizens’ data, including:

  • US authorities and EU Member States should prohibit blanket mass surveillance activities and bulk processing of personal data;
  • The Safe Harbor framework should be suspended, and all transfers currently operating under this mechanism should stop immediately;
  • The status of New Zealand and Canada as ‘adequate’ jurisdictions for the purposes of data transfers should be reassessed;
  • The adoption of the draft EU Data Protection Regulation should be accelerated;
  • The establishment of the European Cloud Partnership must be fast-tracked;
  • A framework for the protection of whistle-blowers must be established;
  • An autonomous EU IT capability must be developed by September 2014, including ENISA minimum security and privacy standards for IT networks;
  • The EU Commission must present a European strategy for democratic governance of the Internet by January 2015; and
  • EU Member States should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age‘.

Restoring trust

The LIBE Committee’s recommendations were widely criticised by politicians for being disproportionate and unrealistic. EU politicians also commented that the Report sets unachievable deadlines and appears to be a step backwards in the debate and, more importantly, in achieving a solution. One of the most controversial proposals in the Report consists of effectively ‘shutting off‘ all data transfers to the US. This could have the counterproductive effect of isolating Europe and would not serve the purpose of achieving an international free flow of data in a truly digital society as is anticipated by the EU data protection reform.

Consequences for Safe Harbor?

The Report serves to communicate further public criticism about the NSA’s alleged intelligence overreaching.  Whatever the LIBE Committee’s position, it is highly unlikely that as a result Safe Harbor will be suspended or repealed – far too many US-led businesses are dependent upon it for their data flows from the EU, meaning a suspension of Safe Harbor would have a very serious impact on transatlantic trade. Nevertheless, as a consequence of these latest criticisms, it is now more likely than ever that the EU/US Safe Harbor framework will undergo some changes in the near future.  As to what, precisely, these will be, only time will tell – though more active FTC enforcement of Safe Harbor breaches now seems inevitable.


What a 21st Century Privacy Law Could – and Should – Achieve

Posted on January 22nd, 2014 by

It’s no secret that the EU’s proposed General Data Protection Regulation (GDPR) hangs in the balance. Some have even declared it dead (see here), though, to paraphrase Mark Twain, those reports are somewhat exaggerated. Nevertheless, 2014 will prove a pivotal year for privacy in the European Union: Either we’ll see some variant of the proposed regulation adopted in one form or another, or we’ll be heading back to the drawing board.

So much has already been said and written about what will happen if the GDPR is not adopted by May that it does not need repeating here. Though, for my part, I’d be quite happy to return to the drawing board: Better, I think, to start again and design a good law than to adopt legislation for the sake of it—no matter how ill-suited it is to modern-day data processing standards.

With that in mind, I thought I’d reflect on what I think a fighting-fit 21st century data protection law ought to achieve, keeping in mind the ultimate aims of protecting citizens’ rights, promoting technological innovation and fostering economic growth:

1. A modern data privacy law should be simple, objectives-focused and achievable.  The GDPR is, quite simply, a lawyer’s playground, a lengthy document of breathtaking complexity that places far more emphasis on process than on outcome. It cannot possibly hope to be understood by the very stakeholders it aims to protect: European citizens. A modern data privacy law should be understandable by all—and especially by the very stakeholders whose interests it is intended to protect. Further, a modern privacy law needs to focus on outcomes. Ultimately, its success will be judged by whether it arrived at its destination (did it keep data private and secure?) not the journey by which it got there (how much paper did it create?).

2. A modern privacy law should recognize and reflect the role of the middleman.  Whether you’re a user of mobile services, the consumer Internet or cloud-based services, access to your data will in some way be controlled by an intermediary third party: the iOS, Android or Windows mobile platforms whose APIs control access to your device data, the web browser that blocks or accepts third-party tracking technologies by default or the cloud platform that provides the environment for remotely hosted data processing services. Yet these “middlemen” —for want of a better term—simply aren’t adequately reflected in either current or proposed EU privacy law, which instead prefers an outmoded binary world of “controllers” and “processors.” This means that, to date, we have largely relied on the goodwill of platform providers—Are they controllers? Are they processors?—to build controls and default settings into their platforms that prevent unwarranted access to our data by the applications we use. A modern data privacy law would recognize and formalize the important role played by these middlemen, requiring them to step up to the challenge of protecting our data.

3. A modern data privacy law would categorize sensitive data by reference to the data we REALLY care about.  Europe’s definition of sensitive—or “special”—personal data has long been a mystery to me. Do we really still expect information about an individual’s trade union membership or political beliefs to be categorized as sensitive when their bank account details and data about their children are not treated as sensitive in Europe—unlike the U.S.? A modern data privacy law would impose a less rigid concept of sensitive personal data, one that takes a greater account of context and treats as sensitive the information that people really care about—and not the information they don’t.

4. A modern privacy law would encourage anonymization and pseudonymization.  Sure, we all know that true anonymization is virtually impossible, that if you have a large enough dataset of anonymized data and compare it with data from this source and that source, eventually you might be able to actually identify someone. But is that really a good enough reason to expect organizations to treat anonymized and pseudonymized data as though they are still “personal” data, with all the regulatory consequences that entails? From a policy perspective, this just disincentivises anonymization and pseudonymization—why bother, if it doesn’t reduce regulatory burden? That’s plainly the wrong result. A modern data privacy law would recognize that not all data is created equal, and that appropriately anonymized and pseudonymized data deserve lesser restrictions as to their use—or reuse—and disclosure. Without this, we cannot hope to realize the full benefits of Big Data and the societal advances it promises to deliver.
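To make the distinction concrete (this is a sketch under my own assumptions, not a technique endorsed by any regulator, and all names in it are invented), pseudonymisation typically replaces direct identifiers with tokens while keeping the re-identification table separate. The dataset alone no longer says who is who, yet the controller can still link records for the same person:

```python
import secrets

def pseudonymise_records(records, id_field):
    """Replace a direct identifier with a random token, keeping the
    token-to-identifier mapping in a separate lookup table."""
    mapping = {}  # real identifier -> token (held apart from the dataset)
    out = []
    for rec in records:
        rec = dict(rec)  # don't mutate the caller's data
        real_id = rec.pop(id_field)
        if real_id not in mapping:
            mapping[real_id] = secrets.token_hex(8)
        rec["pseudonym"] = mapping[real_id]
        out.append(rec)
    return out, {token: real for real, token in mapping.items()}

patients = [
    {"name": "Alice", "diagnosis": "flu"},
    {"name": "Bob", "diagnosis": "asthma"},
    {"name": "Alice", "diagnosis": "migraine"},
]
pseudo, lookup = pseudonymise_records(patients, "name")

# The pseudonymised dataset contains no names, but the same person keeps
# the same token, so longitudinal analysis remains possible.
assert all("name" not in rec for rec in pseudo)
assert pseudo[0]["pseudonym"] == pseudo[2]["pseudonym"]
```

A dataset processed this way is clearly less risky than the raw one, which is precisely why, on the argument above, it deserves lighter regulatory treatment than fully identifying data.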

5. A modern privacy law would not impose unrealistic restrictions on global movements of data.  The Internet has happened; get over it. Data will forever more move internationally, left, right, up and down across borders, and no amount of regulation and red tape is going to stop that. Nor will Europe’s bizarre obsession with model clauses. And when it comes to surveillance, law enforcement will always do what law enforcement will do: Whilst reining in excessive government surveillance is undoubtedly crucial, that ultimately is an issue to be resolved at a political level, not at the business regulatory level. A modern data privacy law should concern itself not with where data is processed but why it is processed and how it is protected. So long as data is kept secure and processed in accordance with the controller’s legal obligations and in keeping with its data subjects’ reasonable expectations, the controller should be free to process that data wherever in the world it likes. Maintaining unrealistic restrictions on international data exports at best achieves little—organizations will do it anyway using check-box solutions like model clauses—and, at worst, will adversely impact critical technology developments like the cloud.

6. A modern privacy law would recognize that consent is NOT the best way to protect people’s privacy.  I’ve argued this before, but consent does not deliver the level of protection that many think it does. Instead, it drives lazy, check-box compliance models—“he/she ticked the box, so now I can do whatever I like with their data.” A modern law would acknowledge that, while consent will always be an important weapon in the privacy arsenal, it should not be the weapon of choice. There must always be other ways of legitimizing data processing and, perhaps, other than in the context of sensitive personal information, these should be prioritized over consent. At the same time, if consent is to play a lesser role in legitimizing processing at the outset, then the rights given to individuals to object to processing of their data once it has begun must be bolstered—without this, you place too much responsibility in the hands of controllers to decide when and why to process data with no ability for individuals to restrain unwanted intrusions into their privacy. There’s a delicate balance to be struck, but a modern data privacy law would not shy away from finding this balance. Indeed, given the emergence of the Internet of Things, finding this balance is now more important than ever.

There’s so much more that could be said, and the above proposals represent just a handful of suggestions that any country looking to adopt new privacy laws—or reform existing ones—would be well-advised to consider. You can form your own views as to whether the EU’s proposed GDPR—or indeed any privacy law anywhere in the world—achieves these recommendations. If they don’t now, then they really should; otherwise, we’ll just be applying 20th-century thinking to a 21st-century world.

This post was first published on the IAPP’s Privacy Perspectives blog, available at