Archive for the ‘Uncategorized’ Category

German DPA takes on Facebook again

Posted on July 31st, 2015 by

The DPA of Hamburg has done it again and picked a new fight with the mighty US giant Facebook. This time, the DPA was not amused by Facebook's attempt to enforce its real name policy, and issued an administrative order against Facebook Ireland Ltd.

The order is meant to force Facebook to accept aliased user names, to revoke the suspension of user accounts that had been registered under an alias, to stop Facebook from unilaterally changing alias user names to real user names, and to stop requesting copies of official ID documents. It is based on Sec. 13 (6) German Telemedia Act, which requires service providers like Facebook to offer access to their services anonymously or under an alias, and also a provision of the German Personal ID Act which arguably prohibits requesting copies of official ID documents.

Despite this regulation, Facebook's terms of use oblige users to use their real name in Germany, too. Early this year, Facebook started to enforce this policy more actively and suspended user accounts that were registered under an alias. The company also requested users to submit copies of official ID documents. It also sent messages to users asking them to confirm that “friends” on the network used their real name. In a press statement, Mr Caspar, head of the Hamburg DPA, said: “As already became apparent in numerous other complaints, this case shows in an exemplary way that the network [Facebook] attempts to enforce its so-called real name obligation with all its powers. In doing so, it does not show any respect for national law.”

“This exit has been closed”

Whether Facebook is subject to German law at all has been heavily disputed. While the Higher Administrative Court of the German state Schleswig-Holstein ruled that Facebook Ireland Limited, as a service provider located in an EU member state, benefits from the country-of-origin principle laid down in Directive 95/46/EC, the Regional Court of Berlin came to the opposite conclusion: it held that Facebook Inc., rather than Facebook Ireland Ltd, was the data controller, as the actual decisions about the scope, extent and purpose of the processing of data were made in the US. The court also dismissed the argument that Facebook Ireland acts as a data controller under a data controller-processor agreement with Facebook Inc., ruling that the corporate domination agreement between Facebook Inc. and Facebook Ireland prevails over the stipulations of the data controller-processor agreement. As Facebook has a sales and marketing subsidiary in Hamburg, the Hamburg DPA now believes it has tailwind from the ECJ ruling in the Google Spain case to establish the applicability of German law: “This exit has been closed by the ECJ with its ruling on the Google search engine. Facebook is commercially active in Germany through its establishment in Hamburg. Whoever operates on our playing field must play by our rules.”

While previous activities of German DPAs against Facebook were aimed at legal issues that did not really agitate German users, such as the admissibility of the “like” button, the enforcement of the real name policy upset German users in large numbers, and many users announced they would turn their backs on the network. The issue also received extensive coverage in the national press, most of it strongly critical of Facebook.

5 Practical Steps to help companies comply with the E-Privacy Directive (yes, it’s cookies again!)

Posted on July 13th, 2015 by

This month (July 2015), IAB Europe published new Guidance titled “5 Practical Steps to help companies comply with the E-Privacy Directive”. The 5 sensible steps in the document are aimed at brand advertisers, publishers and advertising businesses. The EU's cookie compliance rules were remodelled as far back as 2009, when a broader set of telecommunications rules updated the e-Privacy Directive. There has been no change since, so this Guidance has not been prompted by any regulatory change or significant shift in the compliance landscape. It does, however, serve as a useful practical reminder to anyone considering or revisiting their compliance strategy.

The context and Article 5.3

The advice in the Guidance centres on that now familiar extract from the e-Privacy Directive, Article 5.3. This, of course, requires you to obtain prior informed consent for the storage of, or access to, information stored on a user's terminal equipment.

The Guidance rightly acknowledges that there are differences both in the national implementations of this rule and in the related regulatory guidance from Member State to Member State. Therein lies the rub, as many are seeking a “one-size-fits-all” approach for Europe. Often criticised, the law requires you to get consent but doesn't actually say how. These 5 steps from the IAB delve into the “how” and may assist you.

The 5 recommended steps in the Guidance

At a high-level the Guidance makes the following practical observations:

  1. Monitor and assess your digital property – know your properties, their technology, and what data they collect. Regularly audit these to understand the data collected and how it is used. Be particularly cautious when using partners who are collecting data on your properties.
  2. Be clear and transparent in how you present information to consumers – use plain and easy-to-understand language and don't mislead. Consider a layered approach and, where appropriate, point to helpful websites to convey messages about how and why your property deploys its technologies (and for what purposes).
  3. Make things prominent – ensure your privacy information is available and distinguishable. There are some short tips around ways you could go about this.
  4. Context is king! – the Guidance suggests you consider ways to achieve consent in a contextual way. Rightly, this step suggests “that the key point is that you must gain consent by giving the user specific information about what they are agreeing to and providing them with a way to indicate their acceptance.” Fieldfisher reminds you that there are a number of mechanisms (express and implied) by which you may achieve this, and the Guidance suggests a few of the available approaches in this step.
  5. Consider joining the EU industry programme to provide greater contextual transparency and control to consumers over customised digital advertising – “why not?” we say, as this is another tactic for staying in touch and demonstrating commitment. This step highlights the benefits of the programme to behavioural advertisers, the “icon” initiative and the transparency mechanisms available through it.
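As a purely illustrative sketch of step 4's contextual-consent point (the Guidance itself is technology-neutral, and every function and storage-key name below is our own invention, not taken from the IAB document), a simple consent gate for non-essential cookies might look like this in JavaScript:

```javascript
// Hypothetical example: block non-essential cookies until consent is recorded.
// Strictly necessary cookies (e.g. session, security) sit outside this gate,
// consistent with the Article 5.3 exemptions; everything else must wait for
// a specific, affirmative user action.

const CONSENT_KEY = "cookie-consent"; // illustrative storage key

function hasConsent() {
  try {
    return localStorage.getItem(CONSENT_KEY) === "granted";
  } catch (e) {
    return false; // storage unavailable: treat as no consent given
  }
}

function recordConsent(granted) {
  // Call this only in response to a clear user action on a prominent,
  // plainly worded banner or settings page (steps 2 and 3 above).
  localStorage.setItem(CONSENT_KEY, granted ? "granted" : "refused");
}

function setAnalyticsCookie(name, value) {
  // Refuse to set the cookie unless consent has been recorded.
  if (!hasConsent()) return false;
  document.cookie =
    `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
  return true;
}
```

A real deployment would also have to gate third-party tags and tracking pixels, which is exactly what the audit in step 1 is meant to surface; first-party cookies set by your own scripts are only part of the picture.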

The so what?

The e-Privacy Directive and the EU cookie compliance issues associated with it have been alive and well for years now. We've frequently updated readers on the enforcement issues, sweep days and other stories where cookie compliance comes to the fore. It's not entirely clear what prompted this “best practice” advice and steps from the IAB, but the short document is practical and insightful, whether you're new to cookie compliance or revisiting your compliance approach.

As other members of the team have recently blogged, the CNIL recently issued a press release stating that, following the online cookie audits it conducted last October (see our previous blog article), it has sent formal enforcement notices (“lettres de mise en demeure”) to approximately 20 companies requesting them to comply with the cookie rules in France. Cookie compliance needs are not going away, nor are they particularly difficult for most online properties. What's more, when looking at your peers, there's no doubt that a level of compliance and transparency is fairly prevalent across EU and EU-facing websites today.

What now?

So how should you deal with cookies? Well, the steps in this Guidance give you a great practical head start. Cookie compliance, and the approach to it, has been market-led since the outset. When asked what “good” looks like, even the regulators took the view that the online industry was better placed than lawyers, regulators and legislative draftsmen to innovate creative and unobtrusive ways to get consent. That's where bodies like IAB Europe have played a central role and, by aligning your own practices with the pack, you are rarely in a bad place in the world of cookie compliance.


Mark Webber, Partner – Digital Regulation and Technology (Silicon Valley)


France’s Ambition for the Future of the Digital Economy

Posted on June 24th, 2015 by

On June 18th, the French National Digital Council (“Conseil National du Numérique” or “CNN”) released a Report entitled “Digital Ambition: a French and European policy for a digital transition” containing 70 proposals for the future of the digital economy in France and Europe. The Report follows a nation-wide consultation of the major stakeholders which has also sparked a debate on various issues relating to the digital economy, such as how to regulate digital platforms and how to boost the competitiveness of French start-up companies.

The Report was officially presented to the public in the presence of Manuel Valls, the French Prime Minister, Emmanuel Macron, Minister for the Economy, the Industry and the Digital Economy, and Axelle Lemaire, State Secretary for Digital Affairs. During the press conference, Manuel Valls announced that his Government has already prepared and will introduce a “Digital Bill” before the French National Assembly in the fall, aimed at regulating the use of the Internet, as well as stimulating innovation and fostering growth in the digital economy. The competent public authorities, such as the French Data Protection Authority (“CNIL”), will be consulted beforehand on the draft Bill.

The 70 proposals in the Report are structured around four main topics, namely: 1) Fairness and freedom in a shared digital environment; 2) Re-defining public action in the digital sphere: openness, innovation, participation; 3) Boosting the French economy: towards an economy of innovation; and 4) Solidarity, equality, emancipation: the stakes of a digital economy.

Below is a selection of some of the key proposals in the field of data protection that can be found in the Report:

  • The right to self-determination of data

The Report recommends creating a fundamental right to self-determination of data, a concept directly inspired from German law, based on a decision of the German Constitutional Court of 1983, which recognized the individual’s right to decide on the communication and use of one’s personal data.

The CNN acknowledges the progress made in the upcoming General Data Protection Regulation (see my previous article on the topic) and calls for the adoption of a broad definition of “personal data”, consent of the individual for secondary uses of his/her data, and a reinforced control of the individual over his data (including over the disclosure of data).

  • The right to the portability of data

The right to portability is viewed as an extension of the right to self-determination, enabling the individual to transfer and re-use his/her data across various services. This could be done, for example, by encouraging the development of PIMS (“Personal Information Management Systems”) that would allow individuals to store their data where they wish and to control how their data is disclosed to third parties.

  • Criteria applicable to the de-listing of data

The Report suggests creating a legal framework (by way of either a European Regulation or a national law) for user requests to de-list their personal data based on a set of criteria that are defined by the national data protection authorities, not search engines. The Report specifically refers here to the guidelines set out by the Article 29 Working Party in its analysis of the Mario Costeja Gonzales case adopted in November 2014 (see our previous article) and considers that these guidelines should apply to search engines.

  • Secondary uses of personal data

The Report considers that individuals are insufficiently informed about the disclosures of their personal data by the initial data controller to third parties (such as business partners). To increase transparency, the Report recommends imposing on the initial data controller an obligation to inform data subjects clearly about the selling or disclosure of their data to third parties (including the names of such parties) and to allow data subjects to opt out of the disclosure of their data to third parties, at any time, regardless of the opt-in or opt-out provided to the initial data controller.

  • Data protection class actions

The Report supports the idea of creating a class action for violations of data protection legislation that would be brought before the first instance courts by consumer protection associations.

  • Fairness of web platforms

The Report underlines an unfair balance between web platforms and their users, due to the lack of transparency regarding the use of Big Data, the difficulties users face when migrating from one platform to another, or the excessive cost of services on some platforms. To remedy this situation, the Report proposes to create a principle of fairness that would require web platforms to conduct their business in good faith and in a transparent manner, for example, with regard to the collection, processing and restitution of personal data. The principle of fairness would apply to web platforms in their relations with consumers and professionals.

  • Information provided to users

The Report encourages the development of new means of communication to make the Terms of Use on websites more readable and understandable to users, for example by working on the design of such documents to make them easier and more accessible to all (e.g., by using “privacy icons”). Also, the Report recommends providing users with the necessary and essential elements of the Terms of Use at the time of collection of their consent.

  • Algorithms used to process Big Data

The Report observes that profiling of individuals can be done by combining different sets of data, without actually verifying an individual’s identity, which can cause unfair or unlawful discriminations. For this reason, the Report proposes to increase transparency by requiring web platforms to provide prior information to users about the algorithms they use for profiling purposes.

The announced “Digital Bill” is expected to contain important new measures on the future governance of the Internet. In particular, Manuel Valls did stress the importance of the right to self-determination and to the portability of data. The challenge for the French Government will be to introduce such measures without contradicting the upcoming Data Protection Regulation that was recently adopted by the EU Council of Ministers. The CNIL is expected to issue an opinion on the draft Digital Bill in the coming weeks.

This article was first published in the IAPP‘s Privacy Tracker.

German monopoly commission advises not to regulate algorithms

Posted on June 5th, 2015 by

This week, the German Monopoly Commission published its extraordinary opinion on digital markets. Particularly interesting: the Commission advised against regulating algorithms – which only at first glance seems to be an answer to a question nobody asked.

The study, which is available in German here, looks at a broad range of digital business models and markets. One thing that immediately sprang to mind is a section about the need to regulate algorithms. The background is that the Commission sees a risk that search engine providers that also offer other services, such as review websites, map services or price comparison tools, may prioritize their own services over third-party offerings.

First, the Monopoly Commission clearly advocates against an unbundling of such businesses, arguing that the impact of an unbundling would be severe and that it would also contravene the general goal of competition regulation to create an incentive for innovation by accepting organic, internal growth.

Second, the Commission also denied the feasibility of “algorithm” regulation, i.e. an agency that would look into an algorithm to determine whether it works “neutrally”. Here, the Commission states that the number of changes a typical search engine provider implements each year would make such reviews an unreasonable effort. Further, given the complexity of those algorithms, the Commission doubts that it would be possible to detect a bias at all.

The last point in particular is interesting. At first glance, it seems to be the answer to a question nobody asked, but we have occasionally seen calls for an “algorithm police” in the recent past. A couple of weeks ago, when I asked Jan Philipp Albrecht, well known as the European Parliament's rapporteur for the EU's General Data Protection Regulation as well as for the EU-US data protection framework agreement, about the issue, he clearly spoke in favour of regulating algorithms. The fact that the Monopoly Commission addressed this topic may thus be more than just a side note, and it seems that this debate has only started.

CNIL unveils its 2015 inspections plan. Are you ready for what’s coming?

Posted on May 26th, 2015 by

In 2014, I warned about the French data protection authority (“CNIL”) being a regulator to watch. One year down the road, CNIL has not failed to deliver. A few weeks ago, CNIL released its Annual Activity Report for 2014 revealing that in the past year it had conducted 421 inspections (including 58 online audits), issued 62 enforcement notices and pronounced 18 sanctions. As the current chair of the Article 29 Working Party, CNIL continues to play an active role on the European and international scene on topics such as the General Data Protection Regulation, the on-going discussions between the US and EU on Safe Harbor and the recent online sweeps organized by GPEN.

What are the CNIL’s top priorities?

The CNIL intends to conduct 550 inspections divided between 350 on-site or off-site inspections and 200 online audits. Specifically, CNIL will prioritize its actions in the following key sectors:

  • Ecommerce: following its guidance on the processing of bank card details, CNIL will now focus on contactless payment cards (i.e., bank cards that have an integrated chip and enable cardholders to make wireless payments via “near field communication” or “NFC” technology). In particular, CNIL will verify whether adequate security measures are designed around the use of such cards and whether the financial institutions that offer these cards inform their customers and enable them to object to using them (e.g., by deactivating the integrated chip or by ordering a traditional card that is not compatible with “NFC” technology). CNIL is also preparing for the next evolution: entirely digitalized payments by smartphone.
  • Employee privacy in the workplace: Employee privacy continues to be high on the CNIL’s agenda due to the rising number of employees who file complaints with the CNIL each year. In particular, CNIL will inspect private and public organizations who have recently conducted surveys on social-psychological risks for employees.
  • mHealth: Following the Article 29 Working Party’s opinion on mobile apps and its letter to the European Commission on the meaning of “health data” in the context of mobile apps and devices (see our previous blog), CNIL will audit interconnected objects and online services in the area of health and well-being to verify (amongst other things) whether users are provided with notice and their consent is obtained.
  • Public sector: With the French Parliament currently debating a new law to broaden the online investigation powers of the French law enforcement and national security agencies, CNIL will continue to monitor the compliance of public sector databases with the Data Protection Act. This time, CNIL will focus on the National Register for Drivers’ Licenses (“Fichier National des Permis de Conduire“) held by the Ministry of Interior, which centralises all the data about registered drivers, including fines and traffic offences.
  • Public Wi-Fi connections: Another growing area receiving particular attention is publicly available Wi-Fi hotspots (such as those available in department stores, train stations or airports), which capture data transmitted by a user’s mobile phone (e.g., type of device, MAC address, location data) that is increasingly being used to track users, send them advertisements or offers, or analyse their behaviour.
  • Binding Corporate Rules: Last but not least, CNIL has announced its intention to begin enforcing against companies with BCR. Since their introduction in 2003, approximately 60 organizations have had their BCR approved, but so far, no enforcement measures have been taken in relation to BCR. However, a few months ago, the lead DPAs across Europe started contacting organizations with a view to verifying and completing the information about their BCR posted on the European Commission’s website, implying that this grace period is over. CNIL could verify, for example, whether a BCR policy is easily accessible on the organization’s website and whether companies have implemented the internal measures required for BCR compliance.

What are the CNIL’s enforcement powers?

The CNIL can carry out four types of enforcement actions, namely:

  • On-site control: the CNIL may access the buildings and premises used to process personal data and inspect the data processing applications and databases;
  • Off-site control: the CNIL may organize a hearing in its offices and require the data controller or its data protection officer to provide explanations;
  • Long distance control: the CNIL may communicate with the data controller by postal mail or email and, for example, may conduct routine surveys; and
  • On-line inspections: CNIL may conduct on-line inspections of personal data that is available on websites or mobile apps.

What sanctions can the CNIL pronounce?

If the CNIL finds that a company has failed to comply with the Data Protection Act, it can either pronounce a warning or issue a formal notice to comply within a given deadline. If the controller fails to comply with the notice served, the CNIL may then impose a fine of up to EUR 150,000 (or, in the event of a second breach within five years, up to EUR 300,000 or, for legal entities, 5% of the company’s gross revenue) or an injunction to cease the processing.

Are you prepared for a CNIL inspection?

In recent years, I have assisted many companies in dealing with CNIL inspections. Too often, companies are caught by surprise when the CNIL comes knocking on their door unannounced because they haven’t put in place any internal process for handling this kind of situation. As with any regulator, dealings with the CNIL require a minimum amount of awareness and preparation.

While a CNIL inspection does not necessarily end with the CNIL pronouncing a fine or sanction against the company, it inevitably has a disruptive effect on the company being investigated because it reveals any flaws that company may have with regard to privacy compliance. Therefore, companies are in a better position if they tackle privacy issues at an early stage, rather than leaving them for later and risking having to fire-fight their way through a CNIL inspection.


By Olivier Proust, Of Counsel (

The Belgian Facebook Recommendation: How the Nomination of a Single EU Data Controller is Under Fire

Posted on May 20th, 2015 by

Last week, the Belgian Privacy Commission published (a first part of) its much anticipated recommendation following its investigation into Facebook’s data processing activities.

For the data protection community, the most interesting part of this recommendation is not the assessment of Facebook’s compliance. The real importance concerns the regulatory interpretation of the EU Data Protection Directive’s applicable law principles, a topic that is of particular importance to all non-EU headquartered companies that process personal data in the EU.

Recap of the applicable law principle

To determine whether EU data protection law applies at all and, if so, which EU Member State’s data protection law(s), article 4 of the Directive 95/46/EC sets out a twofold test:

  • Step 1 – The establishment test: If the data controller processes personal data in the context of the activities of an “establishment” (i.e. a subsidiary or a branch office) on the territory of a Member State, only the data protection laws of that Member State will apply (article 4.1.a). This is the case even if the personal data in question is collected from individuals resident in other EU Member States;
  • Step 2 – The equipment test: If the data controller does not have such an establishment on the territory of a Member State, but it uses “equipment” to process personal data situated in the territory of one or more Member States, the data protection laws of those Member States will apply (article 4.1.c).

At first sight these rules seem quite straightforward. However, over recent years, it has become more and more difficult to apply them to the reality of multinational corporations that have their headquarters outside the EU and that have incorporated a number of subsidiaries or branches in one or more Member States.

The crux of the problem is to determine which entity in a multinational group qualifies as the “data controller” for European data protection compliance purposes. For many years, a lot of US multinationals have taken the approach of incorporating an affiliate in a tax-friendly Member State (such as Ireland, Luxembourg or the Netherlands), indicating that this affiliate qualified as data controller for the purpose of their processing activities in Europe.

As a result of creating this EU “establishment” and then applying the Establishment Test above, their processing activities were subject to the data protection laws of only that particular Member State, and they were subject only to the regulatory scrutiny of that Member State’s regulator.

First attempts by Member States to circumvent this principle

This evolution has been ill-received by many civil rights activists and regulators based in Member States due to concerns that multinational businesses may be exploiting the Establishment Test for forum shopping purposes (On this topic, see my colleague Phil Lee’s recent blog post).

In more recent times, some data protection authorities and national courts have therefore refused to recognize multinationals’ nominated EU data controlling subsidiaries and sought to apply the Equipment Test instead so as to find their national law applicable.

In 2013, two German courts for instance ruled that Apple and Google had to comply with German data protection law, rejecting their argument that German law did not apply. Last year, the High Court of Berlin came to the same conclusion in a case against Facebook and disregarded Facebook’s argument that Facebook Ireland qualified as its EU data controller and therefore, under the Establishment Test, it should only comply with Irish data protection laws.

The Belgian Privacy Commission’s Facebook recommendation

In its recent recommendation, the Privacy Commission has taken a similar approach to justify that Belgian data protection law applies.

Almost half of the recommendation is used to justify why Facebook is subject to Belgian law. The Privacy Commission’s arguments can be summarized as follows.

  • Facebook, Inc. and not Facebook Ireland is the data controller

On the basis of a detailed factual analysis, the Privacy Commission firstly concludes that Facebook Ireland cannot qualify as a data controller because it “does not appear to be able to take independent decisions when it comes to determining the purpose and the resources relating to the processing of the personal data of Belgian citizens”.

In this regard, the Privacy Commission attaches a lot of importance to the fact that the new privacy policy, which kicked off the investigation in the first place, has been rolled out globally, without a specific version issued by Facebook Ireland that was adapted for the EU market. Another element that was relied upon is the fact that the privacy policy did not refer to the term “personal data” but rather to the more generic/US-inspired terms “data” and “personal information” – though quite why these terms should be relevant to an assessment of an entity’s controllership (or lack of it) is far from clear.

For those reasons, the Privacy Commission takes the view that Facebook, Inc., with its registered office in the US, has to be considered the sole data controller.

  • Facebook Belgium qualifies as an establishment in the sense of article 4.1.a of Directive 95/46/EC

Having concluded that Facebook, Inc. qualifies as the data controller, the Privacy Commission then goes on to examine the role of Facebook Belgium.

Facebook Belgium is a subsidiary of Facebook, Inc. whose corporate purpose is reportedly limited to public policy and legislative and regulatory outreach activities; it is not involved in any commercial activity as such.

However, applying the principles of the ECJ’s Costeja “Right to be Forgotten” judgment (C‑131/12 – see also our blog post on this decision), the Privacy Commission concluded that Facebook Belgium is an establishment of Facebook, Inc. because it considered these activities to be “inextricably linked” to Facebook, Inc.’s activities – the first reported instance of the Right to be Forgotten judgment being applied by a local regulator to subject another major US-led multinational to a Member State’s local data protection laws.

  • Alternatively, Facebook Inc. uses equipment on the Belgian territory

The recommendation then goes on to state that even if Facebook Belgium (or any other Facebook affiliate in the EU, for that matter) does not qualify as an establishment in the context of which Facebook, Inc. processes personal data, Facebook, Inc. is still subject to Belgian data protection law by virtue of the Equipment Test, due to its use of cookies and other tracking technologies served on Belgian residents’ devices.

Practical implications for other businesses

Until now, like many multinational businesses, Facebook has consistently maintained that it is only subject to Irish data protection law by virtue of having an Irish data controller. With the Privacy Commission now threatening to initiate legal proceedings, it will be interesting to see how this matter evolves.

In the meantime, a few general conclusions can already be drawn:

First, the criticism around forum shopping is ever increasing. The lack of a harmonised enforcement approach in the EU, and the perception (rightly or wrongly) that certain DPAs have been too lenient has resulted in a situation in which many national data protection authorities are trying to protect their citizens by applying their own national law, regardless of the principles laid down in article 4 of Directive 95/46/EC.

Second, non-EU-based businesses should therefore carefully consider how they want to respond to this risk when approaching their EU data protection compliance. Naturally, any business wants to avoid the legal uncertainty and risk that arise from potentially having to comply with the laws of all 28 Member States.

While it therefore makes sense to create an EU subsidiary to fulfil a data controller role, it is not sufficient to simply “nominate” one on paper. Businesses must put in place the conditions and controls that allow this EU subsidiary to really act as data controller in the field. This implies devolving decision-making autonomy to the EU subsidiary and (if necessary) arm’s-length subcontracting of carefully monitored and controlled data processing activities back to the non-EU parent. Similarly, the EU subsidiary needs to play an active role in designing and implementing the business’s data protection policies to ensure they reflect EU compliance requirements.

Additional measures might include appointing a data protection officer within the EU subsidiary, accountable for ensuring the business’s compliance with EU data protection law. Similarly, training programs run within the EU subsidiary that ensure local staff are aware of their data protection responsibilities, and internal audit programs intended to monitor the EU subsidiary’s compliance with EU data protection requirements (including in respect of any activities it subcontracts back to its parent) will also be valuable steps to take.

In the absence of such factual control by the EU subsidiary, businesses risk being caught in a situation where they must comply with the data protection laws of potentially all Member States in which they have affiliates, customers or even just cookies. And that would bring them back to square one.

This article was first published in the IAPP’s Privacy Tracker.

US and European moves to foster pro-active cybersecurity threat collaboration

Posted on March 12th, 2015 by

In this blog we report a little further on the proposals to share cybersecurity threat information within the United States. We also draw analogies with a similar initiative under the EU Cybersecurity Directive aimed at boosting security protections for critical infrastructure and enhancing information sharing around incidents that may impact that infrastructure within the EU.

Both of these mechanisms reflect an ambition to see greater cybersecurity across the private sector. Whilst the approaches taken vary, both the EU and the US wish to drive similar outcomes. Actors in the market are being asked to “up” their game. Cyber-crimes and cyber-threats are impacting companies financially and operationally and, at times, are having a detrimental impact on individuals and their privacy.

Sharing of cyber-threat information in the US

Last month we reported on Obama’s privacy proposals, which included plans to enhance cybersecurity protection. These plans included requests to increase the budget available for detection and prevention mechanisms, as well as cybersecurity funding for the Pentagon. They also outlined plans for the creation of a single, central cybersecurity agency, modelled on the National Counterterrorism Centre, to combat the threat from cyber attacks.

On February 12th 2015, President Obama signed a new Executive Order to encourage and promote sharing of cybersecurity threat information within the private sector and between the private sector and government.  In a White House statement, the administration emphasised that “[r]apid information sharing is an essential element of effective cybersecurity, because it enables U.S. companies to work together to respond to threats, rather than operating alone”.  The rhetoric is that, by sharing information about “risks”, all actors in the United States will be better protected and better prepared to react.

This Executive Order therefore lays a basis for greater cybersecurity collaboration within the private sector and between the private sector and government.  The Executive Order:

  • Encourages the development of Information Sharing Organizations: information sharing and analysis organizations (ISAOs) are to be developed to serve as focal points for sharing;
  • Proposes a common set of voluntary standards for information sharing organizations: with the Department of Homeland Security being asked to fund the creation of a non-profit organization to develop a common set of voluntary standards for ISAOs;
  • Clarifies the Department of Homeland Security’s authority to enter into agreements with information sharing organizations: the Executive Order also increases collaboration between ISAOs and the federal government by streamlining the mechanism for the National Cybersecurity and Communications Integration Center (NCCIC) to enter into information sharing agreements with ISAOs. It goes on to propose streamlining private sector companies’ ability to access classified cybersecurity threat information.

All in, Obama’s plan is to streamline private sector companies’ ability to access cybersecurity threat information. These plans were generally well received as a step towards collective responsibility and security, though some have voiced concern that there is scant mention of liability protection for businesses that share threat information with an ISAO. Commentators have pointed out that it is this fear of liability which is a major barrier to effective threat sharing.

Past US initiatives around improving cybersecurity infrastructure

This latest Executive Order promoting private sector information sharing came one year after the launch of another US-centric development. In February 2014, the National Institute of Standards and Technology (NIST) released a Framework for Improving Critical Infrastructure Cybersecurity pursuant to another Executive Order of President Obama’s issued back in February 2013.

This Cybersecurity Framework contains a list of recommended practices for those with “critical infrastructures”.   The Cybersecurity Framework’s executive summary explains that “[t]he national and economic security of the United States depends on the reliable functioning of critical infrastructure. Cybersecurity threats exploit the increased complexity and connectivity of critical infrastructure systems, placing the Nation’s security, economy, and public safety and health at risk.”

Obama’s 2013 Executive Order had called for the “development of a voluntary risk-based Cybersecurity Framework”, a set of industry standards and best practices to help organisations manage cybersecurity risks.  The resulting technology-neutral Cybersecurity Framework emerged from interaction between the private sector and government institutions. For now, use of the Cybersecurity Framework is voluntary, and it relies on a variety of existing standards, guidelines, and practices to enable critical infrastructure providers to achieve resilience. “Building from those standards, guidelines, and practices, the [Cybersecurity] Framework provides a common taxonomy and mechanism for organizations to:

  • Describe their current cybersecurity posture;
  • Describe their target state for cybersecurity;
  • Identify and prioritize opportunities for improvement within the context of a continuous and repeatable process;
  • Assess progress toward the target state;
  • Communicate among internal and external stakeholders about cybersecurity risk.”

The Cybersecurity Framework was designed to complement, and not to replace, an organisation’s existing risk management process and cybersecurity program. There is recognition that it cannot be a one-size-fits-all solution and different organisations will have their own unique risks which may require additional considerations.

The Cybersecurity Framework states that it could be used as a model for organisations outside the United States. Yet even in the US there are open questions about how many organisations are actually adopting and following it.

Similarities between US and European cybersecurity proposals

It is natural to draw analogies between these US cybersecurity and information sharing initiatives and the draft EU Cybersecurity Directive, which the team reported on in more detail in a recent blog. Both initiatives intend to drive behavioural change. But, as you might expect, the EU wants to introduce formal rules and consequences, while the US remains focussed on building good cyber-citizens through awareness and information sharing.

The proposed Cybersecurity Directive would impose minimum obligations on “market operators” and “public administrations” to harmonise and strengthen cybersecurity across the European Union. Market operators would include energy suppliers, e-commerce platforms and application stores. The headline provision for businesses and organisations is the mandatory obligation to report security incidents to a national competent authority (“NCA”). The NCA is analogous to the ISAO information sharing body concept being developed in the US.  In contrast to the US Framework, the EU’s own cybersecurity initiatives are now delayed (with mere agreement of the rules unlikely before summer 2015 and implementation not likely until 2018) and somewhat diluted compared to the originally announced plans.

Both the US and EU cybersecurity initiatives aim to ensure that governments and private sector bodies involved in the provision of certain critical infrastructure take appropriate steps to deal with cybersecurity threats. Both encourage these actors to share information about cyber threats. Both facilitate a pro-active approach to cyber-risk. Whilst the US approach is more about self-regulation within defined frameworks, the EU is going further and mandating compliance – that’s a seismic shift.

In the EU, we wait to see the final extent of the “critical infrastructure providers” definition and whether “key internet enablers” will be caught within the rules or whether the more recent and narrower definition will prevail. The interplay with the data breach notification rules in the upcoming General Data Protection Regulation is also of interest.


Undoubtedly cyber-risk can hit a corporate’s bottom line. Keeping up with the pace of change and the multitude of risks can be a real challenge for even the most agile of businesses. Taking adequate steps in this area is a continuous and often fast-moving process. Only time will tell whether the information sharing and interactions that these US and EU proposals are predicated on will be frequent enough and fast enough to make any real difference. Cyber-readiness remains at the fore because even the first to be hit still wants to preserve an adequate line of defence. The end game remains the same: take appropriate technical and organisational measures to secure your networks and data.

Of course cyber-space does not respect or recognise borders. How national states co-operate and share cybersecurity threat information beyond the borders of the EU is a whole other story. What is certain is that as the cyber-threat response steps up, undoubtedly so too will the hackers and cyber-criminals. The EU’s challenge is to foster a uniform approach for more effective cybersecurity across all 28 Member States. The US also wants to improve its ability to identify and respond to cyber incidents. The US and EU understand that economic prosperity and national security depend on a collective responsibility to secure.

Those acting within the EU and beyond will in future have to adjust to operating (and, where required, complying) in an effective way across each of the emerging cybersecurity regimes.

Mark Webber, Partner, Palo Alto


Progress update on the draft EU Cybersecurity Directive

Posted on February 27th, 2015 by

In a blog earlier this year we commented on the status of the European Union (“EU”) Cybersecurity Strategy. Given that the Strategy’s flagship piece of legislation, the draft EU Cybersecurity Directive, was not adopted within the proposed institutional timeline of December 2014, and given the growing concerns held by EU citizens about cybercrime, an update on EU legislative cybersecurity developments seems somewhat overdue.


As more of our lives are lived in a connected, digital world, the need for enhanced cybersecurity is evident. The cost of recent high-profile data breaches in the US involving Sony Pictures, JPMorgan Chase and Home Depot ran into hundreds of millions of dollars. A terrorist attack on critical infrastructure such as telecommunications or power supplies would be devastating. Some EU Member States have taken measures to improve cybersecurity but there is wide variation in the 28 country bloc and little sharing of expertise.

These factors gave rise to the European Commission’s (the “Commission”) publication in February 2013 of a proposed Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (the “proposed Directive”). The proposed Directive would impose minimum obligations on “market operators” and “public administrations” to harmonise and strengthen cybersecurity across the EU. Market operators would include energy suppliers, e-commerce platforms and application stores. The headline provision for business and organisations is the mandatory obligation to report security incidents to a national competent authority (“NCA”).

Where do things stand in the EU institutions on the proposed Directive?

On 13 March 2014 the European Parliament (the “Parliament”) adopted its report on the proposed Directive. It made a number of amendments to the Commission’s original text including:

  • the removal of “public administrations” and “internet enablers” (e.g. e-commerce platforms or application stores) from the scope of key compliance obligations;
  • the exclusion of software developers and hardware manufacturers;
  • the inclusion of a number of parameters to be considered by market operators to determine the significance of incidents and thus whether they must be reported to the NCA;
  • the enabling of Member States to designate more than one NCA;
  • the expansion of the concept of “damage” to include non-intentional force majeure damage;
  • the expansion of the list of critical infrastructure to include, for example, freight auxiliary services; and
  • the reduction of the burden on market operators, including that they would be given the right to be heard or to be anonymised before any public disclosure, and that sanctions would only apply if they intentionally failed to comply or were grossly negligent.

In May-October 2014 the Council of the European Union (the “Council”) debated the proposed Directive at a series of meetings. It was broadly in favour of the Parliament’s amendments but disagreed over some high-level principles. Specifically, in the interests of speed and efficiency, the Council preferred to use existing bodies and arrangements rather than setting up a new cooperation mechanism between Member States.

In keeping with the Council’s general approach to draft EU legislation intended to harmonise practices between Member States, the institution also advocated the adoption of future-proofed flexible principles as opposed to concrete prescriptive requirements. Further, it contended that Member States should retain discretion over what information to share, if any, in the case of an incident, rather than imposing mandatory requirements.

In October-November 2014 the Commission, Parliament and Council commenced trilogue negotiations on an agreed joint text. The institutions were unable to come to an agreement during the negotiations due to the following sticking points:

  1. Scope. Member States are seeking the ability to assess (to agreed criteria) whether specific market operators come within the scope, whereas the Parliament wants all market operators within defined sectors to be captured.
  2. Internet enablers. The Parliament wants all internet enablers apart from internet exchanges to be excluded, whereas some Member States on the Council (France and Germany particularly) want to include cloud providers, social networks and search engines.
  3. There was also disagreement on the extent of strategic and operational cooperation and the criteria for incident notification.

What is the timetable for adoption of the proposed Directive?

There is political desire on behalf of the Commission to see the proposed Directive adopted as soon as possible. The Council has also stated that “the timely adoption of … the Cybersecurity Directive is essential for the completion of the Digital Single Market by 2015”.

Responsibility for enacting the reform now lies with the Latvian Presidency of the Council. On 30 January 2015, Latvian Transport Minister Anrijs Matiss stated that further trilogue negotiations would be held in March 2015, with the aim of adopting the proposed Directive by July 2015.

Once the Directive is adopted, Member States will have 18 months to enact national implementing legislation, so we could expect to see it come into force by early 2017.

How does the proposed Directive interact with other EU data privacy reforms?

In our previous blog we highlighted the difficulties facing market operators of complying with the proposed Directive in view of the potentially conflicting notification requirements in the existing e-Privacy Directive and the proposed General Data Protection Regulation (the “proposed GDPR”).

Although the text of the proposed Directive does anticipate the proposed GDPR, obliging market operators to protect personal data and implement security policies “in line with applicable data protection rules”, there has still been no EU guidance issued on how these overlapping or conflicting notification requirements would operate in practice.

Furthermore, any debate over which market operators fall within the scope of the breach notification requirements of the proposed Directive would seem to become superfluous once the proposed GDPR, with mandatory breach notifications for all data controllers, comes into force.


Rather unsurprisingly, the Commission’s broad reform has been somewhat diluted in Parliament and Council. This is a logical result of Member States seeking to impose their own standards, protect their own industries or harbouring doubts regarding the potential to harmonise practices where cybersecurity/infrastructure measures diverge markedly in sophistication and scope.

Nonetheless, the proposed Directive does still impose serious compliance obligations on market operators in relation to cybersecurity incident handling and notification.

At the risk of sounding somewhat hackneyed, cyber data breaches are no longer a question of “if” but “when” for private and public sector bodies. Indeed, there is increasing awareness that a high level of security in one link is no use if it is not replicated across the chain. Whether the proposed Directive meets its aim of reducing weak links across the EU remains to be seen.

Obama’s privacy proposals – one month on

Posted on February 19th, 2015 by

At the start of the year, the Obama administration placed a heavy emphasis on data protection, privacy and cybersecurity through a series of announcements and speeches on these topics in advance of the State of the Union address. This led to expectations that data protection issues and reform would feature prominently in the address itself. However, the content fell short of expectations, and instead pleas for bipartisan cooperation and a focus on President Obama’s legacy took centre stage.

Despite this, there were a number of drivers towards reform that the White House could not ignore, with cybersecurity at the forefront following November’s high-profile hacking of Sony Pictures. The 45 days which the government gave itself to draw up its promised revised Consumer Rights Bill (which will take the President’s February 2012 Consumer Data Privacy white paper as its blueprint) expires at the end of this month, and the result should prove enlightening. But in the month since the State of the Union address and the preceding announcements on privacy issues, what has actually happened?

Proposals for significant future budgetary funding in the fight against cyber threats: on 2nd February, the Obama administration announced its budget proposal for the 2016 fiscal year, which included a number of proposals for significant levels of funding in relation to cyber security. The overall figure requested was $14 billion, focusing on initiatives looking at detection and prevention mechanisms, as well as providing government-wide testing and incident response training. The Pentagon’s cybersecurity budget accounted for over a third of the overall figure, requesting $5.5 billion after a senior weapons tester told Congress in January that nearly every US weapons programme showed “significant vulnerabilities” to cyber attacks.

A single, central cybersecurity agency: the US government is establishing a new central agency, modelled on the National Counterterrorism Centre, to combat the threat from cyber attacks; the Cyber Threat Intelligence Center. It will begin with a staff of around 50 and a budget of $35 million. The idea has been circulating for a while, and the Sony Pictures hack in November was the final impetus needed to establish the centre. Its announcement came last week (10th February), after the President alluded to it in the State of the Union when he said that the government would integrate intelligence to combat cyber threats “just as we have done to combat terrorism”.

A report by the Government Accountability Office (GAO) was released on 12th February, highlighting a number of high-risk gaps in the way in which the Department of Homeland Security deals with cybersecurity, as well as the protection of personally identifiable information. Whilst there has been a lot of discussion recently regarding the Obama Administration’s desire to improve cybersecurity and combat the threats, the GAO report found that it has “no overarching cybersecurity strategy that outlines performance measurements, specific roles of federal agencies, or accountability requirements”.

The White House Cybersecurity Summit held at Stanford University on 13th February was an opportunity for Obama to follow up on his pre-State of the Union cybersecurity promises, and he used it to highlight the key principles that he believes are at the heart of reducing the threat and frequency of cyber attacks:

  • the public and private sectors have to work together, given the prevalence of the private sector within the digital economy, coupled with the fact that it is the government who holds the most up to date cybersecurity data and threat alerts;
  • the government should focus on their strengths in quickly and efficiently disseminating information on cyber threats, whilst industry need to take responsibility for safeguarding their own networks;
  • speed and flexibility in reaching innovative solutions to combat threats are paramount, and all corners of business and government need to recognise this in order to meet the challenge presented by the technologically sophisticated people who pose these threats; and
  • cybersecurity must not be at the expense of privacy and the civil liberties of the American people, with Obama stating that “when government and industry share information about cyber threats, we’ve got to do so in a way that safeguards your personal information… When people go online, we shouldn’t have to forfeit the basic privacy we’re entitled to as Americans”.

An Executive Order entitled “Promoting private sector cybersecurity information sharing” followed the summit, and was signed by the President on 13th February. At the outset, the Order states its purpose:

“Organizations engaged in the sharing of information related to cybersecurity risks and incidents play an invaluable role in the collective cybersecurity of the United States. The purpose of this order is to encourage the voluntary formation of such organizations, to establish mechanisms to continually improve the capabilities and functions of these organizations, and to better allow these organizations to partner with the Federal Government on a voluntary basis“.

As well as the promotion of Information Sharing and Analysis Organizations (ISAOs) with voluntary data-sharing standards attached, the Order designates the National Cybersecurity and Communications Integration Center (NCCIC) as a critical infrastructure protection programme (giving it power to enter into voluntary agreements with ISAOs) and forces government agencies to coordinate their activities with senior government officials for privacy and civil liberties.

However, concern has been voiced by some in industry that the government should not be taking the lead on these issues, given “how uncertain the government really is about who does what in cyberspace” (Jeffrey Carr, president and CEO of Taia Global). Others remarked that matters portrayed as issues of government espionage and restrictions on free speech, in particular the Snowden revelations, were “a huge setback to the tune of several years” for cybersecurity, given that the “balance between privacy and security ebbs and flows” (Dave DeWalt, CEO of security firm Mandiant).

In conclusion, it is clear that, despite the lack of discussion of the issue in the State of the Union, privacy and cybersecurity are on the Obama Administration’s radar as essential elements of Western and democratic societies. The biggest change is yet to come, in the form of the revised Consumer Rights Bill, but the initiatives and action taken on privacy issues so far this year have played a valuable part in bringing this to the forefront of the American political agenda.

US and UK Regulators position themselves to meet the needs of the IoT market

Posted on January 30th, 2015 by

The Internet of Things (“IoT“) is set to enable large numbers of previously unconnected devices to communicate and share data with one another.

In an earlier posting I examined the potential future regulatory landscape for the IoT market and introduced Ofcom’s (the UK’s communications regulator) 2014 consultation on the Internet of Things. This stakeholder consultation was issued in order to examine the emerging debate around increasing interconnectivity between multiple devices and to guide Ofcom’s regulatory priorities. Since the consultation was issued, the potential privacy issues associated with the IoT have continued to attract the most attention but, as yet, no IoT issues have led to any specific laws or legal change.

In two separate developments in January 2015, the UK and US Internet of Things markets were exposed to more advanced thinking and guidance around the legal challenges of the IoT.

UK IoT developments

Ofcom published its Report: “Promoting investment and innovation in the Internet of Things: Summary of responses and next steps” (27 January 2015) which responded to the views gathered during the consultation which closed in the autumn of 2014. In this report Ofcom has identified several priority areas to focus on in order to support the growth of the IoT. These “next step” Ofcom priorities are summarised across four core areas:

Spectrum availability: where Ofcom concludes that “existing initiatives will help to meet much of the short to medium term spectrum demand for IoT services. These initiatives include making spectrum available in the 870/915MHz bands and liberalising licence conditions for existing mobile bands. We also note that some IoT devices could make use of the spectrum at 2.4 and 5GHz, which is used by a range of services and technologies including Wi-Fi.” Ofcom goes on to recognise that, as IoT grows and the sector develops, there may be a renewed need to release more spectrum in the longer term.

Network security and resilience: where Ofcom holds the view that “as IoT services become an increasingly important part of our daily lives, there will be growing demands both in terms of the resilience of the networks used to transmit IoT data and the approaches used to securely store and process the data collected by IoT devices“. Working with other sector regulators where appropriate, Ofcom plans to continue existing security and resilience investigations and to extend its thoughts to the world of IoT.

Network addressing: where Ofcom, previously fearing numbering scarcity, now recognises that “telephone numbers are unlikely to be required for most IoT services. Instead IoT services will likely either use bespoke addressing systems or the IPv6 standard. Given this we intend to continue to monitor the progress being made by internet service providers (ISPs) in migrating to IPv6 connectivity and the demand for telephone numbers to verify this conclusion“; and

Privacy: In the particularly hot privacy arena there is nothing particularly new within Ofcom’s preliminary conclusions. Ofcom concludes that “a common framework that allows consumers easily and transparently to authorise the conditions under which data collected by their devices is used and shared by others will be critical to future development of the IoT sector”. In a world where the UK’s Data Protection Act already applies, it was inevitable that Ofcom (without a direct regulatory remit over privacy) would offer little further insight in this regard.

It’s not surprising to read in the Report that commentary within the responses highlighted data protection and privacy as potentially the “greatest single barrier to the development of the IoT“. The findings from the consultation foresee potential inhibitors to IoT adoption resulting from these privacy challenges, and Ofcom acknowledges that the activities and guidance of the UK Information Commissioner (ICO) and other regulators will be pertinent to achieving clarity. Ofcom will be co-ordinating further cooperation and discussion with such bodies both nationally and internationally.

A measured approach to an emerging sector

Ofcom appears to be striking the right balance here for the UK. Ofcom suggests that future work with ICO and others could include examining some of the following privacy issues:

  • assessing the extent to which existing data protection regulations fully encompass the IoT;
  • considering a set of principles for the sharing of data within the IoT, looking to data minimisation and restricting the overall time for which any data is stored;
  • forming a better understanding of consumer attitudes to sharing data and considering techniques to provide consumers “with the necessary information to enable them to make an informed decision on whether to share their data“; and
  • in the longer term, exploring the merit of a consumer education campaign exposing the potential benefits of the IoT to consumers.

The perceived need for more clarity around privacy and the IoT

International progress around self-regulation, standards and operational best practice will inevitably be slow. On the international stage, Ofcom suggests it will work with existing research groups (such as the ones hosted by BEREC amongst other EU regulators).

We of course already have insight from Working Party 29 in its September 2014 Opinion on the Internet of Things. The Fieldfisher privacy team expounded the Working Party’s regulatory mind-set in another of our Blogs. The Working Party has warned that the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics.

As with previous WP29 Opinions (think cloud, for example), the regulators in that Opinion have taken a very broad-brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. This is in contrast to the more pragmatic FTC musings explained further below: though it follows a similar approach to protecting privacy, the EU’s position is far more alarmist and potentially restrictive.

Hopefully, as practical and innovative assessments are made of technologies within the IoT, new pragmatic solutions to some of these privacy challenges may emerge. Perhaps we will see the development of standard “labels” for transparency notifications to consumers, industry protocols for data sharing coupled with associated controls, and possibly more recognition from the regulators that swamping consumers with more choices and information can sometimes amount to no choice at all (as citizens start to ignore a myriad of options and simply proceed with their connected lives, ignoring the interference of another pop-up or check-box). Certainly, with increasing device volumes and data uses in the IoT, consumers will continue to value their privacy. But, if this myriad of devices is without effective security, they will soon learn that both privacy and security issues count.

And in other news… US developments

Just as the UK’s regulators are turning their attention to the IoT, the Federal Trade Commission (FTC) published a new report on the IoT in January 2015. Like Ofcom’s foray into the world of the IoT, the FTC’s steps in “Privacy & Security in a Connected World” are exploratory. Even so, there is now more pragmatic and realistic guidance around best practices for making IoT services available in the US than we have today in Europe.

In this report the FTC recommends “a series of concrete steps that businesses can take to enhance and protect consumers’ privacy and security, as Americans start to reap the benefits from a growing world of Internet-connected devices.” As with Ofcom, it recognises that best practice steps need to emerge to ensure the potential of the IoT can be realised. This reads as an active invitation to those playing in the IoT to self-regulate and act as good data citizens. Given the surge in active enforcement by the FTC during 2014, this is worthy of attention for those engaged in the consumer-facing world of the IoT.

As the Federal Trade Commission works for consumers to prevent fraudulent, deceptive, and unfair business practices and to provide information to help spot, stop, and avoid them, the FTC’s approach focusses more on the risks arising from a lack of transparency and excessive data collection than on the practical challenges the US IoT industry may encounter as the IoT and its devices place increasing demands on infrastructure and spectrum.

The report focuses on three core topics: (1) Security, (2) Data Minimisation and (3) Notice and Choice. Of particular note, the FTC makes a number of recommendations for anyone building solutions or deploying devices in the IoT space:

  • build security into devices at the outset, rather than as an afterthought in the design process;
  • train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization;
  • ensure that when outside service providers are hired, that those providers are capable of maintaining reasonable security, and provide reasonable oversight of the providers;
  • when a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk;
  • consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network;
  • monitor connected devices throughout their expected life cycle, and where feasible, provide security patches to cover known risks.

With echoes of privacy by design and data minimisation, recommendations to limit the collection and retention of information, suggestions to impose security obligations on outside contractors, and recommendations on notice and choice, it could transpire that the IoT space will be one where we see fewer differences in the application of US/EU best practice?!

In addition to its report, the FTC also released a new publication designed to provide practical advice about how to build security into products connected to the Internet of Things. This publication, “Careful Connections: Building Security in the Internet of Things”, encourages “a risk-based approach” and suggests businesses active in the IoT “take advantage of best practices developed by security experts, such as using strong encryption and proper authentication”.

Where next?

Both reports indicate a consolidation in regulatory thinking around the much-hyped world of the IoT. Neither proposes concrete laws for the IoT and, if they are to come, such laws are some time off. The FTC even goes as far as saying “IoT-specific legislation at this stage would be premature”. However, it does actively “urge further self-regulatory efforts on IoT, along with enactment of data security and broad-based privacy legislation”. Obama’s new data privacy proposals are presumably seen as a complementary step toward US consumer protection. What is clear is that there are now emerging good practices and a deeper understanding among the regulators of the IoT, its potential and its risks.

On both sides of the Atlantic, the US and UK regulators are operating a “wait and see” policy. In the absence of legislation, as with other potentially privacy-sensitive emerging technologies, we’ve seen self-regulatory programmes emerge within particular sectors or practices to help guide and standardise behaviour around norms. This can protect consumers while introducing an element of certainty around which business is able to innovate.

Mark Webber – Partner, Palo Alto California