Archive for the ‘Privacy by design’ Category

New surveillance challenges prompt new CCTV Code

Posted on November 3rd, 2014



It was back in 2000, during the heady days of the original ‘dot com boom’ when Mission Impossible II, Gladiator and What Women Want were riding high at the box office, that the ICO released its first CCTV Code of Practice. However, as the ICO points out, a lot has changed since then: ‘surveillance systems’ are no longer just a camera on top of a pole recording images of a town centre onto videotape. Despite a further update in 2008, the ICO has rightly felt it necessary to revisit the data protection considerations for organisations in light of intelligent CCTV systems, Automatic Number Plate Recognition (ANPR), body worn video (BWV) and drones (referred to as ‘Unmanned Aerial Systems’ (UAS)). For this reason, the new Code is to be welcomed, as it addresses the myriad considerations and challenges presented by these technologies.

The new CCTV Code

The Code recognises the intrusiveness of such technologies and emphasises the need for proportionality and, particularly, ‘necessity’ when an organisation is deciding whether to implement a surveillance system. The Code therefore asserts the need for tools such as Privacy Impact Assessments (PIAs) and Privacy by Design solutions, both at the initial stages of a project and throughout its lifecycle. Indeed, in the case of more intrusive recording systems such as BWV, which offer significant mobility advantages for the recording of images and sound, the Code emphasises that in addition to being ‘necessary’ and ‘proportionate’ in response to an issue, there must also exist ‘a pressing social need’ for the system. The bottom line is that the ICO considers these systems so intrusive that they should not be implemented just because doing so is possible or affordable, or because they have public support.

Interestingly, when stressing the need for such tools, the Code also reminds developers and vendors of surveillance systems that, whilst they may not be ‘data controllers’ under the Data Protection Act, they should nevertheless be aware of PIAs and Privacy by Design and build them into their products when taking them to market. This also extends to purchasers of systems, who are reminded that just because a system is available for purchase or comes with a large storage facility, this does not mean that it is necessarily privacy-compliant. Users must therefore tailor the system accordingly, including by updating the standard ‘factory’ settings for the recording and storage of information.
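By way of illustration only, tailoring might mean overriding permissive defaults with settings that reflect a documented purpose. This is a purely hypothetical sketch; every field name below is invented, not drawn from the Code or any real product:

```python
# Hypothetical settings for a surveillance system; all field names invented.
FACTORY_DEFAULTS = {
    "retention_days": 365,        # keep footage for a year by default
    "audio_recording": True,      # capture sound as well as images
    "continuous_capture": True,   # record around the clock
}

# Settings tailored to a documented, specific purpose, keeping only what
# is necessary and proportionate for that purpose.
TAILORED_SETTINGS = {
    "retention_days": 31,         # long enough to spot and export incidents
    "audio_recording": False,     # audio rarely justified for this purpose
    "continuous_capture": False,  # e.g. record on motion detection only
}
```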

The nature of these systems means that the volume and intrusiveness of the data collected is now potentially much greater. CCTV cameras, for example, are no longer a passive technology that only records and retains images; they are now proactive and can be used to identify people of interest and keep detailed records of people’s activities – as with ANPR cameras. Additionally, camera feeds may be shared between different public authorities via a common control room operated by a third party in order to cut running costs. Not only does this raise considerations around existing data protection concepts, such as the need to identify ‘controller and processor’ relationships and to apportion responsibilities via appropriate contractual agreements, but it also presents further issues around ‘Big Data’ and individual profiling. This requires drafting appropriate data-sharing agreements and data retention policies, and implementing data security practices. The ICO also points out that the principles of the Data Protection Act and the corresponding recommendations in the Code should be followed throughout the lifecycle of the system, including system upgrades.

And of course, more ‘traditional’ data protection concepts, such as providing effective ‘notice’, being ready to fulfil Subject Access Requests and dealing with FOI requests, continue to apply. In particular, the Code highlights the need for appropriate staff training and documented processes, and considers the exemptions applicable to SARs. Finally, the Code sets out the regulator’s expectation that appropriate signs and audio recordings should be used to inform individuals when they are entering an area where surveillance may be taking place. This will require creative thinking by organisations as they grapple with these issues; for example, in the case of drones the guidance recommends that the operator may wear ‘highly visible clothing’ to identify themselves as such, in order to satisfy the fair processing requirements under the DPA.

Comment

Clearly the development of new surveillance technologies is creating privacy challenges. The ICO Code, and the current sensitivity of the public in relation to surveillance, emphasise the importance of ensuring that organisations’ use of surveillance systems does not cross any red lines. At the time of writing, UK news articles have focused on figures showing that the number of organisations given permits to fly drones over Britain, including police forces and film makers, has increased by 80% since the beginning of the year. This ICO guidance is therefore very timely, as calls for increased regulation of surveillance systems are growing. Whilst it is impossible to anticipate all the challenges presented by these nascent technologies, the Code provides useful guidance on the key data protection issues, along with practical examples and checklists that organisations should use when deciding whether to deploy these new technologies.

The technologies may be more complex, but the ICO stresses that it still expects full compliance with the DPA. Organisations that are contemplating new uses of surveillance should therefore carry out a Privacy Impact Assessment to identify the privacy issues and decide what steps to take to ensure that their use of a new system is compliant and that any privacy risks are effectively managed.

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was somewhat more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of personal data in the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is only one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain at the required standard) leaves companies processing IoT data nowhere to go, and is likely to unnecessarily stifle innovation and slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful data protection restrictions on this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it is the loss of trust that a company will respect our privacy and do its best to protect our information that results in serious enforcement action, pushes companies out of business or forces the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand which IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals (see the sketch after this list).
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, and who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as not to unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
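As a minimal sketch of one input to point 4 (assuming Python, and inventing the key-management details), a keyed hash can replace direct device identifiers with stable pseudonyms before analysis. Whether the output is still personal data depends on who holds the key and what other data could re-identify the individual; this is an aid to the analysis, not a guarantee of anonymisation:

```python
# Pseudonymise device identifiers with a keyed hash (HMAC-SHA256).
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-separately"  # hypothetical key

def pseudonymise(device_id: str) -> str:
    """Derive a stable pseudonym from an identifier such as a MAC address."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("00:1A:2B:3C:4D:5E"))
```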


What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear (in the WP29’s view) that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed in a mobile app), but also to the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.


It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), with the granularity extending to enabling users to determine the time and frequency of data collection (a sketch of such a consent record follows this list);
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device;
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.
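By way of illustration only, the granular consent described in point 2.2 might be recorded in a structure like the following. This is a sketch; the field names are invented, not taken from the Opinion:

```python
# Illustrative per-device, per-data-category consent record with a
# user-chosen collection frequency and a revocation flag.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    user_id: str
    device_id: str
    data_category: str           # e.g. "heart_rate", "location"
    granted_at: datetime
    collection_interval_s: int   # user-chosen frequency of collection
    withdrawn: bool = False

    def withdraw(self) -> None:
        self.withdrawn = True
        # A real system would also notify other actors in the supply
        # chain at this point (see recommendation 2.3 in the list above).
```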


Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014



In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that even the concept of the Internet of Things (“IoT”) is somewhat ill-defined, making the point that there is no settled definition of IoT and that, even if there were, the definition would only change. What’s more, the IoT will mean different things to different people and will speak to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy / security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other and so on, there is also a lack of standards and industry-wide co-operation around the IoT.

Because IoT offerings are frequently based around data use, and have potentially intrusive applications in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around the IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in the IoT are diverse, with a range of regulatory mandates and, in some cases, defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by the IoT, and a few are reaching out to the industry as a whole to consult and stimulate discussion. We are more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It is possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand increases, with a scarce resource what kind of spectrum allocation is “fair” and “optimal”, and is some machine to machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and exchange and identification protocols may themselves be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because continued interconnectivity between multiple devices raises potential issues.

In issuing a new Consultation, “Promoting investment and innovation in the Internet of Things“, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:
  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which require resilience and security (the corresponding issue of privacy is also recognised);
  • a need for each connected device to have an assigned name or identifier and questioning just how those addresses should be determined and potentially how they would be assigned; and
  • understanding its potential role as the UK’s regulator in an area (connectivity) key to the evolution of the IoT.

In a varied and quite penetrable paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology“. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while Working Party 29 wades in early with a warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time, and in 2013 the Commission reported back on the Conclusions of its Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it is impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that that development must “respect the many privacy and security challenges which can be associated with IoT“.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion does not even consider B2B applications and more global issues like “smart cities” and “smart transportations”, or M2M (“machine to machine”) developments. Yet the principles and recommendations of the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It is one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within some applications of the IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and with legal responsibilities falling on parties beyond the data controller, exerting its own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone: with its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently before Congress and the “Black Box Privacy Protection Act” before the House of Representatives. Each now apparently has a low chance of actually passing, but they would, respectively, regulate monitoring by video surveillance devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet, whose feeds were not secure), there is no doubt that the evolution of the IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet“.

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; in the legal sense there was little to say, and perhaps even now I have spoken too soon. What is clear is that we are already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to the IoT, we have to ask the right questions to build a picture of the technology and the way it communicates, and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com

Part 1: Cutting through the Internet of Things hyperbole

Posted on October 15th, 2014



I’ve held back writing anything about the Internet of Things (or “IoT“) because there are so many developments playing out in the market. Not to mention so much “noise”.

Then something happened: “It’s Official: The Internet Of Things Takes Over Big Data As The Most Hyped Technology” read a Forbes headline. “Big data”, last week’s darling, is condemned to the “Trough of Disillusionment” while Gartner moves IoT to the very top of its 2014 emerging technologies Hype Cycle. Something had to be said.

The key point for me is that the IoT is “emerging”. What’s more, few are entirely sure where they are on this uncharted journey of adoption. The IoT has reached an inflexion point: a point where businesses and others realise that identifying with the Internet of Things may drive sales, shareholder value or merely kudos. We all want a piece of this pie.

In Part 1 of this two part exploration of IoT, I explore what the Internet of Things actually is.

IoT – what is it?

Applying Gartner’s parlance, one thing is clear: when any tech theme hits the “Peak of Expectations”, the “Trough of Disillusionment” will follow because, as with any emerging technology, it will be some time until there is pervasive adoption of the IoT. In fact, for the IoT, Gartner says widespread adoption could be 5 to 10 years away. However, this inflexion point is typically the moment when the tech industry’s big guns ride into town and, just as with cloud (remember some folk trying to trade mark the word?!), this will only drive further development and adoption. But also further hype.

The world of machine to machine (“M2M“) communications involved the connection of different devices which previously did not have the ability to communicate. For many, the Internet of Things is something more: as Ofcom (the UK’s communications regulator) set out in its UK consultation, IoT is a broader term, “describing the interconnection of multiple M2M applications, often enabling the exchange of data across multiple industry sectors“.

“The Internet of Things will be the world’s most massive device market and save companies billions of dollars” shouted Business Week in October 2014, happy to maintain the hype but also acknowledging in its opening paragraph that the IoT is “beginning to grow significantly“. No question, the IoT is set to enable large numbers of previously unconnected devices to connect and then communicate, sharing data with one another. Today we are mainly contemplating rather than experiencing this future.

But what actually is it?

The emergence of the IoT is driving some great debate. When assessing what the IoT is and what it means for business models, the law and commerce generally, arguably there are more questions than answers. In an exploratory piece in ZDNet, Richie Etwaru called out a few of these unanswered questions and prompted some useful debate and feedback. The top three questions raised by Richie were:

  • How will things be identified? – believing we have to get to a point where there are standards for things to be sensed and connected;
  • What will the word trust mean to “things” in IoT? – making the point we need to redefine trust in edge computing; and
  • How will connectivity work? – Is there something like IoTML (The Internet of Things Markup Language) to enable trust and facilitate this communication?

None of these questions is new, but his piece reinforces that we don’t quite know what the IoT is and how some of its technical questions will be addressed. It is likely that standardisation, or industry practice and adoption around certain protocols, will answer some of these questions in due course. As a matter of public policy we may see law makers intervene to shape some of these standards or drive particular kinds of adoption. There will be multiple answers to the “what is IoT?” question for some time, and I suspect different flavours and business models will come to the fore. Remember when every cloud seminar spent the first 15 minutes defining cloud models and reiterating extrapolations for the future size of the cloud market? Brace yourselves!

I’ve been making the same points about “cloud” for the past 5 years – like cloud, the IoT is a fungible concept. So, as with cloud, don’t assume the IoT has a definitive meaning. As with cloud, don’t expect there to be any specific Internet of Things law (yet?). As Part 2 of this piece will discuss, law makers have spotted there is something new which may need regulatory intervention to cultivate it for the good of all, but they have also realised that there is something which may grow with negative consequences – something that may need to be brought into check. Privacy concerns in particular have raised their head early, and we have seen early EU guidance in an opinion from the Article 29 Working Party, but there is still no specific IoT law. How can there be when there is still little definition?

Realities of a converged world

For some time we’ve been excited about the convergence of people, business and things. Gartner reminds us that “[t]he Internet of Things and the concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside already-digital entities“. In other words: a land of opportunity, but an ill-defined “blur” of technology and of what is real and what is merely conceptual within our digital age.

Of course the IoT world is also a world bumping up against connectivity, the cloud and mobility. And of course there are instances of the IoT out there today. Or are there? As with anything that is emerging, the terminology and definition of the Internet of Things are emerging too. Yes, there is a pervasiveness of devices; yes, some of these devices connect and communicate; and yes, devices that were not necessarily designed to interact are communicating. But are these examples of the Internet of Things? If we break these models down into their constituent parts for applied legal thought, does it necessarily matter?

Philosophical, but for a reason

My point? As with any complex technological evolution, as lawyers we cannot apply laws, negotiate contracts or assess risk or the consequences for privacy without a proper understanding of the complex ecosystem we’re applying these concepts to. Privacy consequences cannot be assessed in isolation and without considering how the devices, technology and data actually interact. Be aware that the IoT badge means nothing legally and probably conveys little factual information around “how” something works. It’s important to ask questions. Important not to assume.

In Part 2 of this piece I will discuss some early signs of how the law may be preparing to deal with all these emerging trends. Of course, the answer is that it probably already does, and it probably has the flexibility to deal with many elements of the IoT yet to emerge.

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014



The Article 29 Working Party (“WP 29”) has recently issued an Opinion on Personal Data Breach Notification (the “Opinion”). The Opinion focuses on the interpretation of the criteria under which individuals should be notified about breaches that affect their personal data.

Before we analyse the take aways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the “e-Privacy Directive”), and applies to providers of publicly available electronic communications services. In some EU member states (for example, in Germany), this requirement has been extended to controllers in other sectors or to all controllers. Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation”) (see our blog regarding the Regulation here), which sets out the technical implementing measures concerning the circumstances, format and procedure for the data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals, without undue delay, of breaches that are likely to adversely affect their personal data or privacy, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (though not regulators) if they have demonstrated, to the satisfaction of the data protection regulator, that they have implemented appropriate technological protection measures which render the data unintelligible to any person who is not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing 7 practical scenarios of breaches that would meet the ‘adverse effect’ test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach, and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying individuals of the breach altogether.

From the Opinion, it is worth highlighting:

The test. The ‘adverse effect’ test is interpreted broadly to include ‘secondary effects’. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, as not all ‘potential’ consequences are ‘likely’ to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible. Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exemption from notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key); hashing (with a strong key); back-ups; physical and logical access controls; and regular monitoring of vulnerabilities.
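As a minimal sketch of the ‘render data unintelligible’ control (assuming Python and the widely used cryptography package, and assuming the key is stored apart from the data, which is what makes the exemption logic hold):

```python
# Symmetric encryption sketch: a breach of the stored token alone should
# not expose intelligible data if the key was kept separately and was not
# itself compromised.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep in a key management system, apart from the data
f = Fernet(key)

token = f.encrypt(b"subscriber: Jane Doe, account 12345")  # what sits at rest
# Without 'key', an attacker who obtains 'token' sees only ciphertext.
assert f.decrypt(token) == b"subscriber: Jane Doe, account 12345"
```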

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall under three categories, namely availability, integrity and confidentiality breaches. Applying this model may help controllers analyse a breach too.

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement applying to all controllers (regardless of the sector they are in) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved. Under it, controllers would be subject to strict notification requirements both to data protection regulators and to individuals. This Opinion provides some insight into how the European regulators may interpret these requirements under the General Data Protection Regulation.

Therefore, controllers would be well advised to prepare for what is coming their way (see previous blog here). Focus should be on the application of security measures (in order to prevent a breach and to limit the adverse effects on individuals once a breach has occurred) and on putting procedures in place to effectively manage breaches. Start today: burying your head in the sand is just no longer an option.

Information Pollution and the Internet of Things

Posted on September 8th, 2013



Kevin Ashton, the man credited with coining the term “The Internet of Things”, once said: “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

This couldn’t be more true. The range of potential applications for the Internet of Things, from consumer electronics to energy efficiency and from supply chain management to traffic safety, is breathtaking. Today, there are 6 billion or so connected devices on the planet. By 2020, some estimate that figure will be in the range of 30 to 50 billion. Applying some very basic maths, that’s between 4 and 7 internet-connected “things” per person.
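For what it’s worth, here are the sums, assuming a world population of roughly 7.5 billion by 2020:

```python
# Back-of-the-envelope check of the devices-per-person figure.
population = 7.5e9  # assumed 2020 world population
for devices in (30e9, 50e9):
    print(f"{devices / population:.1f} connected things per person")
# prints 4.0 and 6.7 -- i.e. between 4 and 7 per person
```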

All this, of course, means vast levels of automated data generation, processing and sharing. Forget Big Data: we’re talking mind-blowingly Huge Data. That presents numerous challenges to traditional notions of privacy, and issues of applicability of law, transparency, choice and security have been (and will continue to be) debated at length.

One area that deserves particular attention is how we deal with data access in an everything-connected world. There’s a general notion in privacy that individuals should have a right to access their information – indeed, this right is hard-coded into EU law. But when so much information is collected – and across so many devices – how can we provide individuals with meaningful access to information in a way that is not totally overwhelming?

Consider a world where your car, your thermostat, your DVR, your phone, your security system, your portable health device, and your fridge are all trying to communicate information to you on a 24 x 7 x 365 basis: “This road’s busy, take that one instead”, “Why not lower your temperature by two degrees”, “That program you recorded is ready to watch”, “You forgot to take your medication today” and so on.

The problem will be one of information pollution: there will be just too much information available. How do you stop individuals feeling completely overwhelmed by this? The truth is that no matter how much we, as a privacy community, try to preserve rights for individuals to access as much data as possible, most will never explore their data beyond a very cursory, superficial level. We simply don’t have the energy or time.

So how do we deal with this challenge? The answer is to abstract away from the detail of the data and make readily available to individuals only the information they want to see, when they want to see it. Very few people want a level of detail typically of interest only to IT forensics experts in complex fraud cases – like what IP addresses they used to access a service or the version number of the software on their device. They want, instead, to have access to information that holds meaning for them, presented in a real, tangible and easy to digest way. For want of a better descriptor, the information needs to be presented in a way that is “accessible”.

This means information innovation will be the next big thing: maybe we’ll see innovators create consumer-facing dashboards that collect, sift and simplify vast amounts of information across their many connected devices, perhaps using behavioural, geolocation and spatial profiling techniques to tell consumers the information that matters to them at that point in time.
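To illustrate the idea (a toy sketch only; the message fields, scores and weights are entirely invented), such a dashboard might rank each device’s messages by how well they match the user’s current context and surface only the top few:

```python
# Rank device messages by urgency weighted by contextual relevance, then
# show only the highest-scoring ones to avoid overwhelming the user.
from typing import NamedTuple

class Message(NamedTuple):
    source: str
    text: str
    urgency: float        # 0..1, set by the device
    context_match: float  # 0..1, e.g. from geolocation/time-of-day profiling

def surface(messages: list[Message], limit: int = 3) -> list[Message]:
    ranked = sorted(messages, key=lambda m: m.urgency * m.context_match,
                    reverse=True)
    return ranked[:limit]

inbox = [
    Message("car", "This road's busy, take that one instead", 0.9, 0.8),
    Message("fridge", "Milk expires tomorrow", 0.3, 0.2),
    Message("health", "You forgot to take your medication today", 0.8, 0.9),
]
for m in surface(inbox, limit=2):
    print(f"{m.source}: {m.text}")
```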

And if this all sounds a little too far-fetched, then check out services like Google Now and TripIt, to name just a couple. Services are already emerging to address information pollution and we only have a mere 6 billion devices so far. Imagine what will happen with the next 30 billion or so!

The Internet and the Great Data Deletion Debate

Posted on August 15th, 2013



Can your data, once uploaded publicly onto the Web, ever realistically be forgotten?  This was the debate I was having with a friend from the IAPP last night.  Much has been said about the EU’s proposals for a ‘right to be forgotten’ but, rather than arguing points of law, we were simply debating whether it is even possible to purge all copies of an individual’s data from the Web.

The answer, I think, is both yes and no: yes, it’s technically possible, and no, it’s very unlikely ever to happen.  Here’s why:

1. To purge all copies of an individual’s data from the Web, you’d need either (a) to know where all copies of those data exist on the Web, or (b) to build into the data some kind of ‘self-destruct’ mechanism so that it knows to purge itself after a set period of time.

2.  Solution (a) creates as many privacy issues as it solves.  You’d need either to create some kind of massive database tracking where all copies of data go on the Web, or each copy of the data would need, somehow, to be ‘linked’ directly or indirectly to all other copies.  Even assuming it was technically feasible, it would have a chilling effect on freedom of speech – consider how likely a whistleblower would be to post content knowing that every copy of that content could be traced back to its original source.  In fact, how would anyone feel about posting content to the Internet knowing that every single subsequent copy could easily be traced back to their original post and, ultimately, back to them?

3.  That leaves solution (b).  It is wholly possible to create files with built-in self-destruct mechanisms, but they would no longer be pure ‘data’ files.  Instead, they would be executable files – i.e. files that can be run as software on the systems on which they’re hosted.  But allowing executable data files to be imported and run on Web-connected IT systems creates huge security exposure – the potential for exploitation by viruses and malicious software would be enormous.  The other possibility would be for the data file to contain a separate data field instructing the system on which it is hosted when to delete it – much like a cookie has an expiry date.  That would be fine for proprietary data formats on closed IT systems, but is unlikely to catch on across existing, well-established and standardised data formats like .jpgs, .mpgs etc. across the global Web.  So the prospects for solution (b) catching on also appear slim.
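To illustrate the cookie-style variant, here is a sketch assuming JSON metadata and a host system that voluntarily honours the field; nothing in it enforces deletion on non-cooperating systems, which is exactly the adoption problem just described:

```python
# A data file carrying its own expiry date, much like a cookie. Deletion
# still depends entirely on each host choosing to check the field.
import json
from datetime import datetime, timezone

record = json.loads("""
{
  "payload": "placeholder for encoded image bytes",
  "expires": "2015-08-15T00:00:00+00:00"
}
""")

def is_expired(rec: dict) -> bool:
    return datetime.now(timezone.utc) >= datetime.fromisoformat(rec["expires"])

if is_expired(record):
    record = None  # a cooperating host would purge the file here
```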

What are the consequences of this?  If we can’t purge copies of individuals’ data spread across the Internet, where does that leave us?  Likely the only realistic solution is to control the propagation of the data at source in the first place.  Achieving that is a combination of:

(a)  Awareness and education – informing individuals through privacy statements and contextual notices how their data may be shared, and educating them not to upload content they (or others) wouldn’t want to share;

(b)  Product design – utilising privacy impact assessments and privacy by design methodologies to assess product / service intrusiveness at the outset, and then designing systems that don’t allow illegitimate data propagation; and

(c)  Regulation and sanctions – we need proportionate regulation backed by appropriate sanctions to incentivise realistic protections and discourage illegitimate data trading.  

No one doubts that privacy on the Internet is a challenge, and nowhere does it become more challenging than with the speedy and uncontrolled copying of data.   But let’s not focus on how we stop data once it’s ‘out there’ – however hard we try, that’s likely to remain an unrealistic goal.  Let’s focus instead on source-based controls – this is achievable and, ultimately, will best protect individuals and their data.

ICO’s draft code on Privacy Impact Assessments

Posted on August 8th, 2013



This week the Information Commissioner’s Office (‘ICO’) announced a consultation on its draft Conducting Privacy Impact Assessments Code of Practice (the ‘draft code’). The draft code and the consultation document are available at http://www.ico.org.uk/about_us/consultations/our_consultations  and the deadline for responding is 5 November 2013.

When it comes into force, the new code of practice will set out ICO’s expectations on the conduct of Privacy Impact Assessments (‘PIAs’) and will replace ICO’s current PIA Handbook. So why is the draft code important and how does it differ from the PIA Handbook?

  • PIAs are a valuable risk management instrument that can function as an early warning system while, at the same time, promoting better privacy and substantive accountability. Although there is at present no statutory requirement to carry out PIAs, ICO expects them.
  • For instance, in the context of carrying out audits, ICO has criticised controllers who had not rolled out a framework for carrying out PIAs. More importantly, the absence or presence of a risk assessment is a determinative factor in ICO’s decision on whether or not to take enforcement action. When ICO talks about the absence or presence of a risk assessment, it means the conduct of some form of PIA.
  • Impact assessments are likely to soon become a mandatory statutory requirement across the EU, as the current version of the draft EU Data Protection Regulation requires ‘Data Protection Impact Assessments’. Note, however, that the DPIAs mandated by article 33 of the Draft Regulation have a narrower scope than PIAs.  The former focus on ‘data protection risks’ as opposed to ‘privacy risks’, which is a broader concept that in addition to data protection encompasses broader notions of privacy such as privacy of personal behaviour or privacy of personal communications.
  • The fact that ICO’s guidance on PIAs will now take the form of a statutory Code of Practice (as opposed to a ‘Handbook’) means that it will have increased evidentiary significance in legal proceedings before courts and tribunals on questions relevant to the conduct of PIAs.

The PIA Handbook is generally too cumbersome and convoluted. The aim of the draft code is to simplify the current guidance and promote practical PIAs that are less time consuming and complex, and as flexible as possible in order to be adapted to an organisation’s existing project and risk management processes.  However, on an initial review of the draft code I am not convinced that it achieves the optimum results in this regard.  Consider for example the following expectations set out in the draft code which did not appear in the PIA Handbook:

  • In addition to internal stakeholders, organisations should work with partner organisations and with the public. In other words, ICO encourages controllers to test their PIA analysis with the individuals who will be affected by the project that is being assessed.
  • Conducting and publicising the PIA will help build trust with the individuals using the organisation’s services. In other words, ICO expects that PIAs will be published in certain circumstances.
  • PIAs should incorporate 7 distinct steps and the draft code provides templates for questionnaires and reports, as well as guidance on how to integrate the PIA with project and risk management processes.

Overall, although the draft code is certainly an improvement compared to the PIA Handbook, it remains cumbersome and prescriptive.  It also places a lot of emphasis on documentation, recording decisions and record keeping.  In addition, the guidance and some of the templates include privacy jargon that is unlikely to be understood by staff who are not privacy experts, such as project managers or work-stream leads who are most likely to be asked to populate the PIA documentation in practice.

Many organisations are likely to want a simpler, more streamlined and more efficient PIA process with fewer steps, simpler tools / documents and clearer guidance, which incorporates legal requirements and ICO’s essential expectations without unduly delaying the launch of new processing operations. Such organisations are also likely to want to make their voice heard in the context of ICO’s consultation on the draft code.

Use of technology in the workplace: what are the risks?

Posted on July 29th, 2013



Technology is omnipresent in the workplace. In recent years, the use of technological tools and equipment by companies has grown exponentially. Various types of electronic equipment, which were previously reserved for military or scientific facilities (such as computers, smartphones, CCTV cameras, GPS systems, or biometric devices) are now commonly used by many private companies and are easy and cheap to install.

Technology undoubtedly provides companies with new opportunities for improving work performance and increasing security on their premises. At the same time, employees’ personal data are more regularly collected and potential threats to their privacy are more commonplace. In some circumstances, the use of advanced technology can pose higher security threats, which outweigh the benefits the technology provides (see our previous blog post on the risks of BYOD).

In Europe, the use of technology in the workplace will almost certainly trigger the application of privacy and labour laws aimed at safeguarding the employees’ right to privacy. In this context, data protection authorities are particularly attentive to the risks to employees that can derive from the use of technology in the workplace and their potential intrusiveness. Earlier this year, the French Data Protection Authority (“CNIL”) reported that 15% of all complaints received were work-related (see our previous blog post on the CNIL’s report). In many cases, employees felt threatened by the invasiveness of video cameras (and other technologies) being used in the work place. For that reason, the CNIL published several practical guidelines instructing employers on how to use technology in the workplace in accordance with the French Data Protection Act and the French rules on privacy.

Employers are faced with the challenge of finding a way to use technology without falling foul of privacy laws. When considering whether to implement a particular technology in the workplace, companies should take appropriate measures to ensure that those technologies are implemented in accordance with applicable privacy and labour laws. As a first step, it is often good practice to carry out a privacy impact assessment that will allow the company to identify any potential threats to employees and the risk of the company breaching privacy and labour laws. Also, the general data protection principles (lawfulness and fairness of processing, purpose limitation, proportionality, transparency, security and confidentiality) should be fully integrated into the decision-making process and privacy-by-design should be an integral part of any new technology that is deployed within the company. In particular, companies should ensure that employees are properly informed, both individually and collectively, prior to the collection of their personal data. Additionally, in many EU jurisdictions, it is often necessary for companies to inform and/or consult the employee representative bodies (such as a Works Council) on such issues. Finally, companies must grant employees access to their personal data in accordance with applicable local laws.

So, are technology and privacy incompatible? Not necessarily. Under European law, there is no general prohibition on using technology in the workplace. However, as is often the case under privacy law, the critical point is to find a fair balance between the organisation’s goals and purposes in using a particular technology and the employees’ privacy rights.

Click here to access my article on the use of technology in the workplace under French privacy law.