Archive for the ‘Privacy by design’ Category

What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014 by



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details'; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the legal basis that should principally be relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be able to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for specific data processing (a sketch of what such a granular consent record might look like follows this list). However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also to the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.

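The Opinion does not prescribe any particular data model, but the granularity expectation in point 5 is easier to grasp in concrete terms. Here is a minimal sketch in Python, with hypothetical names, of a per-device, per-data-type, per-purpose consent record in which anything not expressly granted is treated as refused:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConsentScope:
    """One granular consent choice: a device, a data type and a purpose."""
    device_id: str   # e.g. "wristband-123"
    data_type: str   # e.g. "heart_rate"
    purpose: str     # e.g. "calorie_tracking"

@dataclass
class ConsentRecord:
    """Tracks the scopes a user has positively opted in to."""
    granted: set = field(default_factory=set)

    def grant(self, scope: ConsentScope) -> None:
        self.granted.add(scope)

    def withdraw(self, scope: ConsentScope) -> None:
        # Withdrawal is as granular (and as easy) as granting.
        self.granted.discard(scope)

    def permits(self, scope: ConsentScope) -> bool:
        # Default is "no": anything not expressly granted is not processed.
        return scope in self.granted
```

A structure like this also makes the withdrawal expectation testable: withdrawing one scope leaves every other scope untouched.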

It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device (see the sketch after this list);
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.

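Recommendation 5 above (local aggregation) is worth a concrete illustration. Here is a minimal sketch, assuming hypothetical on-device sensor and transmission functions: the raw per-second readings never leave the device, and only an aggregate summary is transmitted.

```python
import statistics
from typing import Iterable

def summarise_heart_rate(raw_readings: Iterable[int]) -> dict:
    """Reduce a window of raw sensor readings to an aggregate summary.

    Only this summary leaves the device; the raw readings stay in
    local storage, in line with data minimisation.
    """
    readings = list(raw_readings)
    if not readings:
        return {"count": 0}
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# Hypothetical on-device usage:
# raw = read_sensor_window()           # stays on the device
# transmit(summarise_heart_rate(raw))  # only the aggregate is sent
```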

Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014 by



In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that even the concept of the Internet of Things (“IoT“) itself is somewhat ill-defined, making the point that there is no settled definition of IoT and that, even if there were, the definition would only change. What’s more, IoT will mean different things to different people and will speak to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy / security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other and so on, there is also a lack of standards and industry-wide co-operation around IoT.

Frequently based around data use, and with potentially intrusive application in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in IoT are diverse, with a range of regulatory mandates and sometimes with a defined role confined to specific sectors. Some of these regulators are waking up to the potential issues posed by IoT and a few are reaching out to the industry as a whole to consult and stimulate discussion. We’re more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It’s possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand for a scarce resource increases, what kind of spectrum allocation is “fair” and “optimal”, and is some machine to machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world, the thorny issues of obtaining consent and reaffirming consent will be difficult. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because continued interconnectivity between multiple devices raises potential issues.

In issuing a new consultation, “Promoting investment and innovation in the Internet of Things“, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:

  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability issues placed upon networks which demand resilience and security (the corresponding issue of privacy is recognised also);
  • the need for each connected device to have an assigned name or identifier, and just how those addresses should be determined and potentially how they would be assigned; and
  • understanding its potential role as the UK’s regulator in an area (connectivity) key to the evolution of IoT.

In a varied and quite penetrable paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology“. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while Working Party 29 wades in early, warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time and in 2013 it reported back on its own Conclusions of the Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it’s impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must “respect the many privacy and security challenges which can be associated with IoT“.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities”, “smart transportations”, as well as M2M (“machine to machine”) developments. Yet, the principles and recommendations their Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within the application of some IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and legal responsibilities falling beyond the data controller, exacting their own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone. With its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently before Congress and the “Black Box Privacy Protection Act” before the House of Representatives. Each now apparently has a low chance of actually passing, but they would, respectively, regulate surveillance by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet, whose feeds were not secure), there’s no doubt the evolution of IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet“.

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary has served me well enough; in the legal sense there has been little to say, and perhaps even now I’ve spoken too soon. What is clear is that we’re already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into the existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, we have to ask the right questions to build a picture of the technology, the way it communicates and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com

Part 1: Cutting through the Internet of Things hyperbole

Posted on October 15th, 2014 by



I’ve held back writing anything about the Internet of Things (or “IoT“) because there are so many developments playing out in the market. Not to mention so much “noise”.

Then something happened: “It’s Official: The Internet Of Things Takes Over Big Data As The Most Hyped Technology” read a Forbes headline. “Big data”, last week’s darling, is condemned to the “Trough of Disillusionment” while Gartner moves IoT to the very top of its 2014 emerging technologies Hype Cycle. Something had to be said.

The key point for me is that the IoT is “emerging”. What’s more, few are entirely sure where they are on this uncharted journey of adoption. IoT has reached an inflexion point: the point where businesses and others realise that identifying with the Internet of Things may drive sales, shareholder value or mere kudos. We all want a piece of this pie.

In Part 1 of this two part exploration of IoT, I explore what the Internet of Things actually is.

IoT – what is it?

Applying Gartner’s parlance, one thing is clear: when any tech theme hits the “Peak of Expectations”, the “Trough of Disillusionment” will follow because, as with any emerging technology, it will be some time until there is pervasive adoption of IoT. In fact, for IoT, Gartner says widespread adoption could be 5 to 10 years away. However, this inflexion point is typically the moment in time when the tech industry’s big guns ride into town and, just as with cloud (remember some folk trying to trade mark the word?!), this will only drive further development and adoption. But also further hype.

The world of machine to machine (“M2M“) communications involves the connection of different devices which previously did not have the ability to communicate. For many, the Internet of Things is something more; as Ofcom (the UK’s communications regulator) set out in its UK consultation, IoT is a broader term, “describing the interconnection of multiple M2M applications, often enabling the exchange of data across multiple industry sectors“.

“The Internet of Things will be the world’s most massive device market and save companies billions of dollars” shouted Business Week in October 2014, happy to maintain the hype but also acknowledging in its opening paragraph that IoT is “beginning to grow significantly“. No question, IoT is set to enable large numbers of previously unconnected devices to connect and then communicate, sharing data with one another. Today we are mainly contemplating rather than experiencing this future.

But what actually is it?

The emergence of IoT is driving some great debate. When assessing what IoT is and what it means for business models, the law and for commerce generally, arguably there are more questions than there are answers. In an exploratory piece in ZDNET, Richie Etwaru called out a few of these unanswered questions and prompted some useful debate and feedback. The top three questions raised by Richie were:

  • How will things be identified? – believing we have to get to a point where there are standards for things to be sensed and connected;
  • What will the word trust mean to “things” in IoT? – making the point we need to redefine trust in edge computing; and
  • How will connectivity work? – Is there something like IoTML (The Internet of Things Markup Language) to enable trust and facilitate this communication?

None of these questions are new, but his piece reinforces that we don’t quite know what IoT is and how some of its technical questions will be addressed. It’s likely that standardisation, or industry practice and adoption around certain protocols and practices, will answer some of these questions in due course. As a matter of public policy we may see law makers intervene to shape some of these standards or drive particular kinds of adoption. There will be multiple answers to the “what is IoT?” question for some time. I suspect in time different flavours and business models will come to the fore. Remember when every cloud seminar spent the first 15 minutes defining cloud models and reiterating extrapolations for the future size of the cloud market? Brace yourselves!

I’ve been making the same points about “cloud” for the past 5 years – like cloud, the IoT is a fungible concept. So, as with cloud, don’t assume IoT has a definitive meaning. As with cloud, don’t expect there to be any specific Internet of Things law (yet?). As Part 2 of this piece will discuss, law makers have spotted there’s something new which may need regulatory intervention to cultivate it for the good of all, but they’ve also realised that there’s something which may grow with negative consequences – something that may need to be brought into check. Privacy concerns particularly have raised their head early and we’ve seen early EU guidance in an opinion from the Article 29 Working Party, but there is still no specific IoT law. How can there be when there is still little definition?

Realities of a converged world

For some time we’ve been excited about the convergence of people, business and things. Gartner reminds us that “[t]he Internet of Things and the concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside already-digital entities“. In other words: a land of opportunity, but an ill-defined “blur” of technology and of what is real and what is merely conceptual within our digital age.

Of course the IoT world is also a world bumping up against connectivity, the cloud and mobility. Of course there are instances of IoT out there today. Or are there? As with anything that’s emerging, the terminology and definition of the Internet of Things is emerging too. Yes there is a pervasiveness of devices, yes some of these devices connect and communicate, and yes devices that were not necessarily designed to interact are communicating, but are these examples of the Internet of Things? If we break these models down into their constituent parts for applied legal thought, does it necessarily matter?

Philosophical, but for a reason

My point? As with any complex technological evolution, as lawyers we cannot apply laws, negotiate contracts or assess risk or the consequences for privacy without a proper understanding of the complex ecosystem we’re applying these concepts to. Privacy consequences cannot be assessed in isolation and without considering how the devices, technology and data actually interact. Be aware that the IoT badge means nothing legally and probably conveys little factual information around “how” something works. It’s important to ask questions. Important not to assume.

In Part 2 of this piece I will discuss some early signs of how the law may be preparing to deal with all these emerging trends. Of course the answer is that it probably already does, and that it probably has the flexibility to deal with many elements of IoT yet to emerge.

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014 by



The Article 29 Working Party (“WP 29“) has recently issued an Opinion on Personal Data Breach Notification (the “Opinion“). The Opinion focuses on the interpretation of the criteria under which individuals should be notified about breaches that affect their personal data.

Before we analyse the takeaways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the “e-Privacy Directive“), and applies to providers of publicly available electronic communications services. In some EU member states (for example, in Germany), this requirement has been extended to controllers in other sectors or to all controllers. Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation“) (see our blog regarding the Regulation here), which sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals of breaches that are likely to adversely affect their personal data or privacy without undue delay, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (though not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures to render that data unintelligible to any person who is not authorised to access it.
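Expressed as pseudocode, that notification logic reduces to something like the following simplified sketch; the predicates are hypothetical and each one hides a fact-specific legal judgment:

```python
def must_notify_individuals(breach) -> bool:
    """Simplified sketch of the individual-notification test under
    Regulation 611/2013 (not the regulator-notification duty, which
    applies to every breach)."""
    likely_adverse_effect = (
        breach.sensitive_nature_or_content     # factor (i)
        or breach.serious_likely_consequences  # factor (ii), e.g. identity theft
        or breach.aggravating_circumstances    # factor (iii)
    )
    if not likely_adverse_effect:
        return False
    # Exemption: the data was rendered unintelligible to unauthorised
    # persons, to the regulator's satisfaction.
    if breach.data_unintelligible and breach.regulator_satisfied:
        return False
    return True
```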

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing seven practical scenarios of breaches that would meet the ‘adverse effect’ test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach, and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying the breach to individuals altogether.

From the Opinion, it is worth highlighting:

The test. The ‘adverse effect’ test is interpreted broadly to include ‘secondary effects’. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, as not all ‘potential’ consequences are ‘likely’ to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible. Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exception to notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key); hashing (with a strong key); back-ups; physical and logical access controls; and regular monitoring of vulnerabilities.
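To illustrate the ‘unintelligible data’ point, here is a minimal sketch using the Fernet recipe from the Python cryptography library (authenticated symmetric encryption): an attacker who obtains the ciphertext but not the key learns nothing useful.

```python
from cryptography.fernet import Fernet

# Key management is the hard part: the key must live apart from the
# data (e.g. in a key management service), or the "unintelligible"
# argument collapses the moment both are breached together.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"customer_id=123; card_last4=4242")
# A breached copy of `ciphertext` alone is unintelligible; only a
# holder of `key` can recover the plaintext.
plaintext = f.decrypt(ciphertext)
```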

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall under three categories, namely availability, integrity and confidentiality breaches. The application of this model may help controllers analyse a breach too.

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? The Opinion explicitly states that breach notification constitutes good practice for all controllers, even those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement that applies to all controllers (regardless of the sector they are in) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved. Under it, controllers would be subject to strict notification requirements both to data protection regulators and to individuals. This Opinion provides some insight into how European regulators may interpret these requirements under the General Data Protection Regulation.

Therefore, controllers would be well advised to prepare for what is coming their way (see previous blog here). Focus should be on the application of security measures (in order to prevent a breach and mitigate the adverse effects to individuals once a breach has occurred) and on putting procedures in place to effectively manage breaches. Start today: burying your head in the sand is just no longer an option.

Information Pollution and the Internet of Things

Posted on September 8th, 2013 by



Kevin Ashton, the man credited with coining the term “The Internet of Things”, once said: “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

This couldn’t be more true. The range of potential applications for the Internet of Things, from consumer electronics to energy efficiency and from supply chain management to traffic safety, is breathtaking. Today, there are 6 billion or so connected devices on the planet. By 2020, some estimate that figure will be in the range of 30 to 50 billion. Applying some very basic maths, that’s between 4 and 7 internet-connected “things” per person.
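The back-of-the-envelope arithmetic, assuming a world population of roughly 7.3 billion in 2020:

```python
population_2020 = 7.3e9  # rough projection, for illustration only
for devices in (30e9, 50e9):
    print(f"{devices / population_2020:.1f} connected things per person")
# ~4.1 at the low estimate, ~6.8 at the high one
```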

All this, of course, means vast levels of automated data generation, processing and sharing. Forget Big Data: we’re talking mind-blowingly Huge Data. That presents numerous challenges to traditional notions of privacy, and issues of applicability of law, transparency, choice and security have been (and will continue to be) debated at length.

One area that deserves particular attention is how we deal with data access in an everything-connected world. There’s a general notion in privacy that individuals should have a right to access their information – indeed, this right is hard-coded into EU law. But when so much information is collected – and across so many devices – how can we provide individuals with meaningful access to information in a way that is not totally overwhelming?

Consider a world where your car, your thermostat, your DVR, your phone, your security system, your portable health device, and your fridge are all trying to communicate information to you on a 24 x 7 x 365 basis: “This road’s busy, take that one instead”, “Why not lower your temperature by two degrees”, “That program you recorded is ready to watch”, “You forgot to take your medication today” and so on.

The problem will be one of information pollution: there will be just too much information available. How do you stop individuals feeling completely overwhelmed by this? The truth is that no matter how much we, as a privacy community, try to preserve rights for individuals to access as much data as possible, most will never explore their data beyond a very cursory, superficial level. We simply don’t have the energy or time.

So how do we deal with this challenge? The answer is to abstract away from the detail of the data and make readily available to individuals only the information they want to see, when they want to see it. Very few people want a level of detail typically of interest only to IT forensics experts in complex fraud cases – like what IP addresses they used to access a service or the version number of the software on their device. They want, instead, to have access to information that holds meaning for them, presented in a real, tangible and easy to digest way. For want of a better descriptor, the information needs to be presented in a way that is “accessible”.

This means information innovation will be the next big thing: maybe we’ll see innovators create consumer-facing dashboards that collect, sift and simplify vast amounts of information across their many connected devices, perhaps using behavioural, geolocation and spatial profiling techniques to tell consumers the information that matters to them at that point in time.
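As a toy sketch of that idea (hypothetical fields and scoring, not any particular product), such a dashboard might rank incoming notifications by contextual relevance and surface only the top few, leaving the raw firehose available on demand:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    source: str       # e.g. "car", "fridge", "health_device"
    message: str
    urgency: float    # 0.0 to 1.0, set by the device
    relevance: float  # 0.0 to 1.0, e.g. from location or behavioural context

def top_notifications(notifications, limit=3):
    """Push only the few items that matter right now; everything else
    stays accessible to the user, but is not pushed at them."""
    ranked = sorted(notifications,
                    key=lambda n: n.urgency * n.relevance,
                    reverse=True)
    return ranked[:limit]
```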

And if this all sounds a little too far-fetched, then check out services like Google Now and TripIt, to name just a couple. Services are already emerging to address information pollution and we only have a mere 6 billion devices so far. Imagine what will happen with the next 30 billion or so!

The Internet and the Great Data Deletion Debate

Posted on August 15th, 2013 by



Can your data, once uploaded publicly onto the Web, ever realistically be forgotten?  This was the debate I was having with a friend from the IAPP last night.  Much has been said about the EU’s proposals for a ‘right to be forgotten’ but, rather than arguing points of law, we were simply debating whether it is even possible to purge all copies of an individual’s data from the Web.

The answer, I think, is both yes and no: yes, it’s technically possible, and no, it’s very unlikely ever to happen.  Here’s why:

1. To purge all copies of an individual’s data from the Web, you’d need either (a) to know where all copies of those data exist on the Web, or (b) the data to have some kind of built-in ‘self-destruct’ mechanism so that it knows to purge itself after a set period of time.

2.  Solution (a) creates as many privacy issues as it solves.  You’d need either to create some kind of massive database tracking where all copies of data go on the Web, or each copy of the data would need, somehow, to be ‘linked’ directly or indirectly to all other copies.  Even assuming it was technically feasible, it would have a chilling effect on freedom of speech – consider how likely a whistleblower would be to post content knowing that every copy of that content could be traced back to its original source.  In fact, how would anyone feel about posting content to the Internet knowing that every single subsequent copy could easily be traced back to their original post and, ultimately, back to them?

3.  That leaves solution (b).  It is wholly possible to create files with built-in self-destruct mechanisms, but they would no longer be pure ‘data’ files.  Instead, they would be executable files – i.e. files that can be run as software on the systems on which they’re hosted.  But allowing executable data files to be imported and run on Web-connected IT systems creates huge security exposure – the potential for exploitation by viruses and malicious software would be enormous.  The other possibility would be that the data file contains a separate data field instructing the system on which it is hosted when to delete it – much like a cookie has an expiry date.  That would be fine for proprietary data formats on closed IT systems, but is unlikely to catch on across existing, well-established and standardised data formats like .jpgs, .mpgs etc. across the global Web.  So the prospects for solution (b) catching on also appear slim.
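The ‘expiry field’ variant of solution (b) is easy enough to sketch (a hypothetical envelope format, for illustration); the hard part is not the format, but persuading every host on the Web to honour it:

```python
import json
import time

def wrap_with_expiry(payload: bytes, ttl_seconds: int) -> str:
    """Wrap opaque content in an envelope carrying an expiry
    timestamp, much like a cookie's expiry date."""
    return json.dumps({
        "expires_at": time.time() + ttl_seconds,
        "content_hex": payload.hex(),
    })

def read_or_purge(envelope: str) -> bytes | None:
    """A cooperating host checks the envelope and treats expired
    content as deleted. Nothing forces a non-cooperating host to run
    this check - which is exactly why the scheme is unlikely to
    catch on across the open Web."""
    record = json.loads(envelope)
    if time.time() >= record["expires_at"]:
        return None
    return bytes.fromhex(record["content_hex"])
```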

What are the consequences of this?  If we can’t purge copies of individuals’ data spread across the Internet, where does that leave us?  Likely the only realistic solution is to control the propagation of the data at source in the first place.  Achieving that is a combination of:

(a)  Awareness and education – informing individuals through privacy statements and contextual notices how their data may be shared, and educating them not to upload content they (or others) wouldn’t want to share;

(b)  Product design – utilising privacy impact assessments and privacy by design methodologies to assess product / service intrusiveness at the outset, and then designing systems that don’t allow illegitimate data propagation; and

(c)  Regulation and sanctions – we need proportionate regulation backed by appropriate sanctions to incentivise realistic protections and discourage illegitimate data trading.  

No one doubts that privacy on the Internet is a challenge, and nowhere does it become more challenging than with the speedy and uncontrolled copying of data.   But let’s not focus on how we stop data once it’s ‘out there’ – however hard we try, that’s likely to remain an unrealistic goal.  Let’s focus instead on source-based controls – this is achievable and, ultimately, will best protect individuals and their data.

ICO’s draft code on Privacy Impact Assessments

Posted on August 8th, 2013 by



This week the Information Commissioner’s Office (‘ICO’) announced a consultation on its draft Conducting Privacy Impact Assessments Code of Practice (the ‘draft code’). The draft code and the consultation document are available at http://www.ico.org.uk/about_us/consultations/our_consultations  and the deadline for responding is 5 November 2013.

When it comes into force, the new code of practice will set out ICO’s expectations on the conduct of Privacy Impact Assessments (‘PIAs’) and will replace ICO’s current PIA Handbook. So why is the draft code important and how does it differ from the PIA Handbook?

  • PIAs are a valuable risk management instrument that can function as an early warning system while, at the same time, promoting better privacy and substantive accountability. Although there is at present no statutory requirement to carry out PIAs, ICO expects them.
  • For instance, in the context of carrying out audits, ICO has criticised controllers who had not rolled out a framework for carrying out PIAs. More importantly, the absence or presence of a risk assessment is a determinative factor in ICO’s decision on whether or not to take enforcement action. When ICO talks about the absence or presence of a risk assessment, it means the conduct of some form of PIA.
  • Impact assessments are likely to soon become a mandatory statutory requirement across the EU, as the current version of the draft EU Data Protection Regulation requires ‘Data Protection Impact Assessments’. Note, however, that the DPIAs mandated by article 33 of the Draft Regulation have a narrower scope than PIAs.  The former focus on ‘data protection risks’ as opposed to ‘privacy risks’, which is a broader concept that in addition to data protection encompasses broader notions of privacy such as privacy of personal behaviour or privacy of personal communications.
  • The fact that ICO’s guidance on PIAs will now take the form of a statutory Code of Practice (as opposed to a ‘Handbook’) means that it will have increased evidentiary significance in legal proceedings before courts and tribunals on questions relevant to the conduct of PIAs.

The PIA Handbook is generally too cumbersome and convoluted. The aim of the draft code is to simplify the current guidance and promote practical PIAs that are less time consuming and complex, and as flexible as possible in order to be adapted to an organisation’s existing project and risk management processes.  However, on an initial review of the draft code I am not convinced that it achieves the optimum results in this regard.  Consider for example the following expectations set out in the draft code which did not appear in the PIA Handbook:

  • In addition to internal stakeholders, organisations should work with partner organisations and with the public. In other words, ICO encourages controllers to test their PIA analysis with the individuals who will be affected by the project that is being assessed.
  • Conducting and publicising the PIA will help build trust with the individuals using the organisation’s services. In other words, ICO expects that PIAs will be published in certain circumstances.
  • PIAs should incorporate 7 distinct steps and the draft code provides templates for questionnaires and reports, as well as guidance on how to integrate the PIA with project and risk management processes.

Overall, although the draft code is certainly an improvement compared to the PIA Handbook, it remains cumbersome and prescriptive.  It also places a lot of emphasis on documentation, recording decisions and record keeping.  In addition, the guidance and some of the templates include privacy jargon that is unlikely to be understood by staff who are not privacy experts, such as project managers or work-stream leads who are most likely to be asked to populate the PIA documentation in practice.

Many organisations are likely to want a simpler, more streamlined and more efficient PIA process with fewer steps, simpler tools / documents and clearer guidance, and which incorporates legal requirements and ICO’s essential expectations without unduly delaying the launch of new processing operations. Such organisations are also likely to want to make their voice heard in the context of ICO’s consultation on the draft code.

Use of technology in the workplace: what are the risks?

Posted on July 29th, 2013 by



Technology is omnipresent in the workplace. In recent years, the use of technological tools and equipment by companies has grown exponentially. Various types of electronic equipment, which were previously reserved for military or scientific facilities (such as computers, smartphones, CCTV cameras, GPS systems, or biometric devices) are now commonly used by many private companies and are easy and cheap to install.

Technology undoubtedly provides companies with new opportunities for improving work performance and increasing security on their premises. At the same time, employees’ personal data are more regularly collected and potential threats to their privacy are more commonplace. In some circumstances, the use of advanced technology can pose higher security threats, which outweigh the benefits the technology provides (see our previous blog post on the risks of BYOD).

In Europe, the use of technology in the workplace will almost certainly trigger the application of privacy and labour laws aimed at safeguarding the employees’ right to privacy. In this context, data protection authorities are particularly attentive to the risks to employees that can derive from the use of technology in the workplace and their potential intrusiveness. Earlier this year, the French Data Protection Authority (“CNIL”) reported that 15% of all complaints received were work-related (see our previous blog post on the CNIL’s report). In many cases, employees felt threatened by the invasiveness of video cameras (and other technologies) being used in the work place. For that reason, the CNIL published several practical guidelines instructing employers on how to use technology in the workplace in accordance with the French Data Protection Act and the French rules on privacy.

Employers are faced with the challenge of finding a way to use technology without falling foul of privacy laws. When considering whether to implement a particular technology in the workplace, companies should take appropriate measures to ensure that those technologies are implemented in accordance with applicable privacy and labour laws. As a first step, it is often good practice to carry out a privacy impact assessment that will allow the company to identify any potential threats to employees and the risk of the company breaching privacy and labour laws. Also, the general data protection principles (lawfulness and fairness of processing, purpose limitation, proportionality, transparency, security and confidentiality) should be fully integrated into the decision-making process and privacy-by-design should be an integral part of any new technology that is deployed within the company. In particular, companies should ensure that employees are properly informed, both individually and collectively, prior to the collection of their personal data. Additionally, in many EU jurisdictions, it is often necessary for companies to inform and/or consult the employee representative bodies (such as a Works Council) on such issues. Finally, companies must grant employees access to their personal data in accordance with applicable local laws.

So, are technology and privacy incompatible? Not necessarily. Under European law, there is no general prohibition on using technology in the workplace. However, as is often the case under privacy law, the critical point is to find a fair balance between the organisation’s goals and purposes when using a particular technology and the employees’ privacy rights.

Click here to access my article on the use of technology in the workplace under French privacy law.

The true meaning of privacy (and why I became a privacy professional)

Posted on July 5th, 2013 by



Long before I became a privacy professional, I first graduated with a degree in computer science. At the time, like many graduates, I had little real notion of what it was I wanted to do with my life, so I took a couple of internships working as a database programmer. That was my first introduction to the world of data.

I quickly realized that I had little ambition to remain a career programmer, so I began to look at other professions. In my early twenties, and having the kind of idealistic tendencies commonplace in many young graduates, I decided I wanted to do something that mattered, something that would—in some way—benefit the world: I chose to become a lawyer.

Not, you might think, the most obvious choice given the (unfair) reputation that the legal profession tends to suffer. Nevertheless, I was attracted to a profession bound by an ethical code, that believed in principles like “innocent until proven guilty” and acting in the best interests of the client, and that took the time to explore and understand both sides to every argument. And, if I’m completely honest, I was also attracted by the unlimited access to truly wonderful stationery that a legal career would afford.

After brief stints as a trainee in real estate law, litigation and environmental law, I decided to pursue a career as a technology lawyer. After all, given my background, it seemed a natural fit, and having a technical understanding of the difference between things like megabytes and megabits, RAM and ROM and synchronous and asynchronous broadband gave me a head start over some of my peers.

On qualifying, I began picking up the odd bit of data protection work (Read: drafting privacy policies). Over time, I became a privacy “go to” person in the firms I worked at, not so much through any great talent on my part but simply because, at the time, I was among the small number of lawyers who knew anything about privacy and, for reasons I still don’t really understand, my colleagues considered data protection work a bewilderingly complex area of law, best left to those who “get” it—much like the way I felt about tax and antitrust law.

It’s not a career path I regret. I love advising on privacy issues because privacy speaks to all the idealized ethical notions I had when I first graduated. With privacy, I get to advise on matters that affect people, that concern right or wrong, that are guided by lofty ethical principles about respecting people’s fundamental rights. I run projects across scores of different countries, each with different legal regimes, regulators and cultural sensitivities. Intellectually, it is very challenging and satisfying.

Yet, at the same time, I have grown increasingly concerned about the dichotomy between the protections law and regulation see fit to mandate and what, in practice, actually delivers the best protection for people’s personal information. To my mind, far too much time is spent on filing registrations and carefully designing legal terms that satisfy legal obligations and create the impression of good compliance; far too little time is spent on privacy impact analyses, careful system design, robust vendor procurement processes and training and audit.

Lawyers, naturally enough, often think of privacy in terms of legal compliance, but any professional experienced in privacy will tell you that many legal obligations are counterintuitive or do little, in real terms, to protect people’s information. Take the EU’s binary controller/processor regime, for example. Why do controllers bear all the compliance risk? Surely everyone who handles data has a role to play in its protection. Similarly, what good do local controller registrations do anyone?  They’re a costly, burdensome paperwork exercise that is seldom completed efficiently, accurately or—in many cases—even at all. And all those intra-group data sharing agreements—how much time do you spend negotiating their language with regional counsel rather than implementing measures to actually protect data?

Questions like these trouble me.  While the upcoming EU legal reform attempts to address several of these issues, many of its proposed changes to me seem likely to further exacerbate the problem. But for every critic of the reforms, there is an equally vocal proponent of them. So much so that reaching an agreed position between the European Council and Parliament—or even just within the Parliament—seems a near-insurmountable task.

Why is this reform so polarizing? It would be easy to characterize the division of opinions simply as being a split between regulators and industry, privacy advocates and industry lobbyists—indeed, many do. However, the answer is, I suspect, something more fundamental: namely, that we lack a common understanding of what “privacy” is and why it deserves protection.

As privacy professionals, we take for granted that “privacy” is something important and in need of protection. Yet privacy means different things to different people. To some, it means having the ability to sanction uses of our information before they happen; to others, it means being able to stop uses to which we object. Some focus on the inputs—should this data be collected?—others focus on the outputs: How is the data used? Some believe privacy is an absolute right that must not be compromised; others see privacy as a right that must be balanced against other considerations, such as national security, crime prevention and free speech.

If we’re going to protect privacy effectively, we need to better understand what it is we’re trying to protect and why it deserves protection. Further, we need to advocate this understanding and educate—and listen to—the very subjects of the data we’re trying to protect. Only if we have this shared societal understanding can we lay the foundations for a meaningful and enduring privacy regime. Without it, we’ll chase harms that do not exist and miss those that do.

My point is this: As a profession, we should debate and encourage an informed consensus about what privacy really is, and what it should be, in this digital age. That way, we stand a better chance of creating balanced and effective legal and regulatory frameworks that guard against the real risks to our data subjects. We’ll also better educate the next generation of eager young graduates entering our profession to understand what it is they are protecting and why. And this will benefit us all.

This post first appeared in the IAPP’s Privacy Perspectives blog, available here.

In defence of the privacy policy

Posted on March 29th, 2013 by



Speaking at the Games Developers’ Conference in San Francisco yesterday on the panel “Privacy by [Game] Design”, I was thrown an interesting question: Does the privacy policy have any place in the forward-thinking privacy era?

To be sure, privacy policy bashing has become populist stuff in recent years, and the role of the privacy policy is a topic I’ve heard debated many, many times. The normal conclusion to any discussion around this point is that privacy policies are too long, too complex and simply too unengaging for any individual to want to read them. Originally intended as a fair processing disclosure about what businesses do with individuals’ data, critics complain that they have over time become excessively lengthy, defensive, legalistic documents aimed purely to protect businesses from liability. Just-in-time notices, contextual notices, privacy icons, traffic lights, nutrition labels and gamification are the way forward. See, for example, this recent post by Peter Fleischer, Google’s Global Privacy Counsel.

This is all fair criticism. But that doesn’t mean it’s time to write off privacy policies – we’re not talking an either/or situation here. They continue to serve an important role in ensuring organisational accountability. Committing a business to put down, in a single, documented place, precisely what data it collects, what it does with that data, who it shares it with, and what rights individuals have, helps keep it honest. More and more, I find that clients put considerable effort into getting their privacy policies right, carefully checking that the disclosures they make actually map to what they do with data – stimulating conversations with other business stakeholders across product development, marketing, analytics and customer relations functions. The days when lawyers were told “just draft something” are long gone, at least in my experience.

This internal dialogue keeps interested stakeholders informed about one another’s data uses and facilitates discussions about good practice that might otherwise be overlooked. If you’re going to disclose what you do in an all-encompassing, public-facing document – one that may, at some point, be scoured over by disgruntled customers, journalists, lawyers and regulators – then you want to make sure that what you do is legit in the first place. And, of course, while individuals seldom ever read privacy policies in practice, if they do have a question or a complaint they want to raise, then a well-crafted privacy policy serves (or, at least, should serve) as a comprehensive resource for finding the information they need.

Is a privacy policy the only way to communicate with your consumers what you do with their data? No, of course not. Is it the best way? Absolutely not: in an age of device and platform fragmentation, the most meaningful way is through creative Privacy by Design processes that build a compelling privacy narrative into your products and services. But is the privacy policy still relevant and important? Yes, and long may this remain the case.